by anonymous contributor
General Circulation Models (GCMs) have long been the primary tools for climate prediction, driving political and policy decisions. However, GCMs have consistently run hot, predicting more warming than has been observed. A recent paper, “The Overlooked Sub-Grid Air-Sea Flux in Climate Models” by Julius J.M. Busecke et al., exposes a significant deficiency in these models: their handling of small-scale air-sea interactions. Let’s explore the findings and implications of this study, highlighting how improved modeling techniques could enhance climate predictions, though improvement is not guaranteed.
Understanding Air-Sea Interactions
Air-sea interactions are critical for regulating the Earth’s climate. These processes involve the exchange of heat, momentum, and gases between the ocean and the atmosphere, affecting weather patterns, ocean circulation, and climate variability. The ocean absorbs about 90% of the excess heat due to human activities, playing a central role in global climate dynamics.
Complexities in Modeling Air-Sea Interactions
Accurately representing air-sea interactions in climate models is challenging due to their complex and variable nature. These interactions occur across a wide range of spatial and temporal scales, from short-term processes like boundary layer turbulence and hurricane formation to long-term phenomena such as the El Niño-Southern Oscillation. The representation of these processes is hampered by the resolution of the models and the inherent nonlinearity of the coupling formulae used to simulate them.
Limitations of Coarse-Resolution Models
The primary issue highlighted by Busecke et al. is the coarse resolution of most current GCMs, typically around 1° or larger. These models fail to capture small-scale structures and processes at the air-sea interface, leading to significant biases in the simulation of sea surface temperatures (SSTs) and air-sea heat fluxes. The study states:
“Coarse-resolution climate models do not resolve small-scale structures in the air-sea state, which, due to strong nonlinearities in the coupling formulae, can impact the large-scale air-sea exchange—a mechanism that has received little attention.”
https://www.researchgate.net/publication/380723812_The_Overlooked_Sub-Grid_Air-Sea_Flux_in_Climate_Models
The small-scale fluxes missed by this oversight amount to a systematic cooling of the ocean of about 4 W/m² globally, with significant regional variations. The resulting biases contribute to the tendency of GCMs to overestimate future warming, casting doubt on their reliability.
The Role of High-Resolution Simulations
To address this deficiency, the researchers employed high-resolution coupled climate simulations with a resolution of 1/10°. These simulations allowed them to analyze the effects of small-scale heterogeneity on air-sea heat fluxes, revealing that such heterogeneity can significantly alter large-scale fluxes.
Methodology
The researchers used a method involving spatial filtering and offline computation of heat fluxes to quantify the impact of small-scale processes. They defined the small-scale turbulent heat flux (Q*) as:
“Q* = Q – Qc, where Q is the flux computed using the high-resolution fields, and Qc is the flux computed using the low-resolution surrogate fields.”
https://www.researchgate.net/publication/380723812_The_Overlooked_Sub-Grid_Air-Sea_Flux_in_Climate_Models
This approach isolates the net impact of small-scale variability on large-scale fluxes, which is often missing in coarse-resolution models.
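To make that concrete, here is a minimal Python sketch of the Q* = Q – Qc idea, assuming a simple bulk formula for sensible heat flux and synthetic fields. The filter width, bulk coefficients, and the assumed SST-wind coupling are illustrative choices on my part, not the authors’ actual configuration:

```python
# A minimal sketch of the Q* = Q - Qc decomposition, assuming a simple bulk
# formula for sensible heat flux. The filter width, constants, and synthetic
# fields below are illustrative, not the authors' actual setup.
import numpy as np
from scipy.ndimage import uniform_filter

RHO_A, CP_A, C_H = 1.2, 1004.0, 1.2e-3  # air density, heat capacity, bulk coefficient

def bulk_sensible_flux(wind, sst, t_air):
    """Bulk sensible heat flux (W/m^2, positive = ocean heat loss)."""
    return RHO_A * CP_A * C_H * wind * (sst - t_air)

def coarsen(field, width=10):
    """Surrogate 'low-resolution' field: a simple boxcar spatial filter."""
    return uniform_filter(field, size=width, mode='wrap')

# Synthetic high-resolution fields; winds respond to warm SST anomalies,
# an assumed (but physically plausible) mesoscale coupling.
rng = np.random.default_rng(0)
sst_anom = rng.normal(0.0, 1.0, (200, 200))
sst = 290.0 + sst_anom
t_air = 288.0 + rng.normal(0.0, 0.5, sst.shape)
wind = 8.0 + 0.8 * sst_anom + rng.normal(0.0, 1.0, sst.shape)

Q = bulk_sensible_flux(wind, sst, t_air)                              # high-res flux
Qc = bulk_sensible_flux(coarsen(wind), coarsen(sst), coarsen(t_air))  # flux of filtered fields
Q_star = Q - Qc  # sub-grid flux: nonzero in the mean only because the formula is nonlinear

print(f"mean Q  = {Q.mean():6.2f} W/m^2")
print(f"mean Q* = {Q_star.mean():6.2f} W/m^2  (extra ocean heat loss the coarse fields miss)")
```

Because the bulk formula multiplies wind by the air-sea temperature difference, correlated small-scale anomalies leave a net residual after smoothing; that residual is exactly the flux a coarse model, working only with smoothed fields, never sees.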
Key Findings
The study found that small-scale air-sea fluxes show strong spatial and temporal variability, locally reaching values up to 100 W/m². On average, these fluxes result in a global cooling effect of approximately 4 W/m², with some regions experiencing even higher values.
Atmospheric vs. Oceanic Contributions
One striking finding is the differentiation between atmospheric and oceanic contributions to these small-scale fluxes. The atmospheric component predominantly leads to cooling, while the oceanic component is more variable, causing both warming and cooling depending on the region. This variability is especially pronounced in dynamically active areas such as western boundary currents and the Antarctic Circumpolar Current.
The study explains:
“The contribution to the sub-grid flux (Q*) due to small-scale atmospheric features (Q*,A) produces a spatially smooth cooling effect over much of the ocean… In contrast, the contribution from small-scale oceanic features (Q*,O) is highly spatially variable and results in both warming and cooling of the ocean.”
https://www.researchgate.net/publication/380723812_The_Overlooked_Sub-Grid_Air-Sea_Flux_in_Climate_Models
Regional Impacts
The impact of small-scale heterogeneity is not uniform across the globe. Regions with high dynamic activity, such as the western boundary currents (e.g., the Gulf Stream and the Kuroshio Current) and the Agulhas retroflection, exhibit the strongest cooling effects, with long-term averages exceeding 20 W/m². In contrast, areas near the equator and the more energetic parts of the Antarctic Circumpolar Current sometimes show warming effects due to small-scale oceanic features.
The researchers found that around 70% of daily average values for the small-scale flux enhance the large-scale flux, with over 20% of these values showing an enhancement exceeding 10% of the magnitude of the large-scale flux. In dynamically active regions, this enhancement is even more pronounced, highlighting the critical role of small-scale processes in shaping large-scale climatic patterns.
Implications for Climate Modeling
The implications of these findings are significant. The study underscores the need for GCMs to incorporate parameterizations that account for small-scale heterogeneity. The current generation of models, as used in the Coupled Model Intercomparison Project (CMIP), exhibits substantial biases that have led to inaccurate predictions and, consequently, questionable policy decisions based on these models.
Moving Towards Improved Models
Future climate models need to integrate high-resolution data and develop robust parameterizations for small-scale processes. As the paper suggests:
“By identifying an overlooked contribution to air-sea heat flux in climate models, we open a promising new direction for addressing biases in climate simulations and thus improving future climate predictions.”
https://www.researchgate.net/publication/380723812_The_Overlooked_Sub-Grid_Air-Sea_Flux_in_Climate_Models
However, it’s crucial to acknowledge that these improvements are not guaranteed to resolve all the inaccuracies in current climate models. While the study highlights a significant oversight, the path to fully accurate climate predictions remains uncertain.
The Need for Comprehensive Parameterizations
Developing comprehensive parameterizations that accurately represent the impact of small-scale heterogeneity in coarse-resolution models is a complex but essential task. This involves not only heat fluxes but also momentum and gas exchanges, which play critical roles in the climate system.
The study emphasizes the importance of accounting for the variability due to sub-grid flows using stochastic approaches, as well as the need for parameterizations that address the impacts of spatial heterogeneity at the air-sea interface. While some parameterizations exist for temporal variability (e.g., gustiness), no comprehensive parameterization currently accounts for all components of spatial heterogeneity.
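Purely as an illustration of what such a stochastic approach might look like (every number and the eddy-activity index below are invented, not taken from the paper), a coarse model could add a mean correction plus state-dependent noise:

```python
# A toy stochastic parameterization of a missing sub-grid flux: a mean
# correction plus noise, both scaled by local mesoscale activity. All
# numbers here are invented for illustration and are not from the paper.
import numpy as np

rng = np.random.default_rng(42)

def subgrid_flux_param(eddy_activity, mean_cooling=4.0, noise_scale=10.0):
    """Stochastic sub-grid flux estimate in W/m^2 (positive = extra ocean cooling).

    eddy_activity: array with values in 0..1, a hypothetical index of local
    mesoscale activity (e.g., something derived from eddy kinetic energy).
    """
    deterministic = mean_cooling * eddy_activity  # stronger where eddies are strong
    stochastic = rng.normal(0.0, 1.0, eddy_activity.shape) * noise_scale * eddy_activity
    return deterministic + stochastic

q_coarse = np.full((90, 180), 20.0)            # a bland coarse-grid flux field, W/m^2
eddy = rng.uniform(0.0, 1.0, q_coarse.shape)   # stand-in eddy-activity map
q_total = q_coarse + subgrid_flux_param(eddy)

print(f"mean sub-grid correction: {(q_total - q_coarse).mean():.2f} W/m^2")
```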
Challenges and Future Directions
While the study provides a crucial step forward, it also acknowledges several limitations. The reliance on high-resolution simulations means that results are sensitive to the resolution and scale of filtering used. Additionally, more work is needed to understand how these small-scale fluxes interact with other processes and influence large-scale circulation and energetics.
Addressing Scale-Dependence
One major challenge is the scale-dependence of the estimated fluxes. The researchers note that while they do not believe the qualitative results of their study would change with different resolutions, building quantitative confidence will require higher-resolution coupled simulations and a thorough investigation of scale-dependence.
Integrating Observations and Models
A promising direction for future research is the integration of high-resolution observational data with model simulations. Upcoming satellite missions, like ODYSEA, and field campaigns conducting high-resolution surveys of the air-sea transition zone could provide valuable data to validate and refine model parameterizations. These efforts could help bridge the gap between high-resolution simulations and coarse-resolution climate models.
Extending the Study to Other Fluxes
While this study focuses on turbulent heat fluxes, the researchers suggest that future work should also consider the effects on momentum and gas fluxes. These fluxes are equally important for understanding the dynamics of the climate system and could reveal additional biases and deficiencies in current models.
Conclusion
The paper by Busecke et al. highlights a significant shortcoming in current climate models, emphasizing the need for greater attention to small-scale air-sea interactions. Addressing this gap is crucial for improving the accuracy of climate predictions and informing more reliable policy decisions. Integrating high-resolution data and refining model parameterizations will be essential steps toward a more accurate and reliable understanding of our changing climate.
In summary, while GCMs have provided a basic framework for understanding climate dynamics, it is imperative to recognize and address their limitations. By incorporating insights from studies like this one, we can develop more robust models that better capture the complexities of the Earth system, leading to more informed and effective climate policies.
The journey towards more accurate climate models is ongoing, and acknowledging the deficiencies in current approaches is a critical step. As we enhance our understanding of small-scale processes and their impacts, we might move closer to developing climate models that can truly reflect the intricacies of the Earth’s climate system. However, it’s essential to remain cautious and critical, as the path to reliable climate predictions is fraught with challenges and uncertainties.
The full pre-print can be accessed here.
H/T Judith Curry and Friends of Science Society, Ken Gregory, Director
Sorry, couldn’t read past that…
Perhaps the “excess heat due to human activities” is extremely trivial, and 90% of that is absorbed in the ocean. So he/she/it could be right, but without believing in any “climate emergency”.
Lol.
Old joke from 1st year Latin.
You took the words out of my mouth! Just because something is true, doesn’t prove that it’s significant. There is NO CLIMATE EMERGENCY!
But also to expublican’s point, that 90% is a mammoth assertion held up only by the buoyancy of the hot air used to express it.
When the oceans represent 71% of the earth’s surface area, why would any global effect be disproportionately acting on them? It might be true, but there should have been a few words devoted to explaining that.
I’d venture a guess that ocean albedo vs land albedo leads to more solar warming on the seas than on land.
The effect of human activities presumably refers to the enhancement of the natural greenhouse effect. That effect is not heating but insulating, that is, reducing cooling.
So if it is accurate that 90% of the heating from the sun occurs in the oceans, then perhaps 90% of the heat retained as a result of reduced cooling from the enhanced greenhouse effect is also occurring in the oceans.
But the amount of warming we have seen, even if it is entirely due to CO2 enhancing the GHE, is much lower than the models predict. The author’s point isn’t to assert catastrophic warming due to human activities. The author’s point is that the models’ resolution is too coarse and cannot account for complex interactions between the oceans and the atmosphere.
With 90% of the effect occurring at sea, it is extremely relevant to better model those interactions to understand the warming effect. But not because the mostly or wholly beneficial warming is a problem. There is NO CLIMATE EMERGENCY!
Hard to see how increased GHG LWIR radiation hitting the “skin” of the ocean surface, at most accelerating evaporation slightly (which could result in slight warming or cooling feedback), is a cause of ocean warming. Especially since any increase in WV results in a decrease in SW radiation penetrating the ocean surface, where energy residence time is far greater than any atmospheric residence time.
Would like to see an engineer-type printout of how this “90 percent of human-activity-caused warming ends up in the oceans” happens.
I, too, paused at that statement. Excess over what? And if the heat is from human activity, then it must be longwave IR, which penetrates only a few microns into ocean water. So how does this absorption work?
Short answer, it doesn’t. So that’s 70% of the “globe” that cannot be “warmed” (or “heated”) by any “back radiation” from CO2.
Maybe some trivial amount right at the surface, but isn’t the average depth of the oceans of this planet some thousands of feet, with much of it near freezing?
That trivial amount at the surface is questionable, as energy is used in the phase transition from liquid to gas; the small increase in LWIR back radiation could only slightly accelerate surface evaporation, and the resulting small w/v increase may well have a negative effect on the SW radiation reaching below the ocean surface, where the residence time is far greater. Difficult to see minuscule increased LWIR warming the oceans.
That’s a misinterpretation of so-called back radiation, RHS. In any radiative heat transfer, the net heat transfer is to the colder body. The (normally) warmer ocean surface can’t be heated by the colder clouds. It cannot cause the surface temperature to RISE.
It is essentially only the sun that heats the ocean (or the land). Back radiation does not warm the ocean (or the land) under most circumstances. (I can’t be absolute in that statement because there are times when the atmosphere is warmer than the surface. Also, for a completely rigorous statement, there is a small amount of geothermal heat escaping from the interior, warming the surface).
What back radiation does is REDUCE the rate of cooling which has the effect that the surface remains warmer overnight, and is thus warmer at sunrise when the sun resumes heating, than it would have been had there been less back radiation.
The natural greenhouse effect exists. Even if earth were a lifeless planet devoid of CO2, the natural GHE would still be maintaining a warmer surface temperature than would be the case in the absence of the primary greenhouse gas, water vapor.
In fact we owe our existence to the natural GHE. To deny that there is such a thing as the GHE is unscientific.
The enhancement of the GHE is also real. But that it is not to ‘admit’ that adding a little CO2 to the atmosphere is dangerous or anything but beneficial. It is certainly beneficial in its impact on plant productivity. Whether it actually leads to significant warming is less clear, because there are emergent phenomena that MAY largely negate the theoretical warming. Or there may be a small warming effect which would certainly add to the benefits of enhancing the life-giving CO2 concentration of the atmosphere.
In any case, there is NO CLIMATE EMERGENCY!
Rich, yes, maybe, yet “What back radiation does is REDUCE the rate of cooling” only applies to the atmosphere, not to the entire land-ocean-atmosphere system; thus an increase of energy in the atmosphere may or may not mean an increase or decrease in the land or oceans.
Residence time is the key (essentially, “back radiation” is increasing atmospheric energy residence time while energy input remains constant), and said LWIR, striking the “skin” of the ocean, expends that energy in accelerating the phase transition from liquid to gas. That small increase in w/v, in an already GHG-saturated atmosphere, further reduces the SW insolation entering the ocean surface (w/v intercepts a portion of SW insolation in the atmosphere above the surface), where the residence time is far greater. SW energy entering the oceans has tremendous residence time, some of it decades long.
All the effects on energy residence time matter. There are only two ways to alter the energy within a “system” in a radiative balance. Either change the input, or change the residence time of energy entering the system.
Our earth “system” contains the energy in the land, oceans, and atmosphere. And yes, I like that you included geothermal. Although the input is very small, the residence time is very long. Some of the geothermal heat that flows into the oceans has a residence time measured in centuries. It would be interesting to know how much geothermal energy is in the oceans from this minor heat flow, some of which has been accumulating for centuries.
One can only guess at what the author was alluding to.
For example, when we burn fossil fuels, most of the energy is lost as heat. Or perhaps ‘excess heat’ means any heat created by human activities.
Let’s put that “excess heat due to human activities” into its proper perspective:
Based on an average of 19.6 TW of continuous power used by mankind in 2021 (all forms of energy equivalent; see https://en.wikipedia.org/wiki/World_energy_supply_and_consumption) versus average TOA solar insolation of 1361 W/m^2 and an albedo of 0.30, the Earth actually receives roughly 6,200 times as much power from the Sun as that produced by mankind.
At a ratio of 6,200:1, we have the end result that human production of energy, ending up as waste heat in the environment, is insignificant (<0.02%) compared to the energy Earth receives from the Sun.
Bottom line: it is absurd to think that “excess heat due to human activities” has ANY significant effect on Earth’s climate.
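For what it’s worth, the arithmetic above checks out. A quick sketch, assuming only the quoted numbers (19.6 TW, 1361 W/m^2, albedo 0.30) plus the standard mean Earth radius:

```python
# Back-of-envelope check of the 6,200:1 figure, assuming the numbers quoted
# above: 19.6 TW of human primary power, 1361 W/m^2 TOA insolation, albedo 0.30.
import math

R_EARTH = 6.371e6                     # mean Earth radius, m
AREA = 4 * math.pi * R_EARTH**2       # Earth's surface area, m^2

TOA = 1361.0                          # top-of-atmosphere insolation, W/m^2
ALBEDO = 0.30
HUMAN_POWER = 19.6e12                 # total human energy use, W (2021)

# Earth intercepts sunlight over its cross-section (pi*R^2 = AREA/4)
# and reflects 30% of it back to space.
solar_absorbed = TOA / 4 * (1 - ALBEDO) * AREA

print(f"solar power absorbed: {solar_absorbed / 1e12:,.0f} TW")
print(f"ratio to human power: {solar_absorbed / HUMAN_POWER:,.0f} : 1")
# -> roughly 121,000 TW and ~6,200:1, i.e. human waste heat < 0.02% of solar input
```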
That’s a mistaken premise. Of course it’s true that the heat released by human activity is insignificant compared to insolation.
There is no alarmist loon claiming that the heat released during the use of fossil fuels is a source of any significant warming of the entire planet. The claim is that the enhancement of the natural greenhouse effect traps a portion of the solar energy far in excess of the sensible heat generated by combustion.
Having said that, the heat released during the use of fossil fuels is NOT completely irrelevant. It is the primary source of the urban heat island effect. And the UHI effect is arguably a primary reason why the alarmists have their panties in a bunch. They are misinterpreting UHI to be the effect of the enhanced GHE. (Of course some of the alarmists are probably intentionally misinterpreting and denying the true effects of UHI on the temperature record because of their political agenda).
Rich, here is a direct quote from the above article’s second paragraph:
“The ocean absorbs about 90% of the excess heat due to human activities, playing a central role in global climate dynamics.”
Of course, the “heat released during the use of fossil fuels” is less than the total heat released by all of mankind’s activities over any given time period.
What else can I say?
Well, yes, air-sea flux is complicated, and probably improvements can be made. But I’d note that this paper is not yet peer-reviewed, and it is not clear that the authors really do know much about GCMs or how it is currently done. The lead author is a post-doc at Geosciences at Pronceton, and the second is a junior at the Courant Institute at NYU, excellent in maths, but not known for climate research. They may be on to something, but I’d wait until it is actually published.
NS, before you comment, you really should check your facts. For your neck of the woods for CMIP6, see research.CSIRO.AU. The published CMIP required submissions have been out there publicly for many years.
I know the facts. If you know of such a requirement, please link or quote.
You “think” you know the facts..
A very different thing !!
The CSIRO model is considered one of the very worst models out there.
CSIRO is science made to order.
NO ! CSIRO is mostly NOT science.
In the past, yes, but they have been taken over by the anti-science of the greenie cabal.
Sorry about late reply. Significant other is having a medical emergency, so I am spending much time in her hospital ICU.
You do not know the facts, Nick. I do. Below is the educating reference you requested and could have found yourself. It took me five minutes using googlefu.
The CMIP6 experimental design is in a long paper by Eyring et al., available free at http://www.geosci-models-dev.net/9/1937/2016. The key mandatory ‘Core’ is the DECK of 4 ‘experiments’: (a) ‘amip’, (b) pre-industrial control, (c) abrupt 4xCO2 [ECS], (d) 1%/yr CO2 to 2x [TCR]. The first, ‘amip’, is specified on line 1 of Table 2. It requires a historical hindcast from a start in 1979 to an end in 2014: 35 years, up from 30 in CMIP4 and 5.
Hi Rud,
Thanks for your response in difficult circumstances – best wishes for your partner’s recovery.
The link didn’t work for me, but I think you are talking about this paper, with its Table 2.
It only requires that they do a simulation over those years. It doesn’t say anything about tuning. Tuning is not mentioned.
Thanks, Nick, for your kind reply under my difficult circumstances. Your unarticulated problem is that if the models did not ‘tune’ parameters, they would produce results that diverge wildly from reality. The example in my old ‘Troubles with Climate Models’ post was a divergent double ITCZ after a few days, when we know none exists in reality, ever. So models are tuned to Hadley cell reality.
Sorry link failed. You found it ok.
It didn’t work for me either, Rud.
So they have absolutely ZERO idea of any credibility or validation.
All based on the anti-science of CO2 forcing.
Just mindless, assumption driven computer games.
…. and not remotely related to any actual real science.
Thanks for pointing that out, Nick!!
“Tuning is not mentioned.”
And yet they use “historic” data, and a “pre-industrial control”, which one assumes they try to match, or TUNE, to!!
Just because they don’t mention “tuning” as such, doesn’t mean it isn’t implicit in the modelling proposed.
Just curious if you’re as nitpicky and critical of climate studies by authors with backgrounds in such critical thinking areas as communication, science journalism, geography, sociology, etc. I would guess more than a few of them don’t “really do know much about GCMs or how it is currently done.”
‘The lead author is a post-doc at Geosciences at Pronceton, …’
Good thing the lead author is a guy from some podunk school no one’s heard of. Now, if it was the usual alarmism out of, say, GFDL at Princeton, we’d give their findings a lot more credence.
/s
Well, he’s not from GFDL. Actually, according to his home page, he left Princeton in 2020. He says he is currently “AFFILIATE GRADUATE FACULTY UNIVERSITY OF HAWAII”.
And Nick is a clueless hack from central Victoria.
It is doubtful he knows anything except nit-picking.
‘Well, he’s not from GFDL.’
I didn’t say he was. And does it really matter what year he left Pronceton (sic)?
The question is, what do we know? We have an anonymous review of an unpublished paper. I’m sure no-one here has read it (I haven’t either; you have to request it from the author). So it all comes down to the reputation of the author.
As I said, I’m open to the idea that the authors are onto something. But I’ll await actual publication.
“The question is, what do we know?”
What do you know??
Basically nothing !!
I notice that you predictably ignored Phil R’s question.
We have NEVER seen you cast doubt on ANY alarmist attribution study based on it not having been performed by a quote-unquote “real” climate scientist.
Why is that Nick?
Glad you added the /s.
I had drafted a reply that went –
“what t.f. does it matter WHO presents a hypothesis, it only matters WHAT that hypothesis poses”
On 2nd thought, I’ll leave my reply here as drafted.
A reply you should direct to Nick.
Yes.
It is PAL review for climate “science” papers.
“Climate science” pal-review for journal publication is totally meaningless from any real scientific stand-point.
Considering the very different outcomes projected by the GCMs, it is not clear that the GCM authors really do know much about GCMs or how it is currently done.
That reads like an ad hom. Let’s discuss the science, no matter who does it. Incidentally, I’m very happy with “excellent in maths”.
But how are you going to discuss the science? No-one seems to be inclined to read the paper, which isn’t easy to get.
Reputable journals are there for a purpose. Their reviewers should know whether the suggestion is novel, whether it is really not done in the GCMs etc. No-one here does.
Good question. My take – and others may disagree of course – is that the minutiae as per this paper are very interesting for understanding Earth’s processes but useless for predicting climate because climate cannot be predicted deterministically. Here’s my formal contribution:
https://wjarr.com/content/general-circulation-models-cannot-predict-climate
Nick, you place far too much value on “peer review” as it is practiced today.
Then too, a mere patent clerk authored four incredibly important scientific papers in 1905: on the photoelectric effect, on Brownian motion, on a new (special) theory of relativity, and on the equivalence of mass and energy. None of these were subjected to peer review.
Einstein’s papers were published in a leading physics journal, Annalen der Physik. The editor for his papers was Professor Drude.
Climate science is not scientific “peer-review” it is pal-review for journal publication.
Thank you for the information. However, that does not meet the criteria required today for what is called “peer reviewed.” I have had a number of people tell me that they won’t read anything published here on WUWT, despite it being vetted by Charles (and read by the likes of yourself). Charles isn’t exactly Professor Drude, but he does have the same control over what gets published as Drude did. In those days, the act of publication itself resulted in peer review, which was what it was all about! Albeit, some 100 physicists were probably quite unhappy with Professor Drude taking a chance on the young upstart Einstein. In reality, ‘peer review’ today amounts to a gatekeeping function to maintain the reputation of the journals. While they weed out the wackos, they probably also miss out on some significant research that the paradigm promoters aren’t ready for.
A journal having an editor is not at all the same thing as a journal having a peer review process… don’t you understand that simple fact?
“Albert Einstein only had one anonymous peer review in his career — and the paper was rejected. This happened in 1936. A decade and a half earlier in 1905, Einstein’s annus mirabilis (remarkable year), he had published four breathtaking papers. One introduced the world to special relativity.”
— https://mindmatters.ai/2020/05/einsteins-only-rejected-paper/
Otherwise, thank you for that bit of obscure trivia.
Another paper showing the hopelessness of climate models. To actually model ‘the physics’, grids need to be no more than about 4 km per side at the equator (the big issue is the Navier-Stokes physics of convection cells). See old post ‘The Trouble with Climate Models’ for illustrations. The finest resolution in CMIP6 is 100 km. The difference is caused by the CFL theorem’s mathematical constraint on numerical solutions to partial differential equations. UCAR says doubling resolution by halving grid size causes computation to go up ~10x.
That is a computational gap of several orders of magnitude, when a typical 180 km/side CMIP6 model takes about 2 months to run out to 2100 on the best supercomputers. At a 4 km ‘physics grid’ we would go through several ice ages before the model completed one run.
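A back-of-envelope version of that gap, assuming only the UCAR ~10x-per-halving rule of thumb quoted above:

```python
# Rough sketch of the resolution gap: going from a 100 km grid to a ~4 km
# 'physics' grid, assuming UCAR's rule of thumb that each halving of grid
# spacing multiplies the computation by ~10x.
import math

current_km, target_km = 100.0, 4.0
halvings = math.log2(current_km / target_km)  # grid-halvings needed: log2(25) ~ 4.6
cost_factor = 10.0 ** halvings                # ~10x cost per halving

run_months = 2.0                              # one run out to 2100 on today's machines
print(f"cost multiplier: ~{cost_factor:,.0f}x")
print(f"one run would take ~{run_months * cost_factor / 12:,.0f} years")
# -> tens of thousands of times the cost; a single run stretching into millennia
```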
So, as this post proposes concerning better air-sea interfaces, the models have to be parameterized. CMIP requires they be tuned to best hindcast 30 years. There are two basic ways to tune. The problem with both is they drag in the attribution problem concerning natural variation. We know it exists, because even AR4 WG1 SPM fig. 4 said the warming from 1920-1945, indistinguishable from the warming from 1975-2000, was natural: officially, per IPCC, there was just not enough change in CO2 ‘forcing’ for any other explanation. The model parameter tuning problem is that natural variation did not magically stop in 1975; the climate thereafter is NOT just a CO2 control knob. Assuming it is causes climate models to run provably hot.
“CMIP requires they be tuned to best hindcast 30 years. “
Just untrue.
“the models have to be parameterized”
Surfaces always have to be “parameterized”, even in engineering CFD, where they are called wall models. They work very well.
Trouble is, there is no worthwhile real temperature data to hindcast to.
Parameters… mean they can make the model give the answers they want it to give.
The whole mess is completely and utterly meaningless on a scientific basis.
Nevertheless……
See above. I provided the specific CMIP6 requirements you requested.
At issue is that the main parameterization concerns the energy exchanges involving clouds. The parameterization involves subjective simplification of the processes and energy estimates. So most of the modeling is doing real physics while the clouds are a best guess. That is like E = mc^2 +/- epsilon, where epsilon is a best guess.
I think there are many unknowns besides clouds. Just one example: we do not have a precise understanding of the residence time of disparate SW radiation entering the oceans below the surface, where the residence time is far greater than any atmospheric residence time. In effect the oceans are a liquid GHG, in that they hold at least 1000 times the energy of the atmosphere and greatly increase the residence time of solar insolation.
Therefore we do not know the effect of disparate 11-year solar cycles, where the insolation wavelength changes more than the total TSI flux. We do not know the total energy-system (earth, water, atmosphere) effect of increased atmospheric w/v, even clear-sky, where said w/v prevents disparate SWR from reaching and entering the oceans. There are so many unknowns that they overwhelm a flawed system that is worthless just on the coarse oversimplification of what we do know. And all of those shortcomings rest on a deeply flawed historic record that has been significantly altered. Climate science today is not science.
“Another paper showing the hopelessness of climate models.”
There you have it. It really is just that plain to see.
The CESM model uses a temperature-independent latent heat of vaporization. That overestimates energy transfer by evaporation from tropical seas by 3%. For perspective, a 3% error in the absolute temperature of tropical seas would be 9 kelvin, or 16 degrees F.
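As a rough check on that 3% figure, here is a sketch using a standard linear approximation for the latent heat of vaporization of water; the fit below is a textbook approximation, not CESM’s actual code:

```python
# Quick check of the ~3% figure: the latent heat of vaporization of water
# falls with temperature. The linear fit below is a standard meteorological
# approximation, not anything taken from CESM itself.
L0 = 2.501e6          # J/kg at 0 C, the kind of constant a model might fix

def latent_heat(t_celsius):
    """Approximate latent heat of vaporization of water, J/kg (linear fit)."""
    return 2.501e6 - 2361.0 * t_celsius

t_tropical = 30.0     # warm tropical sea surface, C
err = (L0 - latent_heat(t_tropical)) / latent_heat(t_tropical)

print(f"L(0 C)  = {L0:.4g} J/kg")
print(f"L(30 C) = {latent_heat(t_tropical):.4g} J/kg")
print(f"using the 0 C constant at 30 C overstates evaporative flux by {err:.1%}")
# -> about 2.9%, consistent with the ~3% claim above
```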
Another problem with (most?) models is their use of latitude-longitude grid. As Rud notes, at the equator the grid “width” is some 4 km (2.5 miles). Near the pole, the width goes under 8 feet. Still, the supercomputer spends the same time for each grid cell. With billions spent for climate models, the failure to design a better grid is just amazing.
You mis-read Rud. He said they should be 4 km at the equator. They are in fact 100km.
“Another problem with (most?) models is their use of latitude-longitude grid.”
Not true. GFDL in 2007 switched to a cubed-sphere grid, which does not have this issue. Most have changed; some use the even better icosahedral grid.
Details here
Nick’s got a new box of finger paints… whoopy!!
It isn’t mine. I have much better colors.
More meaningless petty colours from Nick.
He has finally found his niche in life. !
It’s also impossible; as I understand it, you cannot construct a sphere solely from hexagons (see the Stand-up Maths soccer ball fiasco).
Look carefully. There are 12 pentagons and the rest are hexagons.
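As an aside on the grid geometry, Euler’s polyhedron formula is why it is exactly 12 pentagons, no matter how many hexagons are added. A tiny symbolic check (assuming, as on a soccer ball, that three faces meet at every vertex):

```python
# Why exactly 12 pentagons? For a sphere tiled by P pentagons and H hexagons
# with three faces meeting at each vertex (the soccer-ball pattern), Euler's
# formula V - E + F = 2 forces P = 12, regardless of H.
import sympy as sp

P, H = sp.symbols('P H', positive=True)
V = (5*P + 6*H) / 3        # each vertex shared by 3 faces
E = (5*P + 6*H) / 2        # each edge shared by 2 faces
F = P + H

solution = sp.solve(sp.Eq(V - E + F, 2), P)
print(solution)            # -> [12]: H drops out entirely
```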
Ooooooo.. very pretty !!
But totally meaningless. !
The proof is in the pudding. Models run hot. It is obvious that changing the grid shapes has not solved that problem. It is just more evidence of job security with no penalties for failing.
You are correct. Simulations require sufficient mesh density, else they return false results, or gibberish. I worked for a decade doing simulations with high-end software and hardware, 2D and 3D, in two different fields: one electromagnetics, designing and then bench-testing motors and generators, and another designing a vertical-axis wind turbine using aerodynamic simulation software. In both of these fields, the watch phrase is “it’s the mesh, stupid.” If you do not have sufficient mesh density, you will not be able to attain results that reflect reality. But unlike climate astrology models, the sims I worked with were actually tested on the bench, in reality, and in the case of the wind turbine, in a field at full scale after some bench models. So you could check and correct the sim with feedback from actual data. In the early days we had all kinds of false eureka moments, only to discover the false positives were in fact due to insufficient mesh density. Once finer mesh was used, the false results stopped and the sim results were then verified on the bench.
So if you cannot make the grid or mesh density fine enough for the climate simulations, you CANNOT trust their results, no matter how many matholes derive excuses or fudge factors to try to adjust for the fact that you cannot have fine enough resolution. (Note: “mathole” is meant to be pejorative, describing persons who believe math supersedes reality, when in fact math is a useful tool to represent reality.)
I personally doubt even a 4 km grid is fine enough mesh to accurately model the global climate. Huge heat engines rapidly form and seriously affect the climate on scales much smaller than 4 km x 4 km, such as the daily thunderstorms in the tropics and subtropics.
At last – a first-hand report from the front lines of the real world!
While the climate “generals” in their bunkers move phantom brigades of “data” around on their maps / models (as Adolf did in his last stanza), the realists like D Boss at the front lines of discovery are painstakingly improving their positions and lines of fire.
If only the climate front-liners could replace the “generals”, the whole climate “war” would soon come to be seen in the same light as Adolf’s pointless global dominance folly.
Story tip: Climate Change Concerns Dip
https://wattsupwiththat.com/2024/05/09/a-dip-in-climate-change-concerns-latest-monmouth-university-poll/
oh, sorry! I should have done due diligence and checked.
np
That’s a depressingly high number of propagandized youth; in effect not all that much lower than the percentages for the Young Pioneers or Hitler Youth, especially considering that membership in these was compulsory.
“high-resolution coupled climate simulations”
Well, heck, might as well go for super duper, extreme high-resolution climate simulations- then you’ll know everything we need to know!
We are living in the bullshitopocene.
“The study underscores the need for GCMs to incorporate parameterizations that account for small-scale heterogeneity.”
uh… I think that long, fancy word means… fudge factors?
Better unavoidable parameterizations are a desirable model path forward.
The only CMIP6 model that does NOT produce a spurious tropical troposphere hotspot is INM-CM5. INM-CM5 parameterized its ocean rainfall to what ARGO showed! They published a paper explaining what they did, why, and their convincing model result.
And their results were ignored by those whose jobs depended on their models being wrong and needing further refinements.
Words can affect our perceptions. That is why liberals are busy re-defining words long in use, or inventing new words. The problem is, people soon learn to associate the new word with the same negative characteristics that prompted the re-naming, and a new word has to be thought of.
“The journey towards more accurate climate models is ongoing”.
How can they know that a new model is better than the old one? Does a “better model” mean climate science has advanced?
No, climate science is settled.
Oh, right; otherwise, no sane person would want to spend hundreds of trillions to fix the problem. I shoulda realized that. /s
Yes, so we’ve been told over and over by our “betters”.
So you and all of us no longer need to be Curious, George.
🙂
I’m afraid it means they will provide higher resolution garbage.
The need to consider regional impacts is perhaps illustrated by the situation with Australia’s BOM late last year forecasting an El Niño, sending livestock markets into a dive as farmers began to reduce stock numbers in anticipation of impending dry or drought conditions.
The problem was that at that particular time SSTs in the waters surrounding Australia were typical of a La Niña and, as eventuated, the increased evaporation brought rain instead of drought.
Now BOM are expecting a La Niña to develop.
Ian Holton, a long-time BOM employee, left their employ in the late 1990s feeling the BOM had been placing too much emphasis on El Niño at the expense of local conditions. He set up his own successful forecasting service that included data from all the waters surrounding Australia and found a market mainly with the agricultural sector. There are now other forecasters who take a similar approach; Ian Holton has since retired, but it seems that BOM is still lagging.
Outfits like BoM all around the world are locked into “agenda forecasting”.
None of them are ever going to put out anything that can’t be aligned with the established “mankind changing climate” narrative.
The IPCC models don’t take into account the Sun, where 99+ percent of the Earth’s heat comes from, and its variability; the clouds that are influenced by the Sun and can reflect up to 90 percent of the solar energy; or the oceans, where almost all the heat is stored.
They would probably get a better temperature forecast by using only the number of people who live in cities.
hmmmmm…. me, dumb woodsman, just trying to learn duh climate science thing— hmmm… they ignore the sun, that thing out there a million times larger than the Earth, where all our energy ultimately came from and continues to come from, they can’t model clouds and have minimal understanding of the oceans—- but they say “the science is settled”- me thinks there’s something wrong here- must be that all we deplorables are just lacking in “the faith”.
Are the UN/IPCC models public so independent organizations can check them?
Just who is going to be willing to wade through millions of lines of parallelized Fortran code and buy a supercomputer to run it on?
“The current generation of models, as used in the Coupled Model Intercomparison Project (CMIP), exhibits substantial biases that have led to inaccurate predictions”
Poor understanding of the purpose of climate models, which I call climate confuser games.
The purpose of the confuser games is to support the 1979 Charney Report consensus that global warming will average +0.3 degrees C. per decade for hundreds of years (aka climate scaremongering).
So far, actual warming since 1975 has averaged about +0.2 degrees C. per decade for surface statistics and +0.15 degrees C. per decade for UAH satellite statistics.
But since 2006, the global warming has averaged about +0.3 degrees C. per decade (even higher in the US per USCRN at +0.34 degrees C. per decade).
The main problem with multi-hundred-year climate predictions is you can’t tell if they are right or wrong in just the first 48 years since the 1979 consensus was published.
Predictions for the first 48 years are not far enough from reality to declare the current climate models worthless. The actual warming has been within the 1979 range of +0.15 to +0.45 degrees C. per decade.
We could claim this was a lucky guess from publishing such a wide range.
A better argument would be that the climate in 100 years is not predictable even with better models.
The rate of global warming since 1975 is faster than expected from 100% natural causes and unusually fast for a 50-year period when compared with 50-year periods in the ice core era.
The Honest Climate Science and Energy Blog: People in the US have been living with a “catastrophic” warming rate since 2005
Models predict what their owners want predicted
Their owners face strong political pressures to predict a high rate of warming blamed on humans
No internal programming changes will reverse that desired prediction.
The Russians have the only model that gives the illusion of trying to make accurate predictions because the Russians are not politically correct.
While I am pleased to see evidence of the inadequacy of climate models, I am disappointed that the authors didn’t come out and say climate models are not suited as a guide for policy decisions. There may or may not be a place for climate models, but not for making policy decisions so impactful to the nation and its population.
>> while GCMs have provided a basic framework for understanding climate dynamics
Repeating a claim over and over does not make it true all of a sudden!
It seems that quite a lot of anti-science was justified with GCM results.
Maybe they are really worth nothing and J. Vinós’ gatekeeper hypothesis is closer to the truth.
My point is you don’t know that, and no one else knows it either, so your claim is not correct!
A couple of decades ago my father was living on a boat and doing some research. He found that humidity levels inside a ripple were about three times higher than at the same distance above the ripple.
Higher in the u.
Lower over the n.
The impact of ripples on seas, ponds, lakes and puddles is not modelled at all.
They just don’t work.
See also clouds.
If you look at a road map at a scale of three miles to the inch, the detail of the roads is much less useful than on a map at a scale of one inch to the mile. This rule is true of all attempts to resolve something at scale, whatever that scale may be.
A computer becomes more helpless and hopeless as resolutions become finer, until it reaches a point where it cannot resolve detail any more clearly. It is the kind of problem tackled when large-scale detail had to be resolved to smaller-scale photographic precision to produce smaller computer chips without cooling issues or problems (e.g. explosions).
To my mind, climate science has readily misled itself into doing stuff which simply cannot be replicated without a much better understanding of what it is attempting to get a computer to emulate, and why certain things may be well beyond available technology, period.
Analogue music recording is still supremely better quality and reproduction than any digital representation of the same simply because nothing is lost in the technique involved. Digital sampling loses whole hunks of stuff which, whilst tolerable over radio or the internet, are not as involving as live analogue sound. As our ears can tell the difference in sound quality, so can our brains compute the dangers of trying to make computers do stuff they just cannot manage without us thoroughly understanding what it is we are asking them to do. If we know that, then the output is either useful or useless, and we know the answer to that already with climate models.
When I was learning about computers this was the single most important takeaway I had. As Einstein said about explaining something you understand to a very young person: if you cannot make it simple, then you do not understand it yourself, period.
“Analogue music recording is still supremely better quality and reproduction than any digital representation of the same simply because nothing is lost in the technique involved.”
As an audiophile since 1965 who has participated in four double-blind audio component tests conducted by audio engineers and published in audio journals, I declare that you have no idea what you are talking about.
Audiophiles in general have strong biases and often imagine hearing things they can never prove they are really hearing under blind conditions.
The compact disc provides easily audible, better reproduction of the master tape from the recording studio. One double-blind test compared a record and a CD with their source (the original analog master tape) at a recording studio.
If a master tape was digital, the use of an analog vinyl record is meaningless. Most releases from the last 15-20 years have been mastered from digital, as the main mastering format since the late ’90s has been DAT (and lately straight files), even when the music was first recorded on reels. The mixdown was mostly done on DAT. Exceptions apply.
There are more audiophile nutters than climate nutters, in my many decades of experience.
“Hear – hear” – so to speak. Proper digital data are far better than even the best analog recordings. I threw out all of my LPs decades ago, including some now-rare original Beatles albums. They were worn out, mainly from ‘playing’ them on a low-tech ’50s turntable. My hearing is also ‘worn out’, profoundly, likely helped along by listening to them with the volume too high. Now my granddaughter is into vinyl recordings, because they “sound better”, even though she’s also using a low-tech turntable. Cults are hard to avoid at that age. Cults can exist at any age. I disagree, though, that there are more audio nutters than climate nutters. Climate mal-education is pervasive in the entire K-through-postdoc education community.
My grandson informed me that he is saving up to buy a vinyl edition of his favourite group.
I was encouraged by this development.
Until I learned it was a Rapper.
Sacrilege!
Rapper? Low-tech!
Wait for it.
I have actually worked in audio studios, also did live music mixing for many years.
Also what could be called an audiophile… and…
I agree with RG!
Vinyl does have a certain “character”, which some people find preferable, but that doesn’t mean the sound quality is better.
I have some 400 LPs which I sometimes listen to on reasonably expensive turntables with expensive cartridges, but I generally prefer the music from a quality CD player.
Nobody that I know of talks about fractal dimensions with respect to climatology.
Can the models ever be better than a crude approximation?
“The study underscores the need for GCMs to incorporate parameterizations that account for small-scale heterogeneity.” Pie in the sky. You can fix any number of these defects, but in the end the GCMs will still not be able to get the climate right. The real problem is that the GCMs are deterministic, and a deterministic model can never work. Edward Lorenz, in 1969 (yes, that long ago), demonstrated that a deterministic system could be “observationally indistinguishable” from a non-deterministic one in terms of predictability. Earth’s climate is one of those systems, and a GCM can no more predict the climate a few years ahead than a weather model can predict the weather a fortnight ahead. And we don’t even know for sure that Earth’s climate is deterministic.
Actually, a GCM hasn’t a hope of getting as far as a few years, but I was playing safe.
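Lorenz’s point is easy to demonstrate. A minimal sketch using his classic 1963 system with the standard textbook parameters (nothing here is from the paper under review):

```python
# A deterministic system whose forecasts are destroyed by arbitrarily small
# initial-condition errors: the Lorenz (1963) equations, standard parameters.
import numpy as np

def lorenz63_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 equations."""
    x, y, z = state
    return state + dt * np.array([sigma * (y - x),
                                  x * (rho - z) - y,
                                  x * y - beta * z])

a = np.array([1.0, 1.0, 20.0])        # reference trajectory
b = a + np.array([1e-9, 0.0, 0.0])    # perturbed by one part in a billion

for step in range(3001):
    if step % 500 == 0:
        print(f"t = {step * 0.01:5.2f}   separation = {np.linalg.norm(a - b):.3e}")
    a = lorenz63_step(a)
    b = lorenz63_step(b)
# The separation grows by ~10 orders of magnitude: a perfect deterministic
# model with imperfect initial data still loses all predictive skill.
```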
The climate is very complex and we are far from understanding what factors are involved or how the components interact. There is no evidence CO2 is a control switch.
If a butterfly using its wings in Argentina can affect the weather in London (please allow some artistic license), then imagine what a cloud in the Philippines can do to weather in New York City!
One must ask oneself several questions. Are we measuring enough items to allow for accurate computations? Are we measuring in enough detail? Are we measuring with sufficient precision? Can one ever use Tavg to adequately assess what is occurring? Why are temperatures measured at Tmax and Tmin and then averaged to a “global” value? Why not have a simple “Tglobal” measured at the same time everywhere, say 0000, 0600, 1200, 1800 GMT?
It seems to me that the grid scale problem is even more cloudy (sorry for the bad pun) for managing clouds than for heat/mass/momentum transfer at the sea/ocean interface.
I also often fall back to my (many decades ago – the computers were wood-fired) modeling experience – for chemical separation processes (flash and distillation). The Buckingham-Pi theorem specified the proper dimension (basic property of the vector space) for those separation problem models.
I posit that we don’t really know the dimension of the problem for modeling the entire climate system. Why is that important? If the models are “over-specified” (they employ just one too many independent variables) then the models will hindcast spectacularly, but be totally useless for prediction purposes.
Shoot me down if I’m somehow kidding myself, though I have additional technical issues with the whole climate-modeling idea.
The implication of climate models with their diverging projections is that at best only one is right; if not, then all are wrong.
Even if they are all ‘wrong,’ there may still be one best model. However, averaging that one best model with all the other models with less skill will result in a nominal ensemble that has less skill than the best — but wrong — model.
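A toy illustration of that point, with invented numbers:

```python
# If most models share a warm bias, averaging them with the single best model
# degrades the ensemble relative to that best member. The 'truth' and model
# values below are invented purely for illustration.
import numpy as np

truth = 1.5                                   # pretend observed trend
models = np.array([1.6, 2.9, 3.1, 2.7, 3.3])  # one good model, four running hot

errors = np.abs(models - truth)
best = models[np.argmin(errors)]
ensemble_mean = models.mean()

print(f"best single model error: {abs(best - truth):.2f}")
print(f"ensemble-mean error:     {abs(ensemble_mean - truth):.2f}")
# -> the multi-model mean is worse than the best member when biases align
```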
Very true, so then would comparing a nominal ensemble with the one best model equate to comparing the “97% consensus” with the 3% outsiders?
In the words of Einstein, “Why 100 authors when it would only take one to prove me wrong?”
Improved parameterizations can help, but that’s secondary to the bigger problem: none of the climate models can replicate past climate change.
Not mentioned specifically in the above article, but a direct result of its conclusions:
The false claim that a ‘global average” for lower atmosphere temperature (GLAT), independent of sea surface temperature variability, is a realistic climate metric representing Earth’s energy balance.
The energy exchange ratio between atmosphere and ocean surface likely varies temporally and spatially by several orders of magnitude, depending on many complex variables (local wind velocity, local sea state, local air-sea temperature difference, local rainfall, etc., etc.)