Guest Post by Willis Eschenbach
Well, since I was on a roll with my last post Rainergy, I thought I’d look further at the Copernicus global rainfall dataset. I started by looking at the change in global rainfall over time.

Figure 1. Global monthly rainfall, 1979 – 2022.
Well, that’s interesting. Overall, despite endless hype about increasing floods, there’s no significant trend in rainfall. The main feature is the dropoff in rain from the 2016 peak. Being curious about that drop, I thought I might look at the hemispheres separately to see where it’s happening. Here’s that data:

Figure 2. CEEMD smooths, northern and southern hemisphere monthly rainfall
Zowie, sez I … do you see what I see?
The two hemispheres are basically mirror images! When one is wetter, the other is drier, and vice versa. And as to why that would be, my only guess is that it’s from the very rainy Intertropical Convergence Zone (ITCZ) wandering above and below the Equator. Other than that, I fear I have no answer except for the quote below.

Figure 3. Quote from Richard Feynman, one of the most outstanding physicists of our time
Seeing the inverse relationship between the northern and the southern hemispheres made me wonder how well the models managed to hindcast the rainfall over the same period, and whether the models found the same mirroring of the NH and SH. For example, in the real world the northern hemisphere (blue line in Figure 2 above) is wetter than the southern (red line) … do the models find this difference?
So I went to the marvelous KNMI website and got the CMIP6 model average data. And when I graphed it up, my eyebrows went up to my hairline and I busted out laughing …

Figure 4. CEEMD smooths of modeled hemispheric rainfall, CMIP6 model average. This model average is created by first averaging all of the model runs of each model, and then averaging the model averages. This is to prevent overweighting the models with lots of runs.
I am totally gobsmacked. I don’t know what I expected, but it sure wasn’t this … in total contradiction to the real-world observations, in modelworld the southern hemisphere is wetter than the north, the northern hemisphere is getting much wetter over time, total annual rainfall is about 75 mm (3 inches), or about 8%, too large, and there’s no mirroring …
But wait, as they say on TV, there’s more! Here’s the CMIP6 SSP245 model average global rainfall from 1850 to 2100. It is hindcasting using real data up to 2014, and forecasting after that.


Figure 5. Modeled global rainfall, CMIP6 model average, SSP245 scenario. The graphs are taken directly from the KNMI website. Upper panel is full data, lower panel shows residual after removing seasonal variations. This CMIP6 model average is created by first averaging all of the model runs of each model, and then averaging the model averages. This is to prevent overweighting the models with lots of runs.
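The two-stage averaging described in the caption is easy to sketch. Here is a minimal Python illustration with made-up run counts (the model names and numbers are hypothetical, not the actual CMIP6 ensemble):

```python
import numpy as np

# Hypothetical example: three models with different numbers of runs.
# Each run is a time series of rainfall (here, 5 time steps).
runs_by_model = {
    "model_A": np.array([[80.0, 81, 82, 83, 84]]),         # 1 run
    "model_B": np.array([[90.0, 91, 92, 93, 94],
                         [88.0, 89, 90, 91, 92]]),         # 2 runs
    "model_C": np.array([[70.0, 71, 72, 73, 74]] * 5),     # 5 runs
}

# Stage 1: average the runs within each model.
per_model_means = {m: r.mean(axis=0) for m, r in runs_by_model.items()}

# Stage 2: average the per-model means, so model_C's five runs
# count no more than model_A's single run.
ensemble_mean = np.mean(list(per_model_means.values()), axis=0)

# A naive pooled average over all eight runs would overweight model_C.
naive_mean = np.concatenate(list(runs_by_model.values())).mean(axis=0)

print(ensemble_mean[0], naive_mean[0])
```

With one, two, and five runs per model, the two-stage mean weights each model equally, while the naive pooled mean lets the five-run model dominate.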
Seriously? Does that look real to anyone?
And there’s another oddity. Recall from my post Rainergy that evaporating water to create rainfall cools the surface. The modeled rainfall shown above claims that by 2100, the rainfall will have increased from the 20th Century average by ~60 mm. The evaporation necessary to produce this increased rainfall would cool the surface by an additional 4.8 W/m2 … which per IPCC calculations would offset the theoretical increase in forcing resulting from CO2 increasing from 400 ppmv to 980 ppmv.
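As a sanity check on that 4.8 W/m2 figure, the arithmetic is simple: evaporating 60 mm of extra water per year over each square meter takes a fixed amount of latent heat. A quick sketch (assuming a latent heat of vaporization of ~2.5 MJ/kg; the exact value varies a little with temperature):

```python
# Back-of-envelope check: surface cooling from evaporating the extra rainfall.
# Assumes latent heat of vaporization L ~ 2.5e6 J/kg (varies slightly with temperature).

extra_rain_mm_per_year = 60.0          # modeled increase by 2100 vs 20th-century mean
water_density = 1000.0                 # kg/m^3
latent_heat = 2.5e6                    # J/kg
seconds_per_year = 365.25 * 24 * 3600  # ~3.156e7 s

# 60 mm of water over 1 m^2 is 60 kg; evaporating it takes 60 * L joules per year.
mass_per_m2 = extra_rain_mm_per_year / 1000.0 * water_density   # kg/m^2/yr
cooling_w_per_m2 = mass_per_m2 * latent_heat / seconds_per_year

print(f"Evaporative cooling: {cooling_w_per_m2:.1f} W/m^2")  # ~4.8 W/m^2
```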
Right … that’s totally believable …
These are the Tinkertoy™ models that our noble climate cognoscenti are using to predict the climate in the year 2100? We’re abandoning the world’s reliable energy sources based on these ludicrous models??? …
Madness. Tragic madness.
I fear that’s all for today. Although I’m sure that there’s more to be learned from the Copernicus rainfall data, at this moment I’m laughing and crying too hard to do any more mathematical analysis.
My best to all,
w.
Yeah, yeah, you heard it before, but I gotta say it again: When you comment, please quote the exact words you are discussing. I can defend my words. I cannot defend your interpretation of my words.
And if you wish to show that I’m wrong, here are complete instructions on how to show Willis is wrong.
The contrast between fig 2 and fig 4 is remarkable. What planet are they modeling, again?
Payola.
Shinola
Climate “scientists” don’t know the difference.
I trust Willis but I hope he didn’t make some kind of major error.
If this analysis is correct, then I think it is the biggest disconnect I have ever seen between a climate model and real observations.
Where are our hair-on-fire alarmists? Tell us why this is not the BFD that it appears to me to be!
Rich, Figure 5 shows the graphs directly from KNMI … so if there’s a mistake, it’s not mine.
w.
Did I miss the cogent rebuttals from Nick and mosh?
🦗🦗🦗
All right Willis, who you gonna believe, real data or models dreamed up by “climate scientists?”
Uh, if the “you” is the vast majority of our uninformed, undereducated, and gullible western civilization population, that would be “climate scientists”.
Willis, I love your stuff. Have 2 sons living in your neck of the woods. Would love to buy you a beer sometime, but can’t bring myself to want to visit the Peoples Insane Republic. I do have a question: Is rainfall over the oceans part of this information? How would it be measured? Sorry if I should know.
I had a similar thought: is it truly global, including the oceans, or is it just over land? And is the model data subject to the same conditions as the measurements?
Percent of surface covered by ocean:
NH: 61%
SH: 81%
Land in the SH is also icier. Antarctica is more than six times larger than Greenland.
Peak in 2016 suggests connection with ENSO and solar cycles.
If the last peak was in 2014, the current cycle might top out next year.
ORG, the answers are 1) Yes, 2) Satellite, and 3) you’re welcome any time.
w.
Thank you Willis. I’m going to have to read up on how satellites measure rainfall.
How do satellite rain measurements compare to actual rain gauge measurements? An interesting data comparison, right up Willis’ alley. I’ve always assumed radar rainfall sensing to be fairly flaky … but then again, my rain gauge and my neighbor’s often differ by 10% (either way, inconsistently).
The CMIP6 models include ocean rainfall. The reason only INM CM5 does not produce a spurious tropical troposphere hotspot is that the Russians parameterized their ocean rainfall to correspond with what ARGO is finding (estimated indirectly via salinity change in ocean ‘fresh water storage’, one of three main ARGO design objectives), thereby reducing tropical water vapor feedback and resulting in an ECS of ~1.8.
Thanks, Rud. I often wondered what those tricksy Russians were up to in order to make their model conform to reality better than others.
Indeed…
Sneaky little Russianses, wicked, tricksy, false
INM’s excellent paper on CM5 is Volodin et al. in Climate Dynamics, downloadable free at DOI 10.1007/s00382-017-3539-7. I have it archived on my main computer. The no-tropical-troposphere-hotspot graphic is very special.
G’Day Michael,
“…what those tricksy Russians were up to…”
They set up a whole bunch of weather reporting stations along their Artic coast for the International Geophysical Year, 1957/58. I believe they continued to monitor weather in that area even after the “Year” was over. Probably have a lot of data to work on.
I wondered about a report published a year or so ago. “The Russians now have four large atomic powered icebreakers in the Artic”. Global “warming”? What have they figured out that we haven’t?
No such thing as “artic”.
Lorry – or ‘truck’.
Auto.
I know … little minds
Well Rud, don’t you know that Russians only spew disinformation!
How can we stoop to believing their crappy model just because it randomly happens to get the temperature and the rainfall closer than any ‘trustworthy’ country’s model for many years in a row? If we do that, Putin wins!
What does the Ukrainian model say?
“What does the Ukrainian model say?”
Send more money!
Funny thing. The Russian/Ukraine war has exposed much corruption on BOTH sides.
The biggest corruption is not on either side of the Donets river. Look across the Potomac instead.
Uniparty politicians in Washington look pretty dirty to me. What sense does it make that Democrats, the perennial anti-war party, are so hot on sending money borrowed from China over to Ukraine? Coincidentally there’s difficulty in accounting for all the money. An unrelated problem of course.
Corruption is everywhere, where power has no alternative, including US. But while in US corruption is still considered an aberration, an abuse of power, in Russia and Ukraine corruption is the backbone and the purpose of power.
I think she says something like “$500 and we can go to your room”
I’m glad you asked that. I was going to after the first article.
Rainfall is often sporadic and the Pacific Ocean is very big. Land-based measurements few and far between. I expect Argo buoys don’t stay in one place for very long so they have to do some jiggery-pokery with the raw data.
“in total contradiction to the real-world observations, in modelworld the southern hemisphere is wetter than the north”
Real-world observations .. aka the Copernicus dataset? I trust that you looked at it carefully.
Congratulations on the discovery of the North–South mirroring of rainfall. Many years ago I noticed that El Nino meant a lot of rain in California and a drought in Venezuela (even though it is technically in the NH), but now it looks like part of a bigger picture.
A total contradiction to real-world observations SHOULD be a huge red flag.
At the very least, the effect that rainfall has on temperatures cannot be accurately incorporated into the models, if the models do not hindcast or forecast rainfall accurately and also have the effect reversed spatially.
In the worst case for the models, it indicates that some fundamental physics is wrong in a big way.
And is it just me misremembering, or weren’t we told for years that global warming was going to lead to permanent droughts? Now it’s apparently permanently flooding. Of course it’s always whatever weather is prominent in the news at the moment.
RD, a quick comment. Climate modelers claim their models represent ‘fundamental physics’, but that is simply false. See my old post here, ‘The Trouble with Climate Models’ for details.
They have been lying to us from the beginning for an inescapable mathematical reason—the CFL theorem.
Worse, they knew or should have known this—the legal standard for felony fraud.
The programmers and statisticians have never seen a piece of data with uncertainty. Stock prices – nope. Poll answers – nope. Populations – nope. All arrive as exact values to be manipulated in any way you want.
As long as the output is “reasonable and expected”, it’s a go! /S
Criticizing the confuser game models is a waste of time since they are forced by political pressures to support the CAGW narrative.
Therefore any subset of the GAT prediction is not likely to be realistic because that’s what’s necessary to support the politically forced GAT prediction. The Russian INM model may be the one exception.
The Northern Hemisphere is warming much faster than the Southern Hemisphere so more evaporation and more rain is expected there.
The fact that increased evaporation cooling can offset CO2 greenhouse warming is good news.
But it is the additional unknown amount of greenhouse warming from a positive water vapor feedback that causes MOST of the global warming predicted for the next few centuries by the Climate Howlers. Not just CO2 warming. CO2 warming plus water vapor amplification = the alleged catastrophic warming rate.
The Earth is still in a 2+ million-year ice age with 200,000 glaciers, ice caps, and snow somewhere every day. Warming is good in an ice age, more cooling would be a disaster.
The model programmers aren’t forced by any political pressure. They’re in on the game. They’re so convinced that CO2 is the catalyst for all climate evil that they build their biases into the models, unable to see that the models are flawed, and that they cannot predict anything useful with any meaningful accuracy. As the IPCC glossary reminds us:
Because knowledge of the climate system’s past and current states is generally imperfect, as are the models that utilize this knowledge to produce a climate prediction, and because the climate system is inherently nonlinear and chaotic, predictability of the climate system is inherently limited. —p.1460 IPCC AR5 (2013) Annex III: Glossary
They think they can accurately model something which is inherently impossible to model. It doesn’t include all the inputs, doesn’t have an accurate and precise picture of the past and the present, and can’t incorporate all the seemingly chaotic, significant interactions that affect global climate. I’m bemused and sometimes depressed by skeptics attempting to assail the models with scholarly-sounding mumbo-jumbo that mimics the scientific-sounding mumbo-jumbo of the numerous studies supporting the models, attacking the minutiae, when the very foundation of the models is utterly flawed. Can CO2 affect global temperatures? Sure. But is it one of the most significant causes of warming and cooling, from which springs an endless variety of fantastical phenomena, from epic flooding to droughts to more violent weather to polar vortices to rivers and seas boiling and dogs and cats living together? Not according to the evidence we have so far.
nailed it!
“Can CO2 affect global temperatures? Sure.”
IMO, this is conceding too much.
Can CO2 affect global temperatures? If one hypothetically makes a perfectly dry Earth atmosphere 100% CO2, tens of percent CO2, a few percent CO2 – sure. Changing from 280 ppm to 400 ppm in the presence of unlimited water? A pure conjecture.
In a perfectly dry Earth scenario, adding CO2 might increase cooling since it is an excellent radiator. It would absorb heat via convective heat transfer, and then radiate the heat to space.
It very well may be. But not drowning in these details, I think we can all agree that huge changes in atmospheric composition would affect global temps in some way.
“Therefore any subset of the GAT prediction is not likely to be realistic because that’s what’s necessary to support the politically forced GAT prediction. The Russian INM model may be the one exception.”
No GAT prediction can possibly be realistic, because GAT is a fictitious construct, and utterly meaningless.
Richard Green writes
If evaporation is greater in the northern hemisphere then it should be cooling, not warming. It looks to me like the energy is accumulating in the southern hemisphere ocean and leaving in the north.
It is too complicated, several issues are entangled here
I agree it’s complicated, however I’d say
Evaporation moves energy from the lower atmosphere into the upper atmosphere where it can more easily be radiated to space. More evaporation means less sensible heat available at the ground to heat the air. That makes it cooler and is how an evaporative cooler works.
It’s difficult to reconcile both heating and evaporation together unless the energy is coming from elsewhere, IMO, but YMMV.
When latent heat is released in the upper troposphere, it reduces the temperature gradient between the warmer surface and the colder upper troposphere. The reduction of this gradient should reduce the upward heat flux in the troposphere. The net result may be neutral.
I’m not sure about that. Latent heat is released by radiation and if that radiation continues to space, there is no warming of the upper atmosphere. The H2O molecule cools when it radiates. Unless there are enough molecules to intercept the radiated latent heat, and consequently warm, then a lot (most) will go into space. This shouldn’t warm the higher altitudes enough to make a big difference.
Remember, radiation is based only on an individual molecule’s temperature. Surrounding bodies do not affect what is radiated, only what is absorbed.
OK Jim, I will do you homework for you. 🙂
“does latent heat release increase air temperature”
AI-generated answer
The correct answer is: (a) increases air temperature. When latent heat is released, such as during the process of condensation, it causes the surrounding air to warm up.
Yes, latent heat release can increase air temperature. Latent heat is energy that is released or absorbed during a phase transition, such as condensation or evaporation, when the temperature of a substance remains constant. When water vapor in the atmosphere condenses into liquid water, it releases latent heat into the surrounding air, warming it and causing it to rise. This process is also known as precipitation.
First, don’t ever believe AI on scientific questions. Their training does not involve that kind of knowledge let alone the intuition to connect various theories.
Second, your definition of “latent heat” is way too vague. Latent heat is released as thermal energy, i.e., infrared radiation.
Infrared radiation can only be absorbed by certain molecules, not just surrounding “air”. Most of this radiation at altitude escapes to space, cooling the earth!
Can latent heat released as thermal energy warm the atmosphere? Sure, condensation at the surface, i.e., dew, releases latent heat as thermal energy and can be absorbed by CO2 that can then be thermalized. That short circuits the cooling of the atmosphere by raising Tmin.
AI is highly questionable on history, politics and ideology sort of stuff.
On this simple question it gets it right. You can try other sources and check with folks with Physics degrees.
Sorry dude, you’re the one that needs to do checking. Look at what TimTheToolMan told you.
Here is what you posted that the AI told you.
I tried to tell you that the AI answer was incomplete. From my own AI testing it is unable to intuitively provide total and complete answers.
Condensation occurs in two locations, high in the troposphere and near the ground. High it forms rain, near the ground it forms dew. That is the correct and complete answer.
I’m sorry it destroys your belief in AI knowing its science 100%, but that’s life.
Every day essentially all the day’s energy from the sun passes through the atmosphere and is radiated away. CO2 is believed to slow that down but evaporation speeds it up because the latent energy simply bypasses any GHGs on its way up.
In general any atmospheric process that makes it faster or more efficient must cool.
When latent heat is released at and above the effective radiation level, it’s emitted to space on average. GHGs in the upper atmosphere have a cooling effect not a warming effect and so anything that can get the energy there faster will cool.
Interesting presentation of data versus models. Since 68% of the land mass is in the northern hemisphere, and only 32% in the southern hemisphere, I am tempted to think that sea water evaporation is the control knob for rainfall, and not heating of landmass. Factors of sea water evaporation are first, area (essentially a constant), and second wind and temperature tied. Has the wind speeded up or has the temperature of sea water increased enough? Complexity everywhere, no? yes?
“The evaporation necessary to produce this increased rainfall would cool the surface by an additional 4.8 W/m2 … which per IPCC calculations would offset the theoretical increase in forcing resulting from CO2 increasing from 400 ppmv to 980 ppmv.”
Wait a moment! First we need to distinguish between forcing and feedbacks. If those 4.8 W/m2 were accurate, that would be a negative feedback, not a forcing. As Howard Hayden erroneously tried to point out, there would be a contradiction between a 3.7 W/m2 forcing and some 16.8 W/m2 increase in surface emissions with 3 K warming. Again, that is based on confusing forcing with feedbacks and not understanding the GHE on top of it.
Yet it is obvious the 4.8W/m2 are only a fraction of said 16.8W/m2. In a way it would be a negative feedback fitting well into a larger overall warming, not nullifying it or so.
This negative feedback does exist for real and is basically accounted for by “climate science”, although technically it is not a feedback. Anyway, it is called “lapse rate feedback”. Effectively, this mechanism works by reducing the lapse rate, hence the name: transporting heat from the surface up the atmosphere thereby reduces the GHE all over. And interestingly, with AR4 the IPCC had this negative feedback far too large, with -0.84 W/m2 as a central estimate.
Then the central estimate for ECS would have been…
1.1 / ( 1 – (1.8 – 0.84 + 0.26 + 0.69) x 0.3) = 2.58K
In AOGCMs, the water vapour feedback constitutes by far the strongest feedback, with a multi-model mean and standard deviation for the MMD at PCMDI of 1.80 ± 0.18 W m–2 °C–1, followed by the (negative) lapse rate feedback (–0.84 ± 0.26 W m–2 °C–1) and the surface albedo feedback (0.26 ± 0.08 W m–2 °C-1). The cloud feedback mean is 0.69 W m–2 °C–1 with a very large inter-model spread of ±0.38 W m–2 °C–1 (Soden and Held, 2006)
Why was it way too large? It is easy to see if we remove the -0.84 from the equation..
1.1 / ( 1 – (1.8 + 0.26 + 0.69) x 0.3) = 6.29K
With these parameters it would have reduced ECS by almost 60%, far more than “lapse rate feedback” can actually do. They got it wrong in many ways without understanding what they did. As a “hot fix” the IPCC just slashed LRF to -0.5W/m2 in AR5.
Anyway, those 4.8 W/m2, if accurate, are just another way to look at said LRF and are nowhere beyond the limits of “consensus science”. The real problems with that are way more subtle…
https://greenhousedefect.com/the-holy-grail-of-ecs/vapor-feedback-ii-the-lapse-rate-and-the-feedback-catastrophe
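The two ECS figures in that comment can be verified directly from the simple feedback formula implied there, ECS = ECS0 / (1 − f × Σfeedbacks), with ECS0 = 1.1 K and f = 0.3 K per W/m2:

```python
# Verify the two ECS figures quoted above, using the simple feedback formula
#   ECS = ECS_0 / (1 - f * sum_of_feedbacks)
# with ECS_0 = 1.1 K (no-feedback CO2 doubling) and f = 0.3 K per (W/m^2),
# as used in the comment.

ecs0 = 1.1   # K, no-feedback sensitivity
f = 0.3      # K per W/m^2

# AR4 central estimates: water vapor, lapse rate, surface albedo, cloud
with_lapse_rate = ecs0 / (1 - (1.80 - 0.84 + 0.26 + 0.69) * f)
without_lapse_rate = ecs0 / (1 - (1.80 + 0.26 + 0.69) * f)

print(f"with LRF:    {with_lapse_rate:.2f} K")    # ~2.58 K
print(f"without LRF: {without_lapse_rate:.2f} K") # ~6.29 K
```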
Willis, very thought provoking. Thank you. We know about met. data collected over land, and we have Anthony’s surveys of the US land-based stations. How is the rainfall data over the broad expanse of the oceans collected? It strikes me that it must be very patchy and piecemeal, with massive gaps and horrendous unreliability away from shipping lanes and pinprick Islands.
Satellite, which is why it only goes back to 1979.
w.
We had complete, global sat coverage in 1979?
Would like to see global cloud cover in the same time line. That’s to see if its controlling ocean evaporation rate
Figure 5 reveals a major internal CMIP6 contradiction. The majority of warming (model world ECS with feedbacks ~ 3.2C) isn’t supposed to be from CO2, whose stand alone no feedbacks ECS is about 1.2C per Lindzen. It is from the positive water vapor feedback. If future rainfall is increasing, then WVF should be decreasing. But not in model world.
A model rain version of Mann’s hockey stick?
And it seems like when Europe is cold weather it is hot here in the US and vice versa.
Before global warming and back radiation were invented, we were taught in school that rainfall, surprisingly, contained more energy than sunlight. Thus snow and ice can last for weeks if it is sunny, but disappear in a day of heavy rain.
Global warming and the greenhouse effect (back radiation) were discovered in the 1800s. They were not invented.
In 1856 Eunice Newton Foote demonstrated that the warming effect of the sun is greater for air with water vapour than for dry air, and the effect is even greater with carbon dioxide. The term greenhouse was first applied to this phenomenon by Nils Gustaf Ekholm in 1901.
The claim that rainfall has more energy than sunlight makes no sense. The water cycle is driven primarily by the energy from the sun
The condensation of water vapor into rain releases energy in the form of heat.
When raindrops fall from the sky, they can produce a small amount of energy that can be harvested with rain panels and turned into electricity. It is a small-scale version of hydropower,
Snow has a high albedo, which means most of the sun’s energy (up to ~90% for fresh snow) gets reflected before it has a chance to cause warming. But there are always a few spots on the ground or on a roof that are darker in color. Those spots can absorb the sun’s energy better and cause peripheral melting.
Sounds like you went to a wacky school.
Ice melts faster in a glass of water than it does in a glass of sunlight.
Rain has both kinetic energy and thermal energy. It is the high heat capacity of rain (water) that melts the snow and ice. The kinetic energy of rain is miniscule due to friction with the air.
The condensation of water vapor into rain releases energy in the form of heat.
This is actually backwards. Removing heat from water vapor causes condensation.
Willis, thanks very much for the Copernicus analysis. Was waiting for you to do just that, knew it was something too good to pass up…
Unfortunately as you noted the Tinkertoy™ models fail miserably, but their results are even now influencing local budgetary decisions. Here in Vermont the local road commissioners have begun installing extra cross drainage road culverts in anticipation of more rainfall, ala the 2011 TS Irene, based on the bogus model outputs of a UVM based consulting firm. They are using that as an excuse to spend on something which should have been planned for regardless.
To be fair, the state did get hit very hard, but it was more because of delayed maintenance than anything else. They paid the price for funding social programs ahead of capital investment, and they are currently plowing right ahead. VT is really a mini Cali on the East Coast.
Yirgach, I would not complain too much about larger culverts. Where I live, measured data suggest local flooding occurs when monthly rainfall exceeds 500 mm, which can happen in any month, although it is less likely in the winter months (July, Aug & Sept, where the 130-year average is 60–100 mm) and more likely in the summer months (Jan, Feb & Mar, where the 130-year average is 240–260 mm). I have found monthly rainfall is close to a Poisson distribution (i.e., the std. dev. is close to the average). Two SD is common; I have found SDs of over 4. In Dec 2010 (when the Aus BoM was predicting dry), the rainfall of 663 mm was a 127-year record, about 3.9 SD. On Jan 13, 2011 there were severe floods that killed people in Brisbane and other towns in SE Qld (rain from the 1st to the 13th: 500 mm). Back in 1893 there was 454 mm on Feb 1, 241 mm on Feb 2, and 257 mm on Feb 3 (952 mm over three days), and the monthly total for Feb 1893 was 1891 mm, with huge floods and many dying. That is something like 7.5 SD. I believe Vermont is not too far from the coast. I have been in a severe blizzard near Bathurst in New Brunswick. Floods can occur in your area.
Cement,
Many of the country roads in Vermont are gravel and are located on steep hillsides which normally tend to washout during heavier rain events. The fix is to grade the sides with drainage ditches and install cross culverts as well as fabric in the roadbase. Proper maintenance of gravel roads is more expensive than paved, especially with increased labor and materials cost.
As I tried to explain, that work should be part of a normal maintenance program, but in Vermont it’s more up to the local town government rather than the state (Vermont has minimal county government) to maintain the drainage system. TS Irene followed the normal storm track up the coast, but was a whopper; rain came in at over 6 inches/hr, washing out roads, stream beds, bridges, foundations, you name it. The damage caused by deferred maintenance caught many towns with their pants down, and now they are overreacting.
Yes, Vermont is near the ocean, and weather patterns favor development of nor’easters, which swing out over the Atlantic, pick up more moisture, and drop it back on the land. In the winter the snow may have a blue-green tint from the algae in the water.
My husband is at this moment drafting a statement of what our small community plans to do in the “coming” drought. Last week the news was full of drought, but they are quiet this week as it is p!ssing down.
UAH slope is 0.15 C per decade, about .6 degrees since 1980. At 7% increase in water vapor pressure per degree….works out to about 4% more water transport (roughly by Clausius-Clapeyron).
But an average monthly rainfall of 82 mm times 4% is only about 3 mm … insignificant in the vagaries of monthly rainfall stats anywhere on the planet of +/-50% or more … it’s really quite difficult to see a 4% increase, and we don’t have rain gauges over about 80% of the planet, so we are reliant on fairly flaky radar methods.
So basically floods or drought CC extremists are full of caca.
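The back-of-envelope Clausius–Clapeyron estimate above is easy to reproduce (taking the comment’s figures of ~0.6 °C of warming since 1980 and ~7% more water vapor per degree as given):

```python
# Rough Clausius-Clapeyron estimate of the expected rainfall change since 1980.
# Assumes the comment's figures: ~0.6 C of warming (UAH, 0.15 C/decade since 1980)
# and ~7% more water vapor capacity per degree C.

warming_since_1980 = 0.6    # C, per the UAH trend quoted above
cc_scaling = 0.07           # fractional increase in vapor pressure per degree C

vapor_increase = warming_since_1980 * cc_scaling    # ~4.2%

monthly_rain_mm = 82.0
expected_change_mm = monthly_rain_mm * vapor_increase

print(f"Expected change: ~{expected_change_mm:.1f} mm/month")  # ~3.4 mm
```

A ~3.4 mm change in an 82 mm monthly mean is well inside the month-to-month noise, which is the comment’s point.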
More rain makes sense to me ‘in a warming world’, and I do believe that we’ve seen a bit more rain than in my youth. My lawn stays green in August where it used to turn brown.
But am I wrong that the global warming propaganda of the 90s featured drought-stricken farmland and starving African babies? Not even that far into the memory hole, wasn’t California supposed to be in a permanent drought just a few years ago?
You seem to be on a roll!!
Please keep it up.
It’s a comical model precipitation trend, but it doesn’t look like a very big trend.
And generally, a hotter planet should rain more.
The NH/SH mirroring in real life data is maybe due to summer/winter?
I suspect these findings are the reason for this: Nonstop Rain In 2034, Say Weather Attribution Models
Great job. Thank you
Seeing the inverse relationship between the northern and the southern hemisphere …..
____________________________________________________________
Speaking of inverse relationships, yes it’s going to be off topic, there’s an inverse relationship between the atmospheric concentration of a greenhouse gas and its Global Warming Potential number:
CH4 1932 ppb GWP 86
N2O 337 ppb GWP 273
CFC 4 ppb GWP ~8000
In other words, the GWP numbers have nothing to do with the absorption spectrums and global temperature.
____________________________________________________________
Thanks for bringing up Feynman’s beautiful phrase turning quote.
Very nice.
Are NOAA using a different set of facts? They claim…
“precipitation amounts are increasing”
https://www.ncei.noaa.gov/news/warming-earth-also-wetter-earth
The link says NOAA thinks it’s getting wetter in the U.S., not globally. CGW is a useful theory. It can be used to show that rain or snow is increasing or decreasing. However, there is a limit to its usefulness. For example, it can’t, or won’t, wash the dishes.
If they don’t outright say the world is getting wetter they certainly imply it. Just read the link and the first paragraph.
Yes. And drier. And the wind is getting windier, or less windy, or both, or neither. As a theory it does everything except wash the dishes.
Washing dishes will become more difficult as the globe warms, experts say.
Amazing stuff Willis, it seems you are the only one looking critically under the hood of all those lovely models!
And:
____________________________
I asked Chat GPT:
How many (please give a number) arbitrary parameters,
When ‘arbitrary parameter’ is defined as a variable which is introduced at some point of a calculation whose value is not known independently, but which can be adjusted to modify the result
ChatGPT
Determining the exact number of arbitrary parameters in a climate model like CESM can be quite challenging due to the model’s complexity and the various configurations in which it can be run. However, a rough estimate would suggest that there could be several hundred to a few thousand arbitrary parameters in CESM.
These parameters often represent processes or phenomena that are not fully understood or are difficult to quantify precisely. They might include tuning parameters used to adjust the model’s behavior to match observational data, empirical coefficients used to represent sub-grid-scale processes, or parameters controlling the strength of various feedback mechanisms.
While these arbitrary parameters play a crucial role in tuning climate models to improve their performance, modelers strive to minimize their number and ensure that their values are physically plausible and consistent with available scientific knowledge.
Speaking of CESM, it disregards the temperature dependence of latent heat of water vaporization, leading to a 3% overestimation of energy transfer by evaporation from tropical seas (if they compute the evaporation correctly, which I doubt).
“ They might include tuning parameters used to adjust the model’s behavior to match observational data”
Data matching is *NOT* using fundamental physics in the model, it is just pure data matching. The biggest clue is that after about three years all the models output nothing but a linear trend with a positive slope. You can duplicate that with nothing but a linear equation with just one or two variables – no understanding of fundamental physics needed at all.
What you call data matching is done extensively in every field of physics. The so-called Standard Model (one of the two fundamental theories of physics) has quite a number of parameters that are just “matched”. Relativity (the other fundamental theory) has the “cosmological constant” as an example: a value that makes the equations match observations. Einstein didn’t like it, but it’s there anyway.
I cannot imagine how it is possible that after so many years you are this clueless about what models output.
From your “expert”.
http://www.climate.gov/media/14532
You can claim these aren’t linear, but you won’t fool anyone!
I have to inform you that “It looks linear to your eyes” and “it is linear” are two different things. I don’t understand why I have to explain this to you. BTW glad to see you’ve given up on the “data matching” bs.
Great indicator of how well you read!
Another great indicator of how well you read. The horizontal axis is linear in time. The vertical axis is linear in ΔT. All three projections pretty much fit a straight line.
I hate to tell you, but that makes the outputs of the models linear. It does not matter how they were derived; the outputs result in linear projections.
Dr. Pat Frank published a paper on this very subject some time ago. You might look it up and see what holes you can punch into his math.
BTW, I added some straight lines to show how linear the model outputs are.
https://ibb.co/W6ynzyD
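A quick way to test the “model output is basically linear” claim being argued here is to fit a straight line and check how much variance the line alone explains. The series below is synthetic (an invented trend plus noise), purely to show the mechanics of the check, not real CMIP output:

```python
# Sketch: fit an ordinary least-squares line to a made-up "projection"
# series and compute R^2. Slope and noise values are invented.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1980, 2101)                       # hypothetical projection span
series = 0.02 * (years - 1980) + rng.normal(0.0, 0.1, years.size)

slope, intercept = np.polyfit(years, series, 1)     # ordinary least-squares line
fitted = slope * years + intercept
r2 = 1.0 - np.sum((series - fitted) ** 2) / np.sum((series - series.mean()) ** 2)

print(f"slope = {slope:.4f} K/yr, R^2 = {r2:.3f}")
```

A high R² only says a line describes the curve well over this span; it says nothing either way about what produced the curve.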
Well, two of them intersect each other twice 🙂 And what about the very visible acceleration that has already been measured (observed) IRL, and that was predicted by models? That doesn’t sound particularly linear to me… Look, you genius, the “time” here is actually a proxy for CO2 output, and CO2’s relationship to temperature is known to be nonlinear. What we don’t know is the feedbacks; we only have estimates from the models. At this scale the graph may look linear to the naive eye, like yours. But it’s not.
Oh, a well known bs-monger is your reference. Congratulations.
Oops, back to “data matching”, as far as I can understand you don’t understand it. So what’s your opinion about the cosmological constant? This is a golden opportunity to tear Einstein into pieces!
Are you using your glasses? The units on each axis are pretty obvious and easily interpolated.
You are full of crap. You can dance all you want to hide it. You can make all the excuses you want but OUTPUTS are linear versus time. BTW, CO2 is an input to the models, not an OUTPUT.
The x-axis is time, the y-axis is ΔT. I didn’t create this, it is from .gov.
I don’t care what you ASSUME the inputs are, the output is obvious, ΔT!
Good for you: not only an ad hominem but also the argumentative fallacy of ipse dixit (an assertion made but not proved).
Again, two doubly intersecting lines are not linear even to the eye. But that’s only for eyeballing, we are better than that. A hint: if you magnify enough, a small section of any curve looks linear. Don’t be fooled by this.
I don’t understand how you thought I claimed it was an output. We always arrive back to the fact that you basically don’t understand these things.
Oh, now I understand your misunderstanding! You confuse the outputs of a model with the graph. Anyway, ΔT is ultimately the result of ΔCO2, and the latter is more or less linear w/r/t time, which in this region gives you an illusion of linearity w/r/t time.
Well, it is proved. He is treated as a hysterical troll in any serious forum ‘cos anytime someone points out an error in his maths he instantly goes ad hominem.
A small section amounting to 110 years! 🤡. Don’t worry, I am not fooled.
Yet there are long pauses where CO2 keeps on doing its inexorable increase and ΔT changes not at all. There are localities and regions that have little to no growth.
Here is a graph of one.
So you rely on “serious forum(s)” to make your judgements for you. The same thing happened to the scientist who proposed the theory of moving plates on the Earth’s surface, and to the scientist who had to infect himself with a bacterium to show it caused ulcers and could be treated with antibiotics.
Consensus is not science, it is pseudoscience and will prevent new discoveries as no one will be allowed to counter the consensus.
Well, 110 years in itself is meaningless if you don’t know the scales. You, Tim, evidently don’t know that. I don’t think I have to mention that 110 years on geological time scales is just nothing. Here, the time scale is perhaps less but it’s still in the 500-1000 years range.
Your inability to understand this is astonishing. FYI these things are all well known and do not contradict science. Eg. the mid 20th century decrease was the result of industrial scale sulfur pollution. This simple fact, used as an example here, has been pointed out to you, deniers, numberless times, and you keep coming up with it every time.
Exactly. This is what is expected. The Earth has a very complicated and chaotic heat transport system.
Well, no. It was treated as a hypothesis, ’cos Wegener didn’t really have much evidence apart from the morphology of the continents. Evidence is key in science. Over a few decades we collected geological evidence showing that the morphological argument was plausible (i.e. geological/fossil formations are similar where expected), but only in the late ’50s could we actually prove the theory, with the discovery of oceanic ridges etc.
nyolci writes
Fundamental constants apply to fundamental properties. GCMs aren’t defining fundamental properties, they’re defining straightforward, albeit effectively intractable, processes involving matter and energy.
GCMs use parameters as a cheat, a simplification that is no longer physics based and they do this because they’re making a calculation possible on a modern computer.
I expect you know this but are intentionally trying to mislead or justify the unjustifiable to yourself.
Is this your counter-argument? It’s really painful to debate you deniers, but for all the wrong reasons. I brought up the fundamentals to show that even there, in areas you would not dare bs about, you find such constants.
Not “even there”. Only there. You’re right, it is painful to watch you trying to justify a fit as being physics based.
Oops, I didn’t know that 😉 sorry. I thought, for example, that the various coefficients (eg. friction) in Physics were like that. ‘Cos they were essentially observational values for getting equations right… But not fundamental constants… My bad 😉
Now seriously I love you deniers when you try to sneak out with your bs from these situations, and then you inevitably fall flat on your face. Then you try another angle, and voila, you’re again on your face.
Observations for getting equations right, eh. You do know what a fit is, don’t you? Tell me what equation determines the coefficient of friction, and how it might be calculated for a material that is changing over time.
But let’s have a look at an example used in a GCM where it’s even more obvious. Here are some values for viscosity in
(PDF) Failure analysis of parameter-induced simulation crashes in climate models (researchgate.net)
Can you find these values in a text book?
Yolo! 🙂 Again! There’s no day w/o a dose of denier bs! A little bit of goalpost moving, a little bit of Gish gallop…
Yep. Essentially the same as what Tim Gorman calls “data matching”. It’s the way e.g. the friction coefficient is determined in practice, among others. Just as I claimed, this is a widely used method in science. According to Tim Gorman, this is supposed to be the most unscientific thing on Earth. I just showed that it is widely used everywhere from Einstein to practical engineering.
Yeah, those moving goalposts 🙂 Mu = Ff/Fn, and I seriously doubt there is a procedure to calculate it in the general case. But that, of course, is irrelevant. You’re just throwing a fit.
By measurement. You didn’t understand the issue because your suggested formula still fundamentally requires precise knowledge of the material.
What isn’t physics, and what I explicitly mentioned above, is when the material is changing so that the friction force changes over time.
There is no formula for that. The equivalent in a GCM might be modelling clouds. We have a fit to describe them and no way of knowing what they’ll do under conditions we haven’t seen yet. They’re not a physics-based solution.
Neither is viscosity. They tweak viscosity, from a range of choices, so that the model doesn’t crash. That’s simply not physics.
At last. So we have a “data matching” here, right?
It’s almost beside the point here that you’re bsing and diverting again. No, it is physics but in normal engineering practice the standard linear formula is an extremely good approximation.
There are a lot of things that cannot be expressed with one single, closed formula. We are basically always using numerical approximations. Actually, we approximate even when we have a formula, for obvious reasons related to measurement accuracy. When these calculations are complicated and multi-step, they are called models. Orbital mechanics is an excellent example. We’re using models for decades for that.
Yep. Just as they do with the friction coefficient in the Ff = Mu·Fn example. Here we are lucky that the underlying mathematical model is much, much simpler, so we can use the formula Mu = Ff/Fn. If it were more complicated, we might be better off with tweaking. FYI, engineers are happy to do that; numerical calculations are not always easy to invert to express parameters. Furthermore, even when it’s possible, expressing parameters means implementing (i.e. working with the associated problems like quality control) another function (however simple it may be). Engineers happily draw graphs, for example, to discover parameter dependence, or at least the range, etc. I’ve done that in my own engineering work.
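For what it’s worth, the “data matching” being argued about here is just ordinary parameter estimation. A minimal sketch, with entirely made-up forces, of fitting Ff = μ·Fn by least squares:

```python
# Sketch: estimate a friction coefficient mu from noisy (Fn, Ff) pairs.
# All values are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
Fn = np.linspace(10.0, 100.0, 20)                  # applied normal forces, N
Ff = 0.35 * Fn + rng.normal(0.0, 0.5, Fn.size)     # "measured" friction forces, N

# Least-squares slope of a line through the origin: mu = sum(Fn*Ff)/sum(Fn^2)
mu = np.sum(Fn * Ff) / np.sum(Fn ** 2)
print(f"estimated mu = {mu:.3f}")
```

Whether such a fitted parameter has predictive power outside the measured range is exactly the point the two sides are disputing; the fit itself is routine.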
Determining values by experimental observations and measurement is not “data matching”. Those values may lead one to devising a mathematical expression describing the why and how a value occurs.
You are trying to say that using a Fourier transform to find the underlying frequencies that make up a signal is incorrect. It is not.
That doesn’t mean further refinement won’t occur as more data is collected. It doesn’t mean that discoveries of the fundamental reasons for those underlying values won’t occur. Science is never done.
Regression trends of temperature versus time will never reveal the reasons for the values that are measured. The best it can do is convince one that the underlying factors will never change so that the regression is a perfect forecast.
Orbital mechanics has very detailed components, and the fundamental concepts are well defined. How else do you think proper trajectories for rendezvous with asteroids and comets are calculated? Do you think they use fudged parameters as part of the calculations, as GCMs do?
LOL. Climate science has had 40+ years to reduce the range of ECS and as of the last IPCC go around they actually increased it! Not a good picture.
Well, determining the parameters of mathematical models is data matching. Like the cosmological constant (determined mainly from the Hubble constant, itself an observational value). Or the friction coefficient.
A computer model, whether you like it or not, is essentially a mathematical expression: a big, complicated system of differential equations. The parameter tweaking is much more complicated, but in essence not very different from how Einstein introduced the cosmological constant so that his system would show a static universe instead of collapse.
You’re kind of obsessed with regression trends. No one has claimed that they reveal the reason. They illustrate an already revealed reason very well.
Orbital mechanics on its proper time scale (millions of years) is just as chaotic as climate. In the 80s they developed a computer model (at that time they had to develop the hardware, too, for that), and the model showed that Pluto’s orbit would become completely unpredictable (ie. chaotic) after 140 million years. This is just an illustration.
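The Pluto example is about sensitive dependence on initial conditions. A toy stand-in (the logistic map, nothing to do with orbital mechanics itself) shows how a one-part-in-a-billion difference in starting state destroys long-range predictability:

```python
# Toy chaos demo: two logistic-map trajectories starting 1e-9 apart
# become completely different within a few dozen steps.
x, y = 0.400000000, 0.400000001
max_gap = 0.0
for step in range(60):
    x = 3.9 * x * (1 - x)       # logistic map in its chaotic regime (r = 3.9)
    y = 3.9 * y * (1 - y)
    max_gap = max(max_gap, abs(x - y))

print(f"largest separation seen: {max_gap:.3f}")
```

The individual trajectories become unpredictable, yet both stay inside the same bounded region; that bounded region is the analogue of the “attractor” mentioned later in the thread.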
Again, your fundamental problems with comprehension. I was talking about parameter ranges. ECS is model output. FYI orbital mechanics science is unable to reduce the range of Pluto’s orbit after 140 million years… I hope I don’t have to continue. And anyway, we are halfway through a doubling of CO2, and the anomaly is around 1.1K (as far as I can remember).
Let’s put it this way. You believe the component count in a black box makes the output unique. I don’t care what is in the black box, I look at the black box output, and if it matches your black box’s output with the same input, then they are equivalent.
I don’t think you’ll ever understand when tweaking parameters makes a calculation non-physical, turning it into a fit, and when it is allowed.
If I gave you a block of material with an unknown number of layers of unknown materials, and it slid along a surface and wore away, and let’s say you got to measure it for the first half of its wear, I’ll bet you believe there is a physics-based formula that can be used to predict its friction for the last half of its wear.
I don’t think you’ll ever understand that this is irrelevant here, and it’s almost beside the fact that in engineering applications where we have bearing surfaces, the “block of material” is rarely layered.
Anyway, everyone knows that the basic linear formula is just an approximation not just in the temporal sense (ie. it can become highly non-linear in practice above or below certain values of Fn). The thing is that the linear formula is pretty good in a wide force range and for a long time, and in practical engineering applications the surfaces are expected to be regularly maintained (ie. replaced) anyway. This was just an example where you need “data matching” to get the correct (approximate) formula. Classical Physics is full of these observational constants. The fact that you try to sneak out with some bsing concentrating on this particular example is telling.
And you’ll bet wrong. The general case is extremely complicated and only numerical approximations are used in practice (using even more “data matching”).
It was your example. I’m using your own example and adding conditions to make it blindingly obvious it’s a fit, so that no “physics based” model of the calculation has predictive ability.
This isn’t about “engineering”, it’s about prediction. It’s about projecting a tiny signal far into the future, where no measurements have ever taken place, using an incomplete, highly simplified model.
The fact you can’t understand this, and can’t see the analogy between the friction example and GCMs, is your own failure.
And you’re not even consistent.
Yep, you desperately try to move the goalposts (“adding conditions”). This is a perfect approximation from classical physics (still widely used), and a perfect counterexample to Tim Gorman’s bs.
And some made up bs that doesn’t make sense…
It is evidently not about just engineering, but, for that matter, what is your problem with the latter? In these deniers’ forums, engineers are always shown as down to earth, common sense guys who cannot be fooled. And now you just introduce an arbitrary restriction (based on your misreading of the example). Talk about moving goalposts again…
Yep. Just as the cosmological constant. I can perfectly predict the friction force for any similarly manufactured stuff in the future.
I can see the analogy, and that’s why I’m telling you and Tim that bsing about “data matching” is just that, bsing.
Changing over time? Why you can take care of that by using averages!
Changing static friction and/or sliding friction. No way. Averaging doesn’t allow for uncertainty, it all cancels.
Gradients because of changing friction coefficients. Don’t bother me!
/s
nyolci, here’s an example of parameterization to fit calculations to reality.
In the GISS-E climate model, they were having trouble with melt pools on the ice. In modelworld, it kept melting at the most inopportune and physically improbable times.
So they simply said “OK, melt pools can only form from this day of the year to that day of the year, and at no other time.”
Of course, this gave them a much better fit to reality … but it is hardly “physics based” as the modelers love to claim …
I discuss this and other parameterizations in the GISS-E climate model in my post Meandering Through A Climate Muddle, q.v.
w.
Again, parametrization is not an error in itself. You have to do it right, of course.
Melt pools and the whole associated phenomena are extremely complicated. Furthermore, modelling is necessarily simplification. I don’t know whether they got it right, and I don’t think you know that either, and I doubt you understood well what they were doing (and this is an extremely frequent problem with WUWT contributors and most of the readers, too). But even if they did it the way you describe that would not automatically mean it’s invalid.
Thanks, nyolci. My point was simple.
This is a FIT. It is not, as is often claimed about the models, PHYSICS BASED. There is no physical basis for it, it’s just a way to force the model to give more realistic answers.
Next, you say “I doubt you understood well what they were doing.” I discovered the problem, noted their “solution”, and based solely on your prejudices, you claim I didn’t understand what they were doing???
What they were doing is clear and simple. They were unphysically restricting melt pools based solely on the day of the year. There are no physical laws that say anything remotely resembling that. What’s to understand?
My guess is that I was writing computer programs before you were born, and you question my ability to read a computer program based on your bizarre fantasies? Do you really think that’s how adults discuss contentious issues, by tossing out insults?
So let me invite you to stuff your nasty fantasies about what you think I understand as far up the distal end of your esophagus as your arms will reach. They don’t help.
Finally, does that “not automatically mean it’s invalid” as you claim? No clue. Depends on what you mean by “invalid”. But it’s certainly not science-based.
w.
Any (mathematical) modelling is as good as it FITS reality. Furthermore, I still don’t think you understood what they were doing there, and how much this was PHYSICS BASED.
I don’t know and I don’t really care, because scientists know what they are doing. I’m pretty sure they had a justification for it. They can be trusted.
By the way, If you deniers weren’t so toxic in your interactions with scientists they would happily explain to you the underlying reasoning.
I seriously doubt that.
This is a computer program of 400k lines. Understanding such a big program is a major undertaking. You grepped around for what you thought were the juicy bits using some keywords you thought were relevant. Please let me be skeptical about your assertions.
Correction: these are not my bizarre fantasies. I’m just getting them from the scientific community.
Okay, I’m not a native English speaker. I meant: if they did it the way you think they did, even that would not automatically mean they did it in a non-scientific way.
As I said, you don’t have a clue as to why non-physical fits have no predictive capability. And yet they pervade GCMs.
Again, I seriously doubt Willis understood well what happened in the code. We don’t know what it really was.
Furthermore, even if it was what Willis says it was, it was clearly an approximation in a calculation. Modelling is basically a stepwise numerical solution to a system of differential equations, and there you have to simplify and use approximations for small details.
That is true to an extent. I did the same thing when working with the remote sensing lab designing antennas as a senior at school. You must make assumptions and simplifications to Maxwell’s EM partial differential equations to solve them, and we didn’t even have computers back then.
The difference is that those assumptions were verified with real-world testing. GCMs have never been verified because they always run hot. The ECS range has expanded rather than narrowed. Dance around the truth all you want; those are facts that can’t be denied. Parameters should be based on physical quantities in the real world, not made up to reach some anticipated result at the end.
This is simply false.
No, they are made up to match observations.
Read the headline from this paper by your “experts”.
https://www.nature.com/articles/d41586-022-01192-2
Are the “experts” mistaken?
Here is another.
https://yaleclimateconnections.org/2020/07/some-new-climate-models-are-projecting-extreme-warming-are-they-correct/
These prove your “false” assertion is based on nothing more than a belief system of CAGW. You simply won’t admit that they may be running hot because they may be wrong. You could have answered “maybe”, but you didn’t. Anything that challenges your deeply held belief in CAGW must be wrong.
I hit the send button by mistake.
Lastly here is a defining article.
https://www.science.org/content/article/un-climate-panel-confronts-implausibly-hot-forecasts-future-warming
Even Gavin Schmidt admits they are wrong!
Yep, you can see science working. BTW, only one specific model family did run hot, but they quickly discovered it and corrected it. Models in general do not run hot. Furthermore, they could use the information even from the hot models after correction. This is explicitly stated in the above article.
What do you think WRONG is? It destroys your belief that the experts know what they are doing!
Making corrections when you don’t know why models run hot is patching things by seeing what makes the output come out to what you want it to be. That doesn’t make it correct!
The point of the article you referenced was that they discovered why they’d run hot, corrected the models and they were even able to get useful information from hot models.
Show us the projections from the corrected models, I have not seen anything issued by the IPCC that their current report has been corrected.
Here is my link to the ModelE source code. Its broken now, they removed access to it. But like Willis, I took some time marvelling at it too.
https://www.giss.nasa.gov/tools/modelE/ar4/modelEsrc_ar4/
You can still see the source code if you’re “approved” and pretty much work for NASA by the looks of it… from
https://www.giss.nasa.gov/tools/modelE/
Source Code Repository: View source code in the repository (access limited to approved users within the nasa.gov domain) for the latest updates. This link allows you to view all the source files currently in the central Git repository together with their older versions. You can also make comparisons between different versions of the same file.
It is a clear example of why GCMs have no predictive power. Even if you can’t understand why.
Because of using approximations? Do you honestly claim that? FYI each and every calculation is an approximation. The question is the error/variance/whatever. In these models they have verifiably lowered that to an acceptable level. And this is what counts.
So the models are now accurately predicting temperatures to two decimal places?
I again have the bad feeling that you’re missing something here. Models do not “predict” anything, in the same sense that you can’t predict the outcome of a coin flip. In a chaotic system you can’t really predict. Models are used to get the “climate”, the “attractor” that governs the trajectories we call “weather”. They can tell you, for a given time in the future and a given location, what the distributions of various parameters like temperature and precipitation are. They are quite good at that, though.
“At this time in the future” sure sounds like a prediction to me.
“The distribution of various parameters like temperature and precipitation” sounds very much like what is used to determine global average ΔT’s! Like it or not, they run hot!
With two decimal place accuracy? 🙂 No. We have a distribution that is getting wider and wider with time.
Yep, this is why I said you misunderstood something; you probably mix up these two. They are computed very, very differently. But with ΔT we can get two-decimal-place accuracy, with a much, much tighter distribution. I know this part is a mystery to you; I have seen you struggling with it, and I have seen all the other guys trying to help you understand how it works, to no avail.
Ok, you know where the failures are, so let’s work thru an example.
Day 1
Tmax = 70°F
Tmin = 50°F
I love it when you shift gears in bsing 🙂 Okay, the first misconception: Tavg is not the average of the min and max. This is one of your persistent brainfarts. The second: in science we use proper measurement units; forget Fahrenheit. The third: that I have to answer these questions. It’s useless. People here have answered your brainfarts quite a few times and you seem not to understand it. You can’t even understand that Tavg is not the average of the min and the max. Furthermore, somehow you are unable to understand that the average has a much lower uncertainty than the individual measurements. Somehow you think we claim averaging changes the uncertainty of an individual measurement. No. The average of the measurements is a “measurement of the true average”, and it’s much more precise than the individual measurements.
Most folks here have followed tradition and use Tavg = (Tmax + Tmin) / 2. If you wish to define it differently, feel free to do so.
However, you define it, show what the uncertainties are.
Since you appear to be able to convert temperatures,
Tmax = 294.26 K
Tmin = 283.15 K
It is not useless.
It is a simple calculation that can be done with any statistical calculator on the internet.
This exercise will illustrate just how well you know what you are arguing about. Get your friends at Real Climate to help you out if necessary.
Funny how you just said Tavg isn’t a thing!
This statement illustrates just how much you do not know about metrology. You should do some study before repeating things the “experts” know nothing about either.
Here is a primer. Each and every measurement has a stated value and an uncertainty interval surrounding it. The interval may describe various probability distributions; it is up to you to determine the correct one.
This example has two measurements to begin with. When combining measurements, uncertainties add, always. Sometimes directly, sometimes by root-sum-square (RSS).
In this case, with single measurements, you can not use statistical calculations to determine the uncertainty interval for each measurement. That means you need to find a Type B uncertainty value. Let’s assert we are using ASOS stations. Look up what NOAA says to use for uncertainty of those stations.
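For readers following along, the RSS combination mentioned above looks like this in practice. The ±0.5 K instrument figures below are hypothetical placeholders, not NOAA’s actual ASOS numbers:

```python
# Sketch: propagate two independent Type B standard uncertainties through
# Tavg = (Tmax + Tmin)/2 by root-sum-square. Instrument values are invented.
import math

u_tmax = 0.5   # assumed standard uncertainty of a Tmax reading, K
u_tmin = 0.5   # assumed standard uncertainty of a Tmin reading, K

# Each input enters Tavg with sensitivity coefficient 1/2, so
# u(Tavg) = sqrt((0.5*u_tmax)^2 + (0.5*u_tmin)^2)
u_tavg = math.sqrt((0.5 * u_tmax) ** 2 + (0.5 * u_tmin) ** 2)
print(f"u(Tavg) = {u_tavg:.3f} K")
```

Note the RSS form already assumes the two uncertainties are independent; correlated errors would have to be added directly, which is the distinction the comment is drawing.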
You are so hosed, but you don’t even know it. Find the term “true average” on a metrology website, dude.
I suspect you are describing “true value” which requires you to know the accepted value of a measurand. There is no such thing for temperature measurements.
Look, this is a simple example any 1st year engineering student would be expected to know about and solve in any calculus-based chemistry, physics, or engineering lab class. This doesn’t require advanced degrees or “experts”.
Google repeatability uncertainty and reproducibility uncertainty for a taste of what you should do.
This is a good site as well as numerous documents at NIST.
5 Reproducibility Tests You Can Use For Estimating Uncertainty (isobudgets.com)
Yep. There are two problems here. This is not the average of temperatures in a time period, and this quantity is not even called the average. I can’t recall what it’s called, something like Tax. The other thing is that nowadays we use the actual average of samples collected during an interval.
Yes it is, and it’s characteristic that you didn’t understand this sentence. It’s useless to debate you about this. You show remarkable resilience to the facts, you repeat your bs ad nauseam.
And you claim you are good at debates… No, I didn’t claim Tavg wasn’t a thing. I didn’t even claim (Tmax+Tmin)/2 wasn’t a thing. I only claimed that Tavg was not calculated this way nowadays. Congratulations.
Simply false. And actually, measuring Tmax and Tmin is not two measurements. Nowadays they don’t use min-max thermometers; they simply sample at well-defined times (this has been regularized over the last few decades w/r/t local time of day etc.). The daily Tmax and Tmin have a very funky dependence on all of the daily/weekly/monthly/etc. sample values, i.e. on all the measurements. Tavg is simply the average of the daily/weekly/monthly/whatever samples.
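The gap between the midrange (Tmax+Tmin)/2 and the mean of all samples is easy to demonstrate with a made-up, asymmetric diurnal cycle:

```python
# Sketch: for a skewed (invented) diurnal temperature curve, the midrange
# (Tmax+Tmin)/2 and the mean of the hourly samples disagree.
import numpy as np

hours = np.arange(24)
# sharp mid-afternoon peak, long cool night (all values made up)
temps = 10.0 + 10.0 * np.exp(-((hours - 15) ** 2) / 8.0)

midrange = (temps.max() + temps.min()) / 2   # the traditional (Tmax+Tmin)/2
sample_mean = temps.mean()                   # average of all hourly samples

print(f"midrange = {midrange:.2f}, sample mean = {sample_mean:.2f}")
```

The more asymmetric the daily curve, the larger the disagreement; for a perfectly symmetric curve the two would coincide.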
That’s why I put it between quotes. If we knew the true values for what we measure, we would know the true average. The average of the measurements has a distribution that has the same relation to the true average as any individual measurement has to the true value, except that the uncertainty is much smaller. You somehow cannot understand this.
This makes no sense. A measuring device has a well-known characteristic that has been determined beforehand, so we do know the uncertainty of a single measurement. This is a statistical thing: if we measured the temperature of the room with multiple devices, we would see the distribution.
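The multiple-devices point can be simulated directly: each reading keeps its own uncertainty, but the spread of the mean of n independent readings shrinks roughly as σ/√n. All numbers below are invented for illustration:

```python
# Sketch: simulate many trials of averaging n independent noisy readings
# and compare the spread of the mean to the sigma/sqrt(n) prediction.
import numpy as np

rng = np.random.default_rng(2)
sigma, n, trials = 0.5, 100, 2000
readings = rng.normal(20.0, sigma, size=(trials, n))   # simulated room temps, K
means = readings.mean(axis=1)

print(f"spread of a single reading: {sigma:.3f}")
print(f"observed spread of the mean: {means.std():.3f} "
      f"(theory: {sigma / np.sqrt(n):.3f})")
```

The √n reduction holds only for independent, zero-mean noise; a shared systematic bias would survive the averaging untouched, which is the caveat the other side of this argument keeps raising.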
More dancing. Up until ~1980, min/max LIG thermometers were the standard for quite some time, and Tavg did = (Tmax + Tmin)/2.
If you began using another method when ASOS was implemented, then comparing the LIG record with ASOS is even less scientific. New records should be started when the measurement method changes.
BTW, from CRN.
The maximum air temperature, in degrees C. See Note F.
The minimum air temperature, in degrees C. See Note F.
F. Monthly maximum/minimum/average temperatures are the average of all available daily max/min/averages. To be considered valid, there must be fewer than 4 consecutive daily values missing, and no more than 5 total values missing.
I notice you have pointedly refrained from answering my question to you about uncertainty in the temperature measurements.
I can only assume you have no clue about how to answer it. Did you forget how this is done in your physical science lab classes?
I’ll repeat it in case you can’t find it.
Day 1
Tmax = 70°F (294.26 K)
Tmin = 50°F (283.15 K)
What is the repeatability uncertainty of Tmax?
What is the repeatability uncertainty of Tmin?
What is the resolution uncertainty of both?
What is the mean and reproducibility uncertainty of the random variable Tmean (see Note F above)?
Yep. No wonder they do those adjustments etc. you are so upset about.
These values are published. Why do I have to answer to you?
Your questions don’t make sense. I don’t know how to put it more politely. My guess is that you don’t know what you are talking about. It is a mystery to me why you can’t comprehend that. If we only know that Tmax was measured as x, we don’t know anything about its distribution.
They don’t make sense to you! That is a surprisingly good indicator that you know nothing about making measurements or how to treat them.
I’ll repeat the questions.
Day 1
Tmax = 70°F (294.26 K)
Tmin = 50°F (283.15 K)
What is the repeatability uncertainty of Tmax?
What is the repeatability uncertainty of Tmin?
What is the resolution uncertainty of both?
What is the mean and reproducibility uncertainty of the random variable Tmean (see Note F above)?
Maybe you could take Tmax for a day and show us an uncertainty budget for that single measurement. That would at least be a start.
If you can’t intelligently discuss measurement uncertainty, then you have no business telling anyone they don’t know what they are talking about. Change my mind!
This is something that depends on the measurement device. How the hell am I supposed to know that? If you have the device, these values are supposed to be supplied with it. Why the fcuk would I have to deal with this even if I knew it? This is absurd, and this is not the first time you have come up with absurd, irrelevant questions like this.

Furthermore, if this is a digital device and Tmax is just the max of the samples per time period, it has a very goofy relationship to the individual measurements (for which we have the known characteristics from the device), because it depends not just on the device but on the temporal change of temperature during the period and on the sampling instants. I guess you didn’t even realize that when you started your bs session.

The actual average is “linear” w/r/t the individual measurements: each measurement counts proportionally. You can average averages if you weight them with sample number (in practice: time period). Max and min are not like that.
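The linearity point above is easy to demonstrate: weighting each sub-period average by its sample count reproduces the overall average exactly. A small Python sketch with made-up numbers:

```python
import random

random.seed(0)
mean = lambda xs: sum(xs) / len(xs)

# two sub-periods with different sample counts (hypothetical readings)
a = [random.uniform(10.0, 20.0) for _ in range(7)]
b = [random.uniform(10.0, 20.0) for _ in range(24)]

# weighting each sub-average by its sample count recovers the overall average
weighted = (mean(a) * len(a) + mean(b) * len(b)) / (len(a) + len(b))
overall = mean(a + b)
print(abs(weighted - overall) < 1e-12)  # True: the average is "linear"
```

No comparable identity holds for the error of Tmax, whose distribution depends on the whole temperature trace, not just the instrument.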
Discussion? You asked for a concrete value. Or did you ask for the definition? Or for the way we determine these? If so, that would be absurd too in its complete irrelevance to the subject.
I don’t have to answer any of your questions if they are unrelated to the subject. This is just deflection.
Referring back to this, the min’s and max’s very tortured relationship to individual measurements is one of the reasons why we use other “well behaving” aggregates. Like the average. Mathematical ease of use. And that’s why standard deviation uses squares and not absolute value. Mathematical simplicity. The min/max thing was an artifact of the measurement method of the time, and we abandoned it as soon as it was possible.
If you don’t know these things, then how do you conjure up the chutzpah to tell others they don’t know what they are talking about?
Maybe you should learn what a Type B uncertainty is and where to find it from NOAA documentation.
Illustrative of what a mathematician would do: ignore commonly accepted physical-science practices for estimating and propagating measurement uncertainty. You should check the variance of some of these “well behaving” aggregates, like an average. Every mean you calculate also has a standard deviation describing the distribution surrounding the mean. That should show how well behaved some of your averages actually are.
If this is the best you can do, I suggest you refrain from criticizing others when discussing the measurement uncertainty of temperature data.
BTW, here is a table from a NOAA document on ASOS stations. Maybe you can find a Type B uncertainty value in this table.
Yet it is still used when computing and comparing anomalies prior to about 1980. If the method of calculation has changed then the comparisons are invalid.
I know very well how ASOS stations work. Here is an excerpt from the ASOS manual.
How am I supposed to answer if I don’t know what kind of instrument you have in mind?
I’m still not getting what I’m supposed to tell you. I assumed it was a concrete value (like the ones you listed as an example), but those are instrument dependent. You seriously wanted me to give you definitions? Okay, go ahead, tell me what answer you wanted to hear to the question “What is the repeatability uncertainty of Tmax?” Because in its original context this question doesn’t make sense. Or I seriously misunderstood you.
And if that’s the case, you’d better listen. It was mathematics that made Physics a real science.
This is one of the cardinal points of using well behaving aggregates: their propagation is much simpler. Tmax itself is surprisingly problematic, not to mention that it’s not entirely independent of Tmin, since we use the same dataset to compute each. You can calibrate a thermometer with the standard and simple methods (like the repeatability and reproducibility testing you seem to inquire about in your awkward way) to get its characteristics for isolated measurements. You basically measure temperatures independent of each other multiple times, at multiple temperatures across the whole range, blablabla. I suppose you know this. But getting the distribution for Tmax/Tmin is far more problematic and depends on everything you measure in a given period. You may get a much rougher approximation, but that’s all.
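One concrete way to see why the Tmax distribution is problematic: the maximum of noisy samples is biased high relative to the true maximum, and the size of the bias depends on the shape of the temperature trace, not just the instrument. A Monte Carlo sketch; the daily curve, noise level, and sampling interval are all hypothetical:

```python
import math
import random

random.seed(1)

def tmax_bias(n_samples=288, sigma=0.5, trials=2000):
    """Sample a smooth daily temperature curve every 5 minutes with
    zero-mean Gaussian instrument noise; return the mean error of the
    recorded Tmax relative to the true maximum of the curve."""
    true = [20.0 + 5.0 * math.sin(math.pi * i / (n_samples - 1))
            for i in range(n_samples)]
    true_max = max(true)
    total = 0.0
    for _ in range(trials):
        total += max(t + random.gauss(0.0, sigma) for t in true) - true_max
    return total / trials

print(tmax_bias() > 0)  # True: recorded Tmax systematically overestimates
```

Even though the noise is zero-mean, taking the max of many noisy samples near a flat peak picks out the positive errors, so the aggregate inherits a bias that simple per-reading calibration figures don’t capture.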
I again want to express my surprise at how enthusiastically you dug your own grave with the uncertainty of Tmax/Tmin, ’cos this is not something you usually get easily.
Furthermore, the average has pleasant properties when we combine multiple locations and/or multiple time periods (“propagation”): we can easily calculate the resulting distribution. This is highly problematic with Tmax/Tmin. I’m kinda shocked that you don’t get this.
Did you have these in mind? If so, why didn’t you ask that explicitly? Furthermore, why the hell am I supposed to tell you these details? Why is this relevant to your “data matching” bs?
But now that we are here, can you tell me what the repeatability uncertainty of Tmax is for the ASOS stations? Be careful, the question is not simple, ’cos the data above is not for Tmax, it’s just for isolated measurements. I expect a value from you (or a set of values; I wonder how you handle ranges in this case, ’cos that’s far from trivial for an aggregate like this).
Yep, and that’s why we assume greater variance and other tricks so that we can treat Tmin+Tmax as the approximation of 2Tavg. This is not something straightforward but scientists seem to handle it well. These are part of the adjustments you are so keen to work yourself into a rage about.
The data in the table is for single measurements. Each and every single measurement.
Repeatability conditions are required to use statistically determined Type A measurement uncertainty. That fundamentally means measuring THE SAME THING multiple times. The only way to do that is the crude way that CRN does it, with three thermometers, which doesn’t really give a good, valid distribution.
If you do not have multiple observations of the same measurand, you must use a Type B measurement uncertainty for the repeatable uncertainty.
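For what it’s worth, the Type A recipe for a triple-sensor setup like CRN’s is short: the sample standard deviation of simultaneous readings, divided by √n, gives the standard uncertainty of the mean. A sketch with purely hypothetical readings:

```python
import math
import statistics

# hypothetical simultaneous readings from three co-located sensors, °C
readings = [21.3, 21.5, 21.4]

mean = statistics.mean(readings)
s = statistics.stdev(readings)           # Type A: sample standard deviation (n-1)
u_mean = s / math.sqrt(len(readings))    # standard uncertainty of the mean

print(round(mean, 2), round(s, 2), round(u_mean, 3))  # 21.4 0.1 0.058
```

With only three sensors the estimate of s is itself very rough, which is consistent with the comment above that this “doesn’t really give a good, valid distribution.”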
You can’t even admit to what the uncertainty of a single reading is even though it is right in front of you.
Yep, and this is the catch. So, WHAT IS THE REPEATABILITY UNCERTAINTY FOR Tmax FOR ASOS? Please entertain me. Remember, this was your question. I pretend I can’t calculate it. It’s time for you to shine.
Yeah, you old moron 😉 really? Don’t you think that the people who calibrate the equipment may have some kind of a controlled environment for that? When I worked for a national authority for evaluating measurement equipment, we didn’t have problems maintaining a stable environment for this kind of repeated testing.
But what is your problem with just using a calibrated instrument? Why do you want to calibrate it? Furthermore, even after calibration you can have much fun computing the distribution for Tmax and you don’t even need three thermometers to stick into your bottom.
This is getting ridiculous. 0.9F as per calibration.
And now I would like to see your calculation for Tmax, Tmin, and Tmax+Tmin for ASOS.
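Taking the quoted 0.9 °F at face value, the textbook propagation for Tmean = (Tmax + Tmin)/2 is a one-liner. A sketch; the assumption that the two readings are independent and share the same per-reading figure is mine:

```python
import math

u_tmax = 0.9   # °F, single-reading standard uncertainty, as quoted above
u_tmin = 0.9   # °F, assumed: same instrument, same figure

# independent uncertainties add in quadrature; the /2 scales the result
u_sum = math.sqrt(u_tmax**2 + u_tmin**2)   # uncertainty of Tmax + Tmin
u_tmean = u_sum / 2.0                      # uncertainty of (Tmax + Tmin)/2

print(round(u_sum, 2), round(u_tmean, 2))  # 1.27 0.64
```

This only covers the per-reading instrument component; as argued elsewhere in the thread, the max/min aggregation itself adds further complications that this simple quadrature does not capture.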
No, because the projection is using a fit and not a calculation that represents reality.
FYI fit is a calculation based on a quantity that represents reality.
“represents reality” is NOT reality. I can throw darts at a map and end up with something that “represents reality”, but again it isn’t reality.
You are starting to sound like a Heaven’s Gate proponent.
It never pays to play with words. Modern physics is itself a mathematical model (before you shxt your pants: this is not “model” as in modelling) that represents reality. Physicists know well that the “underlying reality” may be something very alien. The model is an extremely good representation. So if something represents reality well, then we should be happy.
No it is not a good representation except in your tiny mind. Even your expert disagrees with the current set of GCM outputs.
Per Gavin Schmidt
Yep, for one particular family of models that were corrected.
Show us the IPCC correction!
I don’t really know, and I don’t care. They know the problem, and they can treat it; even the article you linked says that. This is a technical problem, not a fundamental one, if you can distinguish the two (I doubt it; I expect the next round of bs from you…). I don’t have any reason to assume they don’t publish the caveats they already know about. Science is like that, and only in the deniers’ distorted world do the tens of thousands of scientists who directly or indirectly take part in the IPCC reports conspire like a secret society.
IT IS A PROBLEM. If the physics were correct, there wouldn’t be a problem, right?
The models have been wrong for their entire existence. Not one “projection” has been correct when compared to measured temperatures.
Yep. Technical, and it’s already been solved.
Is this a serious question? We can’t even solve the three-body problem exactly; we can only approximate it. The question is how accurate the approximation is. Apparently we have just taken a big step in modeling.
This even contradicts the article you referenced. Could you please refrain from being self-contradictory?
This is a waste of time. You’re not capable of understanding the difference. It’s kinda hilarious that you think Willis’ melt-pool example in ModelE results in a projectable quantity. It couldn’t be more obvious.
Really? Jim Gorman came up with this exact bs, and I just pointed out to him that modern physics is just a (mathematical) representation of reality. “Physics based” just means using the appropriate mathematical model. Model here is not the “model” as in “modelling”, before you start to cross yourself. FYI it may mean approximation. You’d better know that in practice we almost always use classical methods even though we know classical physics was falsified 130 years ago by Michelson-Morley, the perihelion precession of Mercury, etc.
Melt pools and the associated phenomena (like seeping into lower layers etc.) are extremely complicated, and we started to tackle them theoretically only in the last 10 years or less. Even the fact that meltwater can get to the bedrock and act as a lubricant for glaciers was discovered ca. 15 years ago. Before that we only had approximations, and Willis, god bless his ignorance, very likely rediscovered one. FYI, even if we knew the exact theoretical background, it might be so mathematically complicated that a model which is not particularly interested in this aspect (and is an approximation anyway) could use a statistical treatment for it without significantly changing the results.
You just move from one argumentative fallacy to another. “Scientists know what they are doing”, really? How do you know that? Have you reviewed all the programs and all the parameters of all the GCMs? If you have, why do none of them agree? You should know that!
As to the argumentative fallacy, you keep using an Appeal to (anonymous) Authority. I really question the completeness of your education. You appear never to have had a debate or logic course. Those teach you to use facts that are supportable, not some generalized “somebody said something I believe”.
Willis gave you a concrete, researched example and you respond with “I know better”. Sad.
That’s how it works. That’s why they are scientists. And don’t get me wrong. I don’t mean they are infallible. Of course, they are not. But the whole mechanism is built in a way that in the long run they correct themselves.
No, and I don’t have to. This is the point. Science has the mechanisms for these. Scientists review other scientists. And they are the ones who actually understand others’ work. That’s why I don’t think Willis is able to do reviews. He may be an enthusiastic amateur but that’s all.
Because they are the authority. They are the experts. FYI this “always questioning authority” doesn’t mean what you think it means. Experts may have the ability to question other experts, and profound paradigm changes are extremely rare. So when outsiders like Willis question things, the sure bet is that that’s just ridiculous. And please spare me the “But Einstein, but Wegener”. When Einstein worked, classical physics was clearly in a crisis. Everyone knew there would be a great paradigm shift. There were a lot of observations that they were unable to explain in the Newtonian model, like radioactivity, the Michelson-Morley experiment, the perihelion precession of Mercury, etc.
MSc EE.
The first thing I learnt from experience was that debating science guys was not easy, and required much more than Willis’ “research”. You would have to become an expert first.
I still question your overall education. You should know that you argue with facts and not appeals to authority. If you had ever debated in high school or college you would know “Because they are the authority. They are the experts.” just doesn’t cut it.
Read the book by Mototaka Nakamura, who is an expert in GCMs.
Confessions of a Climate Scientist: The Global Warming Hypothesis Is an Unproven Hypothesis (Kindle edition).
When I mentioned him before, his criticisms were summarily dismissed. He is an expert in GCMs and has many of the same criticisms as Willis and others. You may not want to believe his writings, but that simply means you have a closed mind and believe in the consensus rather than true science.
Are you okay with me calling you a moron?
It means you can’t just dismiss it out of hand. You have to have an extremely strong counter argument, and this is clearly missing here.
I’ve read excerpts, and it was celebrated in denialist circles when it came out in 2019. Real experts quickly pointed out the errors in his “confession”. Furthermore, even if his criticism has some validity, he is still an extremely rare dissenting voice with literally 1000s of scientists on the other side. And this does have a weight here (just as you thought Nakamura being an expert would have).
See what I mean! You dismiss an expert out of hand and the only evidence is your assertion that “real experts” said there were errors!
No facts, no names. Appeal to (anonymous) Authority.
Fail!
Well, not my assertion.
You just use “Real experts” as an anonymous appeal to Authority. That IS your assertion!
Show us what these anonymous real experts wrote and published anywhere. Blogs, peer review, tweets, etc.
Yep. And this is what happened. You will notice when there’s a paradigm shift ’cos the Authority will be visibly divided, and/or there will be observational phenomena that they will openly state to be w/o explanation. The late 19th and early 20th century was like that. Now we don’t have that. Some obscure people writing in obscure blogs (with industry funding), who are demonstrably wrong, are not the people who will change the paradigm.
You should start at realclimate.org They are not anonymous.
Is that your church where the Word is dispensed to the believers so they can then go proselytize?
That’s not how science works.
You said:
“They can be trusted”. How many cult members have said the same thing? I’m reminded of Jim Jones. He was trusted enough to convince people to suicide.
If you have not done the research such that you can argue with your own facts and theories, then you belong to a cult because you believe what you are told.
Except this is not a cult. Anytime I check a scientific result it turns out to be good. And I’m not alone. I have yet to see a long-standing scientific result that turns out to be flat out wrong. There may be uncovered edge cases, or a whole new class of cases that were unknown at the time the result was published, and these invalidate (or rather refine) the result. It means scientists by and large can be trusted.
Even your experts don’t agree.
Yep, there are details. This is about levels of confidence. We are very confident in the fundamental laws (like the law of conservation of energy, an observational law). We are less confident in this or that. There are results that are kinda very likely but we are not entirely sure, and there’s debate about them in science. But this is completely normal. Though when they say it’s settled, you’d better take that seriously.
nyolci, replying to Jim Gorman, June 6, 2024 2:39 am:
nyolci, two of the most important “constants” in climate science are the amount of increased forcing from a doubling of CO2, and the climate sensitivity.
Regarding forcing from doubling, the models give answers ranging from 2.6 W/m2 per 2xCO2 to 4 W/m2 per 2xCO2.
Regarding sensitivity, the answers range from 0.25°C/2xCO2 to 8°C/2xCO2.
And in both cases, over the last half century of study, the uncertainty of both of those most important “constants” has only gotten wider. Name me one other science where that is true.
And they say it’s settled?
You are free to “take that seriously”.
And I’m free to point and laugh at anyone who thinks those are signs of a “settled science”.
Regards,
w.
No. The “constants” I am talking about are values that (i) science can’t explain, or can explain only through extremely complicated calculations requiring disproportionate effort, but (ii) are easily observable. The elasticity modulus is a good example from classical physics, or the “cosmological constant” from relativity. As far as I know we could actually calculate the elasticity modulus, but it would be extremely complicated; in practical applications this is almost never done, and they just measure it and use the classical formula.
Your “constants” are actually what we want to know and we can’t observe. The physics behind it is known but the mathematics is complicated. That’s why we are trying to get them with complicated calculations called models.
Actually, the calculated range is like 1.4-4.5 or so. FYI we already have observational evidence that it is at least 1.1, the value of the current anomaly.
Orbital mechanics. On its proper time scale (millions of years) we have a hard time predicting orbits even inside the Solar System (disregarding effects from outside). Uncertainties increase with time, in some cases dramatically. But I’m sure you won’t question whether it is a settled science. That’s just an example.
Yep. It’s settled that the current warming is man made, and the models are performing well. Remember, uncertainty means we have limits to our knowledge. It doesn’t mean total ignorance.
from my interaction with nyolci:
OK, that’s it for me. I’m outta this conversation. I can’t deal with that level of childlike credulity …
w.
Childlike and/or cult like! I BELIEVE!
No. You claimed that it was unphysical. It doesn’t mean it really was unphysical.
I was sure about that.
Nice work, Willis.
Any idea what this means: “This is to prevent overweighting the models with lots of runs.” How do lots of runs “overweight” a model? Averaging model results never made sense to me. Now it makes even less sense.
If a model has lots of runs, it gets overweighted in the average of the models compared to a model with only one run.
Sorry for the lack of clarity.
w.
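The two-stage averaging described above is easy to illustrate in code. With made-up per-run values, pooling all runs lets the many-run model dominate, while averaging each model first gives every model equal weight:

```python
# hypothetical per-run values for three models (one model has many runs)
runs = {
    "ModelA": [2.0, 2.2, 1.8, 2.1, 1.9, 2.0, 2.0, 2.0, 2.0, 2.0],  # 10 runs
    "ModelB": [4.0],
    "ModelC": [1.0],
}

mean = lambda xs: sum(xs) / len(xs)

pooled = mean([v for rs in runs.values() for v in rs])    # run-weighted average
per_model = mean([mean(rs) for rs in runs.values()])      # model-weighted average

print(round(pooled, 3), round(per_model, 3))  # 2.083 2.333
```

In the pooled average, ModelA carries 10 of the 12 run-weights and drags the result toward its own mean; averaging model means first restores a one-model-one-vote weighting.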
That makes sense. Averaging the results of many models does not. It assumes that they are all pretty close to correct, but some predict too much warming and some predict not enough warming. If the bias is to make models that run hot, and I say it is, the average will also run hot. It’s like saying, “we have considered the opinions of all liberal pundits and we conclude that free markets don’t work.”
Without lots and lots of averaging, climate science would be totally bankrupt.
Most of the landmass on Earth is in the northern hemisphere, and a higher % of the southern hemisphere is ocean. This results in the northern hemisphere being hotter than the southern hemisphere, because the northern hemisphere has more land (and probably less ice) which absorbs more heat from the sun. And this is why Willis Eschenbach’s graph shows there is more rain in the northern hemisphere: because there is more water vapor being heat-pumped from the oceans to the skies there, to eventually fall back down as rain.
What I don’t understand is why this increased rainfall doesn’t also occur globally due to the general warming of the entire earth over the last 150 years. The total global rainfall should have also been going up for the last 150 years due to this warming.
And what no one understands is why the dramatically increasing atmospheric CO2 of the entire globe, which most assuredly distributes smoothly throughout the entire atmosphere covering both hemispheres of the earth on time scales of a month or two, and which allegedly warms the Earth, nevertheless has its “greenhouse effect” on only half the Earth: the northern hemisphere, which (as Mr. Greene has pointed out) has been warming over the last 150 years at a much faster rate than the southern hemisphere.
David Solan
A significant reduction of SO2 emissions and other air pollution after 1980 mainly affected the Northern Hemisphere (causing warming).
The atmosphere is much more polluted in the northern hemisphere than in the southern hemisphere. This is because, while the southern hemisphere mainly consists of oceans, the northern hemisphere includes the large continents of Asia, Europe and North America and their industry and traffic.
In addition, while the Arctic is warming fast in the warmer months, Antarctica is not warming at all due to the permanent temperature inversion over most of the continent that causes a negative greenhouse effect. A paradoxical negative greenhouse effect has been found over the Antarctic Plateau.
The SH has a higher percentage of oceans than the NH. Oceans warm slower than land due to higher thermal inertia.
So you are claiming that we have reduced particulate pollution even while doubling population (and manufacturing and consumption) ? I would like to see that data.
“Most of the landmass on Earth is in the northern hemisphere and a higher % of the southern hemisphere is ocean. This results in the northern hemisphere being hotter than the southern hemisphere because the northern hemisphere has more land (and probably less ice) which absorbs more heat from the sun.”
Land does get hotter than water in the Sun, but not because it absorbs more than water.
In fact, it absorbs less.
Oceans have lower albedo (0.06-0.1) than land (typical 0.2-0.3 range).
Snow and ice have albedo 0.7-0.9, and can cover both land and water.
A lot of NH landmass is snow covered for a significant part of the year, but very little in SH.
Water has a much larger heat capacity than the solids making up land, so it heats up less for the same amount of heat absorbed.
Sand: 830 J/(kg·°C)
Concrete: 850 J/(kg·°C)
Water: 4181 J/(kg·°C)
Moreover, heat absorbed by water is transferred into evaporation as latent heat.
This is why water never gets too hot, while absorbing up to 94% of incident sunlight.
Landmass baked in the Sun gets hotter unless it is covered by snow.
But landmass also gets really cold at night and during winter.
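The heat-capacity numbers above translate directly into temperature response via Q = m·c·ΔT: for the same absorbed heat per kilogram, water warms roughly a fifth as much as sand or concrete. A quick check (the value of Q is an arbitrary illustrative figure):

```python
# temperature rise for the same absorbed heat per kilogram: dT = Q / c
Q = 100_000.0  # J per kg, arbitrary illustrative amount of absorbed heat
c = {"sand": 830.0, "concrete": 850.0, "water": 4181.0}  # J/(kg·°C)

dT = {name: Q / ci for name, ci in c.items()}
print({name: round(v, 1) for name, v in dT.items()})
# {'sand': 120.5, 'concrete': 117.6, 'water': 23.9}
```

This per-kilogram comparison ignores density, mixing depth, and evaporation, all of which further damp ocean surface warming relative to land.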
So does NH really absorb more or less than SH? While NH experiences larger temperature swings between summer and winter, is NH on average hotter than SH?
Ellipticity of the Earth orbit could also be part of the discussion.
Earth is the closest to the Sun in January, moderating NH temp swings a bit.
I would not jump to any conclusion just based on a daytime experience in a desert or in the middle of a big city…
OK, so are you aware that the average temperature of the northern hemisphere now is about 2°C above that of the southern hemisphere? As a SEPARATE issue, the warming differential between the two hemispheres over the last 150 years was much greater in the northern hemisphere. You are saying, I guess, that these 2 differences are and were not due to the land of the northern hemisphere absorbing more heat than the water in the southern hemisphere, which is clearly consistent with the conclusion I “jumped” to: that more water evaporation in the southern hemisphere versus less evaporation from the land in the northern hemisphere was partly responsible. The differing heat capacities of water and land are balanced out by the greater density of land versus water. And they can only explain differing RATES of warming, not differing temperatures.

And you are certainly not saying that water, with (allegedly) lower albedo than land, gets cooler by absorbing more heat! Then what is your explanation for a warmer and more rapidly warming northern hemisphere with more land surface? Does the synchronization of perihelion with the northern hemisphere winter mean greater heating of the northern hemisphere all year long, remembering that this same synchronization causes less heating for the northern hemisphere in its summertime? What about the extra time the northern hemisphere spends being heated in its summer months (as the Earth slows its orbital speed then)? And if you say one word about CO2 (uniform throughout the Earth) and its mysterious lurches from the greenhouse effect to the “anti-greenhouse” effect to explain these differences, I’ll puke.
David Solan
Oh well as a city slicker it was nice while it lasted-
80 per cent of Australia to be soaked (msn.com)
Now if Gaia would just get her/they/them act together and rain steadily between my bedtime and uptime, the farmers would be happy too. What a capricious, fickle flibbertigibbet, and the Gummint orta do sumpink!
Hmm. Shouldn’t the average of Figure 2 roughly equal Figure 1? At the least, the dip at the end of Figure 1 doesn’t look consistent with Figure 2.
Does the dataset record rainfall over land, or rainfall over the entire surface area?
Does the dataset record only rain, or all precipitation?