Guest Post by Willis Eschenbach
My mind wanders in curious back byways. I got to thinking about the “global” part of “global warming”. Over the past ~ quarter century, according to the CERES satellite data, here’s where the world has warmed and where it has cooled.

Figure 1. Areas of warming and cooling as shown by the CERES satellite dataset. White contours outline the areas that are cooling.
Now, before you get all passionate about how “nobody predicted that global warming would actually be global”, that’s true …
… and it’s also true that nobody predicted that India, South Africa, most of Northern Africa and South America, the North Atlantic, and the Southern Ocean would cool over that period, in some cases by a degree or more per decade.
Nor can any scientist or computer model explain WHY those specific areas and not other areas are cooling …
So I thought I’d take a look at what the models are saying about that same time period, March 2000 to February 2024. I used the models from the Coupled Model Intercomparison Project Phase 6 (CMIP6) in order to give the models every possible advantage.
Why is that an advantage? Well, because the historical data that they are trained to replicate runs from 1850 to 2014. It’s only in 2015 and later that they are actually predicting the future. Up until 2014, they have all the available observational data to train their models against.
So I went and got the surface air temperature outputs for 39 different models from the marvelous KNMI data site. Here are the first 12 of them, the rest are pretty much of a muchness.



Figure 2. Areas of warming and cooling as shown by twelve of the CMIP6 climate models. As in Figure 1, white contours outline the areas that are cooling. Note that, through an error on my part, I did not change the labels on the charts from my default units, W/m2, to the proper units, °C/decade. Mea maxima culpa, but the creation of the graphics was quite laborious, so I’m not going to redo them.
As you can see, the results are what you’d call “all over the map”. The global trends range from ~ 0.10 °C per decade up to ~ 0.40 °C per decade. And every one of them shows very different areas and amounts of heating and cooling.
Now, the prevailing theory is that you get a more accurate answer by averaging what they call an “ensemble” of models. I’ve never believed that one bit; that seems like crazy talk. Average a bunch of bad models that disagree with each other to get a valid result?
Really?
But that’s what the climate megabrains do, so I averaged the 39 models to see what that looks like. Here’s that result.

Figure 3. Areas of warming as shown by the average of 39 of the CMIP6 climate models.
Now you may be wondering, where are the white contours outlining the areas that are cooling?
The answer is … there are none.
Because the areas of cooling predicted by each model are somewhat randomly scattered around the planet and the overall projection is one of warming, when you average them all, you end up with no average cooling anywhere … et voilà, you have “global” warming.
Funny how that works out.
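For those who like to see the effect in miniature, here’s a minimal toy sketch (Python, purely illustrative, and nothing to do with the actual CERES or CMIP6 processing). Give every toy “model” the same modest average warming plus its own randomly placed cool patches, and the averaged map warms everywhere even though every individual map has cooling areas.

import numpy as np  # toy illustration only, not the real data processing

rng = np.random.default_rng(42)
n_models, ny, nx = 39, 30, 60                        # 39 toy "models" on a coarse grid

# each toy model: the same 0.2 C/decade average warming plus its own random warm/cool patches
maps = 0.2 + rng.normal(0.0, 0.3, (n_models, ny, nx))

frac_cool_each = (maps < 0).mean(axis=(1, 2))        # cooling fraction in each individual map
frac_cool_mean = (maps.mean(axis=0) < 0).mean()      # cooling fraction in the averaged map

print(f"typical cooling area per toy model: {frac_cool_each.mean():.0%}")   # ~25%
print(f"cooling area in the model average:  {frac_cool_mean:.0%}")          # ~0%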
With warmest wishes for everyone, I remain,
Yr. obt. svt.,
w.
PS: As always, I ask that when you comment, you quote the exact words you are referring to. It avoids endless misunderstandings.
Wait a minute. This article published by WUWT yesterday says Eurasia’s been cooling significantly for the last 20-plus years. That is not shown in your story.
https://mail.google.com/mail/u/0/#inbox/WhctKLbfGnzwLlHrvqXxnCjvRrwszvvFnZZWTDjDsPtKpBLsDMszWSnfmFjqJgGkzshqFRB
Don’t know what to tell you. All I can offer are the facts that I’ve found.
w.
Which data set, what start date, and what end date seem to decide what the headline should say. Think of it as an ensemble of headlines. When you factor in that the oceans (plural) are boiling, the average of all headlines has to be hot.
If human culture is like a social averaging device then one might steer human culture by peppering it with sufficiently nonsensical opinions for a sufficiently long time?
Bingo!!
Why the link to your gmail account?
The paper on which that WUWT article was based does not say that; it said that Central Eurasia had been experiencing increased snow in the autumn.
This is not much of a problem; rather, it is called weather. Randomly, something warms and something cools, even on a global scale. That is why temperature records do not show a linear shape, and why we distinguish between weather and climate.
HOWEVER we have this:
A long-term “global warming” trend that is more than just one-sided. Given that anthropogenic GHGs are well enough mixed in the atmosphere that their forcing is quite uniform, this is hard to grasp. Sure, the usual argument would be over the south being more “oceanic”, but the NH ocean is also warming faster than the SH ocean.
The first problem is that, with the NH warming that much faster, some of that heat will inevitably “spill over” into the SH. So whatever the forcing is, it will have to be even more concentrated in the NH than the temperature record alone would suggest.
The second and even bigger problem is indicated by those black arrows. Unlike anthropogenic GHGs, anthropogenic aerosols are NOT a global forcing; they are concentrated in mid-NH latitudes. To this day, and certainly over the 20-year period 2002 to 2021, they should have dominated GHG-related forcing there. And since that is a negative forcing, by the metrics of “climate science” we should not have had warming in the northern third of the planet at all.
Empirical data and theory contradict each other.
https://greenhousedefect.com/contrails/aerosols-in-climate-science
As Dr. Ole Humlum said some years ago (regarding Gistemp): “It should however be noted, that a temperature record which keeps on changing the past hardly can qualify as being correct.”
Do you think it is more correct to leave biases/mistakes/errors in the record?
Do you think older station records should not be digitized and uploaded and made part of the repository of observations?
After thinking about it… yes.
Fascinating. I’ve asked the question before and I’ve got the same answer.
It’s weird because my and most other people’s worldview is that it is unethical at best and possibly even fraudulent to knowingly leave biases, mistakes, and errors in the works they produce.
It’s weird because my and most other people’s worldview is that it is unethical at best and possibly even fraudulent to knowingly omit data relevant to the works they produce.
The problem is that determination of error requires that the true value of a measured quantity be known, which is not possible.
Error is not uncertainty.
And who are these “most other people”?
“The problem is that determination of error requires that the true value of a measured quantity be known, which is not possible.”
Somehow that always escapes anyone trying to defend climate science temperature data. How do they know they aren’t adding more error than the raw data has?
Trying to substitute a guess at an “average” value just hides natural variability in the actual measurand, and what benefit does that actually provide?
When those biases, mistakes, and errors can’t be directly quantified then substituting guesses for them is even more fraudulent, especially when they are not identified as guesses.
Define the difference in the amount of time it would take me to haul a bucket full of sea water 20′ up the side of a ship versus my wife. The amount of time directly impacts the temperature that is read on the deck of the ship. Is an “average” amount of time guessed at? Is that “average” amount of time modulated by the weather conditions at the time (e.g. is the rope slick from rain or ice) or is that just another guess? How about the calibration drift of the thermometer used on the ship 100 years ago? How do you know what that was or is it just another guess?
Every time you GUESS at an average value you are adding measurement uncertainty to the result. If that measurement uncertainty isn’t added into your uncertainty budget and accounted for then you might be adding in more uncertainty than the original, raw data has! And if you don’t make that clear to subsequent users of the data then that just compounds the unreliability of the data.
Define bias. Is it measurement bias, that is, systematic uncertainty, or is it that the trends over periods of time do not match properly as microclimate and device changes modify the values being measured?
Obvious errors/mistakes are changeable. Changing measurements so trends are continuous is not acceptable.
First charts were C/decade, second charts were W/m2. Rereading for explanation.
Nope, not explained. The thesis that “averaging what they call an ‘ensemble’ of models … seems like crazy talk” is well supported. Even one extra-hot version in the seemingly large ensemble makes the whole ensemble produce hot outputs.
Thanks for catching the error, Kevin. My bad. I didn’t change the labels when I made the charts. I’ve added an update to the caption pointing out the error.
This is why I love writing for the web. My errors rarely last long.
Best to you,
w.
What I find interesting is that, of the 12 models you chose, 9 show either a vast majority or complete coverage of the Arctic as cooling.
But the summation of all of them shows strong warming!
Somehow, those few (25%) “hot Arctic” models are so hot they offset the 75% showing some cooling.
And as we all know, the “hottest year ever” is driven by the Arctic models (and sparse data), where no one lives, so there’s precious little means to counter the narrative.
One would expect the Arctic to have the strongest warming, as the low temperatures make it more susceptible to an increase in sky temperature.
And maybe someone here should qualify the Arctic “warming”, whether in summer or winter, and the consequences. I.e., the actual result of the supposed warming, given the fact that it usually is rather on the chilly side there, I would say. 😀
Or maybe a lot of the temps in the Arctic are “guesstimated” because there’s no one there to call BS. It’s hard to fudge numbers where there are independent observations. Arctic? No one there, so claim it’s the worst.
Nine of the 12 records show slight cooling; the other 3 show massive heating. On a distribution chart, the 3 hot ones would be outliers and should be tossed.
That is why when asked, no one ever produces non-UHI stations in the high northern latitudes that show very high temperature growth, i.e., a hockey stick. Between homogenization over 1500 km and UHI temperatures, the models are presented with created information that is invalid.
Clive Best provides this animation of recent monthly temperature anomalies, which demonstrates how most variability in anomalies occurs over the northern continents.
https://rclutz.com/2023/08/19/arctic-amplication-not-what-you-think/
Well, there is no “global climate” as such, only constructs, so it makes sense that there is no “global warming”.
40 years later and no one is able to grow watermelons or peaches on the coast of California.
The fog that covers the coast almost every night is not getting any warmer. As Mark Twain supposedly noted in the 19th century, “the coldest winter I ever spent was a summer in San Francisco”.
“Over the past ~ quarter century, according to the CERES satellite data, here’s where the world has warmed and where it has cooled.”
I haven’t looked into CERES data, but I get a rather different picture using UAH.
It’s possible I’ve messed up the code here, so if anyone wants to double check it would be a help.
I get something quite similar. The difference is that the CERES data is down at the surface. The UAH data is in the lower troposphere, which is constantly circling around the planet. Wind speed up at 2 km altitude is ~ 7 m/s, which means it will move about 375 miles per day. And like the surface trade winds, it blows mostly from east to west.
This will “smear out” any differences in the east-west direction, but have less effect in the N/S direction.
Anyhow, that’s my explanation. YMMV.
w.
“The difference is that the CERES data is down at the surface. The UAH data is in the lower troposphere, which is constantly circling around the planet.”
Maybe. But surface data shows similar results to UAH. Neither show rapid cooling over South America or India.
Don’t the prevailing winds run in different directions depending on which hemisphere?
Or is it the latitude?
As someone wittier than I once quipped, “The average of a Messerschmitt is still a Messerschmitt.”
My attic fan says it’s 114F right now and my weather station says the indoor temp is 73F, so my house’s average temp is 94F. But it is actually quite comfortable and not warm at all. I suspect that despite all the arctic’s heat, it is actually quite cold there.
Intensive properties shouldn’t be averaged in any case.
Almost all station temperature reports today are averaged. So if you are truly convicted in regard to this topic I assume you aren’t using any temperature reports whatsoever to plan your day?
For example, here in the US our temperature reports are comprised of 1-minute and 5-minute averages of 6 and 30 instantaneous temperature readings respectively. We never see the instantaneous measurements. We only see the average value.
[ASOS User Guide]
You probably meant “convinced” rather than “convicted”. I hope so, anyway.
I certainly don’t use any temperature reports to plan my day. If it looks like rain, I might take an umbrella just in case. If it was cold yesterday, I might add a layer or two.
Severe weather warnings (generally expected soonish) are a different matter. Satellite pix and weather radars can provide information you can’t get just looking out the window, as any smart 12 year old will tell you.
Should we average the temperature in Alaska and Florida?
I’m having a hard time understanding the purpose. Can you explain more about why you want to do this?
That, by itself, should invalidate splicing those temperatures to LIG temperatures. Is it any wonder that temps prior to 1980 need to be “adjusted” to match later temperature measurements? It isn’t because of instrument measurement bias (systematic uncertainty) but because of the perceived bias of not matching between measurement devices. Add that to UHI and you’ve got a prescription for too-high temperatures.
I always look at the temperature where I’m at and where I’m going, as around here, a short distance can make a large difference in temperature.
Right. Those are average temperatures you are looking at. As I said, almost all temperature reports today are actually averages. If you are really sure that “Intensive properties shouldn’t be averaged in any case” then why are you looking at those temperatures?
No, they are TEMPERATURES, not “average” temperatures. Do YOU dress for average temperature? No one I know does that.
“ I assume you aren’t using any temperature reports whatsoever to plan your day?”
I use max temp and min temp. Because of autocorrelation the temp tomorrow will be about the same as today. You can’t plan much better than that unless you watch where cold fronts are moving through.
No one I know uses AVERAGE daily temp today to predict AVERAGE temperature tomorrow. No one dresses for AVERAGE temp. They dress for max and min temps.
Do *YOU* dress for AVERAGE temp?
BRILLIANT !
Good stuff as always Willis.
Anyone in the USA notice that PBS is still showing CC programs, still talking about CC and the increasing threat … you know … all the standard boilerplate that was debunked years ago. They never got the memo; I guess that is how they scare more funding out of their wealthy donors. If all you have is a hammer, everything looks like a nail. When will they realize the climate scam is deflating?
J Boles, ditto for CNN. I just watched Christiane Amanpour interview John Kerry. They both claimed that the Trump “Big Beautiful Bill” defunding CC will end the world. Both of them together aren’t smart enough to pour piss out of a boot (where I grew up in logging country in Oregon, that’s the dumb standard).
If John Kerry poured piss into a boot, Ms. Amanpour would probably drink it and look for the other boot.
pour piss out of a boot, with the instructions written on the heel!
fixed it for you.
They were hired to do those things, that’s what they said they wanted to do, they went to school to do them, they joined an organization that does them … both capitalism and evolution say they will be replaced, not changed, if the prevailing situation continues too long. PBS survived several non-‘D’ terms before, including Reagan’s. The hiring manager probably remembers fetching coffee on the Ken Burns Civil War set.
As much as I would like climate alarmism to fizzle, there is no chance that will ever happen. The people promoting it are leftists. The popularity of climate alarmism may be waning slightly because the fruit of governments’ “green” schemes is ripening and it smells rotten, but inconvenient facts have never impeded leftists in their policy pursuits. To paraphrase Ernest Benn:
“(Leftist) politics is the art of looking for trouble, finding it whether it exists or not, diagnosing it incorrectly, and applying the wrong remedies.”
You can’t stay in power unless you can dupe the public into believing that your pet problem is a Big Deal that affects them, and that you are the right person to fix it. Leftists constantly flog the public with how big a deal their pet problems are. They just can’t shut up.
The other problem is that so many people have been told that CAGW is not real – and they defended CAGW and laughed at the “deniers”. They can’t suddenly say “Oh, my bad. I was drinking the Kool-Aid” because they would look as scientifically stupid and naive as they actually are.
Wisdom of crowds is a real phenomenon, but does it apply to GCMs? I myself think there is much too much bias from tuning for expected and reasonable results, and from their use of common code.
I think you have to throw out the top and bottom 10% to get an average. No, say some, that’s not science. Science seems to have gotten lazy in assuming the normal distribution for situations that do not fit it.
TTTM:
Agreed.
“Wisdom of crowds” commonly needs diversity of opinions and independence – neither of which GCMs & modelers have. GCMs use similar basic model structures and databases, and are all tuned to the 20th Century.
Plus, according to Dick Lindzen, if the output is inconsistent with the consensus the modelers will get a phone call suggesting they re-run their model. [Since the 2009 Climategate no one is leaving a paper trail by emailing.]
However, the “madness of crowds” can be seen during herd behavior, social contagion, hysteria to perceived threats and groups becoming more radical over time.
IMO the alarmist meme is more “madness”, fueled by B. Lomborg’s “confluence of interests”.
As crazy as it sounds the evidence backs it up.
For example TVCN/IVCN and HCCA have better skill in predicting hurricane track and intensity than any one particular model that went into their averages for most lead times. [1]
For example in the CONUS region the RMSE for 2m temperature, 500 mb geopotential heights, etc. is lower for the GEFS (ensemble) than the GFS (deterministic) for most lead times. [2]
I think it makes sense from a law of propagation of uncertainty perspective since the correlation of errors between models is r < 1. And when the aggregate model is in the form M = Σ[Mi, 1, n]/n this means u(M) < u(Mi) for all Mi.
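As a minimal numerical sketch of that propagation argument (illustrative only; assume n models whose errors each have standard deviation u and share a pairwise error correlation r, so that u(M)^2 = u^2 * (1 + (n-1)r) / n, which is less than u^2 whenever r < 1):

import numpy as np  # illustrative sketch of error propagation for an ensemble mean

rng = np.random.default_rng(0)
n, u, r = 10, 1.0, 0.5                     # assumed values: 10 models, unit error SD, correlation 0.5

# build correlated model errors from a shared component plus independent components
shared = rng.normal(0, np.sqrt(r) * u, 100_000)
errors = shared[:, None] + rng.normal(0, np.sqrt(1 - r) * u, (100_000, n))

print(errors.std(axis=0).mean())               # individual model error spread: ~1.0
print(errors.mean(axis=1).std())               # ensemble-mean error spread: ~0.74
print(np.sqrt((1 + (n - 1) * r) / n) * u)      # analytic value, for comparison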
Ensembling is not a panacea for all prediction efforts though. It comes at a cost. The cost here is that ensembles tend to blur the forecast fields in both the spatial and temporal dimensions. This is why ensembles are favored for long-lead-time forecasting and deterministic models are favored for short-lead-time forecasting. Though this distinction is becoming less obvious today with the newer short-range ensembles, like the soon-to-be-implemented RRFS ensemble and the WoF ensemble models, which are designed to assist meteorologists in near-term forecasting of weather threats using ensembling to improve skill. And we already have the SREF and HREF in operational use.
Thanks, BD. While that is true for short-term weather models, I’ve never seen a scrap of evidence that it is true for climate models.
I think you are conflating averaging a number of runs from very similar models for weather forecasts, with averaging a number of runs from widely and even wildly different climate models for climate forecasts.
A main difference is that while the weather models are clearly closely related to the underlying physics, as proven by their success, the climate models have little to do with physics.
To take just one example, the most important single number in climate modeling is the “climate sensitivity”, and the climate models all assume that the changes in temperature are equal to the climate sensitivity times the change in forcing.
The weather models make no such unsupported assumption. It doesn’t even enter into their calculations.
Not only that, but the climate models and the observational data calculate climate sensitivities that range from 0.5°C per doubling of CO2 to 8°C per 2xCO2 … but despite using wildly different climate sensitivities, the models all do a reasonably good job of emulating historical climates.
Say what? See my post here.
And despite 40 years of investigation involving hundreds of thousands of work-hours and computer-hours, the uncertainty of that figure just keeps increasing, not narrowing. I know of no other field of science where this is true.
And those facts alone mean the climate models have little to do with the underlying physics, and they reinforce the idea that averaging them is meaningless.
Regards,
w.
I did an experiment a while back with CMIP5.
From 1880 to 2020…
The RMSE of all member and BEST monthly global average temperature pairs was 0.24 C.
The RMSE of the ensemble mean and BEST monthly global average temperature pairs was 0.17 C.
This is clearly a case where the ensemble mean is better than the individual members.
ECS is certainly an important forecasting metric.
That doesn’t mean that the GAT isn’t also an important forecasting metric. Some might even argue that it is just as important if not more important than ECS.
One problem I have with indicting ensemble climate models as unphysical or meaningless using ECS is that the ECS hasn’t occurred yet so we cannot observationally compare it to forecasts in a strict sense to get an RMSE like we can for the GAT or other metrics.
Yes and no. GCMs are still producing possible weather over time, and a second run of the same model will produce different weather, so that multiple runs will converge to “average weather”. The same argument applies if it’s different models used as an ensemble.
So the claim really is that average weather is a better fit than a single instance of weather and given the variability of weather, that doesn’t surprise me.
I’ve never heard anyone claim that a higher RMSE is better than a lower RMSE.
It’s the reason for it being better. It’s not more accurate; it’s that average-compared-to-variable is better than variable-compared-to-variable.
If not lower error than what exactly is your definition of “more accurate”?
A good question. I’d want a better projection that isn’t just a better fit for the purposes of the error calculation, but numerically it amounts to the same thing, because how else would you know it’s a better projection?
Maybe what I really want is less emphasis on the hindcast as “evidence” of accuracy, because it clearly isn’t.
[JCGM 200:2012] defines “accuracy” as the closeness of agreement between a measured quantity value and a true quantity value of a measurand. And that a measurement is said to be more accurate when it offers a smaller measurement error. Lower RMSE values necessarily means lower error and vice versa.
And the true value is unknowable!
ROTFLMAO! You think an OLS of a time series somehow describes MEASUREMENT ERROR?
From the GUM: JCGM 200:2012
The GUM and associated documents are about measurements and their uncertainty.
It has nothing to do with the RMSE of a trend line of a time series, other than how the uncertainty of the data points (temperature measurements) affects the trend line.
A GCM doesn’t produce a measurement in the sense of that definition.
Exactly. At best, a GCM produces a time series of calculated values as the iterations proceed.
The only way to determine accuracy is through validation of the model’s output against actual measured data.
True that. But the concept isn’t that different so unless someone has a convincing explanation otherwise I see no reason why “accuracy” can’t be defined in terms of error regardless of whether it is a measurement or prediction.
Are you suggesting that correlation is measurement?
No. I’m suggesting accuracy is the agreement between a prediction and observation and that a prediction is said to be more accurate when it has a lower error.
What is the predictor variable you are using to compute a dependent variable?
Ok, but that’s still not measurement, and so the accuracy definition based on measurement isn’t justifiable “scientifically”. At least not with your JCGM definition.
It follows from prediction not being measurement that the rules around minimising error don’t hold either, without actual justification. And I’m talking strictly now.
Then we’re back to square one…if not lower error then how do you decide which model is more accurate? What specific metric do you suggest using to support Willis’ claim that an ensemble mean being more accurate is crazy?
I said it’s crazy because no attempt is made to determine which model performs the best.
Instead, the models are all taken as equal, although they are obviously not equal. Then they are averaged, and the average is used.
w.
Exactly. And that is the problem. You made no attempt to determine objectively if the ensemble mean really was better or worse. Your hypothesis that it isn’t the best was, as best I can tell, based on incredulity as opposed to evidence.
“if not lower error then how do you decide which model is more accurate?”
YOU CAN’T! That’s the whole issue in a nutshell!
“I’m suggesting accuracy is the agreement between a prediction and observation”
Since that “observation” has its own uncertainty how do you tell the level of agreement if the difference between prediction and the observation is less than the uncertainty of the observation?
If the prediction and the observation both exist inside the uncertainty interval then you can’t really know *anything* about “error”.
Take a Q-tip and poke this in your ear until your brain absorbs it.
To know “error” one must know the true value. To know accuracy, one must know the true value.
JCGM 200-2012
You are dealing with trending in this thread, not the determination of measurement uncertainty. However, the measurement definition of accuracy does apply. A prediction is accurate only if it gives you the correct answer. The conclusion at this time is that GCMs are inaccurate, period!
He likes to quote the GUM, but only when it appears to support his preconceived ideas about metrology.
You need to explain where you get the true values from.
You are confusing “accuracy” with a trend line of best fit. A line of best fit is no guarantee of accuracy. The original purpose for even doing a line of best fit was to validate the linear relationship between the independent variable and the dependent variable. The better the fit, the more linear was the relationship and the better the linear equation predicted the dependent variable.
None of this applies to time series trends. If it did, we would all be millionaires in the stock market.
Remember, each data point has its own uncertainty and that the uncertainty can combine in any fashion. The starting points could be higher and the end points lower. Or, vice versa. There is no way to predict any of the combinations. You end up with an uncertainty interval by testing all combinations of all the data points to see where the starting point of the trend line changes and where the end point changes.
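A rough sketch of that procedure (a Monte Carlo stand-in for literally testing every combination, and treating the per-point uncertainty as a simple random perturbation, which is itself an assumption): perturb each data point within its stated uncertainty, refit the trend each time, and watch how much the fitted slope moves.

import numpy as np  # rough Monte Carlo sketch, not a formal GUM evaluation

rng = np.random.default_rng(1)
years = np.arange(30)
series = 0.02 * years + 0.1 * np.sin(2 * np.pi * years / 8)   # assumed toy record, degrees C
u_point = 0.5                                                  # assumed per-point uncertainty, degrees C

slopes = []
for _ in range(10_000):
    perturbed = series + rng.uniform(-u_point, u_point, size=years.size)
    slopes.append(np.polyfit(years, perturbed, 1)[0])          # slope of the refitted linear trend

print(np.mean(slopes), np.std(slopes))   # central slope, and its spread from point uncertainty alone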
The real question is, why do you think that BEST produces anything except what they aim to produce?
I.e., a rough match to the models.
ECS is a scientific fantasy!
Put more formally, it means that all the models have to produce outputs that regress about the mean. Another way of putting it is that all the variations in the models have to be random and not biased. If there is even one model that doesn’t meet those requirements then one can expect the ensemble average to be wrong. The more models that are not fit for use, the more the calculated mean will drift from the true value. The problem is to be able to know which are unfit for use. Considering that there are a number of complaints about the raw data being changed to fit a mental model of warming, the whole idea of using an ensemble is fraught with problems. If all the individual models meet the criteria of being unbiased, they can potentially produce better results than any of the individual model runs, unless, by coincidence, one model happens to equal the mean. How does one know that to be the case?
The approach probably works better with empirical data than with models, which are subject to invalid and unexamined assumptions.
Give it up. You are talking completely made-up drivel.
BD, you’ve missed the point. Despite hundreds of thousands of dollars, computer hours, and work hours, the uncertainty on the ECS has steadily increased over the last 40 years. To me, that indicates that the theory is wrong, in this case, the theory that
∆T = ECS * ∆F
where T is temperature and F is forcing.
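To put rough numbers on that relation (an illustration only, taking the conventional ~3.7 W/m2 of forcing for a doubling of CO2 as an assumed figure): a model with an ECS of 3°C per doubling implies a sensitivity of 3 / 3.7 ≈ 0.8°C per W/m2, so a 1 W/m2 change in forcing eventually gives about 0.8°C of warming in that model, while a model with an ECS of 6°C per doubling gives roughly twice that for the same forcing.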
Next, in the CMIP5 models, the cutoff date for historical data was 2005. In other words, the models are carefully tuned to match the actual historical data. So when you did your test, you’re testing 125 years of tuned output and 15 years of actual predictions of the future.
Of course, due to the tuning process, the model outputs have roughly equal odds of being above and below the historical temperature. And so, just as we’d expect, the average of the models does better than individual models.
However, we have NO SUCH expectation regarding the predictions of the future. They may all be above or all be below the future reality, or simply all over the map.
There’s another problem. The mean of the output of an ensemble of models will be a straighter line than any of the models. And as a result, the SD of the straight line with respect to the future temperatures will be better than the mean SD of the pairwise models and temperatures.
But that’s a result of mathematical realities, not a measure of the value of the models. It’s as true with random numbers as it is with the models.
So the lower SD does not mean that the average is more accurate. If the models are all above the future temperature we’d call that a bad result, and it is not improved by averaging the models, despite the fact that the SD will be lower.
Best to you,
w.
Yes, very clearly put.
I have never understood why averaging a bunch of model predictions, many from models which are failing, would give better results than picking the forecast of the best model.
Is there any other area where we take the forecast of models known to be failing and average them with the one or two that are forecasting correctly, and claim the result is better?
Would you fly in a plane built entirely from models that are not validated? I would not. Why should I pay good money for “zero carbon” due to forecasts from models that, as Willis points out, are not validated.
AKA, I could use a pencil and draw a straight line from the past and do every bit as good a prediction as averaging a bunch of wild ass models.
It might be a fun challenge if you’re up for it. Use your pencil technique to predict the daily high and low at say KSTL and we’ll compare the RMSE of your pencil prediction to that of the NBM.
First…I’m the one that made the point. Specifically it is that an ensemble average being more accurate than a single model is supported by evidence despite how crazy it may seem.
Second…an increase in uncertainty doesn’t necessarily mean that a prediction is wrong. For example, if I made 3 predictions of the high temperature in KSTL today of 72 ± 1 F, 72 ± 2 F, and 72 ± 3 F then you can see the uncertainty increasing. But if the actual temperature does end up being 72 F then all of my predictions were correct. The uncertainty could increase due to an issue I’ll describe next.
Third…your chart isn’t showing any effect relevant to an average anyway, since it isn’t averaging the available ECS estimates; it is only showing them. If you were to perform a Type A evaluation of the uncertainty of the ECS estimates over time, you would find that the degrees of freedom of those evaluations are low early in the period and high later in the period, as more and more ECS estimates become available. For example, the DoF of 3 for the ECS estimates around 1980 is going to yield an unreliable estimate of the uncertainty compared to the DoF of around 100 between 2010 and 2020.
If it is expected than why call it crazy?
First…I’m talking about RMSE.
Second…The RMSE between the actual temperature and a straight-line model is low when the actual temperature is also a straight line, but higher when it is not.
Again…I’m talking about the RMSE. A model with a low RMSE is absolutely more accurate than one with a high RMSE.
An RMSE of 0.17 C for the ensemble model is better than 0.24 C for the individual models.
Thanks, BD.
I fear you’ve missed my point. This is not an increase in the uncertainty of the results. This is an indication that the models are not even distantly related to physics.
Again you are missing the point. The lack of any reduction in the ECS indicates the models are just tuned nonsense. Look, if they were actually based on physics, they could not all successfully hindcast the past given their wildly differing ECS.
It’s expected only for the hindcasts.
Sorry for my lack of clarity. By SD I meant the SD of the errors, which by definition is the RMSE.
And I say again, it is NOT true that a lower RMSE for the ensemble models means that the prediction is more accurate. It can just as easily mean it’s a straighter line than any individual model output, which will give a lower RMSE. Try it with random data, and you’ll see what I mean.
w.
No it isn’t. If the error between a modeled value and an actual value is always x then the SD is 0, but the RMSE is x. RMSE retains the bias or systematic error in the model whereas SD does not.
It is true. If model A has a lower RMSE than model B then model A is necessarily more accurate.
This isn’t to say that RMSE is the be-all-end-all metric for assessing model skill. There are other metrics. But it is absolutely true that a lower RMSE is more accurate than a higher RMSE. That is the whole point of using RMSE to assess the skill models.
A straighter line for the model doesn’t necessarily mean a lower RMSE.
I did. Consider a set of 100 actual values given by x + sin(x/10 * 2π) + 0.1*random() where x are integers from 1 to 100 and random() produces a value between 0 and 1. Now consider a set of 100 modeled values given by y = x + λ*sin(x/10 * 2π). When λ = 0 the model is a straight line with an RMSE of 0.71. And when λ = 0.5 the model is a curvy line with an RMSE of 0.35. The straight line model yields more error than the curvy line model.
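For anyone who wants to check those numbers, a few lines reproduce them to within the random noise (a sketch assuming numpy and a uniform random() on [0, 1), as described):

import numpy as np  # sketch to reproduce the quoted RMSE values

rng = np.random.default_rng(7)
x = np.arange(1, 101)
actual = x + np.sin(x / 10 * 2 * np.pi) + 0.1 * rng.random(100)   # the "actual" series as described

def rmse(model, data):
    return np.sqrt(np.mean((model - data) ** 2))

straight = x                                      # lambda = 0: the straight-line model
curvy = x + 0.5 * np.sin(x / 10 * 2 * np.pi)      # lambda = 0.5: the curvy model

print(rmse(straight, actual))   # ~0.71
print(rmse(curvy, actual))      # ~0.35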
BD, you say:
My friend, if you consider those “random numbers” I truly don’t know what to say.
Here’s a real test. I generated a 10,100-long string of fractional Gaussian numbers with a high Hurst coefficient. This is a typical shape for a temperature record.
I put them in a 100 x 101 matrix columnwise, and I used the first column as my pseudo-data, with the other 100 columns being pseudo-model outputs.
I then took the pairwise RMSEs of the p-data versus each of the p-model outputs.
I also took the RMSE of the p-data versus the mean of the p-model outputs, which is close to a straight line.
Here is the calculation. Everything after a hashmark is a comment.
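(A minimal sketch of the procedure just described, not the original script; it uses a crude high-persistence AR(1) series as a stand-in for fractional Gaussian noise with a high Hurst coefficient, since the qualitative result does not depend on the exact generator.)

import numpy as np   # sketch of the described calculation; AR(1) stands in for fractional Gaussian noise

rng = np.random.default_rng(2024)

def persistent_series(n, phi=0.9):
    # crude high-persistence stand-in for a high-Hurst series
    out = np.zeros(n)
    for i in range(1, n):
        out[i] = phi * out[i - 1] + rng.normal()
    return out

raw = persistent_series(10_100)               # 10,100-long string of autocorrelated numbers
mat = raw.reshape((101, 100)).T               # a 100 x 101 matrix, filled columnwise

pseudo_data = mat[:, 0]                       # first column: the pseudo-data
pseudo_models = mat[:, 1:]                    # remaining 100 columns: the pseudo-model outputs

def rmse(a, b):
    return np.sqrt(np.mean((a - b) ** 2))

pairwise = [rmse(pseudo_data, pseudo_models[:, j]) for j in range(100)]   # p-data vs each p-model
ensemble = pseudo_models.mean(axis=1)                                     # mean of the p-models, near-flat

print(np.mean(pairwise))                 # average of the pairwise RMSEs
print(rmse(pseudo_data, ensemble))       # RMSE of p-data vs the model mean: noticeably smaller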
As you can see, contrary to your claim, the mean of the random p-model outputs gives a better RMSE than the pairwise action … but it’s purely random data, so it cannot possibly mean that it is actually more accurate.
w.
Or put simply the ensemble appears to give a better result because of how the maths works rather than being more accurate due to any wisdom of crowds phenomenon.
Ding. Ding. Ding. Exactly. It’s just the way the math works out. So despite it seeming crazy it turns out to just work. And it works even in cases when all the data is random with little if any attempt at predictive skill.
I think this is related to the same concept that drives the law of propagation of uncertainty. When the partial derivative ∂y/∂x of the model y is < 1/sqrt(n) then u(y) < u(x). An ensemble mean model is in the form y = Σ[x_i, 1, n] / n so ∂y/∂x = 1/n which is less than 1/sqrt(n) so the uncertainty of the ensemble mean is less than the uncertainty of one specific model. And since uncertainty is supposed to embody the dispersion of possible true values that means lower uncertainty implies lower error.
So if there is any truth to “wisdom of crowds” in this context it is related to inevitability of the math.
Let me be more blunt. The brilliance of Willis’ experiment is that even when the individual models have little to no skill in terms of the Pearson correlation coefficient such that R ~ 0 the RMSE of the average of those models to the actual data is still less than the RMSE of pairwise comparison of the individual models to the actual data. That is mind blowing. Maybe that’s the crazy part; or maybe not…
You just demonstrated that a group of monkeys CAN randomly type the works of Shakespeare. That’s not exactly a good thing to hang your hat upon.
It was Willis’ demonstration so I’m going to let you pick this fight with him alone.
Jim is quite correct; it is what you are claiming when you assert that performing an average can increase information.
This is nonsense that denies the very nature of uncertainty, which is in essence a limit on information.
You are claiming that it is possible to gain information that was never there, just by invoking the average formula in a blind manner.
If you make this extraordinary claim, you need to demonstrate how the measurement uncertainty of the individual elements of whatever you are averaging cancels.
Asserting that the average formula is a “measurement model” doesn’t mow the grass, because it isn’t a measurement model.
By claiming this magic cancelation, you are denying decades of measurement science and engineering.
Exactly. Thanks, Tim.
w.
If you examine what is happening through the lens of uncertainty, the value may have a better RMSE but the uncertainty interval is vastly increased.
An average must also have a variance. Since we are talking about averaging with no repeatability of the same thing, one must either add the individual variances in RSS or use the standard deviation of the input quantities.
That illustrates the trade off between averaging model outputs and how uncertain the mean value is versus how well the mean value may best fit the data points.
For RMSE to be a good predictor, the data points must be assumed to be 100% accurate with no uncertainty at all. Vary the starting point and ending point and RMSE can vary a great deal.
The other issue is that this is a time series. A straight line regression ignores the cyclical variation in time. Name them off. ENSO, ocean currents, the sun, aerosols, seasons, orbital, etc. Unless these somehow by chance always result in a linear trend, then those trends are a joke. We barely have enough temperature data available to visualize just a few of these over long periods. Certainly not enough to make valid predictions.
One more thing, nice work on that example, Willis.
Maybe if you’d do what you tell everyone else to do we could avoid needless strawmen like these. I never said the mean of random p-model outputs does not give a better RMSE than the pairwise action.
What I said is that the mean of a p-model ensemble can and does give a better RMSE than the pairwise action. I gave you several different examples demonstrating this including the CMIP5 prediction of 2m temperature, IVCN prediction of tropical cyclone intensity, TVCN prediction of tropical cyclone tracks, and GEFS prediction of 2m temperature, 500 mb geopotential heights, etc.
I also gave you an example that falsifies the hypothesis that a straight line model will necessarily yield a lower RMSE. That doesn’t mean a straight line model cannot yield a lower RMSE; only that it isn’t always the case. The CMIP5 example I provided above is an example where it is true.
And your experiment proves my point. If your red line really does have a lower RMSE than the pairwise action of the individual blue lines it is absolutely the case that you should use your red line to predict your pseudo-data because…and this is stating the obvious…it yields a lower RMSE. This in no way endorses your experiment since as best I can tell your 100 pseudo-models make no actual attempt to predict your pseudo-data since it’s all random data. It’s a stretch at best to even call these “models”.
It looks to me like you’ve taken a curve and added variability via the random function, and are saying as the comparison curve gets closer to the original curve, it correlates better than a straight line. Well duh.
But also consider that the comparison curve does correlate with the randomized curve, whereas there is no actual expectation that the models are modelling the climate; hence Willis’ example is superior.
Ding. Ding. Ding. Exactly. Straight line models do not necessarily result in lower RMSE values.
Willis’ example is just random numbers that stretch the claim of them being representative of “model” in the first place, but which ironically proves my point nonetheless. That is the average of a p-model ensemble can yield a lower RMSE than the pairwise action of the individual models that went into the ensemble.
And the proof is in the pudding as they say. The CMIP5 average has an RMSE of 0.17 C while the pairwise action of the individual models was 0.24 C. If the goal is to minimize the error of a prediction using CMIP5 the solution is obvious…use the ensemble mean.
Whereas your model uses correlated numbers, which represent the case of the model actually being accurately representative of the climate. Willis showed the general case; you showed the specific case, where the calculation was biased to improve with closer fits.
I was falsifying the hypothesis that straight line models necessarily have lower RMSE. The model I used to do this has nothing to do with the climate.
But the discussion relates to GCMs and the climate. What you’ve proved is that you can construct examples that lower RMSE and what Willis shows (with a better example IMO) is that the general case doesn’t.
First…my example proves that straight line models can raise RMSE.
Second…Willis’ case isn’t general. It was specific to random data.
Third…Willis said the RMSE was “better” when using the average of the models which I take to mean lower since it is, in fact, lower. I can’t imagine any other meaning for “better” in this context.
Fourth…his example demonstrates my point in the most brilliant way I can think of since even in the case when the individual models have little to no skill, at least in terms of R, the average still results in a lower RMSE.
Fifth…the actual CMIP5, TVCN, IVCN, HCCA, GEFS, etc. ensemble means demonstrate this phenomenon in real world scenarios some of which people are using to make life and death decisions… literally.
Again… crazy as it may seem the evidence supports the notion that ensemble means really can improve predictive skill.
So somewhere between the well-correlated and the not-correlated lies the RMSE that corresponds to the GCMs, and due to tuning and hindcasting, it’s biased towards the well-correlated. Or in other words, GCMs are biased towards having the ensemble seem better.
Are they? Or is it your first hunch that it is a consequence of the way the math works out?
Why can’t it be both?
If it is better because of an inevitable consequence of the math then any perceived bias of being better would…ya know…be a consequence of the math.
Remember, Willis showed that this works even for purely random data in which the individual models had little to no predictive skill on their own.
That’s not a factor in favour of claiming GCM ensembles give a more accurate result. In fact the result proves it’s the opposite.
The result tells us that combining results makes them seem more accurate even if they’re demonstrably not.
And my argument here is that random series don’t correlate, so the error should be related to their variance, not reduced by their average. Fundamentally, what does it mean to reduce the error? I think it means nothing. It’s not justifiable.
It’s a case of interpreting when errors are minimised by multiple measurements and when they’re not. You already agreed GCMs don’t produce measurements. The question now is whether errors reduce and is justifiable.
The result tells us that they ARE more accurate. It is an indisputable and unequivocal fact.
And Willis’ demonstration indisputably and unequivocally proved that hypothesis false.
To bring the agreement between a prediction and observation closer.
Is it a demonstrable and unequivocal fact that the error is reduced by combining a series describing a country’s citizen’s hair length over time and a series describing the country’s citizen’s height over time?
It’s a blindingly obvious example of two series that have no relation to each other and so can’t justifiably be combined to reduce error.
That’s the first step of the argument and it looks like you disagree.
No — you are creating information from nothing.
And using RMSE implies that you know what the true values are — you don’t (and can’t).
Maybe a slight tangent (or maybe not) … it turns out Tmax ended up being 74 F today at KSTL. Forgetting the fact that this is actually the highest 5-minute average for the day, and that the intensive-property police here are convinced you can’t take the average of an intensive property: which of these hypothetical predictions was “right” or “wrong”, and why?
What are you defining as right and wrong?
If it was for a bet with Joe Public then you’d lose with all of them. Climate-scientist alarmists believe that a couple more degrees higher and it’s an existential threat, irrespective of the error.
I probably should have included “accurate” in the list as well. It goes to the heart of Willis’ hypothesis that the average of an ensemble of models is not more accurate. What is “more accurate”? What is “right”? What is ‘wrong”?
Well in that case your most accurate prediction was wrong and your least accurate prediction was right.
Your most accurate prediction was arguably most skilful and useful and your least accurate prediction was least skilful and useful in a practical sense.
None of them had any basis for existing at all in a physics sense because there was no physical justification for any of them.
Notice how the conversation has devolved to the accuracy of a trend in relation to models that have no validation against the real world. Let’s not look at the inability to forecast what really occurs; let’s just live in the fantasy world that the models create.
That isn’t really the issue is it? The real issue is why is the averaging done at all! Why not just record THE Tmax reading?
Why? Because you don’t want to declare LIG readings from 1900 as no longer fit for purpose. So let’s do whatever it takes to make sure our precise and accurate new devices are no better than an ancient LIG thermometer. Let’s adjust past temperature RECORDINGS for the sole reason that they can’t be spliced to what we see today.
At some point, climate science and modelers need to join the scientific community in using better and better measurements to achieve a goal. If that means older data can’t be modified in order to remove a perceived (not real) bias, then so be it.
When I see *anything* referenced to temperature attempt to be described by a random number generator and a Gaussian distribution I shiver.
Temperature is not random. Temperature is not a Gaussian function, not even across the globe. Climate is not random. Climate is not a Gaussian function, not even across the globe. Climate is determined by a biosphere driven by a non-linear, coupled, chaotic function. Temperature and climate are multi-modal distributions with different averages and different variances and both a short-term and a long-term cyclical variation. Like most multi-modal distributions the average value is almost useless to forecast anything and the average value of a Gaussian distribution developed from sampling the multi-modal parent distribution is even less useful in forecasting anything.
The law of large numbers and the central limit theorem tell us that for many distributions a set of sample means from the distribution, even a severely skewed one, will tend to a Gaussian distribution. That does not mean that the parent distribution is also Gaussian. Yet climate science pretends that the sample-means distribution (Gaussian) *is* the parent distribution (non-Gaussian) when it comes to temperature and climate.
Nothing better describes the failed assumptions of climate science than assuming that the average of two distributions with different variances is simply (x1 + x2)/2. Thus averaging southern hemisphere temperatures with northern hemisphere temperatures is done by just adding them all up and dividing by the number of samples. And, no, the use of anomalies doesn’t fix this problem. The anomalies inherit the variances of the absolute values. The anomalies in the southern hemisphere will have different variances than anomalies in the northern hemisphere. You still can’t just add’em up and divide by how many values you have. You have to weight each component based on its variance when taking an average.
This stems from the assumption that all measurement uncertainty is random, Gaussian, and cancels. Therefore all components in the data set are assumed to have a variance of 0 (zero) and make equal contribution to the average. Nothing could be further from the truth yet it is endemic in climate science.
I have a book here, Experimentation and Uncertainty Analysis for Engineers, that covers how to derive potential measurement uncertainty of components when performing experiments. Did you perform any analysis to determine uncertainty other than RMSE?
Remember, RMSE only tells you how well a regression line matches the data points being regressed. RMSE has no value for determining systematic uncertainty nor the actual measurement uncertainty of the data points being regressed.
You can very easily end up with a regression that is totally inaccurate going forward. Dr. Frank proved this. Willis has also given you a good analysis of why your “experiment” is invalid.
You made no mention of what your experiment forecasted for the next 20 years, let alone the next century. The fact that tuned models can predict the past is meaningless. You are a mathematician attempting to show model math is correct regardless of how well it makes forecasts. It is circular logic that has made a complete circle!
That is true. “Some” people “might” indeed make that particular “argument” …
… but only in the “tomorrow’s weather will more likely than not be the same as today’s weather” context.
Not when considering “forecasting metrics” for the question :
“What is the daily weather ‘likely’ to be for a given location in the year 2100 ?”
I’d have a hard time arguing against that, because you have to know the GAT before you can estimate the ECS.
That’s called a persistence forecast. The global average temperature (GAT) forecast I’m talking about is a dynamic forecast from a GCM.
The GAT is an average of a large spatial (globe) and temporal (month) domain. It’s not a specific spot forecast at an exact location and exact time. So I’m not sure how your statement relates to what we’re talking about. Can you clarify what you mean?
After -re-reading the sub-thread I fear we are “talking past each other”.
We were using the same term, in this case “GAT”, to talk about two completely different things.
From your post I initially clicked the “Reply” button on :
Your “GAT pairs” there are referring to the historical BEST reanalysis numbers as well as the model forecasts.
PS : Also from your OP :
CMIP5 had “Historical Data” up to (December) 2005.
The “forecast” part only covered the last 15 years (out of a total of 141) of your analysis, the first 126 years were “hindcast”.
Better for what? Because you think it more accurately predicts the actual climate or because it takes a bunch of wild models and smooths them out so they are not so obviously BS.
Predicting the global average temperature. Note that an RMSE of 0.17 C is better than 0.24 C.
RMSE tells you nothing about accuracy. RMSE can’t identify systematic uncertainty, i.e., accuracy. If the models turn linear after a few iterations, and they do, the conclusion must be that CO2 begins to override all cyclical and natural-variation phenomena. That has never been shown by measured evidence.
Willis, you say –
Agreed. If 131 models used by the IPCC give 131 different answers, then at least 130 of them are wrong. It is magical thinking to think that averaging demonstrably wrong answers provides a correct one!
Unfortunately, even weather forecasts are no better than a smart 12 year old can do, given access to the same data. Example – “40% chance of rain in the next 24 hours”.
In other words, it probably won’t rain, but it might. The computer model says so.
Yes, it is possible that all of them are wrong, depending on the working definition of “right.” All that one can say with certainty is, assuming no duplicate results, that there will only be one “best” result.
I am 100% sure that 100% of them are wrong.
Indeed, you have only to look at how the models perform in relation to observations in the atmosphere where the infamous “hot zone” should be according to IPCC doctrine.
I eagerly look forward to your posts because they almost always have interesting responses. Thanks.
My simple summary of this one is quite straightforward: The climate models all have baked in ECS values of various amplitudes, and a variety of other variables. These other factors but not the ECS are varied to “hindcast” past temperatures. Thus they reasonably reproduce the past over the time period they used for their calculations. Assuming the errors in past temperatures are random, the ensemble average of past temperatures should be reasonably accurate. Using the same assumptions, the ensemble average of future temperatures would be directly dependent on the ECS values used in each model, also assuming the other factor effects are randomly distributed. It would be reasonable to expect that the average increase in predicted future temperatures might even be related to the average ECS value used in the forecasts. Thus, the future is always warming in the models, and in no way accurately predicts the future.
Tom, in support of your points above, a recent paper by Green and Soon proves that models hindcasting past temperatures does not mean they can forecast reliably. The study concluded:
The IPCC’s models of anthropogenic climate change lack predictive validity. The IPCC models’ forecast errors were greater for most estimation samples —often many times greater—than those from a benchmark model that simply predicts that future years’ temperatures will be the same as the historical median. The size of the forecast errors and unreliability of the models’ forecasts in response to additional observations in the estimation sample implies that the anthropogenic models fail to realistically capture and represent the causes of Earth’s surface temperature changes. In practice, the IPCC models’ relative forecast errors would be still greater due to the uncertainty in forecasting the models’ causal variables, particularly Volcanic and IPCC Solar.
https://rclutz.com/2025/05/22/ipcc-climate-models-proven-to-lack-predictive-ability/
I probably should have mentioned the NBM. It is the National Blend of Models and is what NWS forecasters use to make the official NDFD forecasts for all locations in the United States. It is an ensemble model. [Craven et al. 2019]
Another cost I didn’t mention is computational complexity. Each ensemble member linearly adds to the time it takes to complete the ensemble run. For example, if each member takes 1 hour to run then it would take 24 hours for 24 members if those members were run sequentially. So while a 24-member ensemble model would likely produce a better prediction than a 1-member deterministic model, its 24-hour forecast would be useless because by the time it was available that time period would already be upon you. This is why sequentially run ensemble models typically have reduced resolutions, further blurring the forecast for spot locations. Fortunately, many ensemble models just collate their members after they’ve run in parallel, as is the case when you aggregate model outputs from independent institutions.
“So while a 24-member ensemble model would likely produce a better prediction than a 1-member deterministic model”
Or it could just as easily produce total garbage! You would never know.
And the more models or runs that are computed, the larger the range that can be expected.
Exactly. A lot of weather forecasters use this feature of ensembles to predict not only the most likely outcome of a weather system, but the range of possibilities as well. One problem with ensembles right now is that there are too few members (the GEFS uses 21 and the ECMWF EPS uses 51), so we often underestimate the potential of certain outcomes. I’m hoping that in the not-too-distant future ensembles will incorporate hundreds if not thousands of members. I think we’re not too far away, given that the AI models (like ECMWF’s AIFS and GraphCast) use a fraction of the computational resources. This should provide benefits for climate modelers as well.
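A toy calculation of the member-count point (the 21 and 51 figures are those mentioned above; the others are hypothetical). An event probability is estimated as the fraction of members forecasting it, so with M members the finest probability step you can resolve is 1/M:

```python
# Sketch of how member count limits the resolution of ensemble probabilities.
for members in (21, 51, 100, 1000):
    step = 100.0 / members           # smallest resolvable probability step, in %
    print(f"{members:4d} members -> probability resolution ~{step:.1f}%")
```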
[snip. just stop harassing the same users over and over again with the same silly thoughts -mod]
When they stop pretending weather forecast models are the same as “climate” models.
The UK MET office uses a unified model that is both a weather and climate model. They say this
So whilst I’m sure modules are turned off and on for the different timescales, there is a lot of commonality.
When you get a 50/50 chance of rain, the models are worthless. Would you fly in a plane that had a 50% chance of reaching the destination? How about 75%? We are going on 50 years of modeling. The models still end up going linear after a short period of time. That makes them worthless for predicting anything.
There’s an old commercial that said Ivory soap was 99 and 44/100 percent pure. I read somewhere that if every component of the Apollo spacecraft had that success rate, then one in three missions would fail catastrophically.
That’s a whole lot of gobbledygook.
I’m happy to explain or clarify anything if it helps. Just understand that I’m not an expert and won’t have all the answers. In such a case I might still be able to at least point you toward relevant literature.
Literature from the anti-science climate glitterati!
The climate models are based on scientific nonsense from the ground up.
Roy Clark: A Nobel Prize for Climate Model Errors | Tom Nelson Pod #271
Exactly. Apparently bmx still has not discovered that the more a lie is repeated, the more it is believed – by some.
I’m sorry you feel like my post is a lie. I might be mistaken (and often am), but I would never lie. The content in my post can be verified. In fact, I even posted links to the sites that publish the skill scores of GCMs. And you can verify that the RRFS, WoF, SREF, and HREF all exist as well. I understand that this seems like gobbledygook to people unfamiliar with it. Biology, medicine, and other scientific disciplines I’ve not spent time with look like gobbledygook to me, so I get it.
I never once said you were lying. However you do seem to believe the lie that a climate model can predict the climate.
Please explain how using an unknown ECS figure in a model does anything but give you the wrong answer.
GCMs don’t take ECS as an input. ECS is the equilibrium climate sensitivity given a 2xCO2 pulse; it is the change in the model’s global average temperature (GAT) output once the GAT stops increasing.
I see. So what do the programmers of an Earth climate model put in regarding the effect of CO2 when attempting to predict the future?
I think I understand where you are going wrong. All the weather models produce roughly similar predictions as they are based on physical principles. Averaging their output therefore makes sense. Climate models are not based on Physics, and their outputs vary wildly from one another. As Willis points out, averaging them simply eliminates any local variation and produces a uniform, “global” warming.
Weather and climate GCMs are two sides of the same coin. They utilize similar numerical cores and physics modules. In fact, many GCMs can be configured to run in either mode. The primary difference is that climate models usually run at much lower resolutions and for longer periods so that they can complete forecasts out to many years as opposed to many days.
If climate models are so wonderful, why do they have to be tuned on past data? Why do their outputs vary so hugely? If they had any value, the outputs would all be roughly the same, as is the case with short-range weather models.
Pat Frank has shown how error accumulates exponentially in these models, rendering them worthless.
They use free parameters that must be experimentally determined.
He confused W.m-2.year-1 with W.m-2 when citing [Lauer & Hamilton 2013]
That’s odd – weather models, which do have some predictive power, don’t need tuning on past weather. They simply take the current data and extrapolate into the near future. Why should climate models need tuning?
Can you name another type of computer modelling which needs to be adjusted on the basis of past, observational data before it can be used? I certainly can’t.
You haven’t rebutted Frank’s assertions.
Weather models are tuned on past weather. That’s how they set the free parameters for the various parameterization schemes.
Any model with a free parameter needs to have those parameters experimentally determined, or tuned, to fit observations. Newton’s universal law of gravitation requires the free parameter G, which is experimentally determined to fit observations. The standard model of particle physics has 20+ free parameters that must be experimentally determined to fit observations. There are many other examples of models with free parameters outside those used in climate science.
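As a purely illustrative sketch of what “experimentally determined to fit observations” means, here is a toy one-parameter model (a falling body) fitted by least squares to synthetic measurements; nothing here comes from a GCM:

```python
# Toy model d = 0.5*g*t**2 with one free parameter, g, tuned to noisy data.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.5, 3.0, 20)                             # fall times (s)
d_obs = 0.5 * 9.81 * t**2 + rng.normal(0, 0.2, t.size)    # noisy distances (m)

# Linear least squares in the single parameter g: d = g * (0.5*t**2)
X = 0.5 * t**2
g_fit = float(np.dot(X, d_obs) / np.dot(X, X))
print(f"fitted g = {g_fit:.2f} m/s^2")                    # the "tuned" parameter
```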
As I said, he confused W.m-2.year-1 with W.m-2 when citing [Lauer & Hamilton 2013]. That is the rebuttal. You can see the specifics of the mistake in the PubPeer thread. In a nutshell, he arbitrarily changed the given value of 4 W.m-2 to 4 W.m-2.yr-1 so that he could then erroneously multiply it by years to get it back to W.m-2, except that it was already in W.m-2 to begin with.
Had Newton known the mass of the Earth, he could have calculated G very easily. Your comparison with climate model “tuning” (fudging) is invalid.
Even today, G is not well determined.
Your rebuttal is worthless. Pat Frank’s rejoinder:
Dissoconium Aciculare, when did an annual average not have the unit, “per year”? The level of ignorance informing that complaint is astounding.
Because he’s full of crap, they aren’t the same.
It seems you think you understand the process but you do not see the fundamental differences at the outset.
Because you cannot use the same utilities for predicting both the weather AND the climate, whatever the latter means. The fundamental mistake is to assume they can be, as Willis and many others have pointed out. I think you are too close to the models and have lost the ability to judge them properly.
Well that’s a bunch of BS, but that aside most weather models are pretty bad more than three days out, whether you run one of them or a hundred.
We can test that hypothesis if you want. Define “bad” as the threshold of the anomaly correlation coefficient (ACC) score and we’ll see if they are truly bad.
“We can test that hypothesis”
When I see a 50/50 chance of rain, the forecast is worthless. It might rain or it might not! I can make that prediction myself with 100% accuracy as I wake up in the morning. If a model can’t do better than that, it is worthless. A probability is not a prediction that something will occur. A prediction is what WILL HAPPEN.
Taking a pencil and drawing a straight line from the past is every bit as predictive as averaging the models.
That’s called a persistence forecast. They are skillful, but their skill is lower than a single deterministic GCM forecast and lower still than an ensemble GCM forecast.
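For anyone unfamiliar with the term, here is a minimal sketch of scoring a forecast against persistence, using synthetic data and a hypothetical “model” (an RMSE-based skill score; positive means it beats persistence):

```python
# Synthetic verification sketch, not a real evaluation of any forecast system.
import numpy as np

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

rng = np.random.default_rng(1)
obs = np.cumsum(rng.normal(0, 1.0, 200))         # a random-walk "observation"
persistence = np.roll(obs, 1)[1:]                # yesterday's value as the forecast
truth = obs[1:]
model = truth + rng.normal(0, 0.7, truth.size)   # a hypothetical model forecast

skill = 1.0 - rmse(model, truth) / rmse(persistence, truth)
print(f"skill vs persistence: {skill:+.2f}")     # > 0 means it beats persistence
```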
The skill of climate models is probably less than ZERO !!
No, it’s just a crap model, built on a predetermined premise that is tragically flawed.
I mentioned 3 of them: TVCN/IVCN and HCCA for tropical cyclone forecasts, and GEFS for all weather types. Which one is crap, and what should forecasters replace it with? Can you post a link to literature describing your suggestion and its skill?
When is a model so badly off in its forecasts that it should not be included in the ensemble? What is the criterion for inclusion? Or is it just that twenty years ago these are the ones that were in it?
I am asking. Willis seems to have averaged 39 model outputs. Why not 40? Or why not 15? What is the criterion for inclusion? Why not include one I knocked up the other day? Yes, I know it gives very strange forecasts for last year, and for a few years before that, actually, but you are already including ones that are not a whole lot better than mine, so why not mine? And Fred down the street has one too; why not his?
I don’t get it. The task is surely to get an accurate model, not to keep on basing forecasts partly on results of ones that are failing.
Which makes them useless in forecasting. All they are doing is saying that “something will happen, but we don’t know when or where”. Heck, a group of monkeys throwing darts of assorted colors for different phenomena could do as well!
Probably better.
There is no way of knowing if even one of those models is correct, as they are simply a set of assumptions.
If we knew how the climate worked, there would be one model, not 40+ and counting.
While re-reading this, I realized that my argument has been misrepresented.
bdgwx and others have interpreted it as saying that using the mean of the model “ensemble” outputs is better than picking a model at random.
But my point is that using the mean of the ensemble is worse than figuring out which model is the best and using that one.
This is the part I’ve never understood about the Coupled Model Intercomparison Project (CMIP) series of model comparisons.
If I ran the zoo, I’d have all of the models get trained on the data from 1850 to 1950, and then see how well each of them does at predicting the climate from 1950 to the present.
Then I’d use the best one, or perhaps an average of say the best three, to try to determine what the future might hold.
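A toy sketch of that selection procedure, with synthetic “observations” and placeholder model series standing in for the real data (the model names and numbers below are invented for illustration only):

```python
# Sketch of holdout validation: score each model over a verification period
# and keep the best few. All series here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1950, 2025)
obs = 0.012 * (years - 1950) + rng.normal(0, 0.08, years.size)   # toy obs (°C anomaly)

models = {f"model_{i:02d}": 0.012 * rng.uniform(0.5, 3.0) * (years - 1950)
          + rng.normal(0, 0.08, years.size) for i in range(12)}

scores = {name: float(np.sqrt(np.mean((out - obs) ** 2)))         # RMSE over the holdout
          for name, out in models.items()}
best3 = sorted(scores, key=scores.get)[:3]
print("best three on the holdout period:", best3)
```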
However, my problem with all of this goes deeper. It is that we truly don’t understand what makes the temperature of the earth go up and down. We say it’s “natural variations” as if that explained something, but it doesn’t—it just names something, it doesn’t explain it at all.
For example, here’s the Ljungqvist reconstruction of the historical NH temperature.
My questions are:
• Why did the temperature peak around the year 200 AD during the Roman Warm Period?
• Why did it then start cooling instead of just staying warm or continuing to warm further, and cool until 500 AD?
• Why did it stop cooling in 500 AD?
• Why did it start warming again instead of just staying cool or growing colder, and warm until 1000 AD during the Medieval Warm Period?
• Why did it then start cooling again instead of just staying warm or continuing to warm further, and cool until the bottom of the Little Ice Age around 1700 AD?
• Why didn’t it just keep cooling after 1700 AD and plunge us into a new ice age?
• Why did it start warming again instead of just staying cool, and warm in fits and starts for 300 years up until the present? The first 200 years of that warming cannot have been from CO2, which didn’t start rising significantly until about 1900 AD.
Here’s the thing. Nobody knows the answers to any of those questions. Not the climate scientists. Not the climate models. Nobody.
And until we have those answers, thinking that we can predict the climate 50 or 100 years into the future is a joke.
w.
The problem with this is that GCMs are effectively a proxy for future temperatures, and if you cherry-pick proxies to best match the observed warming, we know that produces hockey sticks. So the moment a model is claimed to be modelling climate, it really needs to be used.
The other problem is that they’re going to be tuned on satellite and other measured data during construction (i.e. free-parameter fitting), and if that genuinely weren’t allowed I doubt they could construct a viable model at all. Satellite data started in the 70s. Argo started about 2003, I think.
What I see in the graph is a likelihood of cyclical behavior. To know for sure, accurate data covering 1000 years at each end is needed.
As you say, no one has ever postulated a functional relationship with multiple variables that would explain the variations.
The CMIP5 ensemble mean is the best.
I am afraid that reports of extra CO2 causing global warming are greatly exaggerated……
Indeed. Someone please tell this bloke. He’s been talking Hiroshimas again lately.
Willis,
You might inadvertently mislead readers by then saying –
A model output is a “fact” of course, but not a “physical fact”.
The Earth as a whole is losing 44 TW, meaning it is cooling, regardless of what any incorrect model, or ensemble of incorrect models, says.
No GHE. Adding CO2 or H2O to air does not make it hotter, regardless of what models might say. Not “physical fact”.
Less than a week ago you said, in this WUWT comments section, that :
Editorial note to other readers: Please check out that entire sub-thread to see to what lengths MF will go to avoid providing “supporting evidence” when (more-or-less politely) requested.
Attached (to the bottom of this post) is a screenshot of the table summarising the results of Davies & Davies (2010), “Earth’s surface heat flux”.
The average depth of the Earth’s oceans is approximately 4.4 km (volume ~= 1338 million cubic-kilometres divided by surface area of ~302 million km²).
NB : Human beings do not live 4.4 km below the ocean surface.
The 32 TW (~0.1 W/m²) of geothermal heat flow from the solid Earth into the bottom of the Earth’s oceans will “flow” into salt-water at an average temperature of 2°C.
According to the ERA5 reanalysis product the average surface sea-temperature since 1981 has been in the range 20.5°C +/- 1°C (60°N to 60°S latitudes, direct link).
Now how can that be the case if “the Earth is cooling” (present tense) ?
Allowing for an albedo of 0.3 the latest estimate for incoming (SWIR) solar radiation is an average of 160 Watts — absorbed over the course of 24 hours — per average square-metre of (land and ocean) “surface” (IPCC AR6, WG-I report, Figure 7.2, page 934) :
The average 0.091 W/m² (47 TW) of geothermal “heat flow” from the solid Earth’s “surface” is, to a very good first approximation, constant … 24 hours a day, 365 (or 366) days a year …
The average of 160 W/m² (81600 TW) of incoming (SWIR) solar radiation “absorbed” at the Earth’s surface varies every 24 hours from zero — “from dusk ’til dawn”, as the poetically inclined amongst us might put it — to over 1000 W/m² (local noon with the sun near the zenith on a cloudless day).
Which of those factors is more likely to impact both short-term “weather” and long-term “climate (change)” ?
Why are you so fixated on that 0.091 W/m² ?
Mark, you wrote –
I don’t believe I’ve ever mentioned 0.091 W/m², so maybe you are confused?
You lambaste me for accepting your rate of net energy loss (a little higher than the one I customarily use), but you cannot say why you disagree with the larger figure you provided! Strange.
Cooling, slow or fast, is cooling – not getting hotter.
You ask –
I can’t help it if you refuse to accept that thermometers respond to man-made heat. Man-made heat is ephemeral, but produced continuously, and results in hotter thermometers.
You might even be delusional enough to believe that adding CO2 or H2O to air makes it hotter! No GHE, I’m afraid.
Four and a half billion years of continuous sunlight, atmosphere, CO2 and H2O, has resulted in the Earth’s surface cooling considerably. You don’t have to believe it.
No I am not confused.
You, however, appear to be “wilfully blind”.
Please scroll up and look more carefully at the screenshot of “Table 7” of the Davies & Davies (2010) paper that I attached to the end of my initial “Reply”.
The last line says that they calculated an average outward heat flow of 46.7 TW over the 510.1 million square kilometres of the Earth’s solid surface, i.e. 91.6 mW/m².
Your “44 TW” is actually strictly equivalent to “86.3 mW/m²” (to 3 significant figures).
Once again you “read into” or “extrapolate from” the words I wrote something that is not there.
I don’t “disagree with” the numbers in Davies & Davies, I accept them as valid.
I have already said to you (several times ?) that I do not “believe that adding CO2 or H2O to air makes it [ the air ] hotter”.
Please scroll up (again) and pay more attention to the “details” in the IPCC’s “Figure 7.2” (AR6 WG-I assessment report, page 934).
The “average of 160 W/m² (81600 TW) of incoming (SWIR) solar radiation ‘absorbed’ at the Earth’s surface” that I used to compare against both
1) the “91.6 mW/m² (47 TW)” number from Davies & Davies (2010) and
2) your “44 TW [ ~= 86.3 mW/m² ]” number
did not include the 342 W/m² of “downwelling radiation” in the upper half of Figure 7.2.
160 Watts per square metre of direct sunlight (after taking into account absorption and reflection) is already a lot larger than either your 86.3 milli-Watts number or the Davies & Davies 91.6 milli-Watts number.
I had no need to invoke the “GHE” mantra. 160 W/m² is more than sufficient all on its own.
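For reference, here is the unit arithmetic behind those comparisons, using only the figures quoted above (510.1 million km² for the Earth’s surface area):

```python
# Convert global totals (TW) to an average flux over the Earth's surface, and back.
AREA_M2 = 510.1e6 * 1e6          # 510.1 million km^2 expressed in m^2

def tw_to_wm2(tw: float) -> float:
    return tw * 1e12 / AREA_M2

def wm2_to_tw(wm2: float) -> float:
    return wm2 * AREA_M2 / 1e12

print(f"44.0 TW -> {tw_to_wm2(44.0)*1000:.1f} mW/m^2")   # ~86.3 mW/m^2
print(f"46.7 TW -> {tw_to_wm2(46.7)*1000:.1f} mW/m^2")   # ~91.6 mW/m^2
print(f"160 W/m^2 -> {wm2_to_tw(160.0):.0f} TW")         # ~81600 TW
```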
The figure you “customarily use” is part of the phrase you repeatedly parrot of :
“The Earth as a whole is losing 44 TW.”
That is approximately true for the solid Earth … but a bigger issue is that you keep avoiding saying where you got that figure you “customarily use” from.
I explained where I got my numbers from in the “Dumbest Genius Librarian” comments section as follows :
= = = = = = = = = = = = = = = = = = = = =
Concrete contradictory evidence : Davies & Davies (2010), “Earth’s surface heat flux”.
URL : https://se.copernicus.org/articles/1/5/2010/
The last sentence of their abstract :
Your “44 TW” is below their lower limit.
Please provide a citation to a (peer-reviewed and published in a “serious” scientific journal) paper that includes the specific number of “44 TW” … with no “error range” or “confidence interval”, always a bad sign … that you keep asserting is the one to use.
= = = = = = = = = = = = = = = = = = = = =
There are probably dozens (or maybe even hundreds ?) of “previous estimates” made in scientific papers before Davies & Davies was made public in 2010.
Which one of those “previous estimates” papers did you get the “44 TW” number from that you “customarily use” ?
Post Scriptum
The way my brain “wired itself” early on means that I have been fascinated by maps for as long as I can remember.
One of the “Davies” who co-wrote the 2010 paper (J. Huw Davies) published a separate follow-up paper of their results in 2013 titled “Global map of solid Earth surface heat flow”.
A copy of Figure 7 of that paper is attached below.
The Davies & Davies (2010) paper calculated that 31.9 TW of the total 46.7 TW of geothermal “heat loss” from the solid Earth went into heating the bottom of the Earth’s oceans rather than being directly “radiated away to space” from the “surface” of its continental land masses.
What fraction of your “44 TW” goes into heating the oceans from below ?
You say –
OK, so why did you utter the following piece of stupidity –
You seem to be fixated on something (you aren’t quite sure what), and trying to blame me for your obsession.
The Earth has cooled for four and a half billion years, and neither four and a half billion years of continuous sunshine, an atmosphere, CO2 or H2O – nor the devout prayers of ignorant and gullible GHE worshippers – have managed to stop it.
Are you perhaps suffering from a mental disturbance?
You wrote –
That’s just stupid – if a body is emitting more energy than it receives, it is cooling. Why should I waste my time providing anything to anyone who agrees that the planet is cooling?
I even agreed to use your estimate (resulting in faster cooling than mine) – but you don’t even like me using your “preferred” figure.
I hope you won’t get too annoyed if I come to the conclusion that you are quite deranged, as well as ignorant and gullible.
The Earth is cooling. No GHE. Adding CO2 or H2O to air does not make it hotter.
Harold The Organic Chemists Says:
ATTN: Willis and Everyone
RE: CO2 Does Not Cause Warming of Air!
Shown in the chart (see below) are plots of temperatures at the Furnace Creek weather station in Death Valley from 1922 to 2001. Click on the chart and it will expand and become clear. Click on the “X” to return to the text.
In 1922 the concentration of CO2 in dry air was ca. 303 ppmv (0.59 g of CO2/cu. m), and by 2001 it had increased to ca. 371 ppmv (0.72 g of CO2/cu. m), but there was no corresponding increase in the air temperature in this remote desert. The reasons there was no increase in the air temperature in this arid desert are quite simple: there is too little CO2 in the air, and the humidity is constantly quite low. If climate models are using the concentration of CO2 as an input variable, they are likely producing erroneous results.
At the MLO in Hawaii, the concentration of CO2 in dry air is currently 428 ppmv. At STP, one cubic meter of this air has a mass of 1.29 kg and contains a mere 0.839 g of CO2, a 15% increase from 2001. Most people, and especially the climate scientists and politicians, do not know how little CO2 there is in the air.
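For anyone who wants to check the conversion, here is a minimal sketch of the ppmv-to-grams arithmetic, assuming dry air at roughly 0 °C and 1 atm (density ~1.29 kg/m³); slightly different constants will shift the last digit:

```python
# Convert a CO2 mixing ratio (ppmv) to grams of CO2 per cubic metre of air.
M_CO2, M_AIR = 44.01, 28.97      # molar masses, g/mol
AIR_DENSITY = 1.29               # kg/m^3 (approximate, STP)

def co2_grams_per_m3(ppmv: float) -> float:
    mass_fraction = ppmv * 1e-6 * M_CO2 / M_AIR     # ppmv (by volume) -> mass fraction
    return mass_fraction * AIR_DENSITY * 1000.0     # grams of CO2 per m^3 of air

for ppmv in (303, 371, 428):
    print(f"{ppmv} ppmv -> {co2_grams_per_m3(ppmv):.2f} g CO2 per m^3")
```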
The above empirical temperature data falsify the claims by the IPCC that CO2 causes global warming and is the control knob of climate change. The purpose of these claims is to provide the justification for the maintenance and generous funding of not only the IPCC but also the UNFCCC and the UN COP. Hopefully, President Trump will put an end to the greatest scientific fraud since the Piltdown Man.
NB: The chart of the temperature plots of Death Valley was taken from the late John Daly’s website “Still Waiting For Greenhouse”, available at http://www.john-daly.com. From the home page, scroll down to the end and click on “Station Temperature Data”. On the “World Map”, click on a region to access the temperature charts of the weather stations located there. John Daly found over 200 weather stations that showed no warming up to 2002.
Here is the chart of the temperature plots at the Furnace Creek weather station in Death Valley. Click on the chart and it will expand and become clear. Click on the “X” to return to the text.
I think I’d want to see the numbers because it looks to me like the annual temperature does increase a bit in that graph. The chosen scale and use of actual temperatures makes it harder to see.
I have been peering at that chart for several years, and there seems to be a slight increase in the winter and annual temperatures. However, this might be due to a move of the weather station. In recent years, Death Valley has become a very popular destination for tourists, and this might have affected the temperature measurements. Nevertheless, the chart can be used to convince those who believe in AGW that CO2 does not cause global warming.
Look what the CO2-caused global warming belief has done to the economies of the UK, Germany, Australia, and California. If the EPA rescinds the CO2 endangerment finding, hopefully all this global warming and climate nonsense will be put to rest.
Check out this chart for Adelaide which shows a cooling since 1857.
That is exactly what the rural states in the U.S. are showing also. Tmax in winter has been increasing since ~1980 while Tmax in summer is pretty much flat. Here are a couple of graphs from NOAA. The marked increase starting in 1980 is suspiciously coincident with the advent of ASOS stations.
I’ve asked you before, but given you are talking about changes to TMax, why do all your graphs use average temperatures?
“Tmax in winter has been increasing since ~1980 while Tmax in summer is pretty much flat.”
Taking the first state you show, Wyoming, NOAA’s site shows TMax summer temperatures warming at 0.6°F /decade since 1980, TMax winter temperatures are only warming at 0.2°F / decade.
They are monthly averages of Tmax for the whole state. Monthly is the smallest period of time NOAA allows. If you think graphs of daily Tmax will somehow disprove what I have shown, then feel free to show that.
The fact is, there is little to no growth in monthly Tmax averages in rural states during summer months. This is a time of maximum insolation and should cause maximum temperature rise.
The fact is, there is growth in monthly Tmax averages in rural states during winter months since 1980. You tell everyone here why that is! UHI? ASOS stations? Land use?
If you doubt these conclusions, why don’t you add your screenshots of NOAA’s graphs and show why they are wrong. Show Tmax for July, August, December, and January. Then show Tmin averages from the same months.
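For what it’s worth, here is a minimal sketch of how such a seasonal-trend check could be coded; the Tmax values below are random placeholders, so a real check would substitute the state’s monthly Tmax series downloaded from NOAA’s Climate at a Glance:

```python
# Fit a linear trend, per calendar month, to a monthly Tmax series.
# The data here are synthetic placeholders with no built-in trend.
import numpy as np

rng = np.random.default_rng(3)
years = np.arange(1980, 2025)

for month in (1, 7):                              # January and July as examples
    tmax = 10.0 + rng.normal(0, 1.5, years.size)  # placeholder monthly Tmax (°F)
    slope, _ = np.polyfit(years, tmax, 1)         # °F per year
    print(f"month {month:2d}: trend {slope*10:+.2f} °F/decade")
```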
“They are monthly averages of Tmax for the whole state.”
No, they are monthly averages of daily averages. It says so on the top of each chart. Here’s the monthly average for TMax, for Wyoming. Note, it says “max temp”. Note the temperature scale on the left.
“The fact is, there is little to no growth in monthly Tmax averages in rural states during summer months”
And you accuse me of handwaving. Back these claims up with some evidence. Just eyeballing noisy graphs is not evidence.
“This is a time of maximum insolation and should cause maximum temperature rise.”
Could you provide a reference for that? My understanding has always been that CO2 should cause more warming in the winter months and at night.
The fact that max temperatures are warming faster than min temperatures might indicate increased sunshine is causing additional warming during the day. That may well be the case in the UK.
That shows quite a bit of warming.
The warming trend at the Death Valley station in Furnace Creek USC00042319 from 1912 to 2024 is…
+2.0 C.decade-1 using raw unadjusted data.
+1.0 C.decade-1 using the adjusted data.
As a point of comparison the global average trend is +0.10 C.decade-1 over the same period so Death Valley is warming at 10x the global rate (or over 20x if unadjusted data is your preference).
I am confused. Does +2.0 C.decade-1 mean +2.0 deg. C/decade? If so, why are the temperature plots fairly flat up to 2001? Has there been a large increase in air temperature since 2001?
Where did you obtain this temperature data? Could you make a plot of the annual average unadjusted and adjusted temperatures for comparison to John Daly’s chart?
I’m trying to figure out if this type of temperature data can be used to put an end to California Gov. Gavin Newsom’s disastrous plans to phase out FF and cars and light trucks with gas and diesel engines.
BTW: Where do you live? I live in Burnaby, BC. BC was the first NA jurisdiction to introduce a carbon tax on FF, which was recently $CDN 80 per tonne of CO2 equivalent, but it was finally repealed this year.
You will NEVER find ANY temperature series that will dissuade Newsom from his plans to destroy California, because Californians don’t vote based on temperature series, facts, or logic. They vote on emotion. They feel like something bad is happening to the earth, they feel guilty because it’s their fault, and they must be punished and take the rest of us with them.
Burnaby North is where an MP seeking election stuffed a flyer in my mailbox claiming the IPCC said we had only 10 years left to save the climate. That was… 10 years ago. I asked him on what page of what IPCC report it said that and he never answered. It doesn’t exist, that’s why. But he got elected. Facts be d*mned.
Actually, the majority of Californians vote for the candidate or referendum/initiative promising the most free stuff.
Sorry. That was supposed to say +2.0 C.century-1. The temperature graph isn’t flat. It just appears flat because of the scaling. Here is another view of it.
Wait, what? There’s no way warming was over 10C. I’m sure you meant per century.
Yep. That was supposed to say C.century-1. I’m not even sure why I did the calculation by century anyway, since I always do it by decade. That ranks among some of the stupidest mistakes I’ve committed here. Nice catch.
Doh. That was supposed to say C.century-1.
The correct figures in C.decade-1 are…
+0.2 C.decade-1 using raw unadjusted data
+0.1 C.decade-1 using the adjusted data
The earlier “10x” comparison is obviously incorrect as a result. Death Valley is warming at about the global rate (or about 2x if unadjusted data is used).
“+2.0 C.decade-1 using raw unadjusted data.”
Do you ever wonder about the worthiness of the data you use?
+2.0C (+3.6F) per decade? That would be about 36F hotter than 1912. Goofy at best.
1912 -> +0.0 C
1922 -> +2.0 C
1932 -> +4.0 C
1942 -> +6.0 C
1952 -> +8.0 C
1962 -> +10.0 C
1972 -> +12.0 C
1982 -> +14.0 C
1992 -> +16.0 C
2002 -> +18.0 C
2012 -> +20.0 C
2022 -> +22.0 C
So, let’s say in 1912 the Tmax was 36C (97F). Then in 2022 the Tmax should be about 58C (136F). Hmmm. The record for Death Valley is 134F, set in July 1913. There should be a newer record temperature with a +3.6F-per-decade growth.
First, the physics requires that the hottest places will warm up the least, so showing me a temp graph of one of the hottest places on earth isn’t of much value.
Second, Death Valley is no more representative of the globe as a whole than is any other location on earth. In fact it is subject to unique weather patterns that make it less representative of earth than most other places.
Third, this is your one and only piece of evidence that you post repeatedly. If all you have is one data series from one point on earth you’re playing the same game as Briffa and his most powerful tree on earth only in reverse.
“. . . Death Valley is no more representative of the globe . . . .”
John Daly addressed this years ago. He said they were trying to show global warming using Death Valley temperatures. At the time, I captured the Death Valley data; and at the time, there was a strong ENSO. However, the Death Valley temperature was lower than normal. Indeed, during a major temperature rise globally, the Death Valley temperature was lower; in fact, it was the third lowest.
Unfortunately, Mr. Daly passed away, and I stopped referencing the data for a few years. When I did look at it later, the temperature graph had changed. They lowered the older temperatures and raised the more recent temperatures. If they did this with the Death Valley temperatures, then they probably did it with the others.
The temperature rise is bogus.
His argument at the time was that their poster child for warming didn’t show what they claimed. Nor does it show what you claim.
Actually it does:
It’s the other way around. The GHCN PHA raised the older temperatures relative to the newer temperatures. For the Death Valley station the unadjusted trend is +0.2 C.decade-1, whereas the adjusted trend is +0.1 C.decade-1.
Check out the chart for Brisbane. I can only post one chart per comment; I don’t know how to make a graphic with many charts for posting here.
As I mentioned above, one cubic meter of air, which has a mass of 1.29 kg, contains a mere 0.839 g of CO2. This small amount of CO2 can heat up such a large mass of air by only a very small amount, if at all.
I made the exact same calculation when I first started researching global warming and came to the exact same conclusion. I was wrong. The mass of the CO2 is immaterial, because it’s not the heat capacity of the CO2 that matters. What matters is that CO2 can momentarily absorb a photon and then give up that quantum of energy via collision with other molecules or by emitting another photon. The amount of energy stored by the CO2 molecule doesn’t change except by an amount too small to measure, but the number of photons it can absorb and then shed the energy of is effectively infinite.
You’ve drawn your conclusion on physics that are immaterial to the process.
Except it can also receive energy from the surrounding molecules and emit it.
As with all matter in the universe. Yes, gas is a form of matter.
As a matter of interest, if CO2 absorbs a photon, then emits it unaltered, how would you know it absorbed and emitted a photon?
The CO2 has experienced no change, has it? Or do you think it has? Odd, isn’t it?
Because we can do spectroscopic experiments to determine just that. These experiments are decades old and have been confirmed over and over again; here is the result. If you want to quibble with the details, go get a textbook and study.
David, I asked –
Your response –
which makes no sense at all.
[SNIPPED—Keep a civil tongue in your mouth.
w.]
Check out this chart for Brisbane, as shown below. Go to John Daly’s website and check out all the charts for Australia. They show no warming up to 2002.
I do not know how to assemble a large graphic file with many temperature charts that I could post here. I don’t know how to get temperature data from the GISS database, for example, to prepare plots of temperatures from a weather station.
As I mentioned above, one cubic meter of air (mass 1.29 kg) contains only 0.839 g of CO2, and that cannot cause heating of such a large mass of air.
Brilliant! There’s something wrong. Just as I suspected.
It’s too bad that Dilbert’s creator, Scott Adams, has terminal cancer.
“Only the good die young.”
Very sad indeed. I love his work.
So, if you need higher Arctic temps homogenized into the mix to get “global warming”, surely this is just more evidence that CO2 is not doing it.
There’s obviously been a multi-decadal trend in misunderstanding.
The models make it clear: what’s happening is global smarming.
Thank-you, Willis. 🙂
It’s a mystery to those of us who live in the temperate northern latitudes. After all this warming we have experienced, or been told about to be more accurate, over the past 150 years, why is it so blooming cold?
The uptick in CO2 has given us one thing for sure. We can see it, and NASA has reported it: a lot more green growth.
I own woodlands and hedgerows, and I have areas of grass that require cutting. The need to maintain stuff, trimming and cutting back, has certainly increased in my lifetime. Well, it is either that, or I am getting slower (that is very possible).
What I have not seen or felt in my 70 years is “warmer”. Now, as I have existed for half of the period of notable warming, which I am confident has taken place as we thankfully cycle out of the Little Ice Age era, how long do we wait for the benefit of the warmth to be noticeable?
The fantastic good fortune of the world this past fifty years has been record harvests and record population growth, which thankfully is better fed today than during the first quarter of my life.
That food productivity is partly due to better systems of agriculture, fertiliser, and better growing conditions, including CO2 availability to assist plant growth.
I am constantly shocked by the MSM’s desire to denigrate the benefits of CO2; they prefer to constantly project fear of some apocalyptic heating of the world come 2100. Well, that 2100 future is about the same time span forward as my lifetime looking back. The warming we have ‘enjoyed’ this past 70 years is physically not discernible. I am confident the future warming will not be physically noticeable by 2100 either.
The big question is, are we actually warming any more?
Carbon dioxide levels have increased and are basically uniform around the planet.
Some parts of the planet have warmed; some cooled, some no change.
To me that means whatever causes warming or cooling, it ain’t carbon dioxide.
Carbon dioxide is innocent.
“… “nobody predicted that global warming would actually be global”…”
They did, until “predicted” became “projected”, and they certainly now promote the notion that “global temperatures” are “global”, showing “global” warming … or, more explicitly these days, a global climate change/emergency/crisis.
Robert G. Brown at Duke University wrote an informative article on the subject of averaging climate model outputs. Read it at https://ktwop.com/2013/06/14/why-averaging-climate-models-is-meaningless/. Dr. Brown has been silent on the subject of “climate change” for many years now. Perhaps he had an epiphany and now believes, or perhaps he just wanted to keep his job at Duke.
Yes. He used to post here on occasion, but as you say, it’s been a few years.
Thanks for the reminder, Denis. Anthony elevated his comment to a full post, which has some interesting comments. Included among them is this gem:
Yep.
w.
RGB was the first to make me realize that there was something really wrong with the theory, when he said that runaway warming can’t happen, because if it could, it would have already happened sometime in the past. When you consider the major events the planet has gone through in even the last billion years, and that the climate has stayed in a fairly tight range, there must be a strong reason for it to stay in that range. Lucky for us, the extremes of that range are all capable of supporting life, and the high extremes are best for life.
Some might argue it’s evidence small changes in parameterisation cause large variance in the projection.
Some might argue changes in implementation of the simplified physics cause large variance in the projection.
Some might argue projections aren’t modelled climate change at all but instead modeller’s solutions that don’t blow up and seem plausible.
I like the following paragraph
We already know what has caused the warming during the CERES timeframe. It is a reduction in clouds. As Willis clearly showed in his greenhouse efficiency paper, the greenhouse effect has stayed constant. It is additional solar energy which has warmed the planet.
Therefore, it would be more interesting to look at the cloud changes in the models over the same period where they show variable warming. What are the various models producing for clouds? Are the warmer models producing fewer clouds? And do any models match the actual locations where clouds have decreased?
“Now, before you get all passionate about how “nobody predicted that global warming would actually be global”, that’s true …”
Some said the planet would burn up and the oceans would boil.