New study challenges trends in climate simulations
Ross McKitrick, Special to Financial Post
An important new study on climate change came out recently. I’m not talking about the Intergovernmental Panel on Climate Change (IPCC) Synthesis Report with its nonsensical headline “Urgent climate action can secure a liveable future for all.” No, that’s just meaningless sloganeering proving yet again how far the IPCC has departed from its original mission of providing objective scientific assessments.
I’m referring instead to a new paper in the Journal of Geophysical Research-Atmospheres by a group of scientists at the U.S. National Oceanic and Atmospheric Administration (NOAA) headed by Cheng-Zhi Zou, which presents a new satellite-derived temperature record for the global troposphere (the atmospheric layer from one kilometre up to about 10 km altitude).
The troposphere climate record has been heavily debated for two reasons. First, it’s where climate models say the effect of warming due to greenhouse gases (GHGs) will be the strongest, especially in the mid-troposphere. And since that layer is not affected by urbanization or other changes to the land surface it’s a good place to observe a clean signal of the effect of GHGs.
Since the 1990s the records from both weather satellites and weather balloons have shown that climate models predict too much warming. In a 2020 paper, John Christy of the University of Alabama-Huntsville (UAH) and I examined the outputs of the 38 newest climate models and compared their global tropospheric warming rates over 1979 to 2014 against observations from satellites and weather balloons. All 38 exhibited too much warming, and in most cases the differences were statistically significant. We argued that this points to a structural error in climate models where they respond too strongly to GHGs.
But, and this is the second point of controversy, there have also been challenges to the observational record. Christy and his co-author, Roy Spencer, invented the original method of deriving temperatures from microwave radiation measurements collected by NOAA satellites in orbit since 1979. Their achievement earned them numerous accolades, but also attracted controversy because their satellite record didn’t show any warming. About 20 years ago scientists at Remote Sensing Systems in California found a small error in their algorithm that, once corrected, did yield a warming trend.
Christy and Spencer incorporated the RSS correction, but the two teams subsequently differed on other questions, such as how to correct for the positional drift of the satellites, which changes the time of day when instruments take their readings over each location. The RSS team used a climate model to develop the correction while the UAH team used an empirical method, leading to slightly different results. Another question was how to merge records when one satellite is taken out of service and replaced by another. Incorrect splicing can introduce spurious warming or cooling.
In the end the two series were similar but RSS has consistently exhibited more warming than UAH. Then a little more than a decade ago, the group at NOAA headed by Zou produced a new data product called STAR (Satellite Applications and Research). They used the same underlying microwave retrievals but produced a temperature record showing much more warming than either UAH or RSS, as well as all the weather balloon records. It came close to validating the climate models, although in my paper with Christy we included the STAR data in the satellite average and the models still ran too hot. Nonetheless it was possible to point to the coolest of the models and compare them to the STAR data and find a match, which was a lifeline for those arguing that climate models are within the uncertainty range of the data.
Until now. In their new paper Zou and his co-authors rebuilt the STAR series based on a new empirical method for removing time-of-day observation drift and a more stable method of merging satellite records. Now STAR agrees with the UAH series very closely — in fact it has a slightly smaller warming trend. The old STAR series had a mid-troposphere warming trend of 0.16 degrees Celsius per decade, but it’s now 0.09 degrees per decade, compared to 0.10 in UAH and 0.14 in RSS. For the troposphere as a whole they estimate a warming trend of 0.14 C/decade.
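For readers curious how a figure like “0.09 C/decade” is produced: a trend of this kind is simply the slope of an ordinary least-squares fit to a monthly temperature-anomaly series, scaled from per-year to per-decade. Here is a minimal sketch using synthetic data — the series below is illustrative noise with a built-in trend, not the actual STAR, UAH or RSS data:

```python
import numpy as np

# Synthetic monthly anomaly series, 1979-2014, with a built-in trend of
# 0.09 C/decade (the revised STAR mid-troposphere figure) plus random
# "weather" noise. Purely illustrative.
rng = np.random.default_rng(42)
years = 1979 + np.arange(36 * 12) / 12.0              # monthly time axis
true_trend_per_decade = 0.09
anomalies = true_trend_per_decade * (years - years[0]) / 10.0
anomalies += rng.normal(0.0, 0.1, size=years.size)    # noise, std 0.1 C

# OLS slope in C/year (np.polyfit returns highest-degree coefficient first),
# scaled to C/decade
slope_per_year = np.polyfit(years, anomalies, 1)[0]
estimated = slope_per_year * 10.0
print(f"estimated trend: {estimated:.3f} C/decade")
```

Over 36 years this simple fit recovers the built-in trend to within a couple of hundredths of a degree per decade; real series have autocorrelated noise, so published uncertainty ranges are wider than this toy example suggests.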
Zou’s team notes that their findings “have strong implications for trends in climate model simulations and other observations” because the atmosphere has warmed at half the average rate predicted by climate models over the same period. They also note that their findings are “consistent with conclusions in McKitrick and Christy (2020),” namely that climate models have a pervasive global warming bias. In other research, Christy and mathematician Richard McNider have shown that the satellite warming rate implies the climate system can only be half as sensitive to GHGs as the average model used by the IPCC for projecting future warming.
Strong implications, indeed, but you won’t learn about it from the IPCC. That group regularly puts on a charade of pretending to review the science before issuing press releases that sound like Greta’s Twitter feed. In the real world the evidence against the alarmist predictions from overheated climate models is becoming unequivocal. One day, even the IPCC might find out.
Ross McKitrick is a professor of economics at the University of Guelph and senior fellow of the Fraser Institute.
Satellites seem to be the major way of avoiding UHI (urban heat island) effects and the gross lack of weather stations in much of the world.
Is it the same product used in this article?
Such that even Dr. Schmidt’s “TCR screened” models show a strong warm bias in the mid troposphere.
In a related subject, I find the notion of “screening” billion-dollar mechanistic models to be quite absurd. One would do just as well (or better) with a simple empirical fit. This is effectively what is done with TCR screening in post-processing, thereby totally nullifying the supposed benefits of the investment in the CMIP members.
The IPCC was created by the famous UN, just like the WHO. Time to spit it all out.
Since all the IPCC produces is verbal diarrhoea, I think the “p” in “spit” needs replacing with “h”.
Ross, thanks for the update. Many have questioned the validity of the models for years. Are they just tools of propaganda?
Because of your areas of expertise, I have an off topic question: Is it more difficult to model global climate or the global economy?
IMHO, the “global economy” and the “global climate” are both constructs susceptible to gross errors from dodgy data inputs, and therefore any metrics derived from them are worse than useless (i.e. misleading).
May as well just make up assumptions for all input elements.
Thus the track record of both.
I learned, running a substantial part of a business, that when forecasts were wrong, they were simply wrong. Trying to work with the people who made them was hard. They could never spot why they were wrong, or why their methods allowed them to be wrong, and they had no idea how to make them right. Their way was the right way and you’ll just have to live with it.
Does this mean all the Sierra Club state reps and others will publicly retract what they have been preaching about the mid troposphere warming alarm to schoolkids for the past 10-15 years?
You already know the answer to your question –
it’s a word that begins with “N” and ends with “O”.
The great “Who could have known” refrain from the climate crusade armies and leadership has to start somewhere.
As posted previously, there are at least two known sources of model warming bias.
In CMIP6, there is only one model without a tropical troposphere hotspot: INM CM5. It was parameterized using ARGO observations and has an ECS of 1.8, close to observational ECS estimates from energy balance models (EBMs).
Rainfall in addition to a water-vapour-caused LW radiative “heat trapping” effect is unphysical: the dynamics creating the rainfall are transporting heat from the surface to the cloud-condensation altitude.
Rud, what is the rationale given for not just using the model that best fits observations? Is any reason given?
I have never understood what I gather they do: average the good and the bad performing models. Why don’t they just junk the bad ones?
Or is that not what the spaghetti charts show?
It’s because we live in the era of “participation certificates”, and no exam results can receive an “F” grading.
If climate modeling were half as serious as your typical TV game show, they’d issue ‘participation certificates’ and ‘lovely parting gifts’ to the ‘contestants’ who didn’t make the cut. Instead, the losers get dragged along from ‘episode to episode’ at our expense.
Saving costs by defunding the models that repeatedly fail the reality test, just wouldn’t be the gentlemanly thing to do.
Reminds me of the head of San Francisco’s “reparations” committee, chastising those who have expressed concerns regarding how much the committee’s proposals would cost.
According to him, nobody should be concerned about money, because this is the right thing to do.
The UN IPCC CliSciFi political management types have publicly stated that they cannot rank models because the governments funding the various modeling centers would be upset if their centers were “slighted.” As is typical, the huge politicized UN IPCC CliSciFi apparatus is nothing more than a Leftist/Marxist anti-West cluster-f**k.
Did your money tree die? Mine did. About two years ago.
I thought the money trees were producing much more, smaller, fruit.
The models aren’t actually measuring temperature; they’re effectively proxies for temperature. And the rule is that once a proxy has been declared representative of the quantity it stands for, it must stay and can’t be discarded. Discarding one means the remaining proxies (models) simply create hockey sticks.
The fact is that we can’t account for the lack of Lower Tropospheric warming at the moment, and it is a travesty that we can’t fix this 30-year failure quietly.
I think one of the most important issues is that adjusted data is used. I wish models were calibrated to the UAH data. The adjustments have an R² of almost 1 with CO2. I think this is part of the reason why the models project a high CO2 sensitivity.
If the ECS is 1.8, why have the anomalies been declining since February 2016? I acknowledge that the change in reference period is responsible for some of the reduction effect.
Combined GHGs have now doubled since 1850; the additional GHG effect from here on will therefore be very small. So what is the ongoing ECS with this doubling?
“so what is the ongoing ECS with this doubling.”
Would there be value in examining parts of the climate system that are responding in unusual ways? For example, there are areas of the oceans that are warming much faster than the average.
My favourite is the Sea of Marmara, but the Red Sea, the Eastern Mediterranean, Lake Michigan etc. are not marching with the crowd. The most striking example is Professor Tom Wigley’s blip, the anomalous warming of sea surfaces during WWII.
It seems obvious that in those cases something unexamined is going on.
The SeaWiFS data looks helpful.
Critical data analysis is the base of science and technical progress but is unlikely to convince ‘Joe public’ to revolt against caca verde.
Heat pumps are fun and may do the job far more efficiently:
Our £25,000 heat pumps left us out of pocket – and operating a Nasa spaceship would be easier: Brits who signed up for boiler upgrade scheme are left facing £5,000 energy bills, wake up to cold homes and have to use blankets to keep warm. https://www.dailymail.co.uk/news/article-11972635/Britons-complain-problems-heat-pumps-amid-soaring-energy-bills.html
Article says: “…invented the original method of deriving temperatures from microwave radiation…”.
Is all this energy from microwave radiation counted in the energy budget?
Warming is lower.
Wood for Trees: Interactive Graphs
Thank you Professor McKitrick.
Meanwhile the Canadian government spent nearly $80,000 polling Canadians on their preferred climate change phrases.
“Participants were asked which of the following descriptors they would prefer: ‘climate change,’ ‘extreme weather,’ ‘climate crisis’ and ‘climate emergency,’” said the report. “Preference was split between ‘climate change’ and ‘climate crisis.’”
My relative from Toronto tells me his ‘climate crisis’ would be definitely man-made if his heating is to fail in the winter, or AC to stop working in the summer.
Since such studies have potentially massive policy and cost implications, it is astounding that only a small cadre of professionals, practically on their own initiative, with limited funding, and at great personal and professional cost, are the ones bringing such information to light.
As it now stands, the predominant “mitigation” measures (ruinables, EVs, unattainable efficiencies, net-zero and worse) are unthinkable and threaten to bankrupt and destroy western civilization, so one would think that objective research (were it so) would be receiving $100s of millions or billions to earnestly critique and try to poke holes in the models and the so-called “consensus.” At least half of the funded climate research should be trying to reject the null hypothesis. Faced with two possible “catastrophes”, (1) implausible runaway warming and (2) very real and devastating short-term emissions controls, let’s poke, prod, test and analyze the science basis for policy, as well as the feasibility of the methods being proposed to mitigate the conjectured warming.
Instead, in climate science, we have McKitrick, Christy, Spencer, Curry and a few others having to play the role of giant-killers. In economics, we have Lomborg, Pielke and a few others. It is not as though these people are flakes or lunatics (irrespective of how they are portrayed and smeared in the media), they are highly qualified, experienced and earnest professionals.
This very massive misalignment of resources and interests alone exposes the CAGW lies for what they are.
Folks like Michael Mann and Andrew Dessler and the thousands of other, less notorious players should themselves be aggressively critiquing their own work and that of others, but they just want the grant money, prestige, and notoriety (or whatever it is that the father of lies [John 8:44] has offered them), no matter the human cost of their personal obsessions. They are murderers and hate the truth. Lying is their native language, consistent with their characters and their father.
It is amazing that NOAA funded Zou et al to reformulate STAR, and even more amazing that they were permitted to publish their results. Can someone provide a full attribution for the new paper, or a link?
Click on the link in the 2nd paragraph.
Very good news. We should never depend on the IPCC for anything. They are a political movement with a mission, the truth is not part of that mission. We need to get this important information out ourselves. Just put it out there any way we can and hope the other side attacks us. I am convinced they will get their backsides handed to them. I double dog dare them to confront us.
RADIATIVE FORCING BY CO2 OBSERVED AT TOP OF ATMOSPHERE FROM 2002-2019
“The IPCC Fifth Assessment Report predicted 0.508±0.102 W m−2 RF resulting from this CO2 increase, 42% more forcing than actually observed. The lack of quantitative long-term global OLR studies may be permitting inaccuracies to persist in general circulation model forecasts of the effects of rising CO2 or other greenhouse gasses.”
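As a quick arithmetic check on the quoted figure: if the predicted forcing of 0.508 W m−2 is 42% more than what was observed, the implied observed value is about 0.36 W m−2. A one-line sketch of that calculation (the numbers come from the quote above; nothing else is assumed):

```python
# "42% more forcing than actually observed" means
# predicted = observed * 1.42, so observed = predicted / 1.42.
predicted = 0.508                      # W/m^2, AR5 figure quoted above
excess_fraction = 0.42                 # 42% excess over observation
observed = predicted / (1 + excess_fraction)
print(round(observed, 3))              # -> 0.358 W/m^2 implied observed forcing
```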
As someone who creates computer models for a living, it’s continually frustrating to me that the media doesn’t understand the concept of computer modeling. Nor does the UN, though that’s just politics – the UN should not be involved in science at all.
I study and study and study to understand what I’m modelling. But the more I study, the more I understand that what I’m doing is assigning correlations and numeric values to physical entities – not the reverse. Ultimately, my models come down to a series of constants and functions/formulas that work with those constants. The initial values I choose for those constants drive the model. The functions and how they treat my research drive the results of the model.
So when I run the model and present the results, they’re not really predictive of anything. They are hopefully useful “what-if” tools. If I do my work well, they’re useful as long as they’re not misused. If I don’t, they’re garbage-in, garbage-out.
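The commenter’s point — that the constants chosen up front drive the model’s output — can be seen in a deliberately trivial sketch. This is a toy illustration, not any real climate model: the function name, parameters and numbers are all invented, and the “response” is just a linear scaling.

```python
# Toy sketch: identical model structure, different constant, very different
# output. "sensitivity" is an arbitrary illustrative parameter.
def toy_model(forcing_per_decade, sensitivity, decades):
    # Linear response: projected warming = sensitivity * cumulative forcing
    return [sensitivity * forcing_per_decade * d for d in range(decades + 1)]

low = toy_model(0.3, 0.5, 3)    # one choice of constant...
high = toy_model(0.3, 1.0, 3)   # ...same code, doubled projected warming
```

Same functions, same inputs, one constant changed — and the “projection” doubles. Which is exactly why such runs are what-if tools rather than predictions.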
If I allowed politics to infect my research, you can imagine the models produce garbage.
The media doesn’t get this at all. One hobby a lot of my friends had when growing up was building models. Thousands of little plastic bits, together producing a very realistic-looking vehicle or something. But still just little plastic bits. Not miniature airplanes that could fly like real airplanes. The best of them could perform a lot like real airplanes, but not at 20,000 feet or not at full-scale with passengers. The models were just what-ifs. No journalist would get inside one and trust that it would take him across the country.
From the article: “mathematician Richard McNider have shown that the satellite warming rate implies the climate system can only be half as sensitive to GHGs as the average model used by the IPCC for projecting future warming.”
All those ECS guesses, out the window!
Slightly off-topic, I guess, but today’s UK Daily Telegraph includes an obituary for Virginia Monroe Norwood, 1927-2023, ex-Hughes Aircraft, etc., pioneer of MultiSpectral Scanners, designer of the transmitter for the pre-Apollo Surveyor probe, and overseeing developer of Landsats 2-5 (using her tech, not NASA’s Landsat 1 system). Wow!!!