By Christopher Monckton of Brenchley
A third of a century has passed since 1990, when IPCC made its first predictions of global warming. Over the 400 months since January 1990, IPCC’s original predictions of 0.2-0.5 C warming per decade over the following century (below) have proven grossly excessive.
In 1990, IPCC also predicted that at midrange doubling the CO2 in the air would cause 3 C global warming, the same as the predicted warming from a century of anthropogenic emissions from all sources. In 2021, IPCC predicted that warming by doubled CO2 would be 2 to 5 C, with a best estimate of 3 C, ten times the 0.3 C/decade midrange prediction of 1990.
The Realitometer, which will be published each month, shows the real-world global warming per century equivalent since January 1990 from the satellite monthly temperature dataset of the University of Alabama in Huntsville, compared with IPCC’s range of predictions and with the midrange 3.9 C centennial-equivalent warming predicted in the CMIP6 models.
The Realitometer shows a mere 1.33 C/century equivalent real-world warming over a third of a century. The CMIP6 models’ midrange 3.9 C prediction is thus a shocking 293% of real-world warming, and IPCC’s 2-5 C predictions are 150% to 375% of it. Yet IPCC has never reduced the predictions it first made in 1990 to bring them somewhere within range of observed reality.
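Readers who wish to verify these ratios can do so in a few lines of Python, using the figures quoted above (small differences from the figures in the text, such as 376% versus 375%, reflect rounding of the 1.33 C/century value):

```python
# Ratios of predicted to observed warming, using the figures quoted above.
observed = 1.33              # C/century equivalent, UAH since January 1990
cmip6_mid = 3.9              # C, CMIP6 midrange centennial-equivalent prediction
ipcc_lo, ipcc_hi = 2.0, 5.0  # C, IPCC 2021 range for doubled CO2

print(f"CMIP6 midrange vs observed: {cmip6_mid / observed:.0%}")              # 293%
print(f"IPCC range vs observed: {ipcc_lo / observed:.0%} to {ipcc_hi / observed:.0%}")
```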
IPCC based its predictions in 1990 on four emissions scenarios A-D. Scenario A was the “business-as-usual” scenario. It assumed substantial growth in annual emissions compared with 1990. Scenario B predicted no growth in annual emissions by now compared with 1990. In reality, annual emissions have grown by more than half since 1990. Emissions to 2021, the last full year for which figures are available, track the business-as-usual scenario A exactly. Yet the warming predicted under Scenario A is simply not occurring.
The Realitometer relies on satellite-measured temperature anomalies because the data are not contaminated, as the terrestrial datasets are, by the urban heat-island effect, the direct warming by emission of heat from cities, which is insufficiently corrected for.
UAH v.6 rather than other satellite datasets (RSS v.4, NOAA v.4 and Washington U v.1) is used because, as Andy May has pointed out in a distinguished column here, UAH alone has corrected the spurious data in the older NOAA-11 to NOAA-14 satellite instruments and because, after that correction, the UAH data conform far more closely than any of the other satellite datasets to the radiosonde data, an independent yardstick.
Month by inexorable month, the Realitometer will show just how absurdly exaggerated were and are the official predictions of global warming on which easily-manipulated governments – in Western nations only – have predicated their economy-destroying net-zero policies. Those policies are based on the notion that at midrange there will be almost three times as much global warming as has been occurring. Yet not one mainstream news medium has reported just how startlingly large the ratio of prediction to reality is proving to be.
So UAH is reality?
I see Lord M has his head in the clouds again.
How can you tell, with your own head firmly inserted between your butt cheeks?
UAH data doesn’t have the contamination of activist scientists interpolating data however they see fit. One example, covered in a WUWT article on exactly that issue, dealt with Icelandic scientists complaining that NASA (or was it NOAA?) had boosted their measurements to better blend with surrounding temps, even though the Icelanders regarded their temperature measurements as pristine, and the surrounding temps were themselves just interpolations of other data, as there are no daily thermometer readings out in the Arctic Ocean.
Instead of interpolating based on some scientists’ “secret sauce” of mathematical data torture, the satellite measurements scan a huge area directly. Sure, there are calculations to convert sensor readings into temperatures, but at least they are calibrated against radiosonde readings – whereas NASA/NOAA data manipulators don’t seem to care if their data is accurate or not.
“the satellite measurements scan a huge area directly”
A common misapprehension. They don’t. But worse, they only scan each point once a day. And it could be any part of the diurnal cycle, and different parts of the region in view at different times of day.
Ah, only once per day, yet the satellite sea-level measurements scan the same point once every 9 days, with a “view” of sea surface completely unrelated to what it was over a week prior… yet that is accurate?
How about the vast areas of the Earth that never get a land temperature measurement but somehow get a temperature assigned to it? Look at Africa, Siberia, Northern Canada. Somehow all these areas are figured into an average Earth temperature to the nearest fraction of a degree. What a joke.
Maybe you meant to say these areas are fudged into an average temperature?
UAH interpolates those areas using a local weighted strategy. This isn’t a joke though. It turns out that interpolation is better than doing nothing which effectively (mathematically equivalent) assumes those areas behave like the global average.
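The “effectively (mathematically equivalent)” point can be illustrated with a toy example: leaving a grid cell out of an average gives exactly the same result as infilling it with the mean of the observed cells (the anomaly values here are made up purely for illustration):

```python
# Toy demonstration: skipping unobserved cells in an average is mathematically
# equivalent to infilling them with the mean of the observed cells.
vals = [1.0, 2.0, 3.0]             # observed cells (hypothetical anomalies)
mean_obs = sum(vals) / len(vals)   # "do nothing": average observed cells only

# Explicitly infill one missing cell with the observed mean, then average all 4.
avg_fill = (sum(vals) + mean_obs) / 4

assert abs(mean_obs - avg_fill) < 1e-12   # identical results
```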
But it’s still a measurement, and much better than a projection from a measurement hundreds or even thousands of km away.
It is a projection. In the case of UAH it is up to 4175 km away.
Well, that’s disingenuous isn’t it? A remote sensor designed for the purpose and taking a direct measurement is hardly the same as using two points 1200 km apart to interpolate another temperature between them?
I think you misunderstood. UAH is taking two points up to 4175 km apart horizontally to interpolate another temperature between them. That’s how they infill their grid given the sparseness of the satellite observations.
BTW…that does not include the infilling that happens at the poles. Their data file advertises 90S-90N, but their grid only contains values for 82.5S-82.5N. So to compute a complete 90S-90N average they effectively (mathematically equivalent) interpolate the missing portion using values that are up to 40,000 km away. I usually don’t bother mentioning this because the missing portion of their grid only represents 0.85% of the surface area so the effect of the infilling here is negligible. But it still happens nonetheless.
Using a value that is 40,000 km away should give a perfectly accurate result, since the Earth’s circumference around the poles is 40,007.9 km. You’d be using a measurement that was really only 7.9 km away…
Yikes. I should have said 20,000 km away.
Interpolate: “insert (something of a different nature) into something else”
I think that word accurately describes your first line 🙂
That’s what SHE said.
Nope. [Spencer & Christy 1992]
15 grids representing 2.5° of longitude each at the equator is 4175 km. Compare this to GISTEMP which only interpolates to a maximum of 1200 km. And GISTEMP does not perform any temporal interpolation.
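The 4175 km figure is simple spherical arithmetic; a quick sketch using an approximate equatorial circumference (the small difference from 4175 is rounding of the circumference):

```python
# Arc length along the equator spanned by 15 grid cells of 2.5 degrees each.
EQUATOR_CIRCUMFERENCE_KM = 40075.0   # approximate equatorial circumference

cells, deg_per_cell = 15, 2.5
arc_deg = cells * deg_per_cell       # 37.5 degrees of longitude

dist_km = EQUATOR_CIRCUMFERENCE_KM * arc_deg / 360.0
print(round(dist_km))                # ~4174 km, consistent with the ~4175 km quoted
```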
Unfortunately, NASA/NOAA data manipulators greatly prefer inaccurate weather data. All the better for them to thoroughly infill, adjust, and manipulate weather data and force it to meet their preconceived beliefs or political goals.
And yours seems to be stuck firmly up your rear passage.
So Mr Stokes, what are your issues with UAH?
Many, including the history of drastic changes. But the main one here is that whatever it measures, it isn’t where we live. And it isn’t what CMIP was calculating.
Changes in temperature records are unique to UAH are they?
Are you having a laugh?
The use of adjustments isn’t unique to UAH. It is the amount of adjustment, especially adjustments made to correct mistakes, that is unique to UAH.
Adjustment / Year / Version / Effect / Description / Citation
1 / 1992 / A / unknown effect / simple bias correction / Spencer & Christy 1992
2 / 1994 / B / -0.03 C/decade / linear diurnal drift / Christy et al. 1995
3 / 1997 / C / +0.03 C/decade / removal of residual annual cycle related to hot target variations / Christy et al. 1998
4 / 1998 / D / +0.10 C/decade / orbital decay / Christy et al. 2000
5 / 1998 / D / -0.07 C/decade / removal of dependence on time variations of hot target temperature / Christy et al. 2000
6 / 2003 / 5.0 / +0.008 C/decade / non-linear diurnal drift / Christy et al. 2003
7 / 2004 / 5.1 / -0.004 C/decade / data criteria acceptance / Karl et al. 2006
8 / 2005 / 5.2 / +0.035 C/decade / diurnal drift / Spencer et al. 2006
9 / 2017 / 6.0 / -0.03 C/decade / new method / Spencer et al. 2017 [open]
That is 0.307 C/decade worth of adjustments jumping from version to version netting out to +0.039 C/decade. And that does not include the adjustment in the inaugural version.
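Both the 0.307 gross and the +0.039 net figures can be checked directly from the table (excluding Adjustment 1, whose effect is unknown, as noted):

```python
# Per-version adjustments to the UAH trend, in C/decade, from the table above.
# Adjustment 1 (unknown effect) is excluded.
adjustments = [-0.03, +0.03, +0.10, -0.07, +0.008, -0.004, +0.035, -0.03]

gross = sum(abs(a) for a in adjustments)   # total magnitude of all changes
net = sum(adjustments)                     # what they add up to

print(f"gross: {gross:.3f} C/decade")      # 0.307
print(f"net:   {net:+.3f} C/decade")       # +0.039
```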
We don’t live in Antarctica but you would be quick enough to bleat about a fractional rise in temperature there.
71% of the planet isn’t where we live, either, but it should be measured properly, no? And UAH measures where our weather and climate are born and where the greenhouse effect occurs: in the atmosphere above, not within the range of human heat-sinks, and not in the first 6 feet above the surface.
Bob: How did you calculate that Nick is 6 ft. tall? Could be standing on a pile of ‘refuse’.
Mr Stokes is perhaps unaware that, since the lapse rate of temperature with altitude is near-invariant in the lower troposphere, the anomalies measured by the surface and satellite datasets ought to be about the same, even if the absolute temperatures slightly differ. Where we live, chiefly in cities, there is an urban heat-island effect that is not attributable to anthropogenic greenhouse gases. The terrestrial datasets make insufficient allowance for the urban heat-island effect: the satellite datasets are far less affected by it.
The CMIP6 models were predicting the temperature anomaly for doubled CO2 or, equivalently, for the realized warming from all anthropogenic sources over the 21st century. The UAH dataset measures the actual anomaly. So the anomaly is what the CMIP6 models were predicting, and their midrange prediction – whichever dataset is used – is an embarrassingly large overstatement compared with observed reality. It is time for Mr Stokes and his ilk to admit that fact.
“Mr Stokes is perhaps unaware that, since the lapse rate of temperature with altitude is near-invariant in the lower troposphere”
I am very aware of the lapse rate, and it is not at all invariant. The observed lapse rate is quite a lot less than the dry adiabatic lapse rate, and the difference depends on water movements.
The lapse-rate is certainly near-invariant as between the surface and the lower troposphere, of which the surface forms part.
Besides, the terrestrial datasets show about 0.2 C/decade since 1990, which is right at the bottom of IPCC’s range of predictions and little more than half the CMIP6 models’ midrange prediction. The models are running very hot, and it behoves Mr Stokes, just for once, to tell the truth and admit that.
Haven’t you noticed that NOAA’s STAR satellite temperature data series now tracks with UAH6 and the radiosondes? It’s a wonder what cleaning up the data from failing instrumentation will reveal. The politicized RSS is now the outlier.
You bet your sweet ass UAH is reality. Over 44 years (a goodly portion of which included the upswing portion of cyclical temperature variations) of atmospheric temperature data and you get only a 0.13℃/decade trend, even with an ending Super El Niño. You can kiss the UN IPCC CliSciFi climate models goodbye; they will never recover from the CMIP6 fiasco.
“Haven’t you noticed that NOAA’s STAR satellite temperature data series now tracks with UAH6”
So does that radical change give confidence in satellite temperatures?
In the entire previous pause, Lord M used the “politicized RSS” exclusively, when UAH was about as high as RSS is now. He uses whatever is lowest.
UAH calibrates with balloon data.
If it were calibrated against balloon data then you would expect it to match balloon data.
And besides if the balloon data is supposed to be the gold standard then why use UAH in the first place? Why not just use the balloon data?
Was that a rhetorical question?
If it’s not, the answer is “spatial and temporal coverage”
It was mostly rhetorical. But yeah, that is a good point. The coverage of upper air stations isn’t that great.
It would be good to add the graphs to support your message
Probably, but I won’t bother since they have been covered extensively at WUWT. My memory is good enough for me for well-known factoids.
It’s a double-edged sword. STAR matches UAH in terms of the 1979-present trend because of a coincidental cancellation of a high warming trend post-2002 with a low warming trend pre-2002. According to STAR the warming has accelerated, meaning that STAR and UAH could diverge more and more as time goes on.
Here are the trends from 2002-present in the TTT layer.
UAH: 0.156 C/decade
RSS: 0.153 C/decade
STAR: 0.184 C/decade.
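For readers unfamiliar with how such figures are produced: a trend like these is just an ordinary least-squares slope fitted to the monthly anomalies and scaled to C/decade. A minimal sketch, with an anomaly series made up purely for illustration:

```python
# Sketch: a per-decade trend as an ordinary least-squares slope over monthly
# anomalies. The anomaly series here is hypothetical, for illustration only.
anoms = [0.10, 0.12, 0.08, 0.15, 0.20, 0.18]   # made-up monthly anomalies, C
t = [i / 12.0 for i in range(len(anoms))]       # elapsed time in years

n = len(anoms)
t_bar = sum(t) / n
a_bar = sum(anoms) / n

# OLS slope: covariance(t, anoms) / variance(t), in C per year.
slope = sum((ti - t_bar) * (ai - a_bar) for ti, ai in zip(t, anoms)) \
        / sum((ti - t_bar) ** 2 for ti in t)

trend_per_decade = 10.0 * slope
print(f"{trend_per_decade:.3f} C/decade")
```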
Who gives a shit about cherry-picked dates? The entire 44 year satellite-derived records for UAH6 and new NOAA STAR datasets give us the de minimis 0.13℃/decade warming trend. And that period encompasses the late-20th Century increasing temperature trend that the UN IPCC CliSciFi practitioners used to tune their models (along with imaginary aerosol concentrations) to get their unphysical sensitivities to atmospheric CO2 concentrations.
There are no Tropospheric Hot Spots, anywhere. Accordingly, the CliSciFi fiction of significant water vapor temperature amplification is defunct. CliSciFi is a scientific dead end and an embarrassment to the traditions of the scientific method. The whole catastrophic climate change narrative is a Leftist political construct designed to dismantle Western liberalism and free market economies.
They aren’t my dates. They come from STAR. Remember, you’re the one that brought STAR into the discussion. If you are now harboring reservations about what they said then you shouldn’t have used them to defend UAH.
Bullshit. Both UAH6 and NOAA STAR have identical trends of 0.13℃/decade over the entire 44-year period.
And very different trends after 2002 for which STAR shows significantly more warming than even RSS. The warming is accelerating they say. Do you accept STAR or not?
No Nick, temperatures recorded by runways are the new reality…
Dear Nick Stokes,
Do you really believe man made carbon dioxide emissions are controlling the planet’s climate?
“And the Wall came Tumbling down”; the lack of a reply (as of 09.58am GMT, UK) compels an inference: none of this deluded soul’s comments, as far as I have read, ever deals with that killer question or the allied questions, which are any variation on “Precisely why do you consider CO2 a heat-reflecting pollutant?” or “Why do you wish to reduce a gas that is vital to all life on earth?”
I am not a scientist; yet even I can read the numerous shreddings of the IPCC reports (using their own words), the dichotomy between the alleged scientific elements and the politically driven precis, the fraudulent use of the “97% Club”… and a myriad of other extremely inconvenient data and facts that deniers like him don’t like to address.
So Mr Stokes, if “you” can reduce CO2 emissions from human activity to nil, what effect will that have on all plant life on this planet given the current CO2 ppm in the atmosphere? What amount of CO2 by ppm are “you” content to see? Will “you” then, by extension of the logic behind your belief, advocate the reduction of NON-human CO2 emissions? If not, why not? Surely “you” cannot be persuaded that human CO2 emissions = “BAD”, whilst NON-human CO2 emissions = “GOOD”? What NON-human emitters of your heat-polluting CO2 will be top of “your” agenda? A cull of all herbivores and omnivores? Trees? All grasslands? Crops for human consumption? Do you see insects bred for human consumption replacing CO2-emitting animals? Do you know that all insects exhale CO2 – how do you propose to eliminate that crushing fact from your narrative?
Come on, “lets have it” so we can all understand exactly where you are coming from…
I would argue that Mr Stokes is what we English call a wind-up merchant.
How are you pronouncing the “I” – long or short vowel?
186no, you are hitting some points that almost make Nick look like a front-man for China, India, and Russia, who collectively have exceeded all of the carbon cuts made by western civilizations, and even sold them green machines, made with coal power, to add insult to injury. Makes you wonder….
China and India – yes; Russia – no. The Russian economy took a big hit in the 1990s, with a lot of manufacturing closed, and subsequent economic growth was slow. Russia’s CO2 emissions are still significantly lower than they were in USSR times: in 1990 they were ~15 metric tons per capita (about on par with USA per capita emissions today); in 2019, 11.8 metric tons, while the population is declining, from ~150 mln in 1990 to ~143 mln today.
You forgot to ask how Mr Stokes is going to control CO2 emissions from volcanos. A far larger source than anything man made. 🙂
Sincere apologies – I await the Stokesian reply..
Not just volcanos.
Every hot spring, and the entire Ring of Fire, outgasses CO₂ continuously or intermittently.
Heat and pressure metamorphosing/decomposing carbonate rocks release CO₂.
A factor that alarmists absolutely hate and try desperately to ignore, because it destroys their alarmism religion.
And yours is up your anus.
You’re right as always Nick. Satellites are bad news. We can’t exhaust an air conditioner into them. We can’t park a refrigerated truck next to them. We can’t site them where jet engine exhaust can ‘adjust’ the readings. There’s no urban heat island steadily increasing around them that we can apply trickery to exaggerate the warming trend.
In short, wholly unfit for purpose.
The satellites are subject to the instrument body effect, in which the instrumentation is subject to heating and cooling that changes how it senses the upwelling microwave radiation. This effect has both diurnal and long-term components. The long-term component would not be unlike the long-term UHI bias for land temperature measurements.
Keep in mind that the UHI effect itself isn’t the problem. It represents warming that is real. It should be included since it actually happened. The problem is how this effect biases spatial averages. Ideally you would remove the bias without removing the effect. Contrast this with the IBE, which is itself a problem because it causes a spurious warming/cooling trend that didn’t actually happen.
Thanks for taking my sarcasm so seriously bdgwx.
A couple of problems with your thinking. First, UHI is real for sure, but it’s not a greenhouse gas effect. If the goal is to estimate how much the climate is warming as a result of anthropogenic CO2 emissions, we need to exclude all man-caused effects such as land use changes. Sure there might be actual harmful impacts caused by man due to land use. I actually think that’s much more likely than harmful effects from additional plant food in the air. But establishing harmful effects from land use does not justify demands to eliminate fossil fuel use. It suggests changing land use maybe, but CO2 emissions from fossil fuels are irrelevant to those effects.
Second, and I suspect you didn’t actually miss my point, the concern you choose not to acknowledge is that far too many measurement stations are not properly sited. They are contaminated with waste heat that goes above and beyond the UHI effect. So many cases that it strains credulity to think that all of them are due to “honest” incompetence. There is no analogous effect above and beyond the instrument body effect impacting satellite measurements.
No dataset is going to tell us the warming caused by anthropogenic CO2 directly. They only tell us how much warming has occurred. More analysis is needed to determine contribution breakdown.
Anyway, my point is that although satellite measurements are not contaminated by nearby air conditioners, refrigeration trucks, and airplanes, they are still contaminated by other effects. There are a lot of adjustments that go into satellite data processing. In fact, an argument can be made that satellite datasets perform more adjustments than their surface counterparts.
That is certainly true as far as it goes. The problem I am asking you to acknowledge is that dishonest/incompetent (take your pick, not mutually exclusive) analysts pretend that heavily contaminated datasets exclusively represent the effect of CO2 emissions and sensationally point to “record high” readings that almost certainly only reflect the most egregious contamination cases.
The propagandists conveniently forget your advice. It would be good of you to admit that.
Anyone who tracks my posts knows I’m not kind to claims that CO2 and only CO2 can have an effect on the global average temperature.
Yet you have studiously avoided admitting that the surface temperature networks have a problem with improper siting. Why is that?
They do have siting problems. They also have time-of-observation change problems, instrument package change problems, relocation problems, measurement type change problems, UHI problems, etc. They have problems just like satellite datasets have problems.
And those adjustments are justified and documented.
According to greenhouse gas theory, the atmosphere is the place that the trace GH gasses operate to warm the Earth system. Theory says that surface temperatures are but a reflection of and must follow atmospheric temperatures that respond to GHGs. If surface temperature trends exceed those of the atmosphere it implies the surface temperature trends are bogus. It is similar to CliSciFi claims that the supposed excess heat is hiding in the deep oceans, although no such heat has been measured in the intervening ocean layers.
I agree. Those adjustments are justified.
Nor has the mechanism to “warm” oceanic/lake/river water ever been identified, demonstrated or otherwise proven.
Right now, climate alarmism depends upon belief that deific CO₂ molecules warm surface water levels, that then somehow manages to circulate warm water thousands of meters deep.
AGW, CAGW, Global Warming, er, climate change theory and fact.
As Andy May said, “ Oops! Has NOAA just shown that Spencer and Christy’s UAH satellite record is correct?”
It is certainly evidence in support of that conclusion. However, it is a double-edged sword. While they say the 1979-2022 trend is similar to UAH, they also say that is the result of the cancellation of a low warming trend in the first half of the period and a high warming trend in the second half, owing to a far more pronounced acceleration of the warming. Globally the acceleration is assessed as +0.03 C/decade². What that means is that the TTT layer could go from +0.18 C/decade to +0.24 C/decade by 2042 should the acceleration continue at that pace.
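The extrapolation at the end of that comment is simple arithmetic: the quoted acceleration applied to the current trend over the two decades from roughly 2022 to 2042:

```python
# Extrapolating the quoted STAR acceleration over two decades.
trend_now = 0.18   # C/decade, current TTT trend as quoted
accel = 0.03       # C/decade^2, quoted acceleration
decades = 2.0      # roughly 2022 -> 2042

trend_2042 = trend_now + accel * decades
print(f"{trend_2042:.2f} C/decade")   # 0.24
```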
Accelerated warming? Over what (cherrypicked) periods? Last period ending on a Super El Niño? Warming dramatically decelerated over the period 1997 to 2014.
That’s over the whole period ending on a triple dip La Nina.
Temperatures are still going down off the Super El Niño high.
I know. And yet STAR still says the acceleration is +0.03 C/decade². I’m having a hard time envisioning how that isn’t close to the lower bound, since we are just beginning the transition out of a La Nina with a planetary energy imbalance of +0.8 W/m2.
Some politicized bureaucrat’s notion of short term “acceleration” means nothing to me. Long term data is conclusive that CliSciFi, especially its fried models, is bunk and we’ll just have to wait on what the future brings.
Everything coming out of the UN IPCC and Western governments is attempts at crude sales jobs to scare the masses.
Earlier you made it sound like you approved of STAR, saying “It’s a wonder what cleaning up the data from failing instrumentation will reveal.” and “The politicized RSS is now the outlier.” Now you seem to be saying STAR is a politicized bureaucracy whose result means nothing to you. Obviously this leaves me puzzled as to your position on the matter.
NOAA STAR is doing just fine. Read Ross McKitrick’s critique of the acceleration claims over at Judith Curry’s ‘Climate Etc.’ site. He shows there is no statistically valid “acceleration.” Please try to keep up with the real science, bdgwx.
So STAR isn’t a politicized bureaucrat’s creation?
bdgwx, what the hell is your problem? NOAA’s Zou et al. paper rebuilt the STAR database and showed its overall trend is slightly less than UAH6’s. As shown by Ross McKitrick, in the paper Zou only presented some suggestive calculations of an acceleration in some of the latter years, but performed no formal statistical tests to determine if that were true or not. So the paper did not prove there has been an acceleration and any assertion that it has is a “politicized bureaucrat’s creation.”
Ross McKitrick’s April 19, 2023 posting at Dr. Curry’s Climate Etc. provided some preliminary statistical analyses showing claims of acceleration are incorrect. Go to Climate Etc. to see Ross’s analysis.
Ross McKitrick says; “In sum, based on a preliminary analysis the new NOAA data do not support a claim that warming in the troposphere has undergone a statistically-significant change in trend.”
Zou et al. created STAR. Zou et al. said the warming is accelerating. The same people that said the warming is accelerating are the same people that created STAR. So if you want me to accept that the claim of acceleration is from a politicized bureaucrat then I don’t have any choice but to accept that the whole dataset is from a politicized bureaucrat as well.
BTW… McKitrick didn’t say it wasn’t accelerating. In fact, his own analysis shows that it is more likely than not that it is. It just isn’t statistically significant at p < 0.05. Notice that the alternate hypothesis is more likely than the null hypothesis in his experiment. This is true even with the triple-dip La Nina.
Christ, bdgwx. The STAR dataset is an entirely different proposition than Zou’s bogus acceleration calculations. I assume he did the acceleration speculation to soften the blow from the reduction in the 44-year trend.
I don’t know where you get the idea that McKitrick didn’t say STAR wasn’t accelerating. He actually said:
“In sum, based on a preliminary analysis the new NOAA data do not support a claim that warming in the troposphere has undergone a statistically-significant change in trend. The Global and Tropical TTT series show no support for the claim. The Global MT series appears to show support but only if the break date is placed in a specific interval in the early part of the last decade, and more recently the tests do not support acceleration.”
Please go off and find another dead horse to beat.
I’m concerned that it might take many years for our corporate media outlets to realize that there is nothing alarming taking place with the climate or daily weather events.
Potential elected leaders really need to bone up on this issue, find talking points, and be able to articulate the truth to their constituents; and have the political fortitude to be truth-sayers. Vivek R. seems willing and able to.
If they spoke the truth they wouldn’t get elected or any airtime – and that’s what they live for.
They simply don’t care – that’s why all the political parties and mainstream media have signed up to the climate cult, it pays the bills in votes and clicks.
The only hope is that some real crisis comes along and gives them something useful to work on – but if world hunger, war and violence, development and other serious issues, are already being ignored because of the green fixation then I don’t know what else could reset the public attention to useful purpose. Probably an asteroid impact – but you won’t catch me hoping for that, not even a near miss.
I think you are quite right PCman999, following the money which includes the media needing to sell news “if it bleeds it leads” or academics wanting notoriety which may translate to $$.
I do hold out hope that more leaders will come to their senses …I am somewhat hopeful – but it might take 3-5 years for a sea-change of public opinion away from c.alarmism to take shape.
The coming stagflation and energy crisis will wake up many people to the climate scam.
Governments just lie. They protect themselves and allow no useful dissent and pass off all of the consequences of their fecklessness to others.
Take unemployment: if you have 100 people looking for work and 5 can’t find a job, the unemployment rate is 5%. But if 20 people can’t find jobs, the unemployment rate would be 20%. To hide their incompetence the government just declares 17 people not in the labor force, so the unemployment rate becomes 3/83, or 3.61%. So, while the economy stagnates and unemployment goes up, the government liars declare unemployment is going down and thus the economy is doing well, and the many useful idiots in the population are unable to figure out what is going on.
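The arithmetic in that example checks out; a quick sketch:

```python
# The unemployment arithmetic from the example above.
labor_force, jobless = 100, 20
headline = jobless / labor_force                          # 20% if honestly reported

reclassified = 17                                         # declared "not in the labor force"
reported = (jobless - reclassified) / (labor_force - reclassified)
print(f"{reported:.2%}")                                  # 3.61%
```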
This is just one of thousands of lies governments and their sycophants perpetuate, and it takes serious looking to overcome the technology companies’ suppression of information (search engines).
It’s already been more than 30 years. They won’t give up on “climate change” until they find a new crisis to “milk.”
There is no need. None of the needles is going to move from one month to the next, probably not even from one year to the next. Perhaps publishing it each decade is enough.
Minor point – the UAH needle shows 1.36 not 1.33
The IPCC needle shows 3.98 not 3.90
Didn’t see your comment before posting below. But 1.36 is the correct trend since 1990.
“The Realitometer shows a mere 1.33 C/century equivalent real-world warming over a third of a century.”
Minor quibble, but UAH shows 1.36°C / century since January 1990.
The less minor quibble is where the 3.9°C figure comes from, and why it’s relevant to compare a prediction for a long term trend against the short term warming.
3.1.1 Long-term Climate Change
Nobel Laureate in Economics William Nordhaus showed that warming by up to 3.5℃ by 2100 would be net beneficial to the World. I think he overstated the damage function, but it is what it is.
“showed”? Is that one of those models everyone thinks cannot tell you about the real world?
Are you sure he said a rise of 3.5°C would be beneficial? My (extremely limited) understanding was that he was saying the damage from 3.5 C of warming would equal the cost of keeping the rise to that level. In that sense a rise of 3.5 would be optimal, but it wouldn’t mean it was beneficial.
Not quite. The combined damage of a 3.5 degree C rise and mitigation efforts to keep the rise to that level were the lowest overall cost.
That was based on Scenario A / RCP 8.5.
Earlier exercises by others came in net beneficial up to 2 degrees C
Later exercises came up with a Nordhaus optimum at 2 degrees, or 1.5
It’s largely a matter of the discount rates used, with some contribution of the assumed damage and costs of mitigation.
And they high-ball the “damages” and low-ball the costs.
Look, Bellman, here’s the deal: He took CliSciFi climate models that run too hot out in the future, accepted CliSciFi speculation as to what “damages” are caused by various temperature increases and ran that through an econometric model using unrealistically low discount rates for the (underestimated) costs to limit warming. Anyway, econometric forecasting is about as reliable as CliSciFi climate modeling.
I don’t believe any of this crap but cite it to show, even with their own numbers and faulty CliSciFi models, that high-end warming of 3.5℃ by 2100 will not be the disaster the politicians and hysteria profiteers say it will be.
The IPCC was off by at most 0.1 C over a 30 yr period. And that’s for a scenario that includes higher emissions than humans choose.
Monckton was off by at least 0.7 C over only a 7.5 yr period.
If an error of 0.1 C / 3 decades = 0.03 C/decade is “crap” then what is an error of 0.7 C / 0.75 decades = 0.93 C/decade?
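Normalising each error to degrees per decade, as above, is simple division; a minimal sketch using the error and period figures quoted in this comment:

```python
def error_per_decade(error_c, period_years):
    """Convert a total prediction error (C) over a period (years)
    to an error rate in C per decade."""
    return error_c / (period_years / 10.0)

# 0.1 C error over 30 years:
print(round(error_per_decade(0.1, 30), 2))   # -> 0.03
# 0.7 C error over 7.5 years:
print(round(error_per_decade(0.7, 7.5), 2))  # -> 0.93
```

Expressed per decade, the two errors differ by roughly a factor of thirty, which is the point being made.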
Bellman, who is even less versed in economics than in climatological physics, should get someone to explain to him the difference between “net-beneficial”, which is what Nordhaus actually wrote, and “beneficial”, the misrepresentation that Bellman used.
From memory, Nordhaus’s DICE model gave 3.5 degrees C (or 4, depending on the source) as the lowest combined damage + mitigation cost, rather than being net beneficial. Less mitigation cost less, but had a greater cost of damage, and more spent on mitigation reduced the damage by less than the mitigation cost.
Earlier work by others had shown warming as net beneficial up to around 2 degrees.
The DICE model seems to be rather sensitive to its assumptions, including the climate sensitivity.
I was responding to David Fair’s comment that
“warming by up to 3.5℃ by 2100 would be net beneficial to the World”
That’s very different from saying that there would be a net benefit in spending money to constrain warming to 3.5°C.
The benefit you are comparing against cost is the reduction in warming, not, as David suggests, the warming itself.
Bellman asks for the reference for the CMIP models’ 3.9 C midrange prediction. It is given in Zelinka et al. (2020, supplementary materials). And he asks why one should compare long-term with short-term warming. As has been explained to him many times before, IPCC (1990: read the head posting for the quote) says medium-term warming over the 21st century will be 0.3 C/decade, while ECS will be ten times that, or 3 C.
On that basis, 3.9 C ECS implies 0.39 C/decade warming, but only 0.136 C/decade has happened since IPCC (1990). If Bellman doesn’t like that basis, let him take it up with IPCC.
” It is given in Zelinka…”
But that’s just the average ECS. It doesn’t mean that it’s the expected ECS, nor does it mean a warming rate of 3.9 °C / century.
The IPCC are still saying the most likely ECS is 3, with 4 as a likely upper limit.
“As has been explained to him many times before, IPCC (1990: read the head posting for the quote) says …”
But you aren’t using 1990 models, you are using the latest models. And even from 1990, the graphs suggest the warming rate is expected to accelerate across the century.
“…medium-term warming over the 21st century will be 0.3 C/decade, while ECS will be ten times that, or 3 C.”
Ten times is meaningless. They are different measurements of different things.
“On that basis, 3.9 C ECS implies 0.39 C/decade warming, but only 0.136 C/decade has happened since IPCC (1990). ”
Only if you make a huge number of assumptions about how CO2 will change.
“If Bellman doesn’t like that basis, let him take it up with IPCC”
I quoted exactly what the IPCC are currently predicting. If Monckton thinks they are underestimating the projected warming, maybe he should take it up with them.
Is this any relation to the Global Warming Speedometer introduced in 2016?
In that one, the comparison was with shorter term predictions which were claimed to be 2.8, 1.8 and 2.1°C / century for three different IPCC reports.
And reality was then an average of UAH and RSS, since January 2001, which was reported as 0.47°C / century.
The rate of warming for UAH since 2001 is now 1.38°C / century, which if nothing else shows why looking at short term trends can be misleading.
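The sensitivity of a short-window trend can be illustrated without the actual UAH data; this sketch fits a least-squares slope to synthetic monthly anomalies (an assumed underlying 1.3 C/century trend plus noise — illustrative values only, not the real series):

```python
import random

random.seed(42)

# Synthetic monthly anomalies: fixed underlying trend plus noise.
# (Illustrative only -- this is NOT the actual UAH series.)
MONTHS = 400                 # roughly Jan 1990 to date, as in the post
TRUE_TREND = 1.3 / 1200      # 1.3 C/century, expressed per month
series = [TRUE_TREND * m + random.gauss(0, 0.15) for m in range(MONTHS)]

def ols_trend_per_century(y):
    """Least-squares slope of monthly values, scaled to C/century."""
    n = len(y)
    mean_x = (n - 1) / 2
    mean_y = sum(y) / n
    num = sum((i - mean_x) * (v - mean_y) for i, v in enumerate(y))
    den = sum((i - mean_x) ** 2 for i in range(n))
    return (num / den) * 1200  # per month -> per century

# The full record recovers the underlying trend fairly well...
print(ols_trend_per_century(series))
# ...but a short window (last 10 years here) has a much wider
# spread and can land far from the true value.
print(ols_trend_per_century(series[-120:]))
```

The uncertainty of the fitted slope shrinks rapidly with window length, which is why a 2001-start trend computed in 2016 and the same calculation today can differ so much.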
The IPCC just cannot make belief into fact, whatever the weather.
I appreciate that Monckton of Brenchley so tenaciously opposes the claims of climate harm and the resulting insanity of the “net-zero” movement.
However, in doing so, as I see it, too much is ceded to adherents of the “greenhouse gas” dangerous heat-trapping misconception.
The debate devolves to a “forcing + weak feedback” versus “forcing + strong feedback” match of wits in persuading the onlookers which side is more correct.
Stop it. The “forcing + feedback” framing of the issue is itself unrealistic.
Just watch from space to grasp that the static concept is valid in theory, but the atmosphere is obviously not static in reality. The motion changes everything about where to expect the energy involved in the incremental warming effect at the surface to end up. One need NOT assume that heat energy must accumulate on land and in the oceans from what the slowly increasing concentrations of non-condensing GHGs do in the atmosphere.
NONE of the reported warming at the surface or in the atmosphere (the UAH record) can be reliably attributed to rising concentrations of GHGs by any means presently available. There is simply too much evidence of overturning circulation at local and global scale, with the resulting formation and dissipation of clouds, to isolate the minor effect of GHG emissions over time.
So here again is a link to the NOAA GOES East Band 16 animation of two hours of high-resolution images. You can select a longer period. The radiance at 30C on the “brightness temperature” scale (yellow) is 10 times the radiance at -90C (white.)
This is written with great respect for the persistence in opposing the exaggerated claims. But please consider that the ad argumentum posture about “forcing” gives too much away to the other side of the debate.
What you are seeing is a constant negative feedback. It could even be called a negative forcing. I’d call it mild positive forcing + mild negative feedback. The increased CO2 radiative forcing is measurable so hard to ignore it and maintain credibility. The negative feedback is driven by increased evaporative cooling also driving decreased H2O radiative forcing.
I do agree that skeptics need to show this full effect and stop conceding net warming. The complete science doesn’t support it. Why accept pseudo-science? There’s lots of evidence to support it. The lack of a hot spot. The uneven hemispherical warming. But most of all, the basic physics of the boundary layer.
Thanks for this reply. I understand your point. You can tell I accept the measurable static warming effect but not the necessity of a “net warming” result. So another way I could put it, using your terms, would be that neither the mild positive “forcing” nor the mild negative “feedback” can be isolated for reliable attribution to rising concentrations of GHGs, from what is happening naturally to begin with as the atmosphere responds to absorbed and stored energy as the working fluid of its own heat engine circulations.
It would be interesting to see an experiment where a large container is used and water is placed at the bottom, with 1X, 2X and 4X CO2 concentrations in the air above it. Then shine an IR beam into the air. My claim is we should see more evaporation with higher CO2 levels, which reduces the warming and increases the humidity, compared with the same container without water.
Of course, this would not show the effects of increased convection, but it should provide some verification of boundary layer effects. Climate science predicts increased warming and increased humidity.
Interesting idea. But one of the reasons I keep posting the link to the Band 16 visualizations is that the end result of ALL atmospheric responses and processes, and ALL interactions radiatively and otherwise with the surface, is already being monitored in relatively high resolution and near-real-time. And it’s in the same band of wavelengths from which much of the concern about greenhouse gas narrowing of the “atmospheric window” arises.
The constant negative feedback comes from the people peddling climate alarmism.
“Not everything important is measurable, and not everything measurable is important.”
— Elliot Eisner
CO2 emissions are not important in heating the atmosphere.
“The increased CO2 radiative forcing is measurable”
Simple answer is – no, not measurable. Due to huge natural variability attribution is not possible.
It should be measurable at the TOA by spectral analysis. Increases in CO2 should reduce the atmospheric window. This is where the added energy comes from. I thought there were already some papers which document this change.
OTOH, the reduction in total OLR required for warming is much more difficult due to the natural variability you mentioned. CERES data may eventually answer this question.
So, the climatards are still lying, yet again. Got it.
“The biggest problem with computer models is getting them to matchup to reality.”
Saying this never gets old, unfortunately…
Reality is not a word in the vocabulary of climate alarmists.
Reality means real time weather observations vs computer model guesses.
Reality means noticing that summer seems about to give us a pass this year.
Even the climate alarmists are noticing this so they have changed their narrative:
“Ireland could see a ‘cooling’ of its weather due to climate change,
unlike many parts of the world which are seeing rising temperatures.“
As a realist, I saw this coming.
Not so the gullible Irish masses.
You didn’t notice that 2022 was the warmest year on record in Ireland then?
Here are the atmospheric GHG concentration scenarios from which the IPCC temperature predictions are based.
And here are the temperature predictions for those scenarios.
As you can see the IPCC did a pretty good job.
Especially considering that widescale CFC production was reduced after the prediction was made.
And a large volcanic eruption.
Those made up temperature numbers only demonstrate the gullibility of whomever produced the graph.
It’s interesting that you describe predictions that match observations reasonably well as “made up” from “gullibility”.
I’m curious. How would you describe the temperature predictions from Monckton? Let’s be specific. Let’s discuss Monckton’s prediction from this post in which he said the Earth would cool by 0.5 C by 2020. Was that a good prediction? Was it made up? Did it come from gullibility?
Your attempted deflection changes nothing.
-Phil Jones told us the older SH numbers were “made up”.
-Dr. Spencer’s recent work shows the algorithm for eliminating UHI works backwards.
-Most of the pre-1950 adjustments are biased and not any better than guesses.
-Even the post 1990 increase is 50% higher than the UAH trend.
Your chart is laughable.
Let me get this straight. I point out that the IPCC made a pretty good prediction which you call “made up” from “gullibility”. I then ask for your assessment of Monckton’s prediction whether it is “made up” from “gullibility” which you didn’t answer. And I’m the one deflecting?
Also, it’s not my chart. It is from the IPCC FAR in 1990, which Monckton misrepresents. And is it laughable because it shows a prediction far closer to reality than Monckton’s?
bdgwx continues to be disingenuous, dishonest and deliberately misleading. The four scenarios in IPCC (1990) were not forcing scenarios. They were emissions scenarios. The largest mistake made by IPCC with these scenarios was that it imagined, falsely, that there would be far more CO2 remaining in the air after our emissions than is in reality the case. That is why Scenario A is so far off beam. Worse, IPCC has not adjusted its predictions downward now that it knows it got the emissions basis wrong.
Patently false. As can be clearly seen in the IPCC FAR SPM figure 9, the IPCC temperature prediction was off by maybe 0.1 C at most over a 30 year period for scenario A. That includes the prediction of about 440 ppm of CO2. That’s hardly what I’d call “far more CO2 remaining in the air after our emissions than is in reality”. So even if you are making an argument starting from emissions for scenario A, understand that the IPCC’s prediction was still far better than yours. And as I’ve said repeatedly, scenario A includes far more than CO2 emissions. Not only do you fail to mention that the temperature predictions are made from the state of the atmosphere, you don’t mention any of the other GHGs that are considered either, nor the fact that many of those (CFCs and CH4) don’t follow scenario A. And I’m the dishonest one here?
While it’s true that the IPCC pushes that a given amount of emissions will lead to a given amount of warming, this misses the key intermediate step in criticizing their predictions. They really have two principal claims: the CO2 levels that will result from a given amount of emissions, and the warming that will occur for a given CO2 concentration. For the first two decades of the century, CO2 levels are fairly close for all scenarios, but they start to spread about 2025, so there is an opportunity both to check each claim independently and to reduce wiggle room for why the predictions are diverging from reality. As it is now, they can just claim that emissions must have been lower than estimated initially, but that future growth is still dangerous.
Exactly. In the 1990 FAR there is a 2-step process. 1) Given the understanding of the carbon cycle estimate the mass of carbon in the atmosphere. 2) Given the understanding of the radiative forcing of carbon in the atmosphere estimate the temperature rise.
It turns out that the IPCC’s prediction from step #2 was nearly indistinguishable from reality. The biggest problem was the IPCC’s prediction from step #1. They expected more CO2 to be in the atmosphere given the emission pathway that actually occurred.
I should also point out that step #1 involves more than just CO2. CH4, CFCs, HCFCs, N2O, O3, etc. are considered as well.
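Step #2 is commonly approximated with the simplified CO2 forcing fit ΔF ≈ 5.35 ln(C/C₀) W/m² (the widely used Myhre et al. 1998 expression), scaled by a climate sensitivity. A rough sketch under those assumptions — note this gives equilibrium, not transient, warming, so it overstates what would be observed to date:

```python
import math

F2X = 5.35 * math.log(2)  # forcing from doubled CO2, about 3.7 W/m^2

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Simplified CO2 radiative forcing in W/m^2 (Myhre et al. 1998 fit)."""
    return 5.35 * math.log(c_ppm / c0_ppm)

def equilibrium_warming(c_ppm, ecs=3.0, c0_ppm=280.0):
    """Equilibrium warming (C) implied by a CO2 concentration,
    assuming the response scales linearly with forcing at the
    given ECS (C per doubling). Transient warming is lower."""
    return ecs * co2_forcing(c_ppm, c0_ppm) / F2X

print(co2_forcing(560))          # doubled CO2: about 3.7 W/m^2
print(equilibrium_warming(414))  # ~414 ppm at ECS = 3 C
```

This is why the concentration predicted in step #1 matters so much: the warming in step #2 follows the logarithm of whatever concentration the carbon-cycle assumptions deliver.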
bdgwx continues to be dishonest. The graph of IPCC’s predictions in IPCC (1990) shown in the head posting is for CO2 emissions only. CO2 emissions are indeed tracking the business-as-usual scenario A. Yet the 0.3 C/decade warming predicted by Scenario A is not happening.
I’m not talking about the graph in the head post. I’m talking about the scenario A temperature prediction. Scenario A includes emissions from many GHGs.
The CO2 concentration in 2020 was over 414 ppm. This value is in between the average projections for the A2 and B2 scenarios. The best estimate of warming rate for B2 was 0.24 degrees C/decade; A2 was 0.34 degrees C/decade. The IPCC’s prediction for step #2 is far from the reality of 0.13. Even the lower bound of the lower estimate (0.14 deg/dec) is hotter than reality.
Don’t forget about the other GHGs. CH4 is below scenario C. CFCs are below scenario D. When considering all GHGs, scenario B is closer to what humans chose than scenario A.
The IPCC FAR says the scenario B warming rate is 0.2 C/decade.
These predictions are for the global average surface temperature. According to an equally weighted blend of 7 surface datasets, the warming rate is 0.2 C/decade.
The Scenario A CH4 and CFC11 increases were always somebody’s bad acid trip.
It’s hard to disagree. The Montreal Protocol was signed in 1987 and became effective in 1989. An argument could be made that the scenario A CFC emissions were unrealistic by the time of publication of the FAR.
Methane was only projected to account for 15% of the forcing of CO2, and CFCs were to have a slight cooling effect. When considering all GHGs, the end result is still a higher CO2 equivalent than scenario B.
Furthermore, the projected temperature rise for scenarios B1, B2, and A2 are nearly identical through 2030, before B begins to taper off – the projection of 0.24 deg/dec was the long term average after tapering.
The IPCC’s projection for scenario B was 0.28 deg/decade through 2030.
I get about 0.2 C/decade from the graphic in the SPM. That also matches the text.
Estimates of warming trends per decade by the IPCC (and others) are totally meaningless, because our climate is NOT driven by the accumulation of Greenhouse gasses, but primarily by SO2 aerosols from random volcanic eruptions, and industrial activity.
EVERY noticeable change in average anomalous Jan-Dec land/ocean global temperatures can be correlated with a change in the amount of SO2 aerosols in our atmosphere.
This premise is falsifiable (capable of being empirically tested), and has been validated hundreds of times, whenever there is a large volcanic eruption, and their SO2 aerosols first cool the planet, then warm it with an El Nino, when their emissions settle out, 2 to 3 years later.
I came across this little nugget this morning—>
It was on BigJoeBastardi’s twitter page. I have an interest in the paleo side of the climate debate and Erik the Red’s colony in Greenland. I don’t know about the technology side of the thermometer used in this video. If this is valid it changes a lot of things.
As far as I can tell, budgiewax and Nick are singing from the same hymn sheet, the only difference being that budgie believes in the IPCC and Nick thinks you’re wrong and he’s right. Budgie at least presents arguments why he believes in the IPCC, Nick not so much. The problem with belief is that it cannot be countered. (Don’t blame me for the budgie thing, that’s the way my brain works. When presented with a string of unrelated consonants it does a Vanna and provides vowels.)
Why aren’t more people using the NCEI/USCRN data sets for temperature etc.?
The data isn’t altered and the locations have been well researched. Or did I just answer my own question (because they don’t alter the data)?
It could be because it shows more warming. USCRN shows +0.54 F/decade whereas nClimDiv shows +0.40 F/decade.
bdgwx… you do realize that there is a graph located on the right-hand side of this website with the details of the USCRN data collection? The graph and data show no global warming since 2005.
Of course I realize that. It’s where I went to download the data. The trend is +0.54 F/decade (+0.30 C/decade). BTW…USCRN is not a global dataset. It is a dataset covering the United States only.
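The two trends quoted here are the same kind of quantity in different units; a temperature trend converts from F/decade to C/decade by the 5/9 factor alone, since the 32-degree offset cancels in a difference. A minimal check of the figures quoted:

```python
def f_trend_to_c(trend_f):
    # A trend is a temperature *difference*, so only the 5/9 scale
    # factor applies; the 32 F offset cancels out.
    return trend_f * 5.0 / 9.0

print(round(f_trend_to_c(0.54), 2))  # USCRN:    0.54 F/decade -> 0.30 C/decade
print(round(f_trend_to_c(0.40), 2))  # nClimDiv: 0.40 F/decade -> 0.22 C/decade
```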
CMIP? More like CHIMP-anzee.