From Dr. Roy Spencer’s Global Warming Blog
March 3rd, 2023 by Roy W. Spencer, Ph. D.
The Version 6 global average lower tropospheric temperature (LT) anomaly for February 2023 was +0.08 deg. C departure from the 1991-2020 mean. This is up from the January 2023 anomaly of -0.04 deg. C.
The linear warming trend since January, 1979 remains at +0.13 C/decade (+0.11 C/decade over the global-averaged oceans, and +0.18 C/decade over global-averaged land).
Various regional LT departures from the 30-year (1991-2020) average for the last 14 months are:
YEAR | MO | GLOBE | NHEM. | SHEM. | TROPIC | USA48 | ARCTIC | AUST |
2022 | Jan | +0.03 | +0.06 | -0.00 | -0.23 | -0.13 | +0.68 | +0.10 |
2022 | Feb | -0.00 | +0.01 | -0.01 | -0.24 | -0.04 | -0.30 | -0.50 |
2022 | Mar | +0.15 | +0.27 | +0.03 | -0.07 | +0.22 | +0.74 | +0.02 |
2022 | Apr | +0.26 | +0.35 | +0.18 | -0.04 | -0.26 | +0.45 | +0.61 |
2022 | May | +0.17 | +0.25 | +0.10 | +0.01 | +0.59 | +0.23 | +0.20 |
2022 | Jun | +0.06 | +0.08 | +0.05 | -0.36 | +0.46 | +0.33 | +0.11 |
2022 | Jul | +0.36 | +0.37 | +0.35 | +0.13 | +0.84 | +0.55 | +0.65 |
2022 | Aug | +0.28 | +0.31 | +0.24 | -0.03 | +0.60 | +0.50 | -0.00 |
2022 | Sep | +0.24 | +0.43 | +0.06 | +0.03 | +0.88 | +0.69 | -0.28 |
2022 | Oct | +0.32 | +0.43 | +0.21 | +0.04 | +0.16 | +0.93 | +0.04 |
2022 | Nov | +0.17 | +0.21 | +0.13 | -0.16 | -0.51 | +0.51 | -0.56 |
2022 | Dec | +0.05 | +0.13 | -0.03 | -0.35 | -0.21 | +0.80 | -0.38 |
2023 | Jan | -0.04 | +0.05 | -0.14 | -0.38 | +0.12 | -0.12 | -0.50 |
2023 | Feb | +0.08 | +0.17 | 0.00 | -0.11 | +0.68 | -0.24 | -0.12 |
The full UAH Global Temperature Report, along with the LT global gridpoint anomaly image for February, 2023 should be available within the next several days here.
The global and regional monthly anomalies for the various atmospheric layers we monitor should be available in the next few days at the following locations:
Lower Troposphere:
http://vortex.nsstc.uah.edu/data/msu/v6.0/tlt/uahncdc_lt_6.0.txt
Mid-Troposphere:
http://vortex.nsstc.uah.edu/data/msu/v6.0/tmt/uahncdc_mt_6.0.txt
Tropopause:
http://vortex.nsstc.uah.edu/data/msu/v6.0/ttp/uahncdc_tp_6.0.txt
Lower Stratosphere:
http://vortex.nsstc.uah.edu/data/msu/v6.0/tls/uahncdc_ls_6.0.txt
By 2100, the Australian BoM will be coming out with statements like:
“This is the 127th warmest year on record”
Tough work being a temperature homogeniser in Australia as the peak solar intensity shifts northward.
This is December solar intensity at 25S for the past 500 years (columns: time in kyr before present, solar intensity in W/m^2):
-0.500 496.117499
-0.400 495.833635
-0.300 495.537779
-0.200 495.230456
-0.100 494.912178
0.000 494.583445
Not much reduction but enough to shift from warming trend to cooling trend.
It bottoms at 468 W/m^2 in 9,000 years.
Possibly the statements will be about the 127th warmest year on record. My guess is that it will be about the warmest year in the last 12 months.
Given the waning La Nina there is a chance that the 2021/04 and 2023/01 anomalies will mark the low points of the current triple dip La Nina. I, of course, cannot eliminate the possibility of going below -0.05 C, but the linear model I have says the odds are low as long as the La Nina continues to wane and there are no large SO2 producing volcanic eruptions.
You forgot NAO and PDO shift to the cool part…
Nope. I considered both NAO and PDO.
“Episodes of El Niño and La Niña typically last nine to 12 months, but can sometimes last for years. El Niño and La Niña events occur every two to seven years, on average, but they don’t occur on a regular schedule.”
IOW, nobody seems to know when a specific event will come or how long a specific event will last.
Language stuns me when I try to learn –
if two events with cute names have opposite effects,
and one causes more than average of something
should not the other cause less than average of something?
Yes. I get a 0.14 C change in UAH TLT for every 1 unit change in the 4 month lagged ONI.
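For readers who want to reproduce this kind of number, here is a minimal Python sketch of a lagged-ONI regression. It assumes you supply your own monthly UAH TLT and ONI series (the array names below are placeholders); the 4-month lag and the ~0.14 C per ONI unit are the figures quoted above, not guaranteed outputs of this code.

# Hypothetical sketch of the lagged-ONI regression described above.
# `tlt` and `oni` are assumed equal-length monthly series supplied by the reader.
import numpy as np

def lagged_oni_slope(tlt, oni, lag_months=4):
    """OLS slope of UAH TLT anomalies against ONI lagged by `lag_months`."""
    tlt = np.asarray(tlt, dtype=float)
    oni = np.asarray(oni, dtype=float)
    # Pair each month's TLT anomaly with the ONI value `lag_months` earlier.
    y = tlt[lag_months:]
    x = oni[:-lag_months]
    slope, intercept = np.polyfit(x, y, 1)
    return slope, intercept

# Example (placeholder data):
# slope, _ = lagged_oni_slope(tlt_series, oni_series)
# print(f"{slope:.2f} C of TLT response per unit of 4-month-lagged ONI")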
ENSO is very difficult to predict, especially beyond the spring barrier. That doesn't make it impossible to predict, though. It might be interesting to note that I believe this will end up being the 2nd longest stretch of consecutive negative ONI values. The longest stretch saw the minimum occur about halfway through. So, given the waning of the current La Nina and the fact that there is no guarantee a new minimum would be set even if the La Nina continued, the odds do not favor a new minimum. That's not to say it isn't possible; it's just a statement that it isn't likely. Time will tell, though.
Bdgwx
The current La Nina may continue for many years, since it is being caused by dimming Industrial SO2 aerosol emissions from China and India, unless their economies crash.
However, there are offsetting decreases in global SO2 aerosol emissions due to continuing global “Clean Air” efforts, and to Net-Zero activities banning the burning of fossil fuels (which also produce SO2 aerosols).
My expectation is that we will see increasing warming.
You and every credible scientific source there is.
ROTFLMAO! The state of climatology is that it is a con game that has nothing to do with science.
The climatological record has been so distorted with ‘adjustments’ that it will take decades to unwind all of the fake temperature records. This is the opposite of science; it is a cult.
“Episodes of El Niño and La Niña typically last nine to 12 months, but can sometimes last for years. El Niño and La Niña events occur every two to seven years, on average, but they don’t occur on a regular schedule.”
Temperatures cooled by about 2.0C from the 1940s to the 1970s, as shown in this representative U.S. regional surface temperature chart (Hansen 1999).
How did El Niño and La Niña affect this steady decades-long cooling?
I get the impression that some people think El Niño and La Niña are the only factors in determining the Earth’s temperature. I think there are other things at work besides El Niño and La Niña, since their activity did not prevent the climate from cooling by 2.0C.
ENSO only changes the amount of heat moved from the subsurface to the surface and atmosphere.
75% of the solar energy that makes it to the surface goes into the top 200m of the ocean. From there it has to be transported to the extratropics by oceanic currents and transferred to the atmosphere before the excess can be radiated out of the planet. ENSO does not change the energy within the climate system but changes the part of the climate system that receives more of it. El Niño moves more energy to the atmosphere, making it easier to radiate it to space. Thermometers at the surface register this as warming, but it is not warming from the point of reference of the whole climate system.
The surface cooled from 1945-75 because more energy was moved to the polar regions in winter by the atmosphere. Polar regions in winter dispose of all the energy that gets there through radiative cooling. The greenhouse effect does not work there because the surface is colder than the atmosphere (temperature inversion) and because an increase in CO2 results in more emission to space, not less. It is explained at a somewhat high level in chapters 10 and 11 of my book.
If this is correct, we should expect very little warming or even some cooling over the next two decades. Nothing spectacular, but lots of entertainment value among believers and skeptics.
The net radiative forcing was negative during the post-WWII period as well. That also contributed to the cooling tendency of the planet.
We don’t know that. The values assigned to aerosols and CO2 are impossible to demonstrate and are extracted from models, that are an abstraction of the human mind, and not actual evidence in the scientific meaning of the term.
What we know is that the energy imbalance must have been negative, but we don’t know the cause.
Yes. We do know that.
You have a problem distinguishing fact from fiction. Fact is what has been established by evidence, and fiction is what is not. That figure is fiction because it assigns a change in the radiative flux at the top of the atmosphere to a change in a factor without knowing the feedbacks involved.
Saying that you know something because of fiction does not belong to what we understand as scientific knowledge.
That figure from AR6 is based on around 10,000 first order lines of evidence. If you include the secondary and tertiary lines of evidence it could be in the millions. That's not to say the figure is perfect. It isn't, and no revision ever will be. It's the same with any other figure in any other discipline of science. Nothing in science will ever be perfect. Anyway, claiming something is fiction because it includes components and effects that you don't fully understand isn't very convincing. If you disagree with the figure then present another based on an equivalent amount of evidence so that we can see what the differences are.
Earth has been cooling since Feb 2016 or Feb 2020 – place your bets!
I prefer 2020 because I published that precise prediction in 2002 and 2013 and because it seems unfair to choose the peak of a huge El Nino as the starting point. I was expecting a bit more cooling, like 2008…
Climate Extremism in the Age of Disinformation – Roy Spencer, PhD. (drroyspencer.com)
Not sure where we go from here – cast the dice:
It’s obvious that the CAGW Hypothesis is false propaganda – CO2 changes LAG, do not LEAD, temperature changes at all measured time scales (Ice Core Records, Kuo et al – Nature 1990, MacRae – Icecap.us 2008, Humlum et al – Science 2013) and the future cannot cause the past.
Solar Cycle 25 is growing and is generally expected to be weak.
La Nina is weakening:
nino34.png (768×384) (tropicaltidbits.com)
That would lean towards mild warming, which would be nice.
However, it's pre-coffee and my Ouija Board is not working right now:
I’ll vote for “very little warming or even some cooling over the next two decades. Nothing spectacular…”.
No dice required. We go towards further warming.
All your predictions have been wrong, so far as I can see.
The line it is drawn, the curse it is cast.
Javier Vinos:
There was a massive 81 Million ton increase in Industrial SO2 aerosol emissions into the atmosphere between 1945 and 1975, as reported by the Community Emissions Data System, of the University of Maryland.
You appear to be ignoring the cooling effect caused by millions of tons of dimming SO2 aerosol pollution put into our atmosphere during that period.
The cooling from a VEI4 volcanic eruption is typically about 0.2 deg. C.
According to satellite measurements, the amount of SO2 aerosols injected into the stratosphere from such an eruption averages 0.2 Million tons. An 81 million ton increase would have caused far more cooling.
Do you have an explanation as to why you did not consider the cooling effect of the millions of tons of SO2 aerosol emissions into our atmosphere?
For those interested, here is the updated version…
Funny this:
That graph does not contain the corrections for the time-of-observation change bias, station relocation bias, instrument package change bias, etc.
Funny this… Yep hilarious… that you are fine with an inaccurate old graph that shows no warming, but troubled by the more recent one from the same organisation that does. I wonder why that is?
So GISS was wrong under Hansen but we can rely on it to be accurate now because people back then didn't know how or when to read thermometers correctly.
Too stupid to be funny. All of the “corrections” go the same way. Cooling the past and warming the present.
Further, the contemporary accounts of the various times support what that temperature graph shows.
If you guys didn’t have revisionist history, you’d have nothing!
No, but a probabilistic forecast is valid in this case. ENSO tends to even out over time.
Yep, keep wishing for thermageddon.
Why don't you just let nature do its thing bdgwx? There's no point in trying to predict it because it's just going to do what it wants. All we know right now is that there will be an El Niño for late '23 and early '24. After that is literally anyone's guess.
I love a good challenge.
Ok well then tell me what's going to happen in 2029-2030? Colder or warmer than present? Will 2029 be warmer than 2019 and will 2030 be warmer than 2020?
That's pretty far out there, but I'll give it a shot. Assuming that the planetary energy imbalance remains elevated, similar forces continue to act on the climate, and that there are no significant volcanic eruptions between now and 2030, I would assign a 65% chance it will be warmer in 2029 relative to 2019 and a 73% chance it will be warmer in 2030 relative to 2020, where “warmer” is based on the difference between annual means of the UAH TLT anomalies. The difference is caused by the 0.08 C enhancement in 2019 caused by ENSO, whereas 2020 only had a 0.02 C enhancement. Those are rough estimates from an overly simplistic analysis so I convey low confidence here.
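For illustration only, here is one crude way such a probability could be put together in Python. It is not the commenter's actual method: it assumes the headline +0.13 C/decade trend simply continues and that annual-mean anomalies scatter about it with an assumed ~0.12 C standard deviation, independent between years, with no ENSO adjustment. Those assumptions give a number in the same ballpark as, but not identical to, the 65%/73% quoted above.

# Minimal sketch of one way to put a probability on "2029 warmer than 2019".
# Assumptions (not from the comment): warming continues at +0.13 C/decade and
# annual means scatter about the trend with sigma ~0.12 C, independent years.
from math import erf, sqrt

def prob_warmer(years_apart=10, trend_per_decade=0.13, annual_sigma=0.12):
    expected_diff = trend_per_decade * years_apart / 10.0
    sigma_diff = annual_sigma * sqrt(2.0)      # difference of two noisy years
    z = expected_diff / sigma_diff
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))    # standard normal CDF at z

# Prints roughly 78% with these assumed inputs; the point is the mechanics,
# not the specific number.
print(f"P(2029 warmer than 2019) ~ {prob_warmer():.0%}")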
OK. Then I suppose you see the probability of 2030 being warmer than 2010 as >95%, right? That’s what the IPCC calls “extremely likely.”
On the contrary, I see a very good probability that 2030 is not warmer than 2010. But that is because I think the climate is a lot more complex than the IPCC believes. They have settled for a too simplistic answer.
That’s a little harder to quantify. Based on the same analysis methodology as above I get 92%. 2010 had a 0.06 C ENSO enhancement that reduces the odds. Again, I have low confidence in that estimate because I’m using an overly simplistic technique.
The energy imbalance on the planet is about +0.8 W/m2 [Loeb et al. 2021][Schuckmann et al. 2020]. Since the atmosphere is coupled with the other heat reservoirs there will be an upward tendency on the atmospheric energy retention as well. Each passing year should be more likely to be higher than 2010; not less.
Nobody knows what the energy imbalance is because it cannot be measured. According to OHC measurements, it is more like +0.6 W/m2. Nevertheless, Dewitte says it is increasing less over time, so who knows?
Dewitte, S., Clerbaux, N. and Cornelis, J., 2019. Decadal changes of the reflected solar radiation and the earth energy imbalance. Remote Sensing, 11(6), p.663.
We should not talk about things we don’t know as if we actually know. EEI changes from year to year, and from month to month. Nothing says it cannot go negative for over a decade or more.
You say no one knows and then provide an estimate of 0.6 W/m2 and link to the Dewitte publication. I’m not sure how to reconcile that. Anyway, if no one knows then how do you know there is a good chance 2030 will not be warmer than 2010?
Nothing to reconcile. Different authors have different estimates. It means great uncertainty.
From a purely random chance, 2030 has a 50% chance of being colder. If one considers low solar activity until 2033, and a high chance of the AMO turning cold before then, I would say the chance of 2030 being colder than 2010 is bigger than 50%.
Just because there are different measurements does not mean we don’t know. It just means there are different measurements. And of the estimates shared here we have +0.87 ± 0.12 W/m2, +0.77 ±0.06 W/m2, and +0.9 ± 0.3 W/m2. Note that Dewitte & Clerbaux defer to Trenberth et al. 2016 for the actual EEI with figure 14 agreeing with it. Anyway, they all happen to be consistent with each other. Using the GUM type A method this gives us a best estimate of EEI of +0.85 ± 0.11 W/m2. 0.74-0.96 W/m2 is not what I would describe as “we don’t know”. In fact, it is infinitely better than “we don’t know” and even significantly better than first principle constraints.
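As a rough illustration, a GUM Type A evaluation applied to just the three central values quoted above looks like this in Python. It uses only the central estimates, so the resulting standard uncertainty is smaller than the ±0.11 quoted in the comment, which presumably also folds in the individual published uncertainties; the numbers here are illustrative rather than a reproduction of that calculation.

# Type A evaluation (mean and standard uncertainty of the mean) of the three
# EEI central estimates quoted above, in W/m^2.
import statistics as stats

eei = [0.87, 0.77, 0.90]                 # central estimates only
mean = stats.mean(eei)
s = stats.stdev(eei)                     # sample standard deviation
u_type_a = s / len(eei) ** 0.5           # standard uncertainty of the mean

print(f"best estimate = {mean:.2f} W/m^2, Type A standard uncertainty = {u_type_a:.2f} W/m^2")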
What does this tell us about 2030? By exploiting the 1st law of thermodynamics we know that 2030 being warmer isn’t purely random since the climate system is unbalanced. As long as the EEI remains positive each year will have slightly higher chances of being warmer than 20 years ago. And since we also know why the EEI is positive and that it is unlikely to go negative anytime soon we can thus confidently conclude that 2030 is more likely to be warmer than 2010 than not.
Very good.
It means exactly that. If I tell you different measurements of my weight go from 55 to 95 kg, you will have to conclude that you don’t know my weight.
Could EEI be 0.5? Yes, it could. It could even be -0.5 one year and we wouldn’t even know.
It doesn’t have to. It can be positive one year and negative the next. That could explain why a year is colder than the previous. If the Earth has been cooling since 2016, the EEI could have been negative since then.
Thinking that you know is not the same as knowing.
JV said: “It means exactly that. If I tell you different measurements of my weight go from 55 to 95 kg, you will have to conclude that you don’t know my weight.”
We do know your weight though. It is constrained to be between 55 and 95 kg. And if those are the 2σ tails on a normal distribution of measurements then we know your weight is 75 ± 10 kg.
And this is true for any quantity which science has measured repeatedly, like the mass of a proton, the temperature of the triple point of water, etc. Just because scientists take multiple measurements (often millions or more of them) of a measurand does not mean we don't know the value of the measurand.
JV said: “Could EEI be 0.5? Yes, it could. It could even be -0.5 one year and we wouldn’t even know.”
Of course we know. We know because we measure it.
JV said: “It doesn’t have to. It can be positive one year and negative the next. That could explain why a year is colder than the previous. If the Earth has been cooling since 2016, the EEI could have been negative since then.”
The Earth hasn’t been cooling since 2016. In fact, it has continued to take up about 10 ZJ/yr. It is only the atmosphere, which only accounts for < 1% of the thermal mass, that has cooled.
JV said: “Thinking that you know is not the same as knowing.”
We don’t just “think” we know. We really do know because of the abundance and consilience of evidence.
“And if those are the 2σ tails on a normal distribution of measurements then we know your weight is 75 ± 10 kg.”
Why do you assume a normal distribution? It is entirely possible for there to be a skewed distribution because of non-linearity in the measuring device! The measurement uncertainty of each measurement may actually be skewed with a larger positive range than negative, or vice versa. It may not even be an issue of the measurement device but a difference of what is in the pants pockets when the measurement is taken. Or different shoes. Or even a holiday eating splurge!
“And this is true for any quantity which science has measured repeatedly, like the mass of a proton”
But in those measurements every attempt is made to eliminate systematic bias and non-linearity in the measuring device. That simply isn’t true for most commonly used human weight measuring devices. What is the measurement uncertainty of your bathroom scale? What is its measurement linearity?
“Of course we know. We know because we measure it.”
Those measurements have measurement uncertainty. A concept you just absolutely refuse to accept.
“We don’t just “think” we know.”
Of course you don't know. You *do* just think you do. With a measurement uncertainty of +/- 0.2C you can't discern a difference smaller than that. *YOU* think you can but it is physically impossible. I can't measure a crankshaft journal with a measurement device giving +/- 0.2″ measurement uncertainty and tell what it is to a resolution of 0.05″. I can't measure a second journal using the same device and tell if the difference in diameter between the two is 0.005″. It's a physical impossibility.
2020’s warmth came from a temporary stratospheric warming along with weaker El Niño conditions. Same goes for 2019. Nonetheless, I’m going to save this comment, so 6-7 years later we’ll see how your prediction holds out.
I think it is 7-8 years. But yeah, we should definitely revisit it. I will say that my previous statement that there was a good chance we hadn't seen the minimum in 2021/04 isn't looking so great at the moment. I was thinking the triple dip La Nina could have sent us lower.
Well, one thing we can be almost positive of, Walter, is that nothing that their climate models are predicting will come to pass.
Besides which, our Beloved Leaders who “own” The Settled Science (not to mention fat wads of Ruinable Energy shares) don’t have to guess.
They have lots of corrupt and incompetent ‘scientists’ pumping fake data into Supercomputers designed to show there is a problem.
bdgwx,
You mention forecasting from numbers.
Rick Will mentions fundamental mechanisms.
Rick is at text book level.
You are at comic book level.
Geoff S
That is good to know there's a better way. I look forward to your or Rick's presentation on predicting monthly UAH TLT anomalies using “fundamental mechanisms” and without exploiting auto-correlation, with a root mean square difference considerably better than 0.12 C.
Geoff, I am very interested in your input here. Would you mind posting a link to your or Rick’s model so I can replicate it?
It’s been 3 days. I guess I don’t have much choice at this point but to accept that your “text book level” method is not even on par with my “comic book level” method.
”the linear model I have says”
Lol.
Quite silly it is. The variability from one year to the next can be about 1°C but 1°C change over a century is thermageddon.
Apparently Global Warming didn't stop in 2001. You quote von Schuckmann: “The ensemble spread gives an indication of the agreement among products and can be used as a proxy for uncertainty. The basic assumption for the error distribution is Gaussian with a mean of zero, which can be approximated by an ensemble of various products. However, it does not account for systematic errors that may result in biases across the ensemble and does not represent the full uncertainty.”
Who knew models were that accurate and of course reliable indicators of anything.
That’s Schuckmann et al. 2020. It shows about 100 ZJ/decade of heat uptake since 2001. UAH TLT shows +0.14 C/decade of warming since 2001 so we can’t say warming stopped in 2001 in the atmosphere either. That’s mostly moot though since I didn’t use Schuckmann et al. 2020 as part of my analysis here.
From models, How many ZJ to 1C again? ;)
And remember how woeful the OHC was before 2000. So has it cooled or warmed since 1900?
leefor said: “From models, How many ZJ to 1C again?”
Assuming the UAH TLT layer is about half of the total mass of the atmosphere, then 1000 J/kg/C * 2.5e18 kg = 2.5 ZJ/C.
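A quick back-of-envelope check of that figure, using textbook approximations that are not taken from the comment (total atmospheric mass of roughly 5.1e18 kg and a specific heat of air of roughly 1000 J/kg/C):

# Rough check of the ~2.5 ZJ per 1 C figure for the TLT layer.
atm_mass_kg = 5.1e18          # approximate total mass of the atmosphere (textbook value)
cp_air = 1000.0               # J/(kg*K), rough specific heat of air
layer_mass = atm_mass_kg / 2  # assume the TLT layer is ~half the atmosphere
zj_per_degC = cp_air * layer_mass / 1e21
print(f"~{zj_per_degC:.2f} ZJ per 1 C of TLT-layer warming")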
leefor said: “So has it cooled or warmed since 1900?”
Schuckmann et al. 2020 only goes back to 1960.
Comparing inaccurate models with each other does not provide a measurement uncertainty. The “basic assumption” here is that the models are all accurate measures of something and any differences are due to random errors, i.e. the error distribution is Gaussian with a mean of zero.
If you are timing laps at a racetrack using several inaccurate stopwatches, no amount of “ensemble” analysis will let you predict what the next lap time will actually be – even if all the stopwatches have an error distribution that is Gaussian with a mean of zero. Averaging inaccurate values does not produce an accurate answer.
The *ONLY* comparison that is meaningful is comparing the model ensemble to actual observations and the models fail spectacularly in such a comparison. They almost all RUN TOO HOT.
Fans of meaningless trends will be delighted to know that “the Pause” now starts in July 2014, and that Monckton will be able to say it grew three months this month – though only because he didn’t set it back to August last month.
The meaningless trend since December 2010 is 0.29°C / decade.
”Fans of meaningless trends”
You mean like you?
Now that’s funny, coming from you.
Meanwhile CO2 goes up each and every month. The CO2 ==> Temperature driver remains broken. No substitute drivers offered for consideration.
Not correct. CO2 goes down almost half of the months. It is called the annual cycle.
How many more times does it need to be explained to you that ENSO is the only other factor needed to explain the pause and current temperatures?
I show this to you every month, but you just mumble on about linear trends, with no comprehension of the point.
In this simple model, based on data up to 2015 (the green dots), I am getting the best linear fit to the log of CO2 and lagged ENSO values, with a term to cover the big volcanoes in the 80s. I then use exactly the same model to see how it would predict the anomalies after 2015 (the blue dots).
It seems quite a good fit. It shows that temperatures would have been expected to cool from 2016, because ENSO conditions were cooling, and the slight rise in CO2 isn't enough to compensate. It would be really worrying if CO2 sensitivity was so large as to mask a massive El Niño.
None of this is intended to prove that CO2 warms the temperature. But it does show that temperatures over the last 8 years have not in any way refuted the hypothesis.
Here by contrast is what happens if I don’t include CO2 as a variable.
ENSO conditions are not sufficient to explain the warming we are seeing. There has to be some factor that causes an overall trend, whether it’s CO2 or postage stamps.
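For anyone wanting to try a fit of this kind themselves, here is a rough Python sketch: regress monthly TLT anomalies on log(CO2) plus lagged ENSO using data up to 2015 only, then apply the same coefficients to later months. The series names, the training length, and the 4-month lag are placeholders for the reader to supply, and the volcanic term mentioned above is omitted for brevity.

# Sketch of a fit of TLT anomalies to log(CO2) plus lagged ENSO.
# `tlt`, `co2` and `enso` are placeholder monthly series supplied by the reader.
import numpy as np

def fit_and_project(tlt, co2, enso, n_train, lag=4):
    tlt, co2, enso = (np.asarray(a, float) for a in (tlt, co2, enso))
    y = tlt[lag:]
    X = np.column_stack([np.ones(y.size), np.log(co2[lag:]), enso[:-lag]])
    # Fit only on the training portion (e.g. months up to the end of 2015).
    coef, *_ = np.linalg.lstsq(X[:n_train], y[:n_train], rcond=None)
    fitted = X @ coef            # same coefficients applied to all months
    return coef, fitted

# coef, fitted = fit_and_project(tlt_series, co2_series, enso_series,
#                                n_train=N_2015)  # N_2015: reader-supplied index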
Bellman, you are automatically assuming that CO2 is the primary driver. There are plenty of things in this world that we don't understand that could be causing the temperature to rise. Your problem lies within the mainstream community: you guys are oversimplifying it way too much.
The analysis is not a statement that CO2 is necessarily the primary driver. It is a statement that the CO2 cannot be eliminated from consideration.
I am not saying that it proves CO2 is the driver, just that it doesn’t falsify the hypothesis that it is.
It's difficult to do much else with UAH, as CO2 has been increasing at a roughly linear rate, and so any correlation with it could just as well work with any other linearly increasing factor.
The point, however, is that there is a long stated hypothesis that increasing CO2 will increase temperatures, and part of testing any hypothesis is to see if it agrees with reality.
And it doesn’t.
But it does
Look, it should be clear to anyone that looks at the data that it's the AMO that drives temperature changes in the NH. Look at the US temp graph posted above. The 70s were cold due to the negative phase of the AMO. Unlike ENSO, the AMO looks like a very reliable oscillator over the last 150 years or so. Of course, you need lots more data to have confidence in its regularity.
It isn't clear to me. If you are only looking at the UAH data it's difficult to tell, because the AMO has been increasing over the entire run, and so it's hard to say whether it's the AMO or CO2 that is causing the warming, or whether the warming is causing the AMO to warm.
If I look at a longer data set, such as HadCRUT, I can get a reasonable fit without using the AMO.
Adding the AMO doesn’t improve things that much.
Nor does it reduce the coefficient fitted to CO2, about 2.3°C per doubling of CO2.
But if I remove CO2 and just use ENSO and AMO…
I looked at the data. I don’t get a very good fit to UAH TLT just by using AMO alone or AMO+ONI. The AMO+ONI fit is better but has a -0.06 C/decade low bias when the RMSD is minimized. I can remove the bias at the expense of increasing the RMSD. But to get the bias down to 0 C/decade I have to allow RMSD to increase to 0.2 C. The other issue is that the model fully relies on AMO to explain the trend. What if CO2 is causing the AMO to increase?
CO2 only explains the long-term trend if given enough oomph by feedbacks. This is a fudge factor because feedbacks cannot be measured. Nothing says the trend cannot include several factors not adequately included in current theory reflected in models. For a discussion see:
de Larminat, P., 2016. Earth climate identification vs. anthropic global warming attribution. Annual Reviews in Control, 42, pp. 114-125.
This is a similar post to Bellman's. I have shown you repeatedly that CO2 being a driver of UAH TLT temperature anomalies is not inconsistent with the data. This is what happens when cyclic/random variability is superimposed on a linear component. I compare the result with UAH TLT. The root mean square difference is only 0.12 C.
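The comparison itself is straightforward to reproduce. Here is a minimal Python sketch of the root-mean-square difference between a candidate model series (a linear component plus cyclic/ENSO terms, supplied by the reader) and the observed anomalies; the 0.12 C above is that commenter's result, not something this snippet guarantees.

# Root-mean-square difference between a model series and observed anomalies.
import numpy as np

def rmsd(model, observed):
    model, observed = np.asarray(model, float), np.asarray(observed, float)
    return float(np.sqrt(np.mean((model - observed) ** 2)))

# Example with placeholder series: rmsd(model_series, uah_tlt_series)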
This La Niña is weaker than the past two yet we are at the almost exact same values. Why is that? Are we going back to the 1998 pause? If so that means I will have never known global warming.
Walter how old are you? I wouldn’t be so sure about that prediction by the way.
I turn 20 this October.
“Are we going back to the 1998 pause?”
Currently, the temperatures are about 0.6C cooler than 1998.
The year 1998 and the year 2016 are statistically tied for the warmest year in the satellite era (1979 to present) going by the UAH satellite chart.
My prediction, based on nothing but the first two months of the year and a linear trend is that 2023 will be 0.09 ± 0.14°C. This would put it close to 2018, as tenth warmest year. But there is still a large amount of uncertainty at this stage.
You are aware of the coming El Niño correct?
There is a 4-5 month lag between ENSO and UAH TLT response. Assuming an El Nino does form later this year it’s effect won’t be realized until 2024.
The ENSO/TLT response seems to be becoming more disproportionate. It takes a smaller ENSO increase to create a greater TLT response. By contrast, negative ENSO creates a weaker TLT response. Several monthly warmest records have been set in UAH over the course of the recent negative ENSO period. This does not bode well for future positive ENSO episodes.
That could explain why I originally thought a new minimum exceeding that of 2021/04 had a decent chance of occurring. I'm certainly not stupid enough to say for certain that we won't see a lower value, but the fact that February came in higher and the La Nina is now waning deals a devastating blow to my original position.
There’s a forecast for ENSO, but my simple method is purely based on past performance. I wouldn’t like to speculate on how likely or strong any change in ENSO is, and the large uncertainty in the prediction reflects the variation in the past, including changing ENSO conditions.
This has been a very average start to the year. The current average is +0.02°C, and I can’t see it warming up too much over the next few months. I suspect any large El Niño this year will only really be noticed in 2024’s average. Compare the current start to all the other warm years. They were all somewhat warmer over the first two months of the year.
Your method is interesting. I see no a priori reason why 2023 should be cooler than 2022. A declining Niña versus a full-fledged Niña, more CO2, and a more active Sun should make 2023 warmer.
According to the BOM we are no longer in the Niña temperature range, although the Niña condition needs three months of data to be deactivated, to account for variability.
http://www.bom.gov.au/clim_data/IDCK000072/nino3_4.png
For the planet to cool in 2023 with respect to 2022, if there is no increase in the heat remaining in the ocean because La Niña ends, it would need to lose more heat at the top of the atmosphere or increase its cloud cover. If CO2 is higher in 2023, the planet should lose less energy than in 2022. The Sun is also going to be more active in 2023 than in 2022. That leaves the clouds as the only possibility. The IPCC believes an increase in clouds would warm the planet (positive feedback), so I guess if 2023 turns out to be colder it would be because of a reduction in cloud cover.
It's not really my method. I saw several sources doing a similar thing a few years ago, and thought it would be interesting to try it myself, just to see if I understood what they were doing.
As I said, though, it isn't trying to model any physical system; it's purely looking at how good the start of the year is at predicting the year as a whole. At this stage of the year, I think it's more interesting to see what it says about the range of possible values, rather than predicting the final value.
At this point last year it was predicting 0.07 ± 0.14°C, slightly colder than this year's prediction. The final figure was 0.1°C warmer, but still within the 95% confidence interval.
I did try looking at more detailed models when I started, including ENSO values, but concluded it didn’t make much of a difference. I might try again to see if current ENSO conditions help the prediction, but I don’t want to rely on other forecasts, as that adds too many complexities.
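For the curious, here is a rough Python sketch of this "start of year" approach: regress each completed year's annual-mean anomaly on its January-February mean, then predict the current year from its first two months with a rough prediction interval. The monthly array is a placeholder the reader supplies, and the ±0.14°C spread quoted above is the commenter's own figure, not an output promised by this sketch.

# Predict a year's annual-mean anomaly from its January-February mean.
# `monthly` is a placeholder array of shape (n_years, 12) of UAH TLT anomalies.
import numpy as np

def predict_annual_from_jan_feb(monthly, jan_feb_this_year):
    monthly = np.asarray(monthly, float)
    x = monthly[:, :2].mean(axis=1)            # Jan-Feb mean, past years
    y = monthly.mean(axis=1)                   # annual mean, past years
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    pred = slope * np.mean(jan_feb_this_year) + intercept
    half_width = 1.96 * resid.std(ddof=2)      # rough 95% prediction interval
    return pred, half_width

# pred, pm = predict_annual_from_jan_feb(past_monthly, [-0.04, 0.08])
# print(f"2023 estimate: {pred:+.2f} ± {pm:.2f} °C")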
Lol.
Honestly, the comments here have the same flavor as a horse-racing site I used to frequent 20-odd years ago.
Commenters there all had their own “special take” on why Flatulent Floozie would / would not win the 4th at Werribee next Saturday.
Here it’s about whether The Pause can / cannot continue past the 31st July.
Linking flatulence and Werribee, now that’s something the waste water treatment plant, (the big one that serves most of Melbourne), didn’t see coming…… Or did they?
Here's a question for Monckton, who I assume is reading and will be updating his “pause” analysis shortly. For this month, we are roughly at the same temp levels we were ~20 years ago. The question: if we stayed at current temp levels, how much longer would it be until the “current” pause becomes part of the early 2000s pause, and the pause becomes 20+ years?
Hope you can address this; looking forward to seeing the answer.
Actually the globe first achieved this temperature in 1988. So is that no warming for 35 years or have I got something wrong?
Yes, comparing a high with a low and declaring victory. The planet is still warming, just not as much as models say it should.
Missing the point. If CO2 is the driver, CO2 is higher now than in 1988, no matter what the (SUPPOSEDLY inconsequential) “other” factors are doing.
Of course, CO2 is NOT the driver, since there is ZERO empirical evidence supporting that notion, just hypothetical bullshit and extrapolation on top of that.
I am also asking that question.
I’m not CMoB but I’ve looked at this issue, though strictly as “an interested amateur” not in a serious manner.
The attached graph shows how adding linear extrapolations can be “tweaked” to result in a “merge” of the two pauses by a fixed date, in my example December 2030.
Basically they all go through a common point roughly 5/9ths of the way towards the end-point, at a level of roughly -0.07 for my specific set of parameters.
At “current temp levels” of +0.08 the pauses will never “merge”.
An alternative to the “current temp level” is to extend the “recent trend” instead.
I use “recent trend = trend from December 2015”, which gives the steepest (negative) trend possible ending at the latest datapoint.
Disclaimer : Yes, this is indeed a textbook example of “cherry-picking”.
Last month (January, UAH anomaly = -0.044) this gave a “merge” of the two pauses in June 2035.
This month (anomaly = +0.08) the “merge” now occurs in February 2035.
NB: As long as the values are below the trend line the date will advance in time.
Note also that this is a purely theoretical exercise, with precisely zero physical justification … AKA “math-turbation” …
I saw the question and immediately thought…Mark BLR has answered this numerous times.
An alternative viewpoint.
I have written under several previous “UAH pause” articles here variations on the theme
Hopefully the attached graph will help explain this position.
I doubt, short of a major cooling event, that the pause as calculated by Monckton will ever have a pre-1998 start.
It would take over 10 years of temperatures at last month's -0.04°C to reach that starting point. Even if every month from now on was -0.39 (the coldest single monthly value this century), it would still take about two and a half years.
A more interesting question is how large an El Niño would we need to end this pause and start a new one. Or, if there isn’t a large one, how long at current warming rates would you need to end it.
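As I understand the "pause" calculation being discussed in this thread, it amounts to finding the earliest start month from which the least-squares trend to the most recent month is zero or negative. A minimal Python sketch, with the anomaly series left as a placeholder and the minimum length chosen arbitrarily:

# Find the earliest start month whose OLS trend to the latest month is <= 0.
# `anoms` is a placeholder 1-D array of monthly anomalies in time order.
import numpy as np

def pause_start(anoms, min_length=24):
    anoms = np.asarray(anoms, float)
    n = anoms.size
    for start in range(n - min_length):
        y = anoms[start:]
        x = np.arange(y.size)
        slope = np.polyfit(x, y, 1)[0]     # trend in C per month
        if slope <= 0:
            return start, slope * 120      # start index and trend in C/decade
    return None, None

# start_idx, trend = pause_start(uah_tlt_series)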
The hard part is explaining why temps keep returning to a baseline. The pulses have been reducing in amplitude, and still they go up and then they come down! Curious.
Gosh darn it I keep forgetting to add an image. Next message.
Here is the image!
How does it know it has to return to the 1991-2020 baseline, rather than the 1981-2010 one?
Does it matter? All I see are pulses where the temps go up and the temps come back down! That was my question! Why don't you discuss that? Your trend should give you the answer, right?
You’re the one who keeps insisting “temps keep returning to the baseline” without explaining why you think any particular baseline is significant.
If there was no trend in global temperatures, you would expect temperatures to keep returning to an average. That average should not change.
“All I see are pulses where the temps go up and the temps come back down!”
Back down to where? If that’s all you see then I don’t think you are looking hard enough. Whatever baseline you use it’s clear that nearly all the red stuff is on the right of the graph and the blue stuff on the left.
“That was my question! Why don’t you discuss that?”
What question? We’ve been discussing the ups and downs of global temperatures for decades – ever since people started looking for short term trends to claim that global warming had stopped. The problem for your claim is that temperatures are not fluctuating about a static base line, they are fluctuating about a rising trend. The fact you consider there to be something unusual about a month which is slightly above the average of the previous 30 years is a clue to that.
“Your trend should give you answer, right?”
Here’s what the graph looks like using the trend as the baseline, i.e. showing the residuals.
For 80% of the past 450 million years the average temperature of the planet has been between 17 and 21ºC. It is now about 14.5ºC, which is among the 10% coldest it has been. So I would say we are experiencing a tiny return to the mean, soon to be interrupted by the next glaciation.
“soon” is relative, right?
It is a geological “soon.”
In the current Epoch, the Holocene, the “long term trend” is still DOWN. All of this “since pre-industrial” is nothing but a convenient cherry pick of “vs. the Little Ice Age.”
It is interesting to note that both the statistical and dynamical models have been consistently forecasting warmer ENSO states than were ultimately observed for the last 2 years. It is going to be interesting to see if the forecasted coming El Niño also falls short of what's expected:
Ultimately, the bigger the El Niño, the more cooling we get, so the preferable anomaly (not that we get any say) would be a moderate one, maybe 1.5. That's enough for some more cooling but also won't come anywhere close to the 2016 high. It's pointless to hope for this though, because, as I said above, nature will do what it wants without anyone's say.
Starting with the verity that “The average of measurements is not a measurement!”, the good Doctor takes the yearly average of the daily average of a gazillion measurements of temperature around the world (filling in missing data with WAGs). Then he averages several years together and takes that number, as if it is a useful measurement, subtracts it from a similarly calculated number supposed to represent an “average” from last month, and comes up with a number (0.08K) that is supposed to be accurate to within one one-hundredth of a degree! Really, Dr Spencer? Sigh!
The most recent analysis UAH provides assesses the uncertainty at ±0.20 C [Christy et al. 2003].
With an uncertainty of +/- 0.20C how do you distinguish a difference of 0.08C? The difference gets lost in the uncertainty interval!
No, it doesn’t “get lost”. You use your tools to find the chance that the difference is, in fact, positive. In this case, assuming that the +/- covers 95% of the cumulative, it’s ~78%. If that’s not good enough for you, then what would be?
Unfortunately, “your tools” have been demonstrated to be valid only when dealing with measurements. They have very limited utility when applied to “differences between averages” (of averages of averages of…) of measurements.
The Guide to the Expression of Uncertainty in Measurements agrees with bigoilbob.
Malarky! The GUM is *still* about measuring the same thing multiple times in order to generate a set of measurements whose distribution can be assumed to be Gaussian and whose errors cancel.
There is *NOTHING* about UAH that measures the same thing multiple times! NONE!
Therefore statistical tools simply do not work to adequately describe what you are seeing.
Why you and bigoilbob can't understand that the average of a 6′ board and an 8′ board doesn't actually exist in reality is simply unfreakingbelievable. The average of 7′ DOES NOT DESCRIBE A TRUE VALUE OF ANYTHING!
Only measurements of the same thing multiple times where the random error of the multiple measurements can be assumed to be Gaussian and therefore cancel CAN YOU GET A TRUE VALUE!
Even Possolo’s method of analyzing Tmax over several days DOES NOT GENERATE A TRUE VALUE FOR ANYTHING. There *is* no true value in such a situation.
I’ll ask you the same question I asked big. If I tell you the difference in length between two boards is 0.01″ +/- 1″ how do you *KNOW* the difference is 0.01″?
I *know* why you think you do. You and all your ilk ALWAYS assume all uncertainty is random, Gaussian, and cancels out. Therefore the stated values are always, ALWAYS, 100% accurate. It’s what the GUM assumes.
It just occurred to me that I made a fundamental error in my assessment. If both measurements had the same c.i.’s and their expected values were separated by 0.08 degC, then we would need to assess a normal distribution with an expected value of zero, and a standard deviation of 0.144 degC (not of 0.102 degC), evaluated at 0.08 degC. So, the chance that the difference was, in fact, positive dropped to 71%, instead of 78%.
But the process is unchanged. TG would assess whether that is enough to proclaim it “higher” or not, rather than just throw up his hands.
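For anyone wanting to check the arithmetic in these two comments, here is a short Python version: treat each monthly anomaly's ±0.20 C as a 95% interval of a normal distribution, combine two independent months into a difference, and evaluate the chance that an observed +0.08 C difference is genuinely positive. The normality and independence assumptions are the commenters', not a given.

# Chance that a +0.08 C difference between two monthly anomalies, each quoted
# as +/- 0.20 C at 95%, is genuinely positive (normal, independent errors).
from math import erf, sqrt

ci95 = 0.20                          # half-width of the 95% interval, deg C
sigma_month = ci95 / 1.96            # ~0.102 C per monthly anomaly
sigma_diff = sigma_month * sqrt(2)   # ~0.144 C for the difference of two months
observed_diff = 0.08

z = observed_diff / sigma_diff
p_positive = 0.5 * (1 + erf(z / sqrt(2)))
print(f"P(difference > 0) ~ {p_positive:.0%}")   # roughly 71%, as in the comment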
“then we would need to assess a normal distribution “
And here we go circling back to the same old, implied and unstated, assumption that all measurement error is random and Gaussian and therefore cancels.
The c.i. you reference is based on the stated values only and not on the measurement uncertainty of the stated values. You simply cannot gain precision in this manner. Unless you can measure temperatures to the hundredth decimal point with zero uncertainty then you are only fooling yourself that you can distinguish differences between two measurements in the hundredths digit.
““then we would need to assess a normal distribution “
And here we go circling back to the same old, implied and unstated, assumption that all measurement error is random and Gaussian and therefore cancels.”
The parameters and uncertainties were referred to by you. Sooner or later, you’ll have to Come To Jesus and fully describe them. How are they distributed? You can’t legitimately Rope A Dope forever with claims that parameters and confidence intervals that YOU refer to aren’t complete. Finally, kindly, tell us The Rest Of The Story….
“Unfortunately, “your tools” have been demonstrated to be valid only when dealing with measurements.”
Nope. Those who evaluate petroleum reservoir data would beg to differ. Wherever you have data, it is applicable. For example, take just one of the dozens of rheological and geological parameters used for reservoir assessments, permeability. That data is gathered from the evaluations of many, many different measurements, using vastly different tools, all with their own confidence intervals, at different times, in different wellbores. Then, all of the GUM tools spoon fed to you by bdgwx are applied by those reservoir engineers, to decide on ranged permeability estimates to use for each of the thousands of cells in modern reservoir models. Not only that, permeability data is checked against other parameters – porosity for example – to check for simple or sophisticated correlations that will improve model outputs.
Data from dozens of other comparable processes from other industries are used by the tech pro’s to do the same thing – that is, to make more money for their employers. Apparently it works, given the $ we tech pro’s make.
Come out from under the rock….
“That data is gathered from the evaluations of many, many different measurements, using vastly different tools, all with their own confidence intervals, at different times, in different wellbores.”
You are measuring the *same* thing multiple times. Supposedly using *calibrated* tools that are *NOT* unattended field devices that are not calibrated before measurements and which are measuring different things each time.
“Then, all of the GUM tools spoon fed to you by bdgwx are applied by those reservoir engineers, to decide on ranged permeability estimates to use for each of the thousands of cells in modern reservoir models. ”
Those GUM tools *only* apply when you are measuring the same thing. It’s why it always specifies measurements as x(i) and not x(i) +/- u(i). The GUM assumes that u(i) always cancels out and the standard deviation of x(i) is the measure of uncertainty.
Show me in the GUM where it ever states x(i) +/- u(i) and then uses u(i) for multiple measurements of different things as the uncertainty associated with the measurements.
Why do you continue to ignore what uncertainty is? If the difference between two objects is less than the uncertainty in their measurement then you have no way to determine if there *is* actually a difference!
If I tell you the difference in length of two boards is 0.01″ +/- 1″ how do you *know* the difference is 0.01″?
There aren’t any kind of statistical tools that can tell you that the true value of the difference is 0.01″ no matter what you think.
“Why do you continue to ignore what uncertainty is? If the difference between two objects is less than the uncertainty in their measurement then you have no way to determine if there *is* actually a difference!”
!!!! – love ’em! Read what I actually said. I made no claim that one months temperature was – without doubt – higher than the other. What I did was refute your instatisticate assertion that the “difference gets lost in the uncertainty interval”. It doesn’t. For any 2 measurements with uncertainty intervals – whether independent as in this case, or with known dependence – you can calculate the chance that one is higher than the other. And for your example, I did so.
For once, read bdgwx’s link – a source that you have never been able to refute with another – to find out why. To bone throw – your brain probably knows better deep down. But you’re Dan Kahan System 2’ing to deny the truth – perhaps even unconsciously.
“I made no claim that one months temperature was – without doubt – higher than the other. What I did was refute your instatisticate assertion that the “difference gets lost in the uncertainty interval”. “
Did you ACTUALLY read this before you posted it? You made no claim one temperature was higher than the other but you don’t think that means the difference gets lost in the uncertainty interval?
WHY did you not claim that one temperature was higher than the other? Because you don’t know and can’t tell?
“What I did was refute your instatisticate assertion that the “difference gets lost in the uncertainty interval”. It doesn’t. For any 2 measurements with uncertainty intervals – whether independent as in this case, or with known dependence – you can calculate the chance that one is higher than the other. And for your example, I did so.”
Nope. You didn’t refute anything! You just showed that you do not understand measurement uncertainty. MEASUREMENT uncertainty does not have a probability distribution. u(x) has two components. u(x) = u(random) + u(systematic). Since you cannot know what the components are there isn’t any way to calculate the chance that one is higher than the other.
You have the same blind spot bdgwx and bellman have – to you and them all uncertainty is random and cancels. You simply don’t realize that the *true value* of a measurement can be *anywhere* in the uncertainty interval – ANYWHERE! There is no probability distribution to use in determining where it might be. If there *was* such a distribution then you wouldn’t need to state the uncertainty interval. In fact, the uncertainty interval could even be unsymmetrical depending on the measurement device!
“For once, read bdgwx’s link – a source that you have never been able to refute with another – to find out why.”
In other words you don’t have a clue. This is nothing more than the argumentative fallacy of Appeal to Authority. The GUM, bdgwx’s link, depends on the measurements being of the SAME THING, using the SAME DEVICE, with no systematic bias in the measurements. The assumption then made by the GUM is that all the actual measurement uncertainty cancels and the variation in the stated values are the uncertainty associated with the distribution.
There is *nothing* about field temperature measurements that meet the restrictions in the GUM. NOTHING.
Well, it looks like a lot of us are in for more global warming caused winter weather next weekend.
https://pbs.twimg.com/media/FqY9bbtXoAELOF_?format=jpg&name=medium
Joe Bastardi (@BigJoeBastardi) tweeted:
Wild storm is exiting New England. Monster likely next weekend or early following week. top 3 or 4 cold March 11-20 on the way. Winter making up for lost time
https://pbs.twimg.com/media/FqYjFbRWwAIkjKM?format=jpg&name=large
In five days, another stratospheric intrusion over California with a cold front.
Currently, there is no chance of an increase in surface temperatures in the tropical Central Pacific.
There is most likely an El Niño coming. Possibly of moderate strength. I suggest you watch Joe Bastardi’s Saturday summary.
Scroll down to the free video. Joe gets into the indicators after the 20 min mark.
https://www.weatherbell.com/
This is not the end of winter in North America and Europe.
Weather in North America and Europe will become even more complicated by mid-March.
Snow cover in the northern hemisphere.
Heavy snowfall in the Sierra Nevada.
Large temperature spike in the middle stratosphere over the North Pole, above the summer average.
Another cold front over California and it’s not the end.
always fun to get a sine-wave breaker on the 13-month
best fit ECS model estimate 1.2-1.7, meaning that (as has been the case since at least 2013) trillions in direct spending have been wasted (per the Climate Policy Initiative, which wants many multiples more!) in a misguided and futile attempt to cool the Earth based on incorrect climate models
that’s not even considering the secondary economic effects of climate legislation whose cumulative reduction in living standards presumably reaches into the tens of trillions
on an Earth in which large portions of the surface are too cold for most life most of the time, and winter excess deaths far exceed summer
even though we’ve known for years now that the CERES data shows no DWLR signature since 2000
history’s greatest blunder unfolds before our rolling eyes