Guest Post by Willis Eschenbach
After an unusual three years in a row of La Nina (cool) ocean temperatures, alarmists are all in a lather about the sea temperatures as we approach El Nino conditions. We get claims like this:
Global oceans are so hot right now, scientists all around the world are struggling to explain the phenomenon. Sea surface temperatures in June are so far above record territory it is being deemed almost statistically impossible in a climate without global heating.
These overwrought claims are generally accompanied by charts like this:

Figure 1. Title says it all.
YIKES! Thermageddon is just around the corner! Be very afraid!! …
So … what’s not to like? Well, for starters, they’ve omitted the colder areas of the ocean, those near the poles. That’s cherry-picking to exaggerate any warming. But that’s just the start.
Those who read my work know I don’t generally trust the numbers until I run them myself. So I went to their data source. The data they used is the NOAA Optimum Interpolation Sea Surface Temperature (OISST) data. From the OISST website:
“The NOAA 1/4° Daily Optimum Interpolation Sea Surface Temperature (OISST) is a long term Climate Data Record that incorporates observations from different platforms (satellites, ships, buoys and Argo floats) into a regular global grid. The dataset is interpolated to fill gaps on the grid and create a spatially complete map of sea surface temperature. Satellite and ship observations are referenced to buoys to compensate for platform differences and sensor biases.”
Downloading the data took a while. It’s in 15,259 files, one for each day, each one 1.7 megabytes, total of about 26 gigabytes… good fun.
After downloading it all, I graphed up the daily values. But not the anomaly values shown above. I graphed the actual daily values of the sea surface temperature (SST) of the entire ocean, so I could see what the SST is actually doing.
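For anyone who wants to replicate that step, here is a minimal sketch of the kind of area-weighted daily mean involved. It assumes Python with xarray, already-downloaded OISST v2.1 daily NetCDF files, and the usual variable names ("sst", "lat", "lon") and file naming; it is illustrative only, not the code actually used for the figures.

```python
# Minimal sketch: daily global mean SST from downloaded OISST daily NetCDF files.
# Assumes each file exposes an "sst" variable on a regular 1/4-degree lat/lon grid.
import glob
import numpy as np
import xarray as xr

def daily_global_mean_sst(path_pattern="oisst-avhrr-v02r01.*.nc"):
    means = []
    for fname in sorted(glob.glob(path_pattern)):
        ds = xr.open_dataset(fname)
        sst = ds["sst"].squeeze()                   # drop singleton time/depth dims
        weights = np.cos(np.deg2rad(ds["lat"]))     # gridcells shrink toward the poles
        means.append(float(sst.weighted(weights).mean(("lat", "lon"))))
        ds.close()
    return means
```

The cosine-of-latitude weighting matters here: without it, the small, cold polar gridcells get counted as heavily as the big tropical ones.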



Figure 2. OISST sea surface temperatures (SST) for the global ocean.
There are a few things worth noting in Figure 2.
- You can see the peaks of the previous El Ninos in 1998-99, 2010-11, 2016-17, and the currently developing El Nino.
- As is common with natural datasets, it changes in fits and starts, warming for a while, then cooling, then warming a bit more, then cooling …
- You can see the recent cool La Nina years just before the 2023 peak
- The temperature peak occurred on April 2nd, 2023, and the temperature has dropped about a quarter of a degree since then.
Finally, is this “far above record territory” as folks are claiming?
Well … in a word, no. The April 2nd temperature is 0.04°C warmer than the previous record set back in 2016.
Four. Hundredths. Of. A. Degree.
(And if we only look at the cherry-picked ocean from 60°N to 60° south [not shown], it’s a whopping 0.06°C …)
To put this into perspective, as everyone who has climbed a mountain knows, as you go up in elevation, the air gets cooler. This cooling goes by the fancy name of the “adiabatic lapse rate”. In general, it cools about 1°C for every 100 meters in altitude.
So in human terms, 0.04°C is about as much warming as you’d get by going from the second floor to the first floor in a building … in other words, not even detectable without a very expensive thermometer.
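Spelled out, using the roughly 1°C per 100 m figure above, the equivalent change in elevation is

$$\Delta h = \frac{\Delta T}{\Gamma} = \frac{0.04\,^{\circ}\mathrm{C}}{1\,^{\circ}\mathrm{C}/100\,\mathrm{m}} = 4\,\mathrm{m},$$

or about one storey of a building.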
Of course, the question arises: why is there such a difference between Figures 1 and 2?
The reason is simple. The warming in 2023 is occurring earlier in the year. The temperature is not unusually high. It’s unusually early, which is not surprising since we’re coming off of a few years of La Nina (cool) temperatures.
And this is why using anomalies rather than actual values, while useful in some situations, can lead you far astray in other situations.
Moving on, much of the hyperventilation involves the North Atlantic. Here are the SST anomalies for that part of the ocean.



Figure 3. As in Figure 1, but just for the North Atlantic.
Again, this looks like impending Thermageddon … but here are the actual temperatures of the North Atlantic.



Figure 4. OISST sea surface temperatures (SST) for the North Atlantic.
Unlike the global ocean, the North Atlantic lies entirely in the Northern Hemisphere, so there is a strong annual signal.
And again, there’s nothing out of the ordinary regarding maximum temperatures. In fact, maximum North Atlantic temperatures have been pretty steady since 2010. All that’s happening is that, like the global ocean, this year it’s warming earlier than usual.
TL;DR version?
- The 2023 Thermageddon Festival is canceled, and there will be no ticket refunds.
Best to all on yet another cold, foggy Northern California day. Me, I say bring on the global warming, or at least some dang sunshine.
w.
As Usual: I politely request that when you comment you quote the exact words you are discussing. I choose my words very carefully, and I am happy to defend them. But I cannot defend your interpretation of my words. Thanks.
Might be able to take my jacket off at the beach, so long as there’s little wind.
It was 42F this morning in W. Denver area.
That hail storm in Boulder on Monday brought the temperature down drastically. It’s beautiful right now though, sunny and 78F, no wind.
Very interesting pattern out west. Lake Powell up fifty feet. Lake Mead up over ten when it usually is falling in June. Gully washers east of LA and north of Palm Springs, and still snowing on peaks just north of there. Haven’t got a clue why the pattern is different, but it is fascinating to watch, and also to watch the antics of those who were preaching Mega-drought, eleven months ago.
I haven’t had to water my lawn for the last 3 weeks, which is nice. Mosquitoes are about as bad as they get for Colorado, however.
Moths are mostly gone. They were terrible this year, but at least they don’t bite.
50 miles west of Los Angeles. Looking at a high of 66°F. Might be in the low 70s in the next week, maybe.
The local NPR station has labeled the past year as strange. Dear wife says this was in the 60s, 70s, and even a few times in the 80s. Not even unusual, never mind strange.
Yeah Willis. I live 2 states north of you, and I spit on the promise of global warming. Been waiting 50 years – where is it?
Right on! Any sane person welcomes a slow, 1-2°C/century growth in warmth, especially since it is supposed to favour the poles and be negligible in the middle.
I really wish humans had control of the Earth’s heat control knob – I would turn it up to a biosphere-loving 11 – but I fear we’re at the mercy of Sun and Earth and the most humans can do is learn to knit… (🤪 yes, melodramatic, I know, but it should be about 10°C warmer than it is now, and it’s dark and damp – I didn’t ask to live in Scottish-like conditions!)
Willis, in Figure 2 you said you “graphed the actual daily values of the sea surface temperature (SST) of the entire ocean”. Is this for the entire Pacific Ocean? Also, it appears the temperature has increased 1/2 C since 1980. Is this correct?
thanks,
Paul
It’s the data for the totality of the global oceans, not for any ocean in particular. And yes, the data says that it’s been warming at an average of 0.016°C per year.
w.
Is there any way to filter out the interpolations? NASA/NOAA has been shown to overdo it, replacing actual high-quality measured data with their interpolations – I believe it was the Iceland meteorological authority complaining that NASA had their country showing warming trends whereas their real data showed nothing of the sort.
“There are a few things worth noting in Figure 2.”
I believe you meant 2010-11, not 2001-11.
Thanks, fixed. The best thing about writing for the web is that my mistakes won’t last long.
w.
It’s the best peer review process as far as I can gather.
If I may add a further small point of clarification, these three El Niño events are usually referenced (at least based on the Oceanic Niño Index, ONI) as 1997-1998, 2009-2010, 2015-2016. This is because (a) they all peak around December between these two years, and (b) the ONI value has dropped below 0.5C (i.e. ENSO neutral) by mid-year in 1998, 2010 and 2016.
(Source: https://ggweather.com/enso/oni.htm)
A few comments:
These numbers are in hundredths of a degree. That’s some mighty accurate measuring. And “the dataset is interpolated to fill gaps on the grid”, which brings in what we all used to call making up numbers (to the hundredth of a degree). Stephen Mosher would be proud. I can’t say just how accurate all this is, but it’s probably a pretty good guess. But fractions of a degree is nothing really. Five degrees, maybe, but unless you’re hovering around the freezing mark and trying to grow crops, or sitting in your living room with the temperature already very hot, a 5 degree difference isn’t going to kill the pooch.
Here in Ohio it is the middle of June and we are supposed to have highs in the 80s. Yesterday my wife turned on the heat.
Where is my Global Warming?
I want, my Global Warming, and I want it NOW.
It is just like every other government program, all smoke and mirrors, all hip hooray and ballyhoo. Billions of dollars and no results.
Thanks for the reminder, Walter:
The Lullaby Of Broadway, Harry Warren/Al Dubin
Get in the queue!
Certainly you know the deal by now!
https://pbs.twimg.com/media/FygffYWaUAYPG5b?format=jpg&name=small
‘Downloading the data took a while. It’s in 15,259 files, one for each day, each one 1.7 megabytes, total of about 26 gigabytes… good fun.’
Willis, that’s a heck of a lot of work, and very much appreciated. Given the similarity in time spans, I’m wondering how much, if at all, the decadal trend of your Figure 2 differs from Dr. Roy’s results from this past May:
‘The linear warming trend since January, 1979 remains at +0.13 C/decade (+0.11 C/decade over the global-averaged oceans, and +0.18 C/decade over global-averaged land).’
Thanks, Frank. This data gives a decadal trend of 0.16°C/decade.
I suspect the difference is that most temperature datasets use NMAT, nighttime marine air temperature, as a proxy for SST. But that’s just a guess.
w.
“The NOAA 1/4° Daily Optimum Interpolation Sea Surface Temperature (OISST) is a long term Climate Data Record that incorporates observations from different platforms (satellites, ships, buoys and Argo floats) into a regular global grid. The dataset is interpolated to fill gaps on the grid and create a spatially complete map of sea surface temperature. Satellite and ship observations are referenced to buoys to compensate for platform differences and sensor biases.”
All the data sources and data manipulations, and the best NOAA could do is a lousy 0.05°C/decade trend difference from UAH6 (0.16-0.11)? NOAA has to do a lot better than that to support the boiling-oceans narrative.
Isn’t UAH about the atmosphere? Liquid water is somewhat different.
Thanks Willis, it’s always good to read your sensible counterpoints.
And by the way, whatever has been happening in the North Atlantic is not causing scary weather so far.
Here in Southern Wisconsin my wife turned on the furnace yesterday as it was 40 F outside at 7:00am and 58 F in the house. 2 weeks ago we were running the A/C everyday with outdoor Temps in the 85-94 F range. The government is doing a lousy job of controlling the weather.
You just described a desert – is it nice there?
Do you have pyramids? Camels maybe?
Or cactus, they grow in deserts but apart from that, I’m struggling to find anything else nice to say about them.
Southern Wisconsin, Peta? It’s about as different from a desert as you can get.
He has problems with sugar intake.
A confidence interval would be interesting to see on that chart. I don’t think it could reasonably be less than 1 degree, and more likely 2 or more…
Which chart are you referring to? My understanding is all charts should have a confidence interval.
Good question, d. Here are the errors claimed by the OISST folks on a gridcell basis. The three charts are for the start, middle, and end of the dataset.
Striping in the earlier datasets presumably reflects the satellite passes. It’s not until recently that the addition of the ARGO float dataset has reduced the errors.
Having said that, call me crazy, but I never trust the claims of the scientists producing such datasets …
w.
I sometimes wonder about the estimation of uncertainty. Suppose that there are a number of different factors affecting uncertainty – method accuracy, instrument accuracy, interpretation accuracy, external interference, etc – it’s very likely that you don’t know within reasonable bounds what some of them are, and you can’t just add them together because there is a sub-100% probability of them all operating in the same direction at the same time (bear in mind that your uncertainty estimate is looking for a sub-100% probability so you mustn’t treat the alignment of uncertainties as being 100%). If for example I see a graph where the numbers go up but not above the uncertainty limits of the other numbers, then I’m looking at an increase that is ‘not statistically significant’. But the closer the graph gets to those upper limits, the more likely it is that there really was an increase – if the graph had been drawn to 89% certainty instead of 90% certainty, for example, then the increase could have been statistically significant.
Now I suppose that this is all covered in university statistics courses and textbooks, but to my mind there are so many uncertainties about uncertainties that the exact numbers don’t matter, it is enough to look at the chart and say yes the numbers go up and down but they aren’t gospel. Anything they tell you is interesting but be careful and ask do they make sense and can they be confirmed by other observations. So I don’t agree that every graph must have error bars – they may quite simply not be known or take far too long to work out and anyway the error bars should have fuzzy edges – so all graphs with or without error bars simply need to be treated with the same amount of care.
As you can tell, I didn’t do statistics, and I’m glad I didn’t do statistics because they seem dodgy (a bit like the guy who said I hate cabbage, and I’m glad I hate cabbage because otherwise I’d be eating it all the time and it tastes horrible).
These values are all hokum. They are obviously what is known as the Standard Error of the Mean (SEM). If the distribution of sample means is normal, as the Central Limit Theorem implies, then the SEM tells one the size of the interval surrounding the Sample Mean where the population mean may lie. In other words, the uncertainty interval.
However, if all the temperatures were XX.x, then the samples would consist of data elements of XX.x. Basically 3 Significant Digits. The average of each sample would then also be required to have 3 Significant Digits. Thus the sample means distribution is made up of sample averages with 3 Significant Digits. Therefore the Sample Mean (the mean of all the means of the samples) should also be XX.x.
Consequently, the population mean (μ), the population standard deviation (σ), and the SEM (s) should all have no more than 1 decimal digit. The exact same Significant Digit rules apply to calculating the baseline average temperature. As a result, anomalies should have no more than 1 decimal digit.
Calculating averages out to one hundredths or one thousandths digits so one can compute an anomaly showing the same number of decimal digits is fiction. Not one university physical science lab would allow one to do this. Not one commercial laboratory meeting NIST standards would do this. Climate science has and still is totally fudging measurements to do this.
There is a reason that most physical science endeavors require the end result to have no more resolution than what the measuring device measured. It is because it is truthful and not made up numbers.
I have asked folks on twitter why they stop their anomaly calculations at the one thousandths digit instead of using 5 or 6 decimal digits. I know that is possible because one has to tell Excel to round properly or you get 20 decimal digits. No one has an answer other than that is what the real scientists do. LOL
With your permission I will copy and paste this post and keep it handy. It is a much cleaner explanation than I have tried to give to others. Working in regulated industries, being asked to provide test protocols and results to support claims is important. If you attempt this type of hocus pocus the reviewers will immediately reject your findings. You simply cannot have more accuracy and certainty than the original measurements.
That statement is not consistent with JCGM and NIST. Read JCGM 100:2008 and NIST 1297 and apply the law of propagation of uncertainty to various measurement models. You will see that when the measurement model has a partial derivative ∂f/∂x < 1/sqrt(N) then uncertainty decreases. It is a mathematical fact…you CAN have measurement models that result in less uncertainty than the original input measurements. I encourage you to play around with the NIST uncertainty machine and prove this out for yourself. Or if you want we can work through the mathematical derivation and proof together.
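To make that point concrete, here is a minimal sketch of the law of propagation of uncertainty applied to a plain average. The numbers are illustrative assumptions, not taken from any NIST document.

```python
# Minimal sketch of the GUM law of propagation of uncertainty for a plain average.
import numpy as np

def propagated_uncertainty(sensitivities, input_uncertainties):
    """Combined standard uncertainty for uncorrelated inputs:
    u_c(y) = sqrt( sum_i (df/dx_i)^2 * u(x_i)^2 )"""
    c = np.asarray(sensitivities, dtype=float)
    u = np.asarray(input_uncertainties, dtype=float)
    return float(np.sqrt(np.sum((c * u) ** 2)))

N = 100
u_single = 0.5                                   # assumed standard uncertainty of one reading
# Measurement model y = (x_1 + ... + x_N) / N, so every sensitivity df/dx_i = 1/N,
# which is smaller than 1/sqrt(N) -- the condition mentioned above.
u_mean = propagated_uncertainty([1.0 / N] * N, [u_single] * N)
print(u_mean)                                    # 0.05, i.e. 0.5 / sqrt(100)
```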
You don’t even understand the difference between resolution and uncertainty. Tell everyone, did you ever take any higher level lab courses in physics, chemistry, or engineering disciplines?
Look carefully at the NIST document. It shows the following.
ms = (100.021 47 ± 0.000 70) g
Count the decimal digits in the measurement quantity, i.e., the resolution. I count 5 decimal digits. Now, count the number of decimal digits in the uncertainty. I count 5 decimal digits. Funny how they couldn’t reduce the uncertainty by using 6 or 7 or even 8 decimal digits for the uncertainty.
The following defines resolution.
2.4.5.1. Resolution (nist.gov)
Another reference:
Measurement Uncertainty | NIST
Note the decimal digits in the measurand => 6. Note the decimal digits in the uncertainty => 6.
I have given you a number of references in the past from university labs that show an experimental average of measurements should not exceed the resolution of the measurements. This holds true in every physical science I am aware of with the exception of climate science. Climate science appears to be driven by statisticians who have no regard for measurement science. Climate science invariably takes measurements recorded to the units digit and finds averages and SEMs that are at least two orders of magnitude smaller than the data.
Likewise, the SEM (σ/√n) is not a measure of uncertainty in measurement. To be honest, the data that makes up samples consists of numbers that (prior to 1980+) only have resolution to the units digit. Using Significant Digit rules, the means of the samples should only have units-digit resolution, and the Sample Mean (the mean of all the sample means, I know, confusing) should only contain the units digit also. That applies to absolute temperatures and anomalies.
It has become far too easy to confuse the SEM with the accuracy/resolution of the mean. It does not indicate the number of significant digits available in the mean calculation. It only defines the interval within which the estimated mean lies. And a last point, this all only applies if the sample means distribution is normal. I have examined many studies and not one ever shows the distribution of temperatures to be normal. They all do just like you: they divide by the √n, while making an unproven assumption that the distribution is normal.
Standard Error of the Mean—probably the single worst choice for a technical term, ever.
It is unfortunate that its real purpose has been used inappropriately and without ensuring that the appropriate assumptions for its use have been met.
The Standard Error of the (sample) Means is nothing more than the standard deviation of the distribution made up of the individual means of the samples.
The Central Limit Theorem says that the Sample Means distribution should be normal if certain assumptions are met. That allows one to use the standard deviation of the sample means as the SEM. However, one must show that the sample means distribution is normal, not just assume that it is. The equation for the SEM is σ/√n, but it only applies if the sample means distribution is normal. If it is skewed then all bets are off.
Thanks, Jim. There’s a further issue, autocorrelation. If a time series is autocorrelated, the SEM is σ / √effective n.
And effective n can be much, much smaller than n. This is particularly true in heavily averaged natural observations. For example, the Berkeley Earth global (land and ocean) dataset has 2079 observations (n = 2079). It has a standard deviation of 0.41°C, so you’d think the SEM would be 0.41°C / sqrt(2079) = 0.009°C.
However, calculated using the method of Koutsoyiannis, the effective n is only … wait for it …
3.
And this means that the SEM is actually 0.22 …
Note that this doesn’t just affect the SEM, it changes the significance of trends as well.
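For anyone who wants to experiment, here is a minimal sketch of the idea. It uses the common lag-1 (AR(1)) approximation for effective n, not the Koutsoyiannis method used above, so the numbers will differ, but the SEM inflation works the same way.

```python
# Minimal sketch: effective sample size under autocorrelation, AR(1) approximation
# n_eff = n * (1 - r1) / (1 + r1). Illustrative only.
import numpy as np

def effective_n(x):
    x = np.asarray(x, dtype=float)
    d = x - x.mean()
    r1 = np.sum(d[:-1] * d[1:]) / np.sum(d * d)   # lag-1 autocorrelation
    return len(x) * (1.0 - r1) / (1.0 + r1)

def sem_autocorrelated(x):
    x = np.asarray(x, dtype=float)
    return x.std(ddof=1) / np.sqrt(effective_n(x))
```

For a strongly autocorrelated series the effective n can be a tiny fraction of the nominal n, which is why the naive σ/√n badly understates the SEM.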
Best regards,
w.
Willis,
Thanks for the reference. This is exactly what you run into when statisticians try to use their training in math departments. They have no idea of physical measurements with uncertainty, and autocorrelation is never a problem because there is no connection to what comes prior or post. Everything is exact and decimals are good for however far you want to carry them.
Both Tim and I have pored over 4 university statistics textbooks and not one ever discusses that the data could be uncertain. When numbers are given, they are exact with no standard deviation. Basically, a temperature is 65 ± 0.0!
“This is exactly what you run into when statisticians try to use their training in math departments.”
What are you claiming now? Statisticians have been looking at autocorrelation, and all the other issues, for as long as I can remember.
Look at all the graphs I produced showing the uncertainty in Monckton’s trend lines. The ones that show the most uncertainty are because I’m using the ballpark effective n method. It’s the same method, I think, used in the Skeptical Science Trend Calculator.
“Both Tim and I have poured over 4 university statistics textbooks and not one ever discusses that the data could be uncertain.”
Then either they weren’t very good books, or you and Tim didn’t understand them. It’s impossible to talk about statistics without uncertainty, because uncertainty is the main point of doing statistics. The whole point since at least the early 20th century has been to understand how uncertain statistics are, and when you can detect something from the cloud of uncertainty. That’s what significance testing is all about, amongst other things.
“Basically, a temperature is 65 ± 0.0!”
And you still don’t get this fairly basic point. The uncertainty in a statistical test does not normally, primarily, come from the measurement uncertainty. That does not mean that when you look at any figure you think it’s 100% accurate, just that it’s good enough for the purpose. The uncertainty, say in a mean, comes from the variation in the random values you are sampling. That is usually a far bigger uncertainty than the measurement errors.
Secondly, the uncertainty caused by measurement errors is already present in the figures. You can look on each figure as being equal to the mean, plus an error term indicating how far that individual value is from the mean, plus an error term caused by the imperfect measuring system.
Think about what happens when you measure the same thing multiple times and use those stated values to determine the uncertainty of your device. The standard deviation of those stated values is the uncertainty. There’s no need to add an additional term for the uncertainty you expect should be there, because it already is there.
You should know this. How many times have you demanded that the TN1900 example should be used to determine the uncertainty of a monthly maximum – despite the fact that that method just uses the stated values.
(And yes, before the usual klaxon goes off. All this is assuming random errors. If you have a systematic error in your instrument, all bets are off. Same if there is a systematic error in your sampling, or any other thing that can go wrong with a sample.)
Berkeley Earth says their equal area grid has 15984 cells. Is that not right? Do you have time to open the 2022/12 grid? What is the standard deviation? For some reason my go-to NetCDF reader doesn’t recognize their grids. I’ll have to research that when I get time.
“The Standard Error of the (sample) Means is nothing more than the standard deviation of the distribution made up of the individual means of the samples.”
I just start to feel embarrassed for you. You are just incapable or unwilling to understand what the Standard Error of the Mean actually is. I’d try and explain it again, but we both know it would be futile.
“That allows one to use the standard deviation of the sample means as the SEM.”
Gibberish.
The SEM is the standard deviation of the sampling distribution. No conditions are necessary for that to be true.
“However, one must show that the sample means distribution is normal, not just assume that it is.”
Gibberish.
It doesn’t matter what the shape of the distribution is, for there to be a SEM. But it is the case due to the CLT that the sampling distribution will generally tend to normal as sample size increases.
“The equation for SEM is σ/√n, but it only applies if the sample means distribution is normal.”
Wrong.
The advantage of having a normal sampling distribution is that you can use the equations for normal distributions to calculate confidence intervals etc. But the SEM is the SEM regardless of the distribution. Consider the case where you have a sample of size 1 taken from a non-normal distribution. Its sampling distribution is identical to the population distribution, which is not normal. But its SEM is the standard deviation of the population.
You obviously don’t understand sampling and its assumptions.
The SEM IS A STANDARD DEVIATION. It is the standard deviation of the sample means.
If the sample means distribution is not normal then it is probably skewed. The standard deviation of a skewed distribution is meaningless. There are more values on one side of the mean than the other. A simple ±x is not appropriate. You would need an asymmetric interval to properly describe values at the traditional 68%, 95%, etc.
Why do you think TN 1900 assumed a Student’s t-distribution of the data? That is a modified normal distribution in that it has longer tails but is still symmetric.
Again I am tired of educating you. I would hope that since no one posts anything supporting you that it might give you some doubt about what you are claiming.
“If the sample means distribution is not normal then it is probably skewed.”
Not necessarily, as I keep having to explain to you. Not being normal does not imply it is not symmetrical. You should know this, you keep going on about the student-t distribution. A perfect example of a symmetrical non-normal distribution.
“The standard deviation of a skewed distribution is meaningless.”
It is not meaningless. You might have to understand the distribution to get the full picture, and it won’t necessarily be defined by the standard deviation and mean in the way a normal distribution is. But that does not make it meaningless.
But you keep going on about these sampling distributions being possibly skewed and ignore the point that as sample size increases the sampling distribution converges to a normal distribution.
“Why do you think TN 1900 assumed a Students distribution of the data.”
Because they used an inferred value for the standard deviation. It’s a basic technique for handling the uncertainty when there is uncertainty in the standard deviation of a normal distribution.
“Again I am tired of educating you.”
Maybe if for once you could imagine that you might be wrong, then you could benefit from a two way dialog.
“I would hope that since no one posts anything supporting you that it might give you some doubt about what you are claiming.”
So we are down to an argument ad populum now. If people want to point out where I’m wrong and back it up with sensible arguments I’m quite willing to be proven wrong. But the fact that I get loads of down votes, on a board where I don’t expect anyone to agree with me in the first place, does not make a convincing argument.
The CLT only applies if the sample means distribution is normal. The normality tells you that the sampling was done correctly and that all the assumptions were met. If the sample means distribution is symmetric, but not normal, then the CLT conclusions are meaningless.
Central Limit Theorem | Formula, Definition & Examples (scribbr.com)
Perhaps you should read through this page.
6.E: Sampling Distributions (Exercises) – Statistics LibreTexts
Here is the deal. You need to provide a histogram of the sample means distribution to prove that the CLT applies. I have yet to see you confirm this with either a histogram or with the kurtosis/skewness computations that confirm a normal distribution.
Until you can provide references that confirm a non-normal sample means distribution means that the CLT assumptions were met or show graphs confirming a normal distribution of your sample means, there is no reason to discuss this further.
I thought I’d provide some reality to the CLT discussion.
Here’s a Weibull distribution:
As you can see, it’s definitely not normal.
However, here is the distribution of the sample means of the above distribution:
And here’s the code for the graphs:
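(The original code isn’t reproduced here; a minimal Python sketch of the same demonstration, with illustrative parameters rather than the settings behind the figures, would look like this:)

```python
# Minimal sketch: sample means from a clearly non-normal (Weibull) population
# are themselves approximately normally distributed.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(42)
population = rng.weibull(1.5, size=100_000)          # right-skewed, not normal
sample_means = [rng.choice(population, size=100).mean() for _ in range(10_000)]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.hist(population, bins=100)
ax1.set_title("Weibull population (not normal)")
ax2.hist(sample_means, bins=100)
ax2.set_title("Means of 10,000 samples of n=100 (close to normal)")
plt.tight_layout()
plt.show()
```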
Now can we move on to something else?
Thanks,
w.
Thank you for the elucidation. However, you have made sure that the sample size and samples along with the assumptions for the CLT are all properly met.
I want to see the sample distribution for global temperature to see if it is normal. If it is not, then some assumption(s) necessary for the CLT to properly work has not been met.
I think it is surprising that the folks standing behind the global temperature being accurate to 1/1000ths, especially those who consider themselves statistics experts, are unable to provide this simple proof.
“I want to see the sample distribution for global temperature to see if it is normal.”
Then you will need to invent a machine to travel the multiverse. We only live on one planet and it only has one May 2023.
“I think it is surprising that the folks standing behind the global temperature being accurate to 1/1000ths, especially those who consider themselves statistics experts, are unable to provide this simple proof.”
Nobody, certainly not me, a non-expert in statistics, thinks that the global anomalies, let alone temperatures, are accurate to 1/1000ths [of a degree]. Most uncertainty estimates for monthly values range from a few hundredths of a degree to a few tenths. Again, this is just down to your belief that if a temperature is published to 3 or more decimal places it’s implying that all digits are 100% certain.
This just shows how far out in left field you are!
You say that as if it’s a bad thing.
Global anomaly uncertainty estimates are not based on CLT estimates, or multiple samples. HadCRUT, for instance, is based on ensemble runs; the details are available if you care to search for them.
Here, for example are the 200 ensemble members for 2022. Is it normal? Probably not exactly. But it doesn’t appear to be too far out, apart from one single outlier. (Dark red line is a normal distribution for comparison.)
I’ve looked at other years, back to 2019. They are all similarly close to, but not exactly, normal.
Here’s 2021
2020
Jim, you say that if the temperature is not normally distributed, “then some assumption(s) necessary for the CLT to properly work has not been met.”
I just showed it’s NOT necessary for the distribution to be normal for the CLT to be met.
In any case, here’s the Berkeley Earth temperature set, which is clearly not normal, and the distribution of the means for 3 sample sizes (10,100,1000), all of which are normal.
w.
You misunderstood what I said and I could have been clearer. As I said in several posts, if the CLT assumptions have been met, then the sample means distribution will be normal. That is exactly what you have shown.
The population from which you draw the samples can have most any distribution, but only if the assumptions are met for using the CLT will the sample means distribution of all the samples be normal.
If the distribution of the means of all the individual samples is not normal, then the assumptions for the CLT were not met.
As you have so adequately shown, the population is the temperatures from Berkeley Earth, you have sampled that population and arrived at a sample means distribution that is normal.
I have attached a screenshot of a simulation from:
Sampling Distributions (onlinestatbook.com)
What too many here don’t understand is the first requirement of sampling is to define your population. You have defined yours as the Berkeley Earth temperatures. You then sampled them numerous times with varying sample sizes and did end up with normal distributions.
The simulation I have attached shows the same thing. The most interesting thing is the verification of the formula:
SEM * √n = SD
Where the SEM is the standard deviation of the sample means distribution.
Look at the info for the sample sizes of 5 and 25
Size 5 => 3.43 * √5 = 7.67
Size 25 => 1.53 * √25 = 7.65
And what is the SD of the population? 7.68!
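A minimal simulation of the same check (illustrative population, not the onlinestatbook data) shows the relationship directly:

```python
# Minimal sketch: the SD of the sample-means distribution (the SEM), times sqrt(n),
# should recover the population SD.
import numpy as np

rng = np.random.default_rng(0)
population = rng.normal(loc=50, scale=7.68, size=1_000_000)   # SD chosen to echo the example

for n in (5, 25):
    means = np.array([rng.choice(population, size=n).mean() for _ in range(20_000)])
    sem = means.std(ddof=1)                  # SD of the sample-means distribution
    print(n, round(sem * np.sqrt(n), 2), round(population.std(), 2))
```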
This is what Bellman refuses to acknowledge. He declares the entire database as a “single sample”. He then wants to divide the standard deviation of the sample by the √9500 or some other large number to obtain an estimate of how accurate the mean of the sample is. It is all hosed up.
It is indicative of climate science. No one has ever sat down and defined in an orderly fashion exactly what is the population, what are the samples of that population, and is the sample means distribution normal?
“This is what Bellman refuses to acknowledge.”
Please stop lying about me, it gets really tedious. I have never said that SEM * √n = SD is incorrect. It just follows from the more usual formula SEM = SD / √n.
What I keep trying to figure out is why you think it a useful formulation. Even if you think taking large samples of samples is the usual way to estimates the SEM, it still makes no sense to use that to estimate the standard deviation of the population, when you already have a very large sample which will be a better indicator of the standard deviation.
“He declares the entire database as a “single sample”.”
What database? If it’s a database of values randomly selected from the population, then of course it can be considered a single sample. If it’s large enough you could also do statistical studies to better estimate the uncertainty if the database is not random. But that’s not the same as saying you take multiple random samples to determine the SEM.
“He then wants to divide the standard deviation of the sample by the √9500 or some other large number to obtain an estimate of how accurate the mean of the sample is.”
Well, if it is a truly random sample, and it actually consists of 9500 random independent and identically distributed values, then yes, that’s how you would determine the SEM. But in most cases you don’t collect a sample of that size because it’s far too time consuming, and just impractical.
“It is indicative of climate science.”
Climate science does not do that.
“No one has ever set down and defined in an orderly fashion exactly what is the population…”
The population is whatever is being measured. If for instance you are talking about a global monthly anomaly, then the population is the entire range of anomalies across the globe over that month.
“…what are the samples of that population…”
The sample (singular) is all the readings used to calculate that global anomaly – but don’t for a second think that means adding all the readings and dividing by N, or that the uncertainty is obtained by taking the standard deviation and dividing by root N.
“The CLT only applies if the sample means distribution is normal.”
If you would only stop trying to “educate” me, and actually listened, you might learn something. You still don’t get that you don’t generally take many different samples and see what their histogram is like in order to determine if the CLT is correct.
When you take a, one, singular, sample of a specific size, it comes from a distinct sampling distribution – but you can’t tell what it is because you only have one sample. You use the CLT or other methods to estimate what the sampling distribution is, and then work on the assumption that that distribution is the correct one.
Even if you did take multiple samples, I doubt if it’s distribution would tell you you had done the sampling correctly. The sampling distribution of a biased sample will still be normal.
“Here is the deal. You need to provide a histogram of the sample means distribution to prove that the CTL applies.”
And how do you do that with a single sample? I know what your answer normally is, and it just demonstrates you understand nothing. But I’ll give you the rope again.
If you want to take an empirical approach to this, you can use Monte Carlo methods, bootstrapping and the like, to estimate the standard error and distribution. But you do that in place of using the CLT. It’s what BEST and other data sets do to estimate uncertainty, but I couldn’t tell you how accurate it is.
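For what it’s worth, here is a minimal bootstrap sketch of that idea; it is purely illustrative, not the BEST procedure, and the sample is a made-up stand-in.

```python
# Minimal sketch: bootstrap the one sample you actually have (resampling with
# replacement) to estimate the standard error of its mean, instead of invoking the CLT.
import numpy as np

rng = np.random.default_rng(1)
sample = rng.gamma(shape=2.0, scale=3.0, size=200)   # stand-in for "the one sample"

boot_means = np.array([rng.choice(sample, size=sample.size, replace=True).mean()
                       for _ in range(10_000)])
print("bootstrap SE of the mean:", boot_means.std(ddof=1))
print("CLT estimate s/sqrt(n):  ", sample.std(ddof=1) / np.sqrt(sample.size))
```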
“””””When you take a, one, singular, sample of a specific size, it comes from a distinct sampling distribution”””””
You take a “sample” from a population that has a distribution. Find a reference that discusses taking a sample of a sample.
“””””You use the CLT or other methods to estimate what the sampling distribution is”””””
You have everything backwards. The CLT can only be used with multiple samples from a population.
“””””And how do you do that with a single sample?”””””
If you have only one sample then that single sample supplies the estimated mean and the standard deviation, i.e., the SEM. If the distribution of the sample is not normal then the CLT did not work, did it?
Remember, without a distribution of sample means (multiple samples), the single sample size must be large; I believe 9500 is reasonable.
The equation for the population standard deviation is:
σ = SEM · √n = SEM · √9500
and the mean is
μ = x̄ ± SEM
(only if the sample distribution is normal)
Here is a description of SD for a skewed distribution.
https://www.smartcapitalmind.com/what-is-skewed-distribution.htm
“””””A skewed distribution is inherently uneven in nature, so it will not follow standard normal patterns such as standard deviation. Normal distributions involve one standard deviation that applies to both sides of the curve, but skewed distributions will have different standard deviation values for each side of the curve. This is because the two sides are not mirror images of each other, so the equations describing one side cannot be applied to the other. The standard deviation value is generally larger for the side with the longer tail because there is a wider spread of data on that side when compared to the shorter tail.”””””
Other distributions get more complicated, which is one reason that normal distributions are popular.
Your whole post is so out of whack it is obvious you are just making stuff up.
No more responses from me until you show some references that explain what you are trying to assert.
Make sure they discuss singular samples from a sample distribution.
“You have everything backwards. The CLT can only be used with multiple samples from a population.”
I’ve tried to explain why you are wrong on numerous occasions, and it just goes straight through your head. I’m puzzled, though, why you think this but also insist that TN1900 Ex 2 is a protocol that must be followed at all times. Where do you see multiple samples being used in that example? There is just one sample, the 20 days of maximum temperature, yet they still use the CLT to estimate the uncertainty range.
“If you have only one sample then that single sample supplies the estimated mean and the standard deviation, i.e., the SEM.”
Hilarious misuse of the abbreviation i.e. Once again, the standard deviation of a sample is not the standard error of the mean.
“Remember, without a distribution of sample means (multiple samples) the single sample size is large, I believe 9500 is reasonable.”
Sorry, but you’ve completely lost me there. The single sample size is whatever its size is. It could be large, it could be small – it could just be 1. The larger the better, of course, but I’ve no idea where you have pulled 9500 from.
“The equation for the population standard deviation is:
σ = SEM · √n = SEM · √9500”
This is even more confused than your usual nonsense. If you are saying you have a single sample of size 9500, how do you know its SEM unless you know σ, or at least have a very good estimate of it from the sample SD?
“Here is a description of SD for a skewed distribution.”
Which is completely true, and irrelevant to this discussion.
“No more responses from me until you show some references that explain what you are trying to assert.”
That’s not the threat you think it is. But fool that I am:
https://statisticsbyjim.com/basics/central-limit-theorem/
https://openstax.org/books/statistics/pages/7-3-using-the-central-limit-theorem
…
My emphasis.
“Or if you want we can work through the mathematical derivation and proof together.”
“Now, class.”
“NOW CLA-ASS!”
“SHUTTUP!”
bdgwx,
Your error occurs because you are cherry picking a special meaning of uncertainty and writing as if it is the whole meaning.
You have erred by not including all factors that contribute to uncertainty. To illustrate this, you should be able to provide measurements showing how sea surface temperatures are affected by factors such as surface wind speed, the spectral changes of incoming radiation as they vary over time, and cloud cover and cloud type.
The type of uncertainty about which you write is more applicable to experiments in which factors of influence are few, such as tossing coins.
Please disagree, with numbers, if I am wrong.
Geoff S
What post of mine are you referring to? What error?
Aaaand this is exactly why I ask people to quote the exact words that you are referring to …
Best to all,
w.
Go for it. These ideas and protocols are not mine.
Amen. The climate fools lack even Clue #1 about real measurement uncertainty. Just a bit farther down in the comments bg-whatever again quotes his cherished milli-Kelvin “uncertainties”.
Most ‘uncertainty’ calculations simply deal with the OBSERVED variation within the data set. Errors in site, equipment, methods, operator bias, calculations, reporting, and more, are generally unknown and not quantified. Often the latter variations will overwhelm the observed variations. Sometimes ‘error bars’ are useful, sometimes they’re irrelevant.
Grrr … the values in the images just above are in °C, not W/m2, and the data is from OISST, not CERES. Moving too fast.
w.
Was wondering about that.
Thanks for the rapid correction
Geoff S
Based on HadSST and ERSST the OISST uncertainties for monthly global SST values are likely on the order of 0.02 C or so. Uncertainties on daily spot SSTs are between 0.1 and 0.5 C with 60S-60N generally being around 0.15 C and polar regions generally around 0.3 C. These are 1σ. The grid cell uncertainties are available here.
Why are you quoting “1σ” to more decimal places than what was measured? Give a good explanation of how you determine how many decimal digits to use when the measured values are in 1/10ths resolution.
How many significant digits are in the number 10? How about 100? How many significant digits does the following result have?
10/100 = 0.1
How did I produce a result with more decimal places than the original numbers?
When you can answer these questions you will have begun to develop a grade school understanding of statistics.
There is one significant digit in 10, 100, and 0.1.
w.
I got in serious trouble once because I followed these rules. It had to do with writing that 20 mph would convert to 30 kph. The reason that I got in trouble was not because I was wrong, it was because the person I attempted to correct was my boss. The approved report was written: “20 mph (32.2 kph)”, clearly more significant digits.
This touches on what Bellman is saying. It’s nearly impossible for me to see if there is a . at the end of the number at a glance. Nor is the rule universally applied anyway. The obvious solution is to adopt the practice of explicitly stating the uncertainty in compliance with JCGM 100:2008 section 7.2.2 which removes all ambiguity.
Read 7.2.4 in the GUM.
Funny how closely TN 1900 follows this recommendation.
One thing you should remember is that climate science has defined the monthly average as the MEASURAND, just like NIST TN 1900. It is NIST’s recommendation that the protocol in the TN be followed in order to find the uncertainty in that measurand.
Climate science and you should consider why you do not follow the U.S. experts on measurements.
“Funny how closely TN 1900 follows this recommendation.”
Why’s that funny? It’s doing exactly what I keep saying you should be doing – working out the actual uncertainty, quoting the uncertainty using any of the recommended formats, and if necessary explaining what the uncertainty represents.
What they do not say is that you should just assume, or let others assume, that the number of decimal places you write defines the uncertainty.
“One thing you should remember is that climate science has defined the monthly average as the MEASURAND, just like NIST TN 1900.”
There are lots of different climatic things that can be measured, each its own measurand. The main one being discussed is the global monthly anomaly. But it can also be annual, or for a specific region, e.g. the sea in this case.
In the NIST example the measurand is the maximum temperatures for one month at one station. This is not really the same as a global anomaly using multiple stations.
“It is NIST’s recommendation that the protocol in the TN be followed In order to find the uncertainty in that measurand.”
It’s an example, not a protocol. They even show how different models can result in different values. I think this is one of your problems – you see everything as a set of instructions that must be followed, rather than as an exercise in how to think through a problem.
“Climate science and you should consider why you do not follow the U.S. experts on measurements.”
Personally, I prefer it that we have a range of different approaches being used. It gives a better hint at the uncertainty in any one method.
Since you only show 10 => 1 SF, and 100 => 1 SF.
10/100 = 0.1 which has 1 SF, so no problem.
Now if you had shown 10. and 100. that is another whole story. The decimal point says that there is an assumed 0 after the decimal point. That means you have 10. => 2 SF and 100. => 3 SF.
Divide 10./100. = 0.10 => 2 SF.
Statistics CAN NOT add resolution to physical measurements. Statistics only summarizes information about data, it can not change or add information to the physical measurements themselves.
From Wikipedia:
Note that the speed of light has been DEFINED, not measured. Using statistics as done in climate science, one should be able to actually measure the speed to the nearest meter per second a large number of times and determine the actual speed to μm/s precision by averaging all of the readings and finding the uncertainty by dividing by the √n. Wonder why that hasn’t been done before?
“Now if you had shown 10. and 100. that is another whole story.”
Which very much depends on everyone knowing your convention, and being able to spot the subtle difference between “100” and “100.”. Also, it’s no help if you want the first zero to be significant but not the second.
From Wikipedia:
The significance of trailing zeros in a number not containing a decimal point can be ambiguous. For example, it may not always be clear if the number 1300 is precise to the nearest unit (just happens coincidentally to be an exact multiple of a hundred) or if it is only shown to the nearest hundreds due to rounding or uncertainty. Many conventions exist to address this issue. However, these are not universally used and would only be effective if the reader is familiar with the convention:
https://en.wikipedia.org/wiki/Significant_figures
The better suggestion is to explicitly state the number of significant figures, or the uncertainty range.
“Statistics CAN NOT add resolution to physical measurements.”
But it can add resolution to the average of physical measurements.
The point is that none of these conventions were used were they?
Do I need to list all the university lab requirements that say this is untrue?
I will show one. You show one that confirms your assertion.
https://www2.chem21labs.com/labfiles/jhu_significant_figures.pdf
The mean cannot be more accurate than the original measurements. For example, when averaging measurements with 3 digits after the decimal point the mean should have a maximum of 3 digits after the decimal point.
You should realize that this is Johns Hopkins, not a fly-by-night outfit.
For grins, what is the average of temperature measurements of 63, 65, 62, 68, 67, 64?
“The point is that none of these conventions were used were they?”
My point is you wouldn’t have to worry about conventions if you actually quoted the uncertainty. Just quoting the number 100 as an indication of its uncertainty is meaningless when its uncertainty could be ±100, or ±10, or there may be zero uncertainty.
“Do I need to list all the university lab requirements that say this is untrue?”
No because then you will just argue it’s a fallacious argument from authority.
Seriously, you keep quoting introductory texts making some meaningless claim that it’s impossible for an average to be known beyond the resolution of the individual measurements, but none of them do anything other than handwaving to explain why that is the case. If you could produce an actual mathematical, metrological or statistical source that claims or better explains why it’s impossible, then maybe I could demonstrate to you why it’s wrong.
“You show one that confirms your assertion.”
Somehow I don’t think that’s what you are intending to do.
“The mean cannot be more accurate than the original measurements. For example, when averaging measurements with 3 digits after the decimal point the mean should have a maximum of 3 digits after the decimal point. ”
And just asserting it doesn’t make it true.
If it were true, why would you ever take an average? It was your, or maybe Tim’s, claim from the start that measuring the same thing multiple times did improve the accuracy, and it was only when you averaged different things that the rules of statistics failed to apply.
Now quote the next paragraph.
“For example if the average of 4 masses is 1.2345g and the standard deviation is 0.323g, the uncertainty in the tenths place makes the following digits meaningless. The uncertainty should be written as ± 0.3. The number of significant figures in the value of the mean is determined using the rules of addition and subtraction. It should be written as (1.2 ± 0.3)g”
Now explain how that works with the first rule. Say you measured the same value to the nearest gram 100 times. 50 times it comes up 11g, the other 50, 12g. Mean is 11.5g, and the standard deviation is 0.5g. Do you quote this as 11.5 ± 0.5g, or as 12 ± 0.5g? Which do you think it more informative and honest?
“For grins, what is the average of temperature measurements of 63, 65, 62, 68, 67, 64?”
As always you fail to provide any context to your toy example. Do you want an exact average, or is this a sample? I’ll assume the latter. I’ll also assume the temperatures are in °C, and not in K.
mean of temperatures is (63 + 65 + 62 + 68 + 67 + 64) / 6 = 389 / 6 = 64 5/6 °C. ~= 64.83333°C.
var ~= 5.36°C²
sd ~= 2.32°C
SEM ~= 0.94°C
Impossible to say if this is a normal distribution with such a small sample size, so let’s assume it is and use the Student’s t-distribution with 5 df to determine a 95% expanded confidence interval. The coverage factor is 2.57, so the 95% confidence interval is ±2.43°C.
So you could say
64.83 ± 2.43°C
Or
64.8 ± 2.5°C
Or if you go by the more obsessive “only 1 SF” rule
65 ± 3°C
OK?
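For what it’s worth, a quick numerical check of that arithmetic (a minimal sketch, assuming Python with scipy for the t quantile) reproduces those figures:

```python
# Quick check of the calculation above.
import numpy as np
from scipy import stats

temps = np.array([63, 65, 62, 68, 67, 64], dtype=float)
mean = temps.mean()                            # 64.83...
sd = temps.std(ddof=1)                         # ~2.32
sem = sd / np.sqrt(temps.size)                 # ~0.94
k = stats.t.ppf(0.975, df=temps.size - 1)      # ~2.57 for 5 degrees of freedom
print(f"{mean:.2f} ± {k * sem:.2f} °C")        # ~64.83 ± 2.43 °C
```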
Your turn. (Taken from Taylor Ex 4.15)
You measure the time for a ball to drop from a second-floor window three times. The results in tenths of seconds are
11, 13, 12
What should you state for your best estimate of the time, and its uncertainty?
Oh, and there’s a hint:
Your answer will illustrate how the mean can have more significant figures than the original measurements.
Here’s another example from Taylor.
30 times measured in seconds
8.16, 8.14, 8.12, 8.16, 8.18, 8.10, 8.18, 8.18, 8.18, 8.24,
8.16, 8.14, 8.17, 8.18, 8.21, 8.12, 8.12, 8.17, 8.06, 8.10,
8.12, 8.10, 8.14, 8.09, 8.16, 8.16, 8.21, 8.14, 8.16, 8.13
Calculate the best estimate for the time involved and its uncertainty, assuming all uncertainties are random.
Answer given by Taylor:
mean ± SDOM
8.149 ± 0.007 s.
My answer to this is that I would follow NIST TN 1900. With the SDOM being 0.007 with thirty values, the t factor with a DOF of 29 will be 2.045 for a confidence level of 95%.
0.007 * 2.045 = 0.014
Following Taylor’s rules
Rule for Stating Uncertainties
Experimental uncertainties should almost always be rounded to one significant figure. (2.5)
Rule for Stating Answers
The last significant figure in any stated answer should usually be of the same order of magnitude (in the same decimal position) as the uncertainty. (2.9)
The answer would then be 8.15 ± 0.01.
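A quick numerical check of both answers (Taylor’s mean ± SDOM and the TN 1900-style 95% expanded interval), a minimal sketch assuming Python with scipy:

```python
# Quick check of Taylor's 30-measurement example and the expanded interval above.
import numpy as np
from scipy import stats

times = np.array([
    8.16, 8.14, 8.12, 8.16, 8.18, 8.10, 8.18, 8.18, 8.18, 8.24,
    8.16, 8.14, 8.17, 8.18, 8.21, 8.12, 8.12, 8.17, 8.06, 8.10,
    8.12, 8.10, 8.14, 8.09, 8.16, 8.16, 8.21, 8.14, 8.16, 8.13,
])
mean = times.mean()                                   # ~8.149 s
sdom = times.std(ddof=1) / np.sqrt(times.size)        # ~0.007 s
k = stats.t.ppf(0.975, df=times.size - 1)             # ~2.045
print(f"{mean:.3f} ± {sdom:.3f} s (standard)")
print(f"{mean:.2f} ± {k * sdom:.2f} s (95% expanded)")
```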
I am no authority on Dr. Taylor’s problems or answers. You would need to take that up with him.
I would suggest that this is an introductory text and the answers are probably designed to be sure that you understand the material. You will notice that his textbook does not refer to either the GUM or the NIST references.
Also note that NIST TN 1900 does refer to appropriate sections of the GUM for determining expanded experimental standard uncertainty. If you have ever been responsible for measurements, then you know that in the U.S., NIST is the agency that one should follow to ensure a consistent and defendable basis for dealing with measurements.
“Experimental uncertainties should almost always be rounded to one significant figure.”
You missed the exception where the first digit is a 1, you should use 2 digits. So your expanded uncertainty should still be reported as 8.149 ± 0.014. But Taylor is just reporting the SEM, that is in GUM terms the Standard Uncertainty rather than a 95% confidence interval.
Regardless, you are just playing games now. The principle is clear. The size of the uncertainty determines the number of significant figures, not the uncertainty of the original measurements. Taylor couldn’t be clearer that he regards it as possible for the average to be reported to more decimal places than the original measurements.
“I would suggest that this is an introductory text and the answers are probably designed to be sure that you understand the material.”
It’s the book I keep being told is the standard text on the subject. The one I had to understand before I could even comment on metrology.
“Also note that NIST TN 1900 does refer to appropriate sections of the GUM for determining expanded experimental standard uncertainty.”
Will you ever realize that expanded uncertainty has nothing to do with the question of how many digits to report. All it means is you multiply the standard uncertainty by a coverage factor to give you a percentage confidence interval.
All the GUM and NIST documents ever say as far as I can tell, is that you calculate the uncertainty, report it to a reasonable number of digits (GUM says 2 usually suffices, but you may want 3 to avoid rounding errors in future calculations) and then report the result to the same magnitude as the uncertainty.
None of them make a big deal about any of these rules, and none of them say the uncertainty of an average cannot be less than the uncertainty of an individual measurement.
“NIST is the agency that one should follow to insure a consistent and defendable basis for dealing with measurements.”
Then point me to the NIST document that explains how they want the significant figure rules to be implemented, especially in relation to means of large samples.
You can whine all you want. Read TN 1900 again. This document is from the U.S. accepted standards keeper of measurement methods, NIST. If you don’t like using an expanded experimental standard uncertainty, take the issue up with them.
Why should I keep having to read all these documents for you? We both know that if you could find a part of any of this NIST documentation that makes the same point as you, you’d quote it for my benefit. I can find nothing to suggest they say anything about the legal number of digits, and nothing to suggest they demand that the number of digits in a mean be less than in the individual measurements.
I was going to write a long response to you but it just isn’t worth my time trying to educate you. Your disbelief and attempt to rebut protocols used by engineers and labs everywhere is a simple enough demonstration that you have never had the responsibility of accurate and reproducible measurements.
Your response informs all that you have never had to justify why you add resolution to measurements by simple statistical calculations. I KNOW from your comments that you have never answered to a lab instructor, a boss, a customer, or a court why you made attributions of accuracy and precision well beyond what was actually measured.
Here are some other references.
Britannica
For calculations involving measured quantities, the first step in determining the precision of the ANSWER is to determine the number of significant figures in each of the measured quantities.
Wikipedia
Numbers are often rounded to avoid reporting insignificant figures. For example, it would create false precision to express a measurement as 12.34525 kg if the scale was only measured to the nearest gram.
https://eng.libretexts.org/Bookshelves/Electrical_Engineering/Electronics/DC_Electrical_Circuit_Analysis_-_A_Practical_Approach_(Fiore)/01%3A_Fundamentals/1.2%3A_Significant_Digits_and_Resolution
When performing calculations, the results will generally be no more accurate than the accuracy of the initial measurements. Consequently, it is senseless to divide two measured values obtained with three significant digits and report the result with ten significant digits, even if that’s what shows up on the calculator.
Finally, read this for a treatment of why engineers and commercial labs are meticulous about how they report measurements. When you specify a precision, people rely on it being factual.
Take these for what you will. But, be aware that I will not answer your dismissal of common protocols in displaying data in science and engineering.
“Your disbelief and attempt to rebut protocols used by engineers and labs everywhere is a simple enough demonstration that you have never had the responsibility of accurate and reproducible measurements.”
And yet you are quite happy to describe everyone who’s worked in statistics, uses probability theory or who has ever created a global data set that is truncated to the nearest degree, as frauds. This presumably includes Dr Roy Spencer and Taylor. Maybe, as Tim keeps saying, you need to get out of your own box.
I’m sure all the guidelines (not rules) are fine for what they are, but none of your documents actually address how to calculate uncertainty when taking a large statistical sample, or establishing a global temperature anomaly.
“Your response informs all that you have never had to justify why you add resolution to measurements by simple statistical calculations. I KNOW from your comments that you have never answered to a lab instructor, a boss, a customer, or a court why you made attributions of accuracy and precision well beyond what was actually measured.”
And as so often, rather than justifying your attacks on statisticians you just resort to ad hominems. Rather pointless ones given I’m a furtively pseudonymous old Bellman. (And while I have no intention of bringing my own real identity into play, you are simply wrong on some of those counts.)
Here’s one of your quotes. I’ll highlight an important word for you.
“When performing calculations, the results will generally be no more accurate than the accuracy of the initial measurements.”
The rest of that chapter goes on to talk about multiplying or adding two numbers or so. Nothing about the uncertainty of a sample of thousands of independent values. Nothing about averaging at all.
The Britannica article makes no mention of averages, but does include the standard guidelines for division.
I’ve pointed this out to you before, but this rule implies that if you have, say, a sum of 100 values given to one decimal place, e.g. 1234.5, then dividing by an exact number like 100, which has an infinite number of significant figures, gives an answer with the same number of significant figures as the sum, i.e. 12.345.
(Note, I don’t think this is a sensible number of digits, because the rules for adding are wrong. They don’t take into account the way uncertainty propagates when adding. But that’s another problem.)
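Since the point in dispute is whether the mean of many low-resolution readings can legitimately carry more digits than any single reading, a quick simulation illustrates what the statistics say (synthetic numbers, nothing to do with any temperature dataset):

import numpy as np

rng = np.random.default_rng(42)
true_value, sigma, n, trials = 12.3417, 0.1, 100, 10_000

# each trial: 100 noisy readings recorded to one decimal place, then averaged
readings = np.round(true_value + rng.normal(0, sigma, (trials, n)), 1)
means = readings.mean(axis=1)

print(readings.std())   # spread of individual recorded readings, roughly 0.1
print(means.std())      # spread of the 100-reading means, roughly 0.01
print(means.mean())     # the means cluster near 12.34, beyond the 0.1 resolution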
The Wiki page has the same guidelines (not rules), and also neglects to mention the special case of taking an average.
Will you at least comment on whether you agree or disagree with the two small examples from Taylor?
HAHAHAHAHAHAHAHAHAHAHAHAAH
In your dreams, Stokesboi.
We seem to agree that the number of decimal places is not equal to the number of significant digits. Ergo there is no issue in showing uncertainties to more decimal places than what was measured. If we measured 0.5 and showed the uncertainty as 0.0134298734, that would be an issue. But I don’t see anyone doing that.
The issue is this: if multiple measurements with resolution in the tenths digit gave an average of 0.5352 with an uncertainty of ±0.0134298734, what is the correct depiction of what was measured?
Significant digits would require reporting the average as 0.5 and the uncertainty as ±0.01, that is, 0.5 ±0.01. This would give a range of 0.49 – 0.51.
Seeing a measurement with one decimal digit yet with an uncertainty in the 1/100ths digit would be unusual and I would question your use of a measuring device.
Example: (using NIST TN 1900 Ex. 2)
readings of 0.6, 0.3, 0.4, 0.5, 0.8, 0.7
mean μ = 0.55 (will use this for interim calcs)
sum of squared deviations = (0.6 – 0.55)² + (0.3 – 0.55)² + (0.4 – 0.55)² + (0.5 – 0.55)² + (0.8 – 0.55)² + (0.7 – 0.55)² = 0.175
variance s² = 0.175 / (6 – 1) = 0.035
standard deviation s = √0.035 ≈ 0.19
standard uncertainty of the mean u = 0.19 / √6 ≈ 0.076
t Table k factor for DOF 5 => 2.571
expanded experimental uncertainty = 0.076 × 2.571 ≈ 0.2
So the measurement would be stated as:
0.6 ±0.2
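For anyone who wants to check that arithmetic, here is a short Python sketch of the same recipe (a Type A evaluation with a Student-t coverage factor, in the style of the TN 1900 example; the six readings are the ones above, and the 0.6 ±0.2 is simply these values rounded to the resolution of the readings):

import numpy as np
from scipy import stats

readings = np.array([0.6, 0.3, 0.4, 0.5, 0.8, 0.7])

n = readings.size
mean = readings.mean()              # 0.55
s = readings.std(ddof=1)            # sample standard deviation, ~0.19
u = s / np.sqrt(n)                  # standard uncertainty of the mean, ~0.076
k = stats.t.ppf(0.975, df=n - 1)    # 95 % coverage factor for 5 DOF, ~2.571
U = k * u                           # expanded uncertainty, ~0.20

print(f"{mean:.2f} +/- {U:.2f}")    # 0.55 +/- 0.20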
“Significant digits would require reporting the average as 0.5 and the uncertainty as ±0.01, that is, 0.5 ±0.01. This would give a range of 0.49 – 0.51. ”
Can you not see how misleading that is? If the average is 0.535 with an uncertainty of ±0.01, you know the average is very unlikely to be as low as 0.49, or even 0.5. But it is much more likely to be 0.53, and could even be 0.55, well outside your uncertainty interval.
He won’t.
The only thing that’s changed since 1980 is the development of MS Excel, and the ability of people who can’t operate an abacus to screw around with it.
Furthermore, modern temperature measuring equipment is now so accurate that it creates its own problems (see Jennifer Marohasy’s objection to the Australian BOM’s policy on that particular subject), and although I’m largely mathematically illiterate, even I know all measurements come with a margin of error, none of which ever seems to be included in ‘official’ press releases.
If a warm breeze passes over a digital thermometer it will record that as a high temperature. It might last one second, but it’s still recorded, whilst the rest of the day may have been well below that momentary spike. As I understand it, this is the basis of Jennifer’s objection to the BOM’s policy on temperature recordings: they take second-by-second readings instead of allowing a time lapse (I explain this very badly, sorry) to compensate for temperature spikes.
Traditional mercury thermometers in a Stevenson screen naturally compensate for these anomalies by being slow to react.
In other words, the more accurate technology has become, the more temperatures have risen. Whilst this might not explain all warming that may have taken place over the last ~125 years, it likely explains much of it, and the deliberate and persistent elimination of margins of error might explain yet more.
I don’t often play devil’s advocate, HotScot, but wouldn’t the mechanism you suggest also produce a downward bias in daily minimums? Is there any evidence of this?
I don’t think there would be a downward bias on minimums, because that same inertia would also ignore any temporary breeze that dropped temps for a little bit but shouldn’t count as the real measurement for the day; the hyper-sensitive electronic temp probes, however, would record it. I think there is some agreed-upon averaging standard to avoid that situation, which Australia refused to implement, another thing Marohasy pointed out.
Wow, the climate science is settled, yet Australia doesn’t even agree on how temps should be measured!
I believe there’s a TMax and TMin for any daily temperature, but as the objective is to push the global warming mantra you’re never going to be told what the TMin is.
As has been very well illustrated by WUWT over the years, global temperatures are an excruciatingly complicated subject with lots of mathematical and scientific understanding required to even get close to the truth.
Therein lies the problem. 90%+++++ of the world’s population do not have a higher scientific qualification, so they just don’t understand the complexities.
The alarmists/greens twigged to this 50 years ago and adopted propaganda which everyone understands.
In addition to (Tmax+Tmin)/2 the Tmin and Tmax are also available both on a station level and on a global average level.
https://berkeleyearth.org/data/
The narrative-compliant Guardian would have you believe
“Accompanying the rise in temperature is increasing ocean acidification from carbon dioxide levels in the water, which eats away at shellfish, leading to smaller, less healthy creatures.”
https://www.theguardian.com/world/2023/jun/10/new-zealands-warming-seas-threaten-maori-food-sources-relied-on-for-generations
Can anyone explain how an ocean pH >8 eats away at shellfish?
By telling lies lies lies and even more lies.
They talk about ‘The Ocean’ and we see a picture of Danny Paruru on the bank/shore of a really tiny ocean – we can see both sides of it in the one photo.
They do actually say in the smaller print that he’s on the banks of an estuary and it is there where he harvests these shrivelling clams and other aquatic bugs.
But estuaries are not = oceans – the chemistry of the water is a world apart.
Mostly it’s fresh water and in an estuary that size, the water flow from the river feeding into it will be glacial – it’ll move 2 yards out when the tide goes out but come back a yard 6 hours later when the tide returns.
And it will have the exact same issue as why Scandinavians put ground Limestone into their rivers and lakes.
No, it wasn’t because of Acid Rain from UK power stations – it was because of:
The New Zealanders did it all to their very own selves but such is the great beauty of Climate Change – you pass the buck onto everybody else.
The Thames is between 7.6 and 8.1
Any hydrochloric acid rivers round your way?
From the Grauniad article, regarding dropping numbers of whitebait (emphasis mine):
So … they used to catch all they could, far more than they needed, and use them as manure.
And from Nat Geo New Zealand, December 2017:
And now they’re shocked, shocked I say, that numbers are dropping, and it’s all the fault of climate change?
Medice, cura te ipsum! (Physician, heal thyself!)
w.
It doesn’t. But 3 1/2 decades of unlimited alarmism about human-induced climate change clearly eats away at brains.
Actually, they are just making excuses for those who really do the eating.
Nice article. Isn’t the adiabatic rate 1.0 C per 1,000 meters not 100 meters?
6ºC/km more or less, so about 1ºC in 150 meters. That’s where the warming from a doubling in CO2 comes from, a 150 m rise in the average emission height. Then 2 extra ºC from feedback.
Javier, that’s the environmental lapse rate … but when you’re walking between floors in a building, the relevant rate is the dry adiabatic lapse rate.
w.
From NOAA:
I usually abbreviate it to 1°C / 100m.
w.
Willis
5.5°F per 1000 ft or 9.8°C per km.
I usually abbreviate it to 1°C/km.
Should that be 10°C/km?
10C/km. You dropped a zero.
I don’t get: “I usually abbreviate it to 1°C/km.”
I thought that was a mistake but indeed it is 0.98 degrees C per 100 meters. I’ll be more careful before making a comment and then sticking my foot in my mouth.
Grrrr … I hate typos.
Fixed, thanks for the input.
w.
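For anyone juggling the unit conversions in this exchange, the arithmetic is a few plain lines of Python (nothing here but unit conversion of the 9.8 °C/km figure):

dalr_c_per_km = 9.8                         # dry adiabatic lapse rate, deg C per km

c_per_100m = dalr_c_per_km / 10.0           # 0.98 C per 100 m
c_per_1000ft = dalr_c_per_km * 0.3048       # ~2.99 C per 1000 ft
f_per_1000ft = c_per_1000ft * 1.8           # ~5.4 F per 1000 ft (often quoted as 5.5)

print(c_per_100m, round(c_per_1000ft, 2), round(f_per_1000ft, 2))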
No cause for alarm at present. The main argument I’ve seen for the high temperatures is a low level of dust from Africa.
But I’m not sure what lessons we are meant to take from these non-anomaly graphs. Of course temperatures aren’t as warm as they are at the warmest time of the year. The point is they are warmer than they would normally be for this time of the year.
Here’s a chart that shows “actual daily values”, not anomalies.
https://climatereanalyzer.org/clim/sst_daily/
It shows the peak of global (60S – 60N) temperatures at the start of April at 21.1°C whilst the peak in 2016 was 21.0°C. Current temperatures are still 20.9°C, whilst at the same time of year in 2016 it was 20.7°C
The normal keeps changing as the planet warms. This is pretty obvious and I don’t see what is worrisome about the planet’s warming or cooling. It is always doing one of the two. Personally, I prefer glaciers getting shorter than getting longer.
Things to expect when the planet is warming:
-higher temperatures
-higher sea level
-less sea ice
Is the planet warming?
Yes
Do we know why it is warming?
Some people think they do, but they lack the evidence to prove it. Their hypothesis has more holes than Swiss cheese.
Willis,
your last two sentences are mutually contradictory. You say that
“And again, there’s nothing out of the ordinary. In fact, maximum North Atlantic temperatures have been pretty steady since 2010. All that’s happening is that, like the global ocean, this year it’s warming earlier than usual.”
Yet looking at the graphs, “warming earlier than usual” is completely out of the ordinary. When has there been such warming this early in the year? In Fig. 3 the temperature appears to be about 0.6 degrees higher for early June than at any time in the past, which, given the heat capacity of the oceans, means there is an extraordinary amount of extra heat sloshing around. How is this normal?
Izaak, you say “at any time in the past” when you mean “in the last forty years”, which is pretty hilarious …
I’ve clarified my text to avoid your misunderstanding of my point.
Thanks.
w.
Anomalies can cause mischief, and not just concerning El Niño. The mandatory CMIPx 30 year hindcasts are reported as anomalies, from which IPCC wants you to conclude they all do a fairly good job reproducing history. But when these hindcasts are reported as actual temperatures, the models disagree by up to about +/- 3C—after having been tuned to best hindcast. They are in profound disagreement amongst themselves. Was covered in essay ‘Models all the way down’ in ebook Blowing Smoke.
“after having been tuned to best hindcast.”
They are not tuned to best hindcast.
If they were, of course, they would not have such discrepancies.
“not tuned to hindcast”???
GET REAL! That’s the most outrageous thing you’ve said in a while. Everyone knows they’re tuned to hindcast.
w.
http://wattsupwiththat.com/2013/10/01/dr-kiehls-paradox/
Nick is right. It is well known that the free parameters of a model are tuned to make the absolute values best match observations. The anomalies, however, are formed from the absolute values on a per-month basis, no different from how it is done anywhere else. The anomalies are consistent with the absolute values. Not only are the anomalies not used to tune the free parameters, but there is no 3 C discrepancy that I can find. You can verify this via the KNMI Climate Explorer.
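For readers unfamiliar with how such anomalies are formed, here is a minimal numpy sketch of the per-month construction described above (the numbers are synthetic, not CMIP output):

import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1979, 2023)

# hypothetical absolute monthly-mean temperatures, shape (years, 12)
seasonal = 3.5 * np.sin(2 * np.pi * np.arange(12) / 12)
trend = 0.015 * (years - years[0])[:, None]
absolute = 287.0 + seasonal + trend + rng.normal(0, 0.1, (len(years), 12))

# per-calendar-month climatology over a 1991-2020 baseline
base = (years >= 1991) & (years <= 2020)
climatology = absolute[base].mean(axis=0)

# anomalies: the seasonal cycle and the absolute baseline both drop out
anomaly = absolute - climatology

This is why two models with quite different absolute baselines can still look alike once plotted as anomalies.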
Your link dead ends on me. But not the one you included a few comments down. I’ll look at that.
Fixed: https://climexp.knmi.nl/
Rud is referring to the 3ºC difference in the temperature of the planet between different models as reported in:
Hawkins, E. and Sutton, R., 2016. Connecting climate model projections of global temperature change with the real world. Bulletin of the American Meteorological Society, 97(6), pp.963-980.
Figure 1:
The difference between the Holocene and the Last Glacial Maximum is 6ºC, so half of that is a huge difference.
When they use anomalies the difference is hidden from view.
Ed Hawkins says the cold models and the warm models respond the same to the same forcing, which is in itself pretty astounding. It appears the temperature of the planet is absolutely irrelevant.
Oh, got it. Rud is talking about the range of the baselines of the individual members. I got a totally different interpretation. I thought he was saying that there was a discrepancy between a specific member’s absolute value and its anomaly. The coincidental factor here is that CMIP5 members show about a 3 C range in absolute values due to the seasonal cycle, while the anomaly is stable since it removes that cycle. Obviously the members are not tuned to the anomalies, because if they were you wouldn’t see the seasonal cycle in their output.
Anyway, yeah, the differences in baselines are well known. I don’t see a 3 C range when doing the one-member-per-model comparison, though. That makes me think it is the perturbed members within the model type accounting for the range here. That would be consistent with what Hawkins & Sutton said about the MPI-ESM-LR, where the modelers intentionally split the baselines to see if it has an effect on the warming trend.
The difference is not due to the seasonal cycle. Different models believe the planet is at different temperatures. Figure 1 above is a display of annual averages.
The rest of what you say doesn’t make sense. A serious problem does not diminish just because it is known and not given importance. And the range is due to the spread of all the models, not just a few rogue ones.
The model climate is to real climate as the Sims are to real people.
I’m not saying that difference in the baseline is due to the seasonal cycle. I’m saying that I recognize that it is not.
I think I found the two extremes.
MIROC5: https://climexp.knmi.nl/data/icmip5_tas_Amon_one_piControl_0-360E_-90-90N_n_032.dat
IPSL-CM5A-LR: https://climexp.knmi.nl/data/icmip5_tas_Amon_one_piControl_0-360E_-90-90N_n_029.dat
I think this shows that although free parameters may be tuned to match the changes in absolute values, in regard to the seasonal cycle and the trajectory, they clearly can’t be tuned to the baseline. I think that is what Nick was actually saying. I think I interpreted both Rud’s and Nick’s posts in a way neither intended.
And while model members within a model may have had their baselines intentionally split (eg MPI-ESM-LR) there is clearly a difference between the models as well. I think Hawkins & Sutton are showing the ensemble mean for each of the models.
It seems like the simple solution is to have modelers calibrate their baselines to reanalysis. You’d still want to perturb the individual members within the model to represent the uncertainty of the absolute temperatures represented by reanalysis, but the model mean should be congruent with the mean of reanalysis through the overlap period.
Averaging the output of these climate games is total nonsense.
Can you imagine an artillery strike using several programs with varying points of impact averaged to get the correct aiming info? Or how about aiming a surface-to-air missile using the average of multiple models?
They have a bad case of Average On The Brain.
Nick, read the essay and check footnotes 7 and 8. Dr. Judy Curry first made this observation with respect to CMIP5 in 2013. CMIP6 is no different.
And see my long ago guest post here ‘The Trouble with Models.’ It describes and gives official examples of the two different tuning processes used.
I searched for the essay. It appears it costs $10. What am I missing?
Where is the actual evidence that models are tuned to best hindcast? They are not.
As I said, if they were, they would get a consistent version of absolute temperature.
Technically I actually agree with you here.
If I was being pedantic (perish the thought …) I agree that Rud should not have phrased his objection using those precise words.
However …
From the AR6 WG-I assessment report, FAQ 3.3, “Are Climate Models Improving?”, on page 519 :
My understanding (which is probably a massive over-simplification !) is that the outputs of each “Model + Parameterisation + Input forcing scenario” combination are subjected to post-run verification, and anything in the hindcast period that is “too far” from observations will result in that individual model run being declared “not physically realistic”, and dropped from inclusion in the “model ensemble” calculations.
This is more of a “post-run filtering” than a “pre-run tuning” of the climate models.
NB : Looking at other people’s reactions to your posts it would appear that the “exclusion thresholds” for GMST that were used for both CMIP5 (AR5, 2013) and CMIP6 (AR6, 2021) are something like “(absolute ?) measured temperature ± 1.75°C”.
Attached to this post is a copy of FAQ 3.3, Figure 1, which can be found on page 520 of AR6 WG-I.
Let’s approach this using a (massively distorted …) version of The Scientific Method (TSM).
Conjecture 1 : From CMIP3 (TAR, 2001) to CMIP6 (AR6, 2021) model run selection — for inclusion in or exclusion from the “ensemble mean” — has been optimised to improve GMST correlations.
Conjecture 2 : Model run selection has been optimised to improve precipitation correlations.
Conjecture 3 : Model run selection has been optimised to improve surface pressure correlations.
…
After taking the time to check my idle musings, please return and tell me (and everyone else) which of those “scientific” conjectures is most “consistent with” the IPCC’s latest assessment?
At the time of posting the very last comment on this page is from “bigoilbob”, and includes the following question :
I have the same request.
I obviously wasn’t one of your downvotes since I never downvote anyone. And I especially would not downvote any of your posts since they are always high quality and informative. Your graphs are top notch as well.
I don’t downvote (or upvote) anyone either, I don’t see the point … I guess my brain is [ / our brains are ? 😉 ] just wired differently from those of “normal” [ ! ] people.
The genuine cause of bafflement for me is that someone has to actually go through the rigmarole of logging in before they can “vote” at all.
“Drive-by down-voting”, with zero explanation of the precise detail(s) that they found so objectionable that they simply “had to” login and click the minus button(s) — given that I am neither omniscient nor psychic nor able to read minds at a distance — is not just “odd” to my way of thinking, it’s downright rude !
I have had the occasional “bad hair day” reaction in the past, maybe you missed those …
Libreoffice Calc plus my “screenshot” application are OK, but I wouldn’t go as far as to call the results “top notch”. To me that level is reserved for whatever graphics packages Willis et al use …
Just what do you think the individual CliSciFi models are tuned to, Nick?
What Rud said.
w.
Help me out. When I download the data from the KNMI Climate Explorer how do I see the 3 C disagreement between anomalies and absolutes?
3ºC disagreement in the absolute temperature between models. It was in CMIP5, it is still in CMIP6. When using anomalies the difference is 0ºC for the chosen baseline.
One more of the many fishy things that plague model-based climate studies. As a scientist myself, I don’t understand why they call that a science. Climate studies should be right there with social studies. They use mathematics and statistics, but they don’t apply the scientific method.
These appear to be the extremes.
MIROC5: https://climexp.knmi.nl/data/icmip5_tas_Amon_one_piControl_0-360E_-90-90N_n_032.dat
IPSL-CM5A-LR: https://climexp.knmi.nl/data/icmip5_tas_Amon_one_piControl_0-360E_-90-90N_n_029.dat
That the sea surface temperature is so warm so early points in my opinion to the heat for the El Niño being extracted ahead of time and it could result in the 2023-24 El Niño being weaker than otherwise. That is my prediction. A normal Niño, not strong, not weak.
Interesting thought, Javier. Let’s keep an eye on that.
w.
Here’s a GIF animation with the last 90 days of SST anomalies. We can see the Northern Atlantic warming up in late May and early June.
You can also see the El Niño forming at Peru’s coast and starting to spread westwards.
Your senseless point illustrates that you have no idea what is happening right now, so let’s bring you up to speed. What is happening right now? Basin-wide high-TSI warming has warmed and is warming the ocean, driving El Niño conditions, after TSI exceeded my decadal threshold.
In the present situation a common denominator is driving both.
As the sub-solar point nears the annual northern solstice happening in one week, anomalously high ocean anomalies are being driven by high solar cycle activity (sunspots/TSI).
https://www.timeanddate.com/scripts/sunmap.php?iso=20230615T1500
The strength of this building El Niño solely depends on the continuation of high TSI>threshold.
In one week, after the sub-solar point starts traveling south again, the tropics will continue to absorb more high TSI on its way to the perihelion, when, if TSI still remains as high as it is now, the El Niño will continue to strengthen as global ocean and land temperatures spike.
The source of the heat, higher solar activity, is going to drive the ocean warming beyond the 1.5°C ‘limit’ within a year after the solar maximum, up to two years away.
Javier Vinos:
Utter nonsense!
The heat for an El Nino is NOT extracted from the ocean, it is supplied by increased global temperatures due to decreased levels of dimming SO2 aerosols in the atmosphere.
The global temperature increase precedes the formation of an El Nino, so that it, of itself, does not increase global temperatures.
Regarding your prediction, you will find that this is NOT a normal El Nino.
Apart from:
……do we, or anybody in fact, have any other viable (thermodynamically correct) mechanism by which the water is warming?
Yes. Figure 4 is the most revealing. The maximum temperature, occurring in August, is trending up faster than the minimum temperature, occurring in February.
The temperature increase over water in August is due to the increasing solar intensity over the NH in June, which causes the northern land mass to warm up in July, reducing ocean advection through August and resulting in lower ocean evaporation and a warmer surface. The NH summer water cycle is slowing down, but there is more water in the atmosphere by the beginning of September – essentially saturated over a warmer ocean surface.
The June peak solar intensity at 40N was 477.1W/m^2 1700 years ago. It is now at 479.2W/m^2 on its way to peak at 502.2W/m^2 in 7,800 years.
Right now, only the Indian Ocean north of the Equator gives a glimpse of what the future holds. It has already experienced two intense convective cooling events that put massive amounts of moisture into the atmosphere. Similar convective storms will be a more common feature of the North Atlantic later in the year as the peak solar intensity moves northward.
The North Atlantic is a warm body of water but its widest point is around 23N not the Equator. The Mediterranean is a also a large solar panel around 35N that contributes heat to the North Atlantic.
Summer warming of the NH is in very early stage of the cycle 11,000 year warming cycle. Ocean time constants are hundreds to thousands of years and have only turned the corner from cooling to warming. The December solar input at 60S peaked 3,800 years ago and we are only now observing a mild cooling trend there.
These dramatic changes in seasonal solar intensity are ignored by climate modellers. They want us to focus on some imaginary CO2 forcing being stored in the oceans over hundreds of years but completely ignore the real seasonal changes and their impact on ocean heat.
That extra 2 W/m2 is in the same ballpark as what’s blamed on the atmospheric fertilizer.
Glad to hear the warming will continue for several millennia – but I thought there was a ~1000yr cycle to temps (For example the Roman, medieval and present warm periods)
The climate optimum in the NH for the current interglacial peaked around 8,000 years ago about 3C warmer than present. It has been downhill since then. The shorter period cycles have lower temperature swing.
Bob Weber:
You are correct in all that you are saying, except that the warming is NOT being driven by sunspots/TSI .
This El Nino is largely man-made, being caused by decreased levels of SO2 aerosols in our atmosphere due to the Net-Zero abandonment of the burning of fossil fuels (and their SO2 aerosol emissions) and reduced global industrial activity.
The fallout of the volcanic SO2 aerosols from the Dec 20, 2021 underwater Tonga eruption may also be beginning, although it normally takes 2 years or more.
The contribution of North Atlantic SST to global ocean temps is not coming back any time soon. That is in the context of a longer-term AMO ocean cycle that is turning down now as seen in the peaks.
NOAA SST-NorthAtlantic GlobalMonthlyTempSince1979 With37monthRunningAverage.gif (880×481) (wp.com)
Please can you send some satellites up before 1979 so that we can see a bit more of the graph.
ARGO is your best bet.
Check out Job One for Humanity and their take on this.
Taking the trends of maximum and minimum temperatures in Figure 4 is more revealing than the average. You will see the maximum, occurring in August, is rising substantially faster than the minimum. This is why the snowfall in the northern hemisphere is trending upward. More water in the atmosphere by September ends up over northern land in October as the land is cooling below freezing resulting in snowfall trending strongly upward in November. The increased advection continues through December and January.
The summer solar intensity over the northern hemisphere has been increasing for almost 2000 years. The surface temperature has been warming now for about 500 years. This process is accelerating.
So far only the northern Indian Ocean is reaching its maximum potential with both Arabian Sea and Bay of Bengal reaching or overshooting the 30C limit in May:
https://earth.nullschool.net/#2023/05/10/0000Z/ocean/primary/waves/overlay=sea_surface_temp/orthographic=-292.93,10.55,336/loc=64.336,14.462
Both these regions have already experienced rapid convective cooling since the start of May.
Reynolds OISST chart attached showing Indian Ocean temperature north of Equator.
The ocean south of 40S has no trend over the satellite era, and south of 45S it is cooling.
All this is consistent with the peak solar intensity moving northward and the long lag times of the oceans. We are witnessing the end of the modern interglacial:
Heat_Ice_Stores.pdf
These changes being observed now will accelerate. The ocean surface level rebound has not yet ended. A much larger portion of the North Atlantic will hit the 30C limit before the ice begins to accumulate again. During the peak of glaciation, the sea level will fall 4mm/year. That rate of sea level fall requires 16mm gain in altitude across all the land north of 40N. So far only Greenland is showing substantial gain in elevation and calving is still overtaking accumulation.
Willis: “The 2023 Thermageddon Festival is canceled, and there will be no ticket refunds”
WR: Great. After a great evening in the Netherlands (Amsterdam) attending a presentation by Marcel Crok on the Clintel critique of the latest IPCC AR6 report, where your name was mentioned in a very positive way, this is a good sentence to end the day on before heading to bed.
If up to me: all Thermageddon Festivals have to be canceled.
Willis,
Thanks again for another much-needed overview. You mention uncertainty.
Most estimates of uncertainty/confidence limits/accuracy in the Earth Sciences are useless, or worse than useless when they mislead.
This is because a correct calculation of measurement uncertainty requires identification of all confounding factors – constants or variables that affect the measurement – plus a detailed understanding of the size of these factors.
This knowledge is seldom available, so other approaches are common. One is to reduce the confounding factors by experiment. For measuring water temperature, we can create laboratory conditions that remove or minimise confounding factors. Many countries have bodies that do this type of work to estimate the best measurement possible at the time. In the UK, staff at the National Physical Laboratory, Teddington, gave me this estimate of “best” water temperature measurement ability in year 2019:
NPL has a water bath in which the temperature is controlled to 0.001 °C, and our measurement capability for calibrations in the bath in the range up to 100 °C is 0.005 °C.
This summary over-simplifies a more complex topic, but it indicates that Ocean temperatures are not likely to be measured more accurately than this, because of many known confounding variables, such as clouds over the water.
What is the claimed performance of Argo floats? See for example:
https://www.frontiersin.org/articles/10.3389/fmars.2020.00700/full
The accuracies of the float data have been assessed by comparison with high-quality shipboard measurements, and are concluded to be 0.002°C for temperature, 2.4 dbar for pressure, and 0.01 PSS-78 for salinity, after delayed-mode adjustments.
It follows that claims of accuracy of Argo float temperature measurements of 0.002 °C are no more than pop-science poppycock. Sadly, the 104 authors of this 2020 paper must be severally unaware of their juvenile errors in science and statistics. They seek legitimacy using a multi-author approach perhaps derived from an invalid interpretation of the Law of Large Numbers. In this case, the authors are misleading readers by writing of Dodgson’s fantasy as if it was reality.
“Alice laughed: “There’s no use trying,” she said; “one can’t believe impossible things.” “I daresay you haven’t had much practice,” said the Queen. “When I was younger, I always did it for half an hour a day. Why, sometimes I’ve believed as many as six impossible things before breakfast.”
WUWT blog has hosted many discussions of uncertainty. Here is the last of a series of 3 from late 2022:
Uncertainty Of Measurement of Routine Temperatures–Part Three • Watts Up With That?
My main conclusion from the comments of readers is that there are two broad approaches, even two schools of thought, about what uncertainty is. One school is close to statistics based on clean numbers. For example, if one used the NPL water bath above and took temperatures each minute for a day or two, one could determine statistics like standard deviations on which a numerical uncertainty could be claimed. The other school of thought asks, “What about the confounding factors in real settings, like clouds over the oceans, about which we know little?” The uncertainty estimates between these schools of thought can be magnitudes apart in the Earth Sciences.
Willis notes about the data used in the present article:
“The NOAA 1/4° Daily Optimum Interpolation Sea Surface Temperature (OISST) is a long term Climate Data Record that incorporates observations from different platforms (satellites, ships, buoys and Argo floats) into a regular global grid. The dataset is interpolated to fill gaps on the grid and create a spatially complete map of sea surface temperature. Satellite and ship observations are referenced to buoys to compensate for platform differences and sensor biases.”
The school of thought concerned with confounding factors might be expected to say: “You cannot combine measurement types, like ships, buoys, and satellites, into statistical estimates of uncertainty (or confidence or accuracy) because you violate basic statistics principles. Statistics takes a sample from a population to represent it. What is your population here? You have several. You can combine, for example, factors that are IID (Independent and Identically Distributed), but these various platforms are not IID and are so far apart that they must be treated separately. Then, there is the problem of how to treat invented numbers.”
I rest my case. Geoff S
Hip Hip Hooray!
“pop-science poppycock” love it!!!
Scientists can’t see the ocean for thermometers. No matter how good their device is, they cannot claim that the whole chunk of ocean in the measured grid cell – square kilometres of ocean – is accurately measured down to microdegrees C.
It’s the rest of the paragraph that defines the climate preference for ‘pop science poppycock’.
104 authors, 47 institutions to boost their 20 years of Argo.
“Climate Scientists” work hard to create this stuff and get paid for doing it! Then Willis comes along and busts their bubble for free!
Thank you Sir.
Willis, thanks for evaluating the data. Here’s the UAH air temperatures above oceans only for comparison. From Ron Clutz.
Ref: “This cooling goes by the fancy name of the “adiabatic lapse rate”. In general, it cools about 1°C for every 100 meters in altitude.”
My comment: You reference the lapse rate for dry air. My recollection from my flying days is that the standard atmosphere lapse rate is 2degC per thousand feet from sea level to about 35,000 feet. That equates to about 1degC per 500 feet. You are using about 1degC per 328 feet. The difference is the moisture content in the real atmosphere. Not a huge issue but a real (easily measurable) difference. I recognize using the dry air lapse rate gives the alarmist the most benefit of the doubt. A standard atmosphere reduces the temperature difference by about 50% over an elevation change below 35,000ft when compared to a dry atmosphere.
Still is a nice article. I appreciate the work you put into this field.
Bob, the term “dry” adiabatic lapse rate does not mean that the air is dry.
It simply means that the moisture in the air is not condensing. If it’s condensing, that’s called the “wet” adiabatic lapse rate.
What you are describing is the “environmental” lapse rate, which allows for the fact that there are likely to be clouds in the mix, so the result is somewhere between the dry and the wet lapse rates. It’s the most reasonable assumption for things like flying.
Since there are no clouds in the mix when I walk down a flight of stairs, the appropriate lapse rate is the dry lapse rate.
My best to you,
w.
My concept of El Niño has more to do with the release of the Western Pacific Warm Pool and the weather effects that follow.
The emphasis on sea surface temperatures seems misleading because that surge of warm water moving eastward across the Central Pacific Ocean, and pushing other water around, will seem like elevator music when serious weather events happen. Folks should get ready for AC/DC.
A book by Madeleine Nash, “El Niño: Unlocking the Secrets of the Master Weather-Maker”, tells of the weather events of the ’97-’98 season. She starts with a description of events in Rio Nido, CA. It is a good read.
She does get a few things mixed in that need not be there.
You have to wonder if these “scientists” ever stop to admit, even to themselves, that they are INTENTIONALLY misleading the public. And they do it time and again, without ever having any solid evidence that is realistic. They have to know 0.04C is a meaningless increase. You would think they would start to question whether they are on the right side of this issue.
Willis:
I do not agree with you that there is nothing to worry about at this time!
In my article “The Definitive Cause of La Nina and El Nino Events” I found that ALL El Ninos (for the 72-year 1950-2022 period analyzed) coincided with a decrease in atmospheric SO2 aerosol levels.
Four causes for the decreased SO2 aerosol levels were identified:
For (1), global Industrial activity is slowing down, reducing the number of SO2 aerosols in the air.
For (2), Net-Zero requires the abandonment of the burning of fossil fuels (which also produce SO2 aerosols), thus reducing their amount in the atmosphere.
For (4), SO2 aerosols from the underwater Tonga eruption of Dec 20, 2021, are just now beginning to settle out.
Since 1900 (and probably always), all El Ninos have been ended by increased levels of SO2 aerosols in the atmosphere, primarily due to volcanic eruptions (2 were caused by increased industrial SO2 aerosol levels).
So, temperatures are rising because of decreased SO2 aerosol levels in the atmosphere, and there is nothing to stop them, barring another large volcanic eruption, which will take about a year to have a significant effect.
It will not take long before it is obvious that recent temperature increases are abnormal. I believe that we should be VERY worried!
Burl, I’ve looked deeply into your claims that SO2 is the secret temperature control knob. I’ve not found the slightest evidence that it’s true.
And I’m not going to try to dispute that with you again. Your mind is made up, and from experience, I know there’s nothing I can say and no facts I can present that will make the slightest difference to your belief.
My best to you,
w.
SO2 is certainly a control knob. It’s just not the control knob.
bdgwx:
THE control knob being what?
In the final analysis, the intensity of the Sun’s rays striking the Earth’s surface is what determines our climate. This intensity is modulated by the amount of SO2 aerosols in our atmosphere.
Without them, temperatures rise to those of the earlier warm periods, when there were very few volcanic eruptions, and the atmosphere was normally free of SO2 aerosols.
There is no single control knob. A lot of agents modulate the planetary energy imbalance. There are even more that modulate the inflow and outflow of energy to/from the atmosphere.
Willis:
?? At this point I cannot recall any facts that you have ever presented that disprove what I have been saying.
Just that you have “looked deeply into my claims and found not the slightest evidence that it is true”. This in spite of the fact that the SO2 from VEI4 and larger volcanic eruptions ALWAYS affects global temperatures, both cooling and warming them!
However, as I mentioned above, it should quickly be obvious that the current temperature increases are abnormal, certainly within this year, if I am correct.
Worst June for southern Ontario in as long as I can remember – where’s that little Spanish kid? We were told to expect him after living with his evil sister for the past few years, but he hasn’t shown his face.
I guess the wildfires, which I blame on environmentalists for blocking forestry and proper arboreal management, have caused the sky to darken, hiding our friend in the sky. I hope all those silver-spoon, trust-fund enviro-nazis in the New York-ish area are enjoying the taste of their victories, taste and smell – literally.
North Atlantic is warming the most.
Strange, as another parameter is also at a peak in the north.
Fram ice export sets record this spring:
https://sites.google.com/view/arctic-sea-ice/home/fram
Is the current more active or has it taken a new path?
Well, another dose of reality from the data meister! I’d like to add that these inconvenient facts totally demolish the CO2-as-control-knob hysteria… Let me ’splain:
Oceans have 1,000 times higher heat capacity than does the atmosphere. Therefore if oceans have been heating up by 0.016 deg C per decade – this cannot be due to the effects of CO2. Thus for the air or components of the air (CO2) to have caused the oceans to heat up this much, the air would have had to contribute 16 deg C rise per decade on average to have “caused” this ocean heating.
Since this is provably false, it demolishes the CO2/warming/thermageddon nonsense.
It’s either the sun via less cloud cover, or geothermal emission into the oceans, or both.
Conversely, assuming the extra heat comes from a solar or geothermal source – with Willis’ emergent thermal regulation theory, this extra heat released to the air from the warmer oceans (which have 1,000 times* the heat capacity) should have raised the air temp by 16 deg C per decade, and since this is not true, the self-regulating system of emergent thermal regulation works rather well!
*(a 0.016 deg C/decade rise in ocean temps equates to the amount of heat which would heat the air by 16 deg C/decade at the 1,000x ratio of respective thermal capacities)
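For reference, the oft-quoted ~1,000x figure is easy to check from round numbers for the full-depth ocean and the whole atmosphere (the masses and heat capacities below are common textbook approximations, not values taken from this dataset):

ocean_mass = 1.4e21        # kg, total ocean (approx.)
cp_seawater = 3990.0       # J/(kg K), approx.
atmos_mass = 5.1e18        # kg, total atmosphere (approx.)
cp_air = 1005.0            # J/(kg K) at constant pressure, approx.

ratio = (ocean_mass * cp_seawater) / (atmos_mass * cp_air)
print(f"ocean/atmosphere heat-capacity ratio ~ {ratio:,.0f}")   # roughly 1,100

Note that this is the bulk-ocean figure; as a reply below points out, the article itself is about the sea surface layer only.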
Or the data is garbage and has been routinely “adjusted” as in the US land data….that remains a plausible reality given how many other areas so called authority has been shown to be willing to fudge the numbers to support a narrative.
This article is about the sea surface temperature; not the bulk ocean temperature.
Thank you to Willis for his insightful post and his reminder of the importance of looking at measured temperatures, and not focussing solely on anomalies.
The Oceanic Niño Index (ONI) is one of the key indicators of variations in the El Niño – Southern Oscillation (ENSO). ONI is the rolling three month average of the sea surface temperature (SST) anomaly in the Niño-3.4 region in the equatorial Pacific Ocean. The SST in the Niño-3.4 region (measured, not the anomaly) has been increasing since 1950 (at least) and is shown in the plot below. The maximum range of SSTs is consistently about 4 degC at any particular period in time, but we see a gradual shift to higher temperatures. Both the warming peaks (El Niño) and the cooling troughs (La Niña) have reflected increased SSTs by about 1 degC which, over 70 years, equates to 0.14C/decade (very much in the same ballpark as UAH and Willis’ Figure 2).
We know that peaks and troughs in the SST anomaly in the Niño-3.4 region are usually reflected in peaks and troughs in global atmospheric temperatures about 4 months later (except where global temperatures are cooled by major volcanic eruptions). Therefore, we would expect this longer term warming trend of the SSTs to also lead to a warming trend in global atmospheric temperatures. In terms of recent changes in SST in the Niño-3.4 region corresponding to ENSO events, there is a clear distinction between a ‘very strong’ El Niño event and all recent La Niña events (only ‘strong’ or lower). The following plot shows the variations in temperature together with the 30-year climatology used to compute anomalies (i.e. the difference between the red and blue lines).
What we see is that following a La Niña event, SSTs always reach their minimum value at, or very close to, the end of the year and always increase back up at least to the bottom of the range covered by the climatology by mid-year (strong La Niña events in 1998-99, 2000-01, 2007-08, 2010-11). In contrast, the two very strong El Niño events (1997-98 and 2015-16) and, to a lesser extent, one moderate El Niño (2009-10) show that the usual cooling part of the annual cycle is missing and the event extends from one warm part of the cycle to the next.
So far, nothing appears to be unusual or diagnostic in the rate of increase in SST in this region so far this year. See, for example, 2016-17 and 2017-18.
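To make the ONI definition above concrete, here is a tiny sketch (the anomaly values are invented for illustration):

import numpy as np

# hypothetical monthly Nino-3.4 SST anomalies, deg C, oldest first
nino34_anom = np.array([-0.4, -0.2, 0.1, 0.4, 0.7, 0.9, 1.1, 1.2, 1.0, 0.8])

# ONI-style value: running three-month mean of the anomaly
oni = np.convolve(nino34_anom, np.ones(3) / 3, mode="valid")
print(np.round(oni, 2))   # NOAA flags El Nino when this holds at or above +0.5 C
                          # for consecutive overlapping three-month seasons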
However, since 1979 the ENSO 3.4 region SST has actually cooled at a rate of -0.05 C/decade. This is interesting because the global SST has increased. This has the effect of attenuating El Nino and amplifying La Nina relative to the global background.
To address the deviation, scientists have developed the RONI index [Oldenborgh et al. 2021].
You previously stated (8 days ago) that “The trend since 1979 is -0.04 C/decade.” Now it’s -0.05 C/decade. New data? Whatever, I responded to you on that point (here), but I’ll provide a summary of my response here. If we were doing this same trend calculation in January 2020, it would have been positive, at +0.03 C/decade. Both estimates are invalid because of the effect of end-point selection on a highly variable parameter. The flip to an apparent negative trend is clearly the result of the recent La Niña events.
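The end-point sensitivity is easy to demonstrate with a synthetic series (zero underlying trend, just ENSO-like swings plus noise; nothing here is the actual ONI data):

import numpy as np

rng = np.random.default_rng(1)
n = 12 * 45                                       # 45 years of monthly values
t = np.arange(n) / 120.0                          # time in decades
x = 0.8 * np.sin(2 * np.pi * np.arange(n) / 44) + rng.normal(0, 0.3, n)

# least-squares trend "since the start" for every end month in the last 5 years
slopes = [np.polyfit(t[:m], x[:m], 1)[0] for m in range(n - 60, n)]
print(f"fitted trend ranges from {min(slopes):+.3f} to {max(slopes):+.3f} per decade")

Even with no trend in the underlying series, the fitted value wanders noticeably as the end month changes.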
More importantly, your statement about cooling is incorrect. The ONI data that you are using are actually based on increasing measured SSTs in the Niño-3.4 region as shown by the moving 30-year climatology. Here are the actual data:
Indeed, the paper that you reference (which does not include the graph shown or define “ONI global”) also agrees with me: “The problem is that the observed Niño3.4 series has a clear trend …” and “… the warming trend since the industrial era has accelerated over the last 50 years”. However, the distinction is that, unlike the authors, I do not make the assumption that the observed warming of SSTs in the Niño-3.4 region is “due to global warming”. The evidence is that increasing SSTs in the Niño-3.4 region lead to an increase in global atmospheric temperatures (after about four months). They also lead to increased rates of growth in atmospheric CO2. The key question is whether or not increased levels of atmospheric CO2 can cause warming of SSTs. I hear plenty of arguments here at WUWT that say this is not possible and I am minded to agree.
Typo…it’s -0.04 C/decade.
The ONI averaged 0.0 from 1979/01 to 2023/04. Yet the SST values cooled at a rate of -0.04 C/decade while the global SST values warmed at a rate of +0.11 C/decade.
Yes. I know.
I made the graph. It is not the RONI value. As such you won’t find it in the publication I linked to. The RONI value compensates for the warming in the tropical band. My graph compensates for the warming globally.
It is not meant to agree or disagree. It is only an interesting topic worthy of discussion.
I’m not making that assumption either. I’m just pointing out that SSTs in the ENSO 3.4 region have cooled since 1979 simultaneously with warming of SSTs globally.
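A sketch of the kind of comparison being described here (the Niño-3.4 anomaly with the broader warming subtracted; the numbers are invented, and note that RONI proper removes the tropical-band mean rather than the global mean):

import numpy as np

# hypothetical monthly SST anomalies, deg C
nino34_anom = np.array([0.6, 0.2, -0.3, -0.8, -0.5, 0.1, 0.9, 1.4, 0.7, -0.1])
global_anom = np.array([0.30, 0.32, 0.31, 0.35, 0.38, 0.36, 0.40, 0.42, 0.41, 0.43])

# relative index: measure El Nino / La Nina against the moving global background
relative_index = nino34_anom - global_anom
print(np.round(relative_index, 2))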
Yep. The opposite is true as well. When SSTs in ENSO 3.4 decrease, it leads to a decrease in global atmospheric temperatures after about four months. I assess the relationship at 0.13 C of UAH TLT change per 1 unit change in ONI.
Yep. The opposite is true as well. When SSTs in ENSO 3.4 decrease it leads to decreased rates of growth of atmospheric CO2.
Yes it can.
Anyway, the point of my post was not to challenge you. It was only to spur a discussion that there could be change in the typical global circulation response due to the relative magnitude of the ENSO against the global backdrop. This is the impetus behind adjusted indexes like RONI. Scientists have observed that global circulation response is not behaving the same way to ENSO cycles as it once did.
I agree that the hyperfocus on the anomalous El Niño warmth is as misguided as the contrarians’ hyperfocus on every single La Niña “pause.” The warming is steady and following projections; that is more than enough cause for concern.
Your attempts to downplay the amount of warming the oceans are experiencing by likening it to our everyday experience of climbing a set of stairs are nonsensical. No human can possibly experience the temperature of the entire ocean, and the amount of energy required to raise the temperature of that body of water by that much is astounding.
Citation(s) please.
Citation(s), and some concrete numbers, please.
“Supporting evidence” please … lots and lots of “supporting evidence” …
https://www.ipcc.ch/assessment-report/ar6/
Please let me know if you should have any specific questions while reading.
In section 9.2.1.1, “Sea Surface Temperature”, on page 1223 you will find just one example of the variability of the trends in SSTs around the world.
NB : “The trend from 1950 to 2019 is positive” does not equal “the current rate of warming is steady“.
There is absolutely nothing whatsoever in the entire 2409 pages of the AR6 WG-I report that is even remotely equivalent to “GMSST warming is steady“.
Question 1) Are you able to provide a page number, and a copy of the exact “citation”, that supports your original bald assertion ?
Question 2) I also asked for “concrete numbers”. Are you unable to provide them (along with page number and citation) ?
Question 3) Which “projections” that the GMSST warming is supposedly “following” are you talking about ?
A long term positive warming trend is the same as a steady warming trend. We are seeing a long term positive warming trend. We don’t need to focus on a single anomalously warm El Nino event to be concerned with the potential impacts of long term warming of the oceans. That’s my point.
Feel free to continue quibbling over semantics, that sounds tiresome to me.
No it isn’t (see attached graph).
Translation : I know I’ve lost the argument, and am completely incapable of providing actual citations and data … so I’ll accuse the other person of “quibbling over semantics” …
I asked for “supporting evidence” for your initial bald assertions.
That isn’t “quibbling”, it’s “curiosity”.
Note that so far your (inferred) answers to my questions are
Answer 1) No, I am not able to provide “citations”
A2) Yes, I am not able to provide “concrete numbers”
A3) I do not have any actual “projections” to discuss / debate
At the time of posting the very last comment on this page is from “bigoilbob”, and includes the following question :
I have the same request (for the second time in one comments section).
While you may be incapable of finding the sections discussing GMSST in the AR6 WG-I report, I do not have that handicap … though my “mild” version of OCD means I couldn’t not check your claims against what the IPCC actually said.
Attached is a “zoomed in” version of Figure 9.3, which can be found on page 1222, with the most recent (almost) linear trend segment highlighted.
The teal “Observational Reanalysis” line, which goes from ~1870 (?) to 2019 (the “cutoff year” for AR6 inputs) shows a “long term trend” leg after ~1995 that actually slowed down from its previous trajectory, despite atmospheric CO2 levels (and the associated radiative forcing) rising “rapidly” from 1950 onwards.
NB : I conclude nothing from this. It is merely an observation.
Please examine the teal line carefully, and then come back to explain … without entering “avoid answering the question” mode … your initial “the warming is steady” claim.
The subject may be “tiresome” to you, but it is “interesting” to me.
For both context and reference, attached is a copy of the entire middle (1850-2100) part of panel a of Figure 9.3.
Spot-on analysis, as always, by Willis. Short version:
The 8 year pause ends as a new El Niño begins. While temperatures will probably remain on their post-1950 warming trend, expect hysterical pointers to new record high temperatures and select regional metrics – such as Antarctic sea ice and N. Atlantic sea surface temperatures.
Follow the trends yourself at the great Climate Reanalyzer website:
https://climatereanalyzer.org/
What do you figure is causing things to warm up earlier than previous years? Does Earths weakened magnetic field or wandering magnetic poles have something to do with it?
“They say there is so far no evidence that the planet has passed some climatic tipping point — though it is also too soon to rule that out.
“It’s a possibility, however small,” said Tianle Yuan, a senior research scientist at the University of Maryland, …
“Regardless of what is behind the spike in ocean temperatures, scientists are on edge about it. On Twitter, viral posts sounding alarm bells have triggered heated debates about the potential causes and whether the rise is reason to panic. …
“Scientists nonetheless agree on this: Conditions are ever ripening for extreme heat waves, droughts, floods and storms, all of which have proven links to ocean warming.”
https://www.washingtonpost.com/weather/2023/06/14/record-warm-ocean-temperatures/
It is a possibility that we are into temperature swings preceding another glaciation, however small!
Which possibility is greater?
”Scientists are on edge about it.” But then, aren’t they always?
“Conditions are ever ripening” for extreme weather. We can’t disagree with that!
Larry,
Thanks for the quotes.
Since when has a scientific issue been settled by viral tweets?
Twitter is where children chatter, I am told. I never use social media.
Geoff S
Over the past decade, science has been pushed aside in the public policy debate about climate change. Now it’s almost purely political. Which means effective propaganda can decide the result.
Twitter and other media have been flooded with climate alarmism in the past week. Current rapid warming and melting here and there, climate-caused wildfires, predictions of accelerated warming (see this from Hansen).
Is this coordinated, perhaps a prelude to the introduction of Emergency Climate legislation – or an Executive Order? Such tactics work well on modern Americans.
The most reliable century-long data records indicate that the global-average estimate of air temperatures reached its lowest level in 1976. The very fact that SSTs are shown here by alarmists only for the subsequent decades manifests their unwillingness to present data in a scientifically objective manner.
I’m interested in seeing this data record. Would you mind posting a link to it?
bdgwx doesn’t seem to see thumbs. But would the -1’er for this comment please acknowledge and justify it?
Australia is supposed to be in an El Nino, but Canberra’s weather is quite cold. I can’t understand it.
Just because winter started two weeks ago…
Presumably sea temperature is not affected by CO2-driven global warming, which warms the air in the first instance. How does this warm the seas, given that longwave radiation does not penetrate the surface layer of the sea? The oceans have a thermal mass 1,000 times that of the atmosphere, so any warming would be unmeasurably small. Global warming does not warm the oceans – right?