By Andy May
The global average SST (Sea Surface Temperature) is a very important component of the global average surface temperature for the simple reason that the global ocean covers almost 71% of Earth's surface. So, we downloaded the gridded SST data from 1850 through 2024 from the Hadley Centre (HadSST v4.1), from NOAA (ERSST v5), and from the ICOADS v3 raw marine data archive, and plotted the data in figure 1.

There are good reasons for the large spread of SST values in figure 1, and we will go through some of them in this post, but the basic question remains: what is the global average SST? The global average surface temperature has supposedly increased by about one degree since 1850, yet the differences between the records plotted are larger than that.
ICOADS v. 3
ICOADS (International Comprehensive Ocean-Atmosphere Data Set) is the ultimate source of nearly all the data plotted in figure 1. Nearly all the original ship, buoy, and other raw data used by the Hadley Centre and by NOAA's ERSST (Extended Reconstructed SST) group comes from ICOADS v. 3. In addition to collecting the raw data, ICOADS provides a simple-mean gridded product of its own, shown in green in figure 1. All the global average temperature data plotted in this post are area weighted. The agencies all use latitude-longitude grids, and while lines of latitude are spaced evenly everywhere, lines of longitude are not: they are about 111 km apart at the equator and converge to zero at the poles, so the area of each grid cell shrinks with the cosine of the latitude. I corrected for this, as sketched below. The ICOADS simple mean plotted in figure 1 is as close to the raw data as you can get.
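For readers who want the arithmetic, here is a minimal sketch of that cosine-latitude weighting in Python. It assumes a 2-D array `sst` ordered [latitude, longitude] with NaN in empty cells, and a vector `lats` of cell-center latitudes in degrees; the names are mine, not any agency's code.

```python
import numpy as np

def area_weighted_mean(sst, lats):
    """Global mean of a lat-lon grid, weighting each cell by the cosine
    of its latitude, since cell area shrinks toward the poles."""
    w = np.cos(np.radians(lats))[:, None]   # one weight per latitude row
    w = np.broadcast_to(w, sst.shape)       # same weight across longitudes
    valid = ~np.isnan(sst)                  # skip cells with no data
    return np.sum(sst[valid] * w[valid]) / np.sum(w[valid])
```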
HadSST v. 4.1
The Hadley Centre provides a lot of information about the steps they take to reach their final global average SST and at several steps they provide an intermediate product (see here). This is very commendable and educational. We downloaded their final “core” temperature anomaly, the unadjusted anomaly, and the “actuals” dataset in degrees C. These are plotted in figure 2.

The process that the Hadley Centre uses to get from the ICOADS raw data plotted in figure 1 to the "core" HadSST product shown in figure 2 is described in several papers (Rayner, et al., 2006; Kennedy, et al., 2011; Kennedy, et al., 2011b; Kennedy, et al., 2019).
Their task is made difficult because their measurement "stations" are constantly moving, except for a few tethered buoys. So, their first step is to construct a reference "climatology" grid; in the Hadley Centre's case this is a global ocean one-degree by one-degree grid. Each usable ocean grid cell must have monthly average values for the reference period of 1961-1990. Most of these values are measured from ships. Some grid cells lacked actual measurements for some months or years of the reference period, and interpolation and some extrapolation, in both time and space, were required to complete the 1961-1990 climatology (Rayner, et al., 2006).
A robust averaging function that caps extreme values, called a winsorized mean, is used to compute the values for each reference and monthly measurement cell (Rayner, et al., 2006); a sketch follows. Later the one-degree grids are combined into the larger five-degree grids that are used to produce the averages shown in this post and the final "core" SST anomaly product plotted in orange in figure 2.
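A winsorized mean does not discard the extreme values; it clamps them to the nearest retained value before averaging. A minimal sketch, assuming a 10% clamp on each tail (the Hadley Centre's actual trim fractions may differ):

```python
import numpy as np

def winsorized_mean(values, frac=0.10):
    """Clamp the lowest and highest `frac` of the values to the nearest
    retained value, then average; outliers are capped, not discarded."""
    x = np.sort(np.asarray(values, dtype=float))
    k = int(frac * len(x))
    if k > 0:
        x[:k] = x[k]         # pull the low tail up
        x[-k:] = x[-k - 1]   # pull the high tail down
    return x.mean()
```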
The Hadley Centre starts with the ICOADS v. 3 raw data shown in figure 1, but excludes some of it, either to hold back for later use as a quality-control check or because they consider it inferior data (Kennedy, et al., 2019). Each usable observation is first placed in its grid cell and converted into an anomaly by subtracting the cell's climatological mean for that calendar month from the monthly average value (Kennedy, et al., 2019), as sketched below. The measurement, of course, generally does not come from the same instrument or platform as the climatology reference value it is compared against.
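In code form the anomaly step is just a subtraction; the dictionary layout below is illustrative only, not the HadSST data structure.

```python
def to_anomaly(obs_temp_c, cell, month, climatology):
    """Monthly-average observation (deg C) minus the cell's 1961-1990
    climatological mean for the same calendar month."""
    return obs_temp_c - climatology[cell][month]

# Example: a July reading of 18.6 C in a cell whose July 1961-1990
# mean is 18.1 C becomes a +0.5 C anomaly.
```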
After the initial anomalies are computed, potential biases are estimated for the individual values based on the data source. A value might be a bucket sample taken over the side of a ship, in which case the type of bucket and its insulation, if any, are taken into account. It might come from a ship's engine cooling-water intake, in which case the location of the thermometer relative to the engine is taken into account, and so on. Using what is known about each measurement, a suite of possible "realizations" of the biases is generated, and the median of these hypothetical biases is used to compute a "bias-corrected" anomaly. The resulting bias-corrected actual temperature values are averaged and shown as an "Actual" temperature in degrees C, in light green, in figure 2. The raw anomalies, without any bias correction, are shown as a dark blue line. It is interesting that the bias-corrected actual temperature has a different shape than either the final core anomaly or the uncorrected anomaly. I'm not sure what to make of that.
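Schematically, and with a made-up bias range standing in for the published bias models, the median-of-realizations idea looks like this:

```python
import numpy as np

rng = np.random.default_rng(0)

def bias_corrected(obs_temp_c, bias_low, bias_high, n=200):
    """Draw many plausible bias values for one observation and correct
    it using the median of the ensemble (sketch only)."""
    realizations = rng.uniform(bias_low, bias_high, n)
    return obs_temp_c - np.median(realizations)

# Example: an uninsulated-bucket reading believed to read 0.1 to 0.5 C
# cold (bias -0.5 to -0.1 C): bias_corrected(18.2, -0.5, -0.1) adds
# back roughly 0.3 C.
```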
Since most of the measurements used to compute SST are moving (ships and drifting buoys) and they all have different biases and measurement methods that change over time, building a coherent and consistent SST global average temperature record is a challenge.
ERSST v. 5
The most heavily processed estimate of global ocean temperatures is the ERSST v. 5 reconstruction. We don't have intermediate data like we have for the HadSST reconstruction, but we have Boyin Huang, et al.'s description of the process (Huang, et al., 2017). Like the HadSST process, ERSST v. 5 starts with the ICOADS v. 3 dataset. The ERSST team then validates the observations, discarding those that fail their quality-control checks, bias-corrects the data, cross-checks observations against their neighbors, and excludes outliers. Unlike the HadSST team, which uses ARGO only as a validation dataset, they use the ARGO data directly; in ERSST, ARGO observations are given 6.8 times the weight of ship observations, as sketched below.
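A weighted cell average with that ratio would look something like the following sketch; the variable names and layout are mine, not the ERSST code.

```python
import numpy as np

def weighted_cell_mean(temps, sources):
    """Average the observations in one grid cell, giving ARGO floats
    6.8 times the weight of ship reports."""
    t = np.asarray(temps, dtype=float)
    w = np.where(np.asarray(sources) == "argo", 6.8, 1.0)
    return np.sum(w * t) / np.sum(w)

# weighted_cell_mean([15.2, 15.9], ["ship", "argo"]) -> about 15.81,
# pulled strongly toward the ARGO value.
```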
Areal coverage
The ERSST process uses the HadISST data to locate ice cover. When a grid cell is 90% or more covered with ice, the SST in the cell is set to -1.8°C; partial coverage is linearly interpolated between the reconstructed grid-cell value and -1.8°C (Huang, et al., 2017). Since -1.8°C is roughly the temperature at which seawater freezes, this makes some sense, but currents exist under the ice caps, and the sea surface temperature under the ice is clearly not a uniform -1.8°C. Their assumption is a speculative oversimplification when we are trying to estimate surface warming rates on the order of 0.1°C/decade.
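As I read Huang, et al. (2017), the rule is roughly the following; the exact blending weights used for partial ice cover are my assumption.

```python
FREEZING_C = -1.8  # approximate freezing point of seawater, deg C

def sst_with_ice(reconstructed_sst, ice_fraction):
    """Apply the ERSST-style ice rule to one grid cell (sketch)."""
    if ice_fraction >= 0.9:
        return FREEZING_C                   # treated as fully frozen
    blend = ice_fraction / 0.9              # 0 = open water, 1 = ice edge
    return (1.0 - blend) * reconstructed_sst + blend * FREEZING_C
```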
Their reconstruction process includes both interpolation and extrapolation. By making these assumptions, the ERSST reconstructed grid is more complete than the HadSST grid, as shown in figure 3.

ERSST provides a very full grid. HadSST is more conservative but still uses interpolation and some extrapolation to produce as full a grid as they can. Compare these two maps to the ICOADS map of the actual data in 2024 shown in figure 4.

The ICOADS simple mean temperature uses all values, whereas both HadSST and ERSST reject anomalous values, so the ICOADS coverage, as mapped in figure 4, is as good as it gets, at least with respect to the actual measurements.
Comparing the maps in figures 3 and 4 shows one of the reasons why the ICOADS global average temperature is the highest in figure 1 and the ERSST temperature is the lowest. The ERSST average includes a lot of assumed low values under polar ice that are unused null grid values in the ICOADS and HadSST averages. The differences between HadSST and ICOADS are due, at least in part, to the difference in cell sizes. The ICOADS cells are 2×2 degrees and the HadSST cells are 5×5 degrees; at the equator these are areas of about 49,000 sq km and 308,000 sq km respectively (the arithmetic is below). The larger HadSST cell size allows small areas without any measurements to be absorbed into larger cells that have values. In other words, the larger cells spread what data exists over larger areas.
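Those cell areas follow from the standard spherical cell-area formula, with R = 6371 km:

```latex
A = R^{2}\,\Delta\lambda\,\left(\sin\varphi_{2} - \sin\varphi_{1}\right)
```

At the equator a 2×2 degree cell gives 6371² × 0.0349 × 2 sin(1°) ≈ 49,000 sq km, and a 5×5 degree cell gives 6371² × 0.0873 × 2 sin(2.5°) ≈ 309,000 sq km, matching the figures quoted above.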
Observations over time
So far, we've only looked at the distribution of measurements and final values for 2024. How does the amount of data vary over time? We have detailed observation counts for both ICOADS and HadSST and have plotted them in figure 5.

Figure 5 shows that HadSST reports many more observations per cell than ICOADS, even more than would be expected from the larger HadSST cell size. Further, at latitude 30S, HadSST reports as many observations for 2024 as at 30N; these observations are not seen in the ICOADS 2024 dataset. The ICOADS data is not interpolated or manipulated, so the counts plotted in the lower half of figure 5 reflect actual measurements. HadSST does not interpolate, extrapolate, or infill cells to the extent that ERSST does, but they obviously do some. The methods used to interpolate, infill, and extrapolate values in HadSST are described, in part, in Rayner, et al. (2006). Much of this happens when the initial one-degree cells are combined into the final five-degree cells.
Discussion
It is apparent that we don't know the global average SST to the accuracy required to detect a warming rate of 0.1°C/decade. The raw data (ICOADS) does not compare well with the processed results, as shown in figures 1 and 2.
The ICOADS simple mean is the closest to the actual measurements and is preferred for that reason. Comparing it (in green) and the HadSST 4.1 bias-corrected "Actual" values (light blue) in figure 6 to the more highly processed standard anomalies shows that all the estimates, anomaly or otherwise, are suspect before 1990, and even afterward.

In figure 6 the ERSST, ICOADS, and HadSST 4.1 actual temperatures are converted to anomalies by subtracting each grid cell's 1961-1990 mean from that cell's values, as sketched below. This is different from the normal procedure of computing each anomaly by subtracting its individual reference mean from each value before processing and populating the grid. The normal procedure was used to build the final median core HadSST 4.1 anomaly plotted in figure 6. Even though the other anomalies were computed differently, the ICOADS anomaly is a fairly close fit to the ERSST and final HadSST anomalies from the early 1990s to 2024.
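A minimal sketch of that re-baselining, assuming a 3-D array `actual[time, lat, lon]` of monthly actual temperatures starting in January 1850; the layout is mine, for illustration.

```python
import numpy as np

def rebaseline(actual, start_year=1850, ref=(1961, 1990)):
    """Subtract each grid cell's own 1961-1990 mean, turning actual
    temperatures into anomalies after the fact."""
    years = start_year + np.arange(actual.shape[0]) // 12
    in_ref = (years >= ref[0]) & (years <= ref[1])
    ref_mean = np.nanmean(actual[in_ref], axis=0)  # per-cell reference
    return actual - ref_mean                       # broadcast over time
```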
The HadSST bias-corrected Actual values, converted to an anomaly, do not match most of the other anomalies very well, even after 1990, which is a bit confusing. The data quality during World War II (WWII) is very poor, as discussed by Huang, et al. (2017). Kennedy, et al. (2019) also discuss the data quality over this period, especially the sharp drop at the end of the war, and attribute it to a large change in the areas sampled by the global fleet. Some of the sudden post-war drop may also be due to a change in ship sampling methods. This anomaly is hidden in most datasets because their anomalies are computed before processing begins, masking the measured jump and fall in actual SST. Sometimes, when viewing Earth through a microscope, you can miss the mountains. The original papers do not include plots of actual temperature, but the WWII data problem can be seen in Kennedy, et al.'s (2019) bias plots in their figures 6, 7, 8, 9, 10, and 13.
The agreement between the estimated anomalies is also poor prior to 1912, largely due to poor sampling: the ICOADS observation counts prior to 1912 generally never exceed 5,000 and peak, between 30N and 45N, at only 9,000 observations. While good reasons exist for all the problems identified in this post and for all the corrections applied by the HadSST and ERSST teams, this does not mean they are getting the right answer.
Thus, the global average SST, the most important component of the global average surface temperature, is largely unknown before the early 1990s, and there is some doubt even after 1990. The doubts about the SST average, the doubts about the land record, and the very small amount of warming since the beginning of the 20th century (about one degree) cast considerable doubt, at least in my mind, on estimates of modern global warming. I don't doubt that the world is warmer, on average, than in 1900, but I don't think we know how much warming has occurred with any accuracy. I also don't think we necessarily have the trend or trends right. Is part of the WWII "hump" real? I don't think we know.
Works Cited
Freeman, E., Woodruff, S., Worley, S., Lubker, S., Kent, E., Angel, W., . . . Smith, S. (2017). ICOADS Release 3.0: a major update to the historical marine climate record. Int. J. Climatol., 37, 2211-2232. doi:10.1002/joc.4775
Huang, B., Thorne, P. W., Banzon, V. F., Boyer, T., Chepurin, G., Lawrimore, J. H., . . . Zhang, H.-M. (2017). Extended Reconstructed Sea Surface Temperature, Version 5 (ERSSTv5): Upgrades, Validations, and Intercomparisons. Journal of Climate, 30(20). doi:10.1175/JCLI-D-16-0836.1
Kennedy, J. J., Rayner, N. A., Smith, R. O., Parker, D. E., & Saunby, M. (2011). Reassessing biases and other uncertainties in sea surface temperature observations measured in situ since 1850: 1. Measurement and sampling uncertainties. Journal of Geophysical Research, 116. doi:10.1029/2010JD015218
Kennedy, J. J., Rayner, N. A., Smith, R. O., Parker, D. E., & Saunby, M. (2011b). Reassessing biases and other uncertainties in sea surface temperature observations measured in situ since 1850: 2. Biases and homogenization. Journal of Geophysical Research, 116. doi:10.1029/2010JD015220
Kennedy, J. J., Rayner, N. A., Atkinson, C. P., & Killick, R. E. (2019). An ensemble data set of sea-surface temperature change from 1850: The Met Office Hadley Centre HadSST.4.0.0.0 data set. Journal of Geophysical Research: Atmospheres, 124(14). doi:10.1029/2018JD029867
Rayner, N. A., Brohan, P., Parker, D. E., Folland, C. K., Kennedy, J. J., Vanicek, M., . . . Tett, S. F. (2006). Improved Analyses of Changes and Uncertainties in Sea Surface Temperature Measured In Situ since the Mid-Nineteenth Century: The HadSST2 Dataset. J. Climate, 19, 446-469. doi:10.1175/JCLI3637.1
How to deal with a very noisy dataset. The noise, particularly the WWII anomaly, looks much larger than any possible signal.
Notice how the temperature rise prior to the WWII anomaly is aligned with the temperature prediction from my simple 99-year moving average of sunspot data? This was my first clue that the predictions probably weren’t a spurious correlation.
Tom Halla:
It took me a while, but a few years ago, I finally figured out the cause of the WWII anomaly. It is NOT noise.
Between May of 1937 and Feb of 1943 there were no VEI4 volcanic eruptions, so the atmosphere was largely free of volcanic SO2 aerosol pollution for 6 years, causing temperatures to rise because of the cleaner air.
(This is always observed whenever there are at least 2 1/2 – 3 years between such eruptions).
In addition, global industrial SO2 aerosol pollution levels fell from 57 million tons in 1939 to 50 million tons in 1941 and 1942, aiding the warming.
I also wonder if the large volume of shipping during the war increased the number of measurements, and whether, because of the wartime destinations, those measurements might have occurred in warmer waters, such as the Gulf Stream, more than is common today.
Clyde Spencer:
The rise actually began before the start of WWII.
SO2 amounts do not explain the cyclical nature of the climate, where the temperatures warm for a few decades and then they cool for a few decades.
How does SO2 explain the U.S. temperature profile? A profile that is similar to every other regional chart in the world.
Hansen 1999:
You need to explain how SO2 caused the warming from the 1910's to the 1930's, how SO2 caused the cooling from the 1940's to the 1980's, and how SO2 caused another warming starting in the 1980's and continuing to today, where current temperatures are no warmer than they were in the Early Twentieth Century.
And those who think the Sea Surface Temperatures represent reality should explain why the resulting Sea Surface Hockey Stick temperature profile is not representative of the land temperatures. Why do their temperature profiles look so different? Answer: Science Fraud.
Tom Abbott:
Decreasing SO2 aerosol levels were responsible for the warming between 1910 and 1930.
The cause of their decrease was recurrent American business recessions, where idled factories, smelters, foundries, etc. decreased the amount of SO2 aerosol pollution of the atmosphere.
Recessions occurred Jan 1910 to Jan 1912, Jan 1913 to Dec 1914, Aug 1918 to Mar 1919, Jan 1920 to Jul 1921, May 1923 to Jul 1924, Oct 1926 to Nov 1927, and Aug 1929 to Mar 1933.
Five of the recessions caused temperatures to rise enough to form an El Nino.
The cause of the cooling between the 1940’s and 1980 was increased industrial SO2 aerosol pollution, from 50 million tons to 141 million tons.
The cause of the warming after 1980 was decreased industrial SO2 aerosol pollution due to “Clean Air” legislation, falling from 141 million tons to 73 million tons in 2022.
And current warming is MUCH higher now than it was in the early 20th century: 1910, -0.53 deg C; 2024, +1.17 deg C.
SO2 aerosols ARE the control knob of our climate.
"And current warming is MUCH higher now than it was in the early 20th century: 1910, -0.53 deg C; 2024, +1.17 deg C."
Nope.
Phil Jones got the temperature profile since the end of the Little Ice Age wrong, but he got the magnitude of the three warming periods after the Little Ice Age ended correct, which shows you are incorrect with your numbers.
Tom Abbott:
The Phil Jones graph is NONSENSE. Our temperatures are NOT cyclic.
According to NASA's "Facts Online" publication "Atmospheric Aerosols: What are they, and why are they so important?", in the section on volcanic SO2 aerosols, they state that "they reflect sunlight, reducing the amount of energy reaching the lower atmosphere and the Earth's surface, cooling them".
And for Human-Made Sulfate aerosols, they state that “they absorb no sunlight but they reflect it, thereby reducing the amount of sunlight reaching the Earth’s surface”. Thus both have the same climatic effect!
SO2 aerosols are micron-sized particles of sulfuric acid (H2SO4) suspended in the stratosphere, and they cause cooling for about 3 years after a VEI4 or larger volcanic eruption, before they eventually settle out.
Human-made SO2 aerosols enter the troposphere, and last for about a week before being washed out. However, they are emitted from relatively constant sources such as power plants, factories, foundries, etc., so that those that are washed out are immediately replaced, and they are always present in the troposphere. Their levels are reduced only when the emitting sources are shut down, or are modified to reduce emissions.
Thus, SO2 aerosols are an atmospheric pollutant that affects our climate, causing temperatures to decrease when their levels increase, and to increase when their levels decrease, making them the Control Knob for our climate.
WITHOUT EXCEPTION, every increase or decrease in our anomalous global temperatures is the result of changing levels of atmospheric SO2 aerosol pollution, and the various temperature data sets actually do a good job of recording those changes (although there may be some fudging of their magnitudes).
Causes of some of the major changes are given in my previous post, and they are NOT cyclic.
This is a long post, but you needed to have a better understanding of our climate.
Not just noisy, sparse in the early days:
https://icoads.noaa.gov/index_fig3.html
Very sparse. So sparse as to be unusable.
Sea surface temperature measurements before World War II (the 1940's) are almost non-existent.
The people who created the “Global Temperature Chart” just made up the Sea Surface Temperatures out of whole cloth. They had no data, so they just made it up. It’s called Science Fraud.
Dunno.
But I bet someone has determined that the constructed number is accurate to hundredths of a degree.
The SST rise starting about 1910 and peaking in the 1940s parallels the air temperature rise during the same period. Any insights about that and what caused it?
I don’t know anything for sure, but I suspect the early twentieth century warming is due to the rising AMO and PDO between 1920 and 1945 (see attached). Interestingly we see the same pattern from 1970 to 2000.
As for the World War II anomaly, one of the strongest El Nino periods in the 20th century coincided with WWII. See the bottom graph in the attached figure.
As I say in the post, the WWII data is flawed for many reasons, but that does not mean nothing happened then, or that ERSST and HadSST have properly corrected the data. We were at the peak of the AMO and the PDO and had a very strong El Nino during the war; unusually strong warming that ended quickly with the war fits.
Don’t misinterpret this, I am not saying the raw ICOADS data is correct, I’m just saying I think something unusual happened during the war and the ICOADS records are giving us a hint of what it was.
A few years ago I read something claiming that the sea war in the North Atlantic had a climate impact: all the depth charges and torpedoes mixed various layers and temperatures. I was, and still am, sceptical of this having any measurable impact.
If you assume 40,000 depth charges at 500 lbs of explosive each, you get less than 10,000 tons of explosive over about 5 years and a huge area.
I agree with you; I doubt the conflict had any impact on the climate at all. It was horrible for humanity, but Mother Nature just shrugged it off.
I’ve often thought the amount of oil spread out over the oceans might have had an effect. Every ship that went down had oil and many oil tankers were sunk. What effect would all that oil spread out over the surface have?
Nothing. Oil has been on Earth's surface for many hundreds of millions of years, and bacteria that eat oil evolved long ago. Any oil spills were consumed within a few months.
But, while individual oil spills might not have lasted very long, they smoothed the surface of the water repeatedly, over large areas for each ship sunk, for a half-dozen years.
“I don’t know anything for sure, but I suspect the early twentieth century warming is due to the rising AMO and PDO between 1920 and 1945 (see attached). Interestingly we see the same pattern from 1970 to 2000.”
Yes, REAL interesting. What we are seeing is the global cyclical climate movement. It warms for a few decades and then it cools for a few decades, and the warm and cold temperatures remain within certain bounds, about 2 C between the hottest and the coldest temperatures, since the end of the Little Ice Age in the 1850's.
You can see it in the U.S. land regional chart:
Hansen 1999:
I wouldn’t trust sea surface temperatures any farther than I could throw the Charlatan who made them up.
Deeply negative North Atlantic Oscillation episodes drove the AMO warmer from the mid 1920’s. Which means the solar wind would have weakened then, as happened from 1995 with the current warm AMO phase. The AMO is colder when the solar wind is stronger, as in the mid 1970’s, mid 1980’s, and early 1990’s.
Mike McHenry:
Good question!
You may not like the answer, but atmospheric temperatures consistently rise BEFORE sea surface temperatures rise.
It takes a lot less heat to raise the temperature of air than of water.
I looked it up: it takes about 3,200 times as much heat to raise the temperature of water as of air, at room temperature and for equal volumes. So what you said is what I would expect.
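As a rough check (my numbers, per unit volume):

```latex
\frac{\rho_{w} c_{w}}{\rho_{a} c_{a}} \approx
\frac{4.18 \times 10^{6}\ \mathrm{J\,m^{-3}\,K^{-1}}}
     {1.2 \times 10^{3}\ \mathrm{J\,m^{-3}\,K^{-1}}} \approx 3{,}500
```

so the roughly 3,200 figure is the right order of magnitude.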
Mike McHenry:
See my previous comment
Mike McHenry:
The HadCRUT temp. data set also includes a separate data set listing Sea Surface temperatures.
A comparison of the two data sets (for 1980-2010) shows that land temperature changes always lead sea surface temperature changes by an average of 08-.09 Deg. C.
0.8-0.9 Deg C.
HadCRUT does not represent reality.
You are basing your theory on a nonsensical, fraudulent temperature profile.
Your results won't represent reality, either.
Tom Abbott
My results are the ONLY ones that represent reality
Thanks, Andy.
I'm very interested in an explanation for this graphic, primarily because the SST shows elevated temperatures prior to 1900, which is a better fit to my model predictions. Same data in both plots, but the upper plot is scaled for a match, while the lower plot shows both datasets on the same scale. I wasn't as careful as you about accounting for latitude.
Thanks. I've made similar plots. The usual explanation is that land warms and cools faster than the oceans. But I'd like someone to explain all this quantitatively, because visually the difference in rates looks too large and too consistent over 175 years. You make a valid point; it is very suspicious.
“SST shows elevated temperatures prior to 1900”
Every other warm AMO phase falls during a centennial solar minimum. The late 1800's was the Gleissberg Minimum, with an AMO and El Nino warm spike in 1878.
One other possible factor in the 1878 spike is the “wet” eruption of Askja. Here’s a comparison of Askja and HT.
The ocean surface warming of 2023-2024 would have been driven by negative North Atlantic Oscillation conditions. Dry tropical eruptions promote positive NAO conditions, I have no idea what a wet eruption would do to the NAO. Though I had predicted plenty of negative NAO for 2023-2024 several years before from solar analogues.
Yes, there was a warm high point around the 1880's, another around the 1930's, and a similar warming today. The Little Ice Age ends in 1850; temperatures then rise to the 1880's, fall to the 1910's, rise to the 1930's, fall to the 1980's, rise to 1998, and move sideways to today. That sideways movement is similar to the one in the 1950's (see the chart above), right before the bottom dropped out and climate scientists started sounding alarms in the late 1970's about the Earth entering a new Ice Age, and then temperatures warmed up again to today.
Below is Grok's assessment of this article. [Only the headings of the pasted assessment survived: Overall Accuracy Assessment, Strengths, Weaknesses, Scientific Consensus, Final Verdict.]
ROFL. The interpretation is mostly accurate, but it overstates the doubts based on nothing more than hot air and bias, and it should be treated with caution.
What we really want from you is an explanation of the "relative metric" as used. Not many actual scientists will accept it; it is a bit like using attribution statistics and carries caveats a mile long. Personally, I never accept that sort of junk; it's just lazy research.
I wouldn’t trust any of these “reconstructions”.
It’s irritating that they are even taken seriously.
We should say that we don’t know what the sea surface temperatures were before the middle of the Twentieth Century. Because that’s the truth.
A clever person can get any of these AI programs to say whatever they want them to say. See here:
https://andymaypetrophysicist.com/2023/10/05/a-conversation-with-google-bard-on-the-consensus/
Except for using them to find sources, what they say is meaningless and useless.
And if one isn't smart enough or knowledgeable enough to question the AI's claims, the monologue ends with the first version. When presented with facts that weren't reported by the first skim of the cream at the top, they invariably back off and apologize. Therefore, I think such monologues should be changed into dialogues where the claims are challenged (when legitimately possible) to see what develops out of several rounds of back-and-forth.
https://finance.yahoo.com/news/mit-study-finds-ai-doesnt-164823370.html
So we have to lead AI around by the nose?
So AI isn’t all it’s cracked up to be.
Do you think that the paper Grok published with Willie Soon and Cohler is useless too? Or are we now going to call the smartest being in the whole world stupid?
I'm with you, JK. Andy and others are overly sensitive. I found your Grok post reasonable. I think it just triggered emotion and shut off the thinking brain.
It happens to all of us at times.
Not cool of Andy to partake.
The paper you refer to was “guided” by Willie Soon and Jonathon Cohler, Grok only supplied the text and the bibliography. It takes a lot of work in my experience to “guide” these programs, but you can get them to say whatever you want them to say. They have no intelligence in the normal sense of the word. I’m always surprised at how easily led they are. Cohler and Soon have been working on this for a long time, see this post from 2023:
https://andymaypetrophysicist.com/2023/10/06/can-google-bard-ai-lie/
"Guided." That's funny. Grok, the "smartest being in the world," has to be guided by, apparently, a lesser human being.
Your critics need to pay attention. I think they are assuming too much.
Will you publicly denounce the absurd narrative pushed by Willie Soon, where he touts Grok’s supposed high IQ as proof of the authority of their co-authored paper? Or will you only criticize Grok when it dares to expose flaws in your own reasoning?
J K,
My only response is from my 2023 post on Google Bard, it is still appropriate:
“So, there you have it. Be careful with AI, and if you use it, use the techniques that Jonathan has used in this post to drill down to the truth. If there is bias in the answer provided, you can uncover it. AI is a powerful tool, but it must be used with care. The most important point of this post is you cannot take the initial answer to your question at face value, have follow up questions ready, do your homework.”
from:
https://andymaypetrophysicist.com/2023/10/06/can-google-bard-ai-lie/
Who is the smartest being in the world?
You are being too harsh and generalise too much. Plus, you attack someone for assuming bad faith.
I think his post is reasonable. Maybe a bit sensitive to criticism?
I have gotten both the Google and Meta AI to confirm garbled lyrics to rock songs.
That is a good use of AI, as is any question with a clear factual answer. Asking AI for an opinion on a blog post is not a question for AI; asking "how tall is Stevie Nicks" is an appropriate question.
I often wonder why people 'thumb down' a post like this one. Sometimes it seems to have little to do with the data provided. One could disagree or make remarks and/or comments, but it seems a negative is just the easy and lazy way out.
But I have to be honest: I sometimes put in a positive just because of all the negatives about a reasonable post, even if I don't quite agree with the data.
Maybe someday AI will be credible, but it isn’t yet. I suspect people put in the “thumb down” (I was not one of them) because a critique from AI is useless. As explained above you can get these AI programs to say whatever you want. It’s the lack of credibility that is the problem, not the content. AI has no place in a forum like this.
I think you hit the nail on the head, Andy.
AI, at the moment, is a glorified search engine.
Ok, go and tell Willie..
“But i have to be honest. I sometimes put in a positive just because of all the negatives about a reasonable post”
I do that, too. 🙂
I don’t do down votes. I usually voice my objections, if I have any.
Tom Abbott:
Or, very frequently, you simply don’t respond to a post.
I only have so much time, and there are a lot of articles on WUWT to keep up with. Lots of misinformation to debunk.
My main criticism of your SO2 theory (and all the other theories, for that matter) is that it does not explain the cyclical nature of the climate. It can't explain the U.S. chart profile. You say there's more SO2 when it cools and less SO2 when it warms, but how does this happen to occur on a regular basis? For this to work, the circumstances would have to duplicate themselves every 30 or 40 years. You apparently claim they do, but there's no evidence that this is true, that SO2 is doing what you claim, or that the magnitude of the effect is what you claim.
You haven't proven your theory, as far as I'm concerned. You use HadCRUT as your guide, which is a completely bogus temperature profile. Your theory fits a bogus temperature profile. That makes your theory bogus.
Story tip – Gov’t Climate Propaganda Agency Caught Raking in Billions to Push Gov’t Climate Propaganda – PJ Media
I can’t think of anything more futile or useless than claiming you know the average global sea surface temperature. Adjusting and modifying your measurements and methods makes it even more useless.
I’m with you, Bob! You give a perfect description of the process.
And someone claiming they know the global sea surface temperature before 1940, with any accuracy, is utterly ridiculous.
Tom Abbott:
Nonsense:
If you know the global air temperature, the sea surface temperature is essentially the same.
We don’t know the average global air temperature.
Bob:
The HadCRUT temperature data set lists the average global temperatures from 1850 to the present.
HadCRUT is a bogus, bastardized Hockey Stick chart.
HadCRUT does not represent reality.
Basing your theory on HadCRUT is one place you are going wrong.
That's because there isn't one. And I still want someone in climate science to tell me how temperature determines climate. I want them to explain why Las Vegas and Miami don't have the same climate!
What are the assumed temperatures of the ocean water under the ice in the Arctic and the Antarctic? It can’t vary much from the freezing point for the water, or the water would be ice. Do they include this temperature in the calculation of the average sea surface temperature, or do they use the surface temperature of the ice?
It is in the post. The assumed temperature is -1.8 deg C. Only ERSST uses this assumption; the others just make the ice-covered ocean null. By assuming this value, they are assuming there are no currents under the ice, which is clearly not true.
The problem is that there was basically less than 5% coverage of much of the southern oceans up until the 1950s, so anything before that is pure guesswork.
Exactly right!
Historic global sea surface temperatures are unknown before the middle of the Twentieth Century (World War II, 1940 and beyond).
Nice post, AM. I did some work on SST years ago, and concluded that until ARGO we really don’t know. Take ‘modern’ ship data. There are still two insurmountable problems.
Right. The nominal depth is 20 cm, but how many measurements are actually taken at a depth of 20 cm? Not many. Besides, 20 cm is the mixed-layer temperature at night or in strong winds, and the skin temperature in the daytime with light winds.
Sampling is a huge issue.
Sometime between 1990 and 2007 we reached a point where we could estimate the global average SST, but before then it is crap, and the global average surface temperature is crap as a result.
Andy,
This sampling depth/mixed-layer matter is not capable of resolution for most historical data and is an important source of error. For example, there is day-to-night variation, and variation from wind velocity, storms, rainfall, and probably more factors.
When an observer can dip a hand in shallow water and sense a difference down to a couple of metres, what chance do we have of correcting past measurements made by instruments?
Sure, there is an urge to compute a global average, and there is probably an urge to understate uncertainty so that there are estimates to report, as you have shown. The bigger question is whether they should be used for some types of research, or at all.
Geoff S
You put your finger right on the critical issue, Geoff, as per usual. Accuracy is brushed under the rug, with facile prima facie acceptance of the numbers.
The "or at all" bit is a direct hit.
Argo floats are no different than land-based measurement stations. Microclimate makes a large difference in the measurement uncertainty budget and Argo floats *do* have a microclimate. As Hubbard and Lin showed for land-based stations, you can’t correct measurement station readings on a regional or global basis, it must be done on a station-by-station basis. How do you do that for Argo floats?
It's my opinion that *NO* temperature measurement station, be it land-based, sea-based, or satellite-based, has a measurement uncertainty of less than +/- 1C. And that is an optimistic evaluation. That makes the data totally unfit for finding "averages" down to the hundredths of a degree.
The simplifying memes of climate science of 1. all measurement uncertainty is random, Gaussian, and cancels, 2. averaging increases accuracy, and 3. averaging can increase resolution of single measurements of different things obtained from different devices are just garbage. They have no place in physical science or metrology.
Tim,
You know, Pat knows, Andy knows, I know that these alleged sea surface temperatures and their “anomaly temperatures” are not valid scientific representations because, inter alia, their estimated uncertainty when published is quite different to the real uncertainty.
Do you notice the "stunned mullet" response, maybe now named the "cancel culture code of silence," when this uncertainty problem is raised, again and again? It seems to be a recent, trendy way to try to avoid the embarrassment of being shown wrong.
There needs to be much more accountability for scientists who continue to promote stories that they know are wrong, but that are needed to keep the meme going. They should try working in branches of science where there is no meme, only objectives like 5-sigma repeatability, where grown-up scientists work. Geoff S
Not one study I have read, or that has been sourced here at WUWT, has a mention of an uncertainty budget and how its categories were determined. Not one.
That is unconscionable conduct after 50 years or more of research. It is bordering on unethical.
Not one study has quoted or propagated the uncertainty values in NOAA documents for MMTS, ASOS, or CRN.
Again, not one study has referenced anything directly from the GUM or ISO uncertainty documents. I can only attribute this to abject ignorance of proper scientific data treatment.
I’ve noticed that, too, Jim.
It occurred to me some time ago that the proper accounting of measurement uncertainty leads to too radical a result even for climate skeptics.
Climate models produce physically meaningless air temperature projections.
Meteorological stations with naturally ventilated temperature sensors, and error-prone SST measurements produce such poor-quality data that nothing is revealed about the rate or magnitude of the air temperature change since 1900.
And uncorrected Joule drift makes the data before 1900, certainly before 1885, unreliable in sum.
The outcome is that nothing is known for certain about past or future temperature trends, certainly not at the (+/-)1 C level.
The uniformly poor quality of the data means there's nothing left to argue about.
I believe that’s why even climate skeptics have trouble accepting the obvious conclusion. Everyone wants the argument.
It’s wonderfully engaging entertainment even when it isn’t an income.
““stunned mullet””
That’s a new one for me! But it is quite descriptive of climate science and measurement uncertainty!
That deserves repeating !
Just because you can compute a global average doesn't mean it's in any way meaningful.
Andy,
And best left to a qualified crapologist. “Climate scientists”, being ignorant and gullible, will polish and burnish the crap, and pretend they have discovered gold!
As the saying goes, you need a brain to know the difference between shit and Shinoleum.
There is also the issue of 'Karlization' of the temperatures: adjusting modern, high-accuracy, high-precision temperatures from floating buoys to agree with older ship-intake temperatures, with their varying intake depths and boiler-room thermometers that probably saw little interest in routine calibration.
The floating buoys still have about a 0.3C to 0.5C measurement uncertainty. Those buoys are *not* pristine, calibrated measurement devices once they are put in the ocean. Any collection of "stuff" in the sampling path of the buoy will affect its accuracy. Think dirt, barnacles, etc. They are not really any different from a land-based measurement station as far as accuracy is concerned. Just because they are in water instead of air doesn't make them any more accurate.
Did the heat that was hiding in the oceans ever come back out? Or is it still skulking around in there, just waiting to jump out and say Boo!
And how did the heat sneak past the thermocline in the first place? Was Cerberus sleeping?
I think the ocean temperature time series from ships and Argo floats are a joke. Their locations vary constantly. The transport of water via numerous ocean currents means that the temperature of a 1 degree grid (69 x 69 miles or 111 x 111 kilometers) extrapolated from one, or even a few, locations, can’t possibly be accurate.
UAH and SST have extrapolated sea surface temperatures from satellite telemetry for decades. What are your thoughts on the accuracy of the time series developed from that data? Unlike measurements from ships and Argo buoys which never come from the same place, the satellite measurements cover the oceans consistently. Maybe Dr. Roy Spencer can weigh in?
Dr. Spencer is the expert and I hope he weighs in. But in my opinion, the satellite data on the SST is pretty poor. What depth does it represent? Is it just the skin temperature? The problem is the ocean temperature varies a lot, and irregularly, from the air-water interface to a few meters depth.
https://andymaypetrophysicist.com/2020/12/09/sea-surface-skin-temperature/
Not remotely an expert, but I should think the IR radiation observed by satellites is dominated by a very thin (1 mm or less) layer at the surface of the ocean. The peak wavelength of blackbody radiation at 288K is a small fraction of a millimeter, about 0.01 mm according to Google's AI agent. And thanks to hydrogen bonding, liquid water is a very effective absorber of IR radiation across a wide band of wavelengths, so I reckon radiation from deeper down never makes it to the satellite.
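Wien's displacement law gives the same number:

```latex
\lambda_{\max} = \frac{b}{T} \approx \frac{2898\ \mu\mathrm{m\,K}}{288\ \mathrm{K}} \approx 10\ \mu\mathrm{m} = 0.01\ \mathrm{mm}
```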
Could my guess be wrong? Of course.
Until they prove to me that they can accurately compensate for path loss between the sampled medium and the satellite I can’t accept any measurement uncertainty estimate for any satellite data, be it land temperature or sea temperature. Radiation is radiation. There are all kinds of absorbing/reflecting/refracting media in the atmosphere and its density and dispersion characteristics vary widely from point to point on the globe.
Far too much of the data in climate science is based on the meme that all measurement uncertainty is random, Gaussian, and cancels out over a large number of single measurements of different things. It’s a simplifying assumption that has no actual basis in physical science or metrology. There is never a negative path loss in a passive measurement so path loss always varies from zero to a positive value. That makes the measurement uncertainty highly asymmetric so it can never “cancel”.
Path loss is like microclimate. As Hubbard and Lin showed for land-based measurement stations, differences in microclimate make it impossible to develop a generic measurement uncertainty adjustment factor applicable on a regional basis, let alone a global one.
Excellent post!
“The problem is the ocean temperature varies a lot, and irregularly,”
An important point. Climate alarmists insinuate all the time that the oceans are getting warmer and warmer, as if the oceans were one big bathtub warming all over. In reality, as you say, the oceans "vary": some parts are warm while others are cold, and the temperatures change all the time, going from cold to warm and back to cold, depending on variables.
The NOAA satellite microwave sounding units measure the irradiance from oxygen molecules, which varies with temperature. For the lower troposphere channels, the frequency bandwidth results in a Gaussian-ish profile that spans about 0-10km altitude and peaks at about 5km. Over this range the air temperature is decreasing from the surface because of the lapse rate. The oxygen microwave irradiance is then a convolution of the MSR response function and the lapse rate; this irradiance is then converted to a temperature.
The only possible way to extract the air temperature at the surface would be to know both the MSR response function and the lapse rate, and then try to deconvolve them. I don't think a unique solution is possible, especially because the lapse rate isn't measured, not to mention that different pairs of surface air temperature and lapse rate can produce the same value of lower-troposphere "temperature".
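In symbols, the brightness temperature is a weighted vertical average of the temperature profile, which is why many different surface-temperature and lapse-rate pairs can give the same reading:

```latex
T_{b} = \int_{0}^{\infty} W(z)\,T(z)\,dz, \qquad \int_{0}^{\infty} W(z)\,dz = 1
```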
Unfortunately, like most things related to “climate science”, supposed “sea surface temperatures” are probably nonsense.
From Wikipedia –
In other words, nothing to do with the surface. Further, taking the “temperature” of the “sea surface” from 700 – 800 km away, with any reasonable accuracy, is laughable.
“Ah, but what about buoys and floats?”, they cry. Where are the sensors? Below the surface? Above the surface? Who cares anyway?
Just another exercise in self-delusion, achieving nothing at all of use.
Possibly something to do with a mythical GHE, which nobody can describe, I suppose.
“Unfortunately, like most things related to “climate science”, supposed “sea surface temperatures” are probably nonsense.”
Any “global average” is certainly nonsense.
The folks constructing SST assume that the measurement bias distribution is random for each and every platform (mostly ships) and that the distribution of mean biases among platforms is also random.
These assumptions of random error and random bias means allow the SST constructors to assume all the measurement error averages away to near zero.
One should also consult Stevenson (1963) “The Influence of a Ship on the Surrounding Air and Water Temperatures” who showed that the ship itself disturbed the marine thermocline. The impact of this result has been utterly neglected.
The mixing caused by the keel of the ship resulted in even careful ship-board temperature measurements not producing the true SST.
The keel problem means that SST measurements using either bucket or engine-intake sensors do not reflect the true SST, no matter how well carried out.
Only pointing the moving ship into the wind and having the sensor at the end of a submerged boom extending well forward of the prow produced a good SST.
Even more SST drawbacks discussed here.
The whole of the SST project pretends to make a silk purse from a sow's ear.
Apples and oranges … waste of time … we don't measure the temp of the soil … why is the temp of the water important? It isn't … if we measured the temp of the air several feet above the water, then we would have apples vs apples … but we don't, so this is just an exercise in mathematics using corrupted, incomplete, and not-fit-for-purpose data … waste of time
The sea is arguably a very dense layer of the atmosphere. It’s worth measuring. We just don’t do it particularly well.
Ag science uses soil temps, so it *is* measured. The difference between soil temps and air temps at current temperature measuring stations could be very illuminating. As usual, climate science ignores true physical science protocols and just creates data useful for getting government money.
Story Tip.
Great podcast by Ray Sanders on the farcical nonsense that is Met Office data.
https://youtu.be/ohosYzpdfJI
Andy. An impressive article and clearly an incredible amount of work went into it. Thanks.
Thanks
I see a problem I would like to discuss.
Sea ice is given a temperature of -1.8 degrees C.
As ice melts, real temperatures are measured.
In currents, warm water can occur under the ice sheet.
When currents melt the ice, the temperature will go up from -1.8 to the actual water temperature. This will give us an anomaly due to the water transport, not coupled to heat from above.
I did an error analysis on global SST back in 2017; it was published here. I was prompted by the arguments about the cool-down from around 1945 to 1975. The cooling trigger was supposedly the change-over from bucket sampling on British ships to engine-intake readings on US ships after WWII. I made assumptions about the systematic and random errors of both sampling methods. The result was an error bandwidth of 1.6 deg C for both types of measurement. How would anyone get assertions close to the needed accuracy of 0.1 deg C? Whatever the bare and manipulated numbers might come to, they are simply a pile of fiction with unscientific conclusions.
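For reference, independent systematic and random components combine in quadrature (GUM-style); with illustrative components of about +/- 0.57 deg C each (my numbers, not necessarily the assumptions used in the 2017 analysis), a bandwidth of about 1.6 deg C falls out:

```latex
u_{c} = \sqrt{u_{\mathrm{sys}}^{2} + u_{\mathrm{rand}}^{2}}
\approx \sqrt{0.57^{2} + 0.57^{2}} \approx 0.8\ ^{\circ}\mathrm{C}
\quad\Rightarrow\quad \pm 0.8\ ^{\circ}\mathrm{C},\ \text{a band about}\ 1.6\ ^{\circ}\mathrm{C}\ \text{wide}
```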
The internal measurement uncertainty of our best measurement systems ranges from +/- 0.3C to +/- 0.5C, including calibration error and drifting component values. When you add in external microclimate factors, it is not unreasonable to assume an interval of +/- 0.8C for a symmetric uncertainty distribution. Your interval of 1.6C allows for an asymmetric uncertainty distribution, which is probably a better representation, since few measurement device types drift in both directions; they usually all drift in the same direction.
Andy ==> Absolutely marvelous. What a great piece of work!
“I don’t think we know how much warming has occurred with any accuracy.”
I don’t think so either.
I agree that the reconstructions of past temperature records are very doubtful. However, the averaging out of errors in measurements does not assume the distribution is Gaussian. It relies on the central limit theorem, which works with almost any distribution.
The central limit theorem only tells you how precisely you have calculated the average of the data you have. It does *not* tell you the accuracy of the data, nor the accuracy of the average you calculate from it. If your data is inaccurate, no amount of sampling or sample size can make the average accurate.
The central limit theorem holds that even if your data is not Gaussian, the means you calculate from samples of the data will tend to be Gaussian. This has two restrictions: 1. the sample size must be large enough to properly represent the population, and 2. you need multiple samples.
While it is never mentioned in the statistical literature for some reason, it also implies that your data should be associated with the same thing if it is to have actual physical meaning. You can take lots of good-sized samples of the heights of a combined herd of Shetland ponies and quarter horses and get a Gaussian distribution from the means of those samples. The average of those means will give you a pretty precise figure for the average height of the population, but it will be physically meaningless.
Don't confuse the standard error of the mean with the measurement uncertainty of the mean. The standard error of the mean (I prefer to call it the "standard deviation of the sample means") only tells you how precisely you have located the population mean. It is *not* the measurement uncertainty of the mean.
If each data point in the population is of the form “stated value +/- measurement uncertainty” then every data point in each of the samples will be of the same form. The mean calculated from the sample should be of the same form: “stated value +/- propagated measurement uncertainty”. When you combine those sample means into a data set the average of those means should have the same form: “stated value +/- propagated sample mean measurement uncertainty”.
The statistics textbooks will never tell you this. It's why climate science ignores propagating measurement uncertainty. They've never learned how to handle measurement uncertainty, so they just say, "I've got a sample size of 10,000, so I know the average accurately to the hundredth of a degree." Whether it is actually accurate, or really represents anything physically, just isn't considered.
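In symbols, the distinction being drawn here (standard GUM-style propagation; the notation is mine):

```latex
\mathrm{SEM} = \frac{s}{\sqrt{n}}
\qquad \text{vs.} \qquad
u(\bar{x}) =
\begin{cases}
\dfrac{1}{n}\sqrt{\sum_{i} u_{i}^{2}} & \text{independent random errors} \\[6pt]
u_{\mathrm{sys}} & \text{shared systematic error (does not average away)}
\end{cases}
```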
Because you are totally confused, you are trying to confuse everyone else.
Cheers,
Bill
Dear Andy,
Could you contact me at http://www.bomwatch.com.au so we can have a private discussion?
Yours sincerely,
Dr Bill Johnston