By Andy May
This post has been translated into German for those who prefer it here.
In my last post I discussed the evolution from the ICOADS raw sea surface temperature (SST) data to the final ERSST and HadSST SST anomalies. These anomalies are compared in figure 1. The ICOADS anomalies are generated by subtracting the 1961-1990 mean value from all the final raw simple yearly means. This was done last, after the simple global mean SST had been computed for all years.
The ERSST and HadSST values start out as anomalies. That is, they are created grid cell by grid cell before processing starts. Obviously, the measurements in each cell are normally from different vessels in different years or months, but the measurements are turned into anomalies by subtracting the 1961-1990 mean for each cell from each measurement captured in the same cell for any specific month. This is before any processing or corrections are performed. Since the cells may be as large as 12,300 sq. km, this is of dubious value, but that is the way it is done.
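For readers who want to see the difference in ordering concretely, here is a minimal Python sketch of the two procedures as described above. The array names, shapes, and the omission of area weighting are my assumptions for illustration; this is not the agencies' actual code.

```python
import numpy as np

# Minimal sketch of the two orderings described above; not the agencies' pipelines.
# Assumed shapes: sst is (n_years, 12, n_lat, n_lon) with NaN where no observations
# exist, and years is a 1-D array of calendar years matching axis 0.
# Area weighting is omitted to keep the example short.

def icoads_style_anomaly(sst, years, base=(1961, 1990)):
    """Global simple yearly means first; subtract the 1961-1990 mean of those
    yearly means at the very end (the ordering described for the ICOADS curve)."""
    yearly_global = np.nanmean(sst, axis=(1, 2, 3))            # one number per year
    in_base = (years >= base[0]) & (years <= base[1])
    return yearly_global - np.nanmean(yearly_global[in_base])

def cell_level_anomaly(sst, years, base=(1961, 1990)):
    """ERSST/HadSST-style ordering: form anomalies cell by cell and month by
    month before any averaging or correction."""
    in_base = (years >= base[0]) & (years <= base[1])
    clim = np.nanmean(sst[in_base], axis=0)                    # (12, n_lat, n_lon) climatology
    anom = sst - clim                                          # subtract each cell/month mean
    return np.nanmean(anom, axis=(1, 2, 3))                    # then reduce to a yearly series
```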
On land, with fixed weather stations, anomalies make more sense: the elevation of each weather station is different, and an individual station will often have been in the same location through the entire 1961-1990 period, often with the same or similar equipment. Thus, forming an anomaly at the beginning of processing, by subtracting each station's 1961-1990 monthly mean from its monthly value, is logical. On the ocean, where every measurement in any given month is at about the same elevation but comes from a different buoy or ship with different equipment, it makes less sense.

This post is about the difference in the ICOADS data anomaly during the World War II era, from 1939 to 1946, as marked in figure 1. It is very noticeable in the raw ICOADS data but disappears in the two final reconstructions shown. There are a lot of known problems that occurred during the war. Shipping lanes changed due to the presence of submarine “wolf packs,” SSTs were beginning to be measured inside engine water intakes rather than with buckets dipped in the ocean, and for those ships still using buckets the type of bucket often changed. These problems are apparent in the ICOADS data, but do they account for the entire radical correction shown in figure 1? The peak raw data anomaly is in 1944 (+2.14°C), yet the 1944 ERSST value is 0.091°C, so the correction in 1944 is over 2°C. Is this realistic? The total warming since 1900 is estimated to be only about one degree; how is a two-degree correction for an entire year justified? ERSST and HadSST corrected the data, but how much confidence can we have in the corrections? What else was going on at the time?
The Climatic Conditions during World War II
Climatically there was a lot going on during the war. We are fortunate that Stefan Brönnimann of the University of Bern has dug out and digitized a very large database of meteorological data from Germany, the German occupied areas, Sweden, the United States, the Soviet Union, and the UK, and tried to come to some conclusions as to the climate of the Northern Hemisphere during the war. What is presented in this post is mostly from papers he published from 2003 to 2005 (Brönnimann, 2003), (Brönnimann, Luterbacher, & Staehelin, 2004), (Brönnimann & Luterbacher, 2004b), and (Brönnimann, 2005).
Using the data they collected, Brönnimann and colleagues built monthly maps of surface conditions, upper atmospheric temperature, and geopotential height. Brönnimann and Luterbacher reconstructed the upper atmosphere for the period 1939 to 1944. They found evidence of a weak and disturbed Northern Hemisphere winter polar vortex. This resulted in anomalously high winter temperatures in the upper atmosphere and at the surface in Alaska, Canada, and Greenland and very cold winter temperatures in Europe. An example map is shown in figure 2 for January 1942 at the 500 hPa pressure level (around 18,000 feet or 5,500 meters).

Brönnimann and colleagues believe that the unusual weather during World War II was related to the very strong and persistent El Niño that occurred at the time. In figure 3 we show the Niño 3.4 index, the AMO and the PDO with World War II marked as “WWII.”
While the world as a whole was unusually warm during the war years on average, Europe, northern Siberia, and the central North Pacific suffered through three bitterly cold winters from late 1939 to 1942. The Southern Hemisphere was not spared: sea surface temperatures in the southern mid-latitudes were unusually cool, and Australia suffered from a very prolonged drought from 1937 to 1945 (Hegerl, Brönnimann, Schurer, & Cowan, 2018).

In figure 3 the prominent World War II El Niño is very clear, and we can see that the AMO and PDO are positive. The North Atlantic Oscillation is not shown, but it is strongly negative during this period (see figure 4 for a plot of the similar Iceland to Aleutian sea level pressure anomaly). There is also evidence of high column ozone in both the Arctic and mid-latitudes, a weak winter polar vortex, and frequent stratospheric warmings. As Brönnimann and colleagues report, “At the Earth’s surface and in the troposphere, the period 1940–42 represents an extreme climatic anomaly of hemispheric to global extent.”
Figure 4 compares the Niño 3.4 temperature to central, northern, and eastern European temperature, the Iceland to Aleutian sea level pressure difference, the 100-mbar geopotential height difference between the poles and the mid-latitudes, and the total ozone measurement at Arosa, Switzerland. All these measurements show a distinct anomaly during World War II.

Figure 4 shows that, analyzed in the context of the 20th century, 1940-1942 stands out as a unique climatic anomaly. Thus, it seems unusual that the final ERSST and HadSST records shown in figure 1 just carry on through 1940-1942 on the same trend as before, as if nothing unusual were happening. They do show a bit of the 1946 cliff seen in the raw ICOADS data, but it is very subdued.
I do not question the problems in the raw data; they are firmly documented. I do question the corrections applied by the Hadley Centre and NOAA, however. The corrected data appear too consistent with the periods before and after the giant World War II El Niño event. I would expect some of the anomaly seen in the raw data to survive the correction process.
Works Cited
Brönnimann, S. (2003). A historical upper-air data set for the 1939–44 period. International Journal of Climatology, 23(7), 769-791. doi:10.1002/joc.914
Brönnimann, S. (2005). The Global Climate Anomaly in 1940-1942. RMetS Weather, 60(12).
Brönnimann, S., & Luterbacher, J. (2004b). Reconstructing Northern Hemisphere upper-level fields during World War II. Climate Dynamics, 22, 499-510. doi:10.1007/s00382-004-0391-3
Brönnimann, S., Luterbacher, J., & Staehelin, J. (2004). Extreme climate of the global troposphere and stratosphere in 1940–42 related to El Niño. Nature, 431, 971–974. doi:10.1038/nature02982
Hegerl, G. C., Brönnimann, S., Schurer, A., & Cowan, T. (2018). The early 20th century warming: Anomalies, causes, and consequences. WIREs Climate Change, 9(4). doi:10.1002/wcc.522
Very interesting.
The enormous spike on the chart during WWII was always so far beyond the typical variation that one or more anomalies almost certainly had to affect the results. (In my non-expert opinion.)
I always thought that spike was due to two days in Japan in 1945, when the air temperature reached 100 million degrees for 10 microseconds each…
Location might matter too. Island hopping campaign might have favored warmer waters.
Wouldn’t that spike be in 1945, not 1943/1944?
No thermometer would have registered that spike. 😉
Eight downvotes. What, too soon?
Does anyone know if the Kuwait oil field fires (1991) showed up in any temperature records?
I vividly remember the billowing, dense black smoke.
No clue if that smoke absorbed/reflected sunlight differently than typical clouds. Or even if the massive event on a human scale, was still just too small to show up on a global scale?
I’ve wondered about WWII, which must have seen the largest amount of oil spilled in history, including on coral reefs, added to the natural seepage, which was quite a bit in some places. For example, a German U-boat torpedoed a ship at the mouth of the Mississippi River but missed and blew up the jetty; currents and density differences there are a mess. (Wiggins, M. 1995. Torpedoes in the Gulf: Galveston and the U-Boats, 1942-1943. Texas A&M Univ. Press. 265 pp.) Based on what I’ve read and the veterans I’ve talked to, including merchant marine, I would doubt that there was much concern about accurate temperatures, more about staying warm and surviving.
As for the Kuwait oil fires, I was asked by a student if it was true that they would destroy the climate. I recall deferring some but pointing out it would be difficult for the smoke to get into the stratosphere. Texas oil field firefighters shut them down rather quickly. The 1990s were when the modern end-of-the-world panic started.
Wish it had but no. Intake air temperature in the gulf was 100+ degrees during the day for the first gulf war. If we had followed heat stress monitoring like we were supposed to, stay time in the engine room was a whopping 2 hours. Heat stress monitoring kicked in at >100 degrees so us watch standers refused to record any temperature over 100. FYI, the watch station I mainly stood would record up to 130F once stepping out of the direct ventilation air stream. That’s with 6 complete air exchanges per hour in the space.
If you’re curious why we refused…We had a 3 man watch rotation for each watch station or what we called a five and dime, 5 hours on watch 10 hours off (one 4 hour watch to keep it in the 24 hour day). Absolutely no one wanted to go down to 2 hours on, 4 hours off watch. Talk about getting no sleep! Also we were pretty sure command would implement the watch rotation but consider off watch time as still time to be down in the plant working because regs did not specifically call out off watch time meant off work time. Our work center was in the steam plant so getting off watch wouldn’t mean squat for escaping the heat.
The winter of 1944-45 was also brutal in Europe.
There was a great deal of increased war shipping in the equatorial Pacific.
I couldn’t help but notice the increased readings coincide with the increased numbers of ships (warships) as WW2 continued, only to rapidly fall as ships were decommissioned … starting around 1945.
It was all them bombers. Flying loaded to Germany, releasing all that explosive pressure in Germany, empty flying back, setting some polar vortex, gotta be.
Humor – a difficult concept.
— Lt. Saavik
I’m thinking that folks generally in 1939 – 1945 had more to worry about than whether the temperature was a poofteenth higher than it was 30 years ago, and ditto the sea levels.
+”a poofteenth”
There are images from the Battle of the Bulge and the German invasion of Russia (Barbarossa) that are striking. Soldiers frozen standing after being hit with bullets.
Seems it might have been a wee bit cool during the winter back then.
“The ICOADS anomalies are generated by subtracting the 1961-1990 mean value from all the final raw simple yearly means.”
Lesson umpteen in the boring series of trying to get Andy May to understand anomalies. This is not how you calculate them. You always subtract from each monthly reading the mean for that time and place.
The reason is that raw temperatures are not homogeneous. There are hot and cold places, and hot and cold times. So in taking an average it really matters how you sample. If some cold places have missing values, the average will go up, when it shouldn’t. And there were lots of missing values in WWII, which is why you get that crazy behaviour. It’s just bad maths.
If you subtract out the expected value for each time and place, there aren’t hot and cold places any more. You get a much more robust result, which is not nearly so sensitive to missing values. That is why HADSST and ERSST get reasonable behaviour in these hard times. It isn’t some special adjustment. It is just correct computation of the average anomaly.
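To make this point concrete, here is a toy two-cell example with invented numbers, showing how a missing cold cell inflates a raw average while leaving an anomaly average untouched.

```python
import numpy as np

# Toy illustration with invented numbers: a warm cell (climatology 28 C) and a
# cold cell (climatology 2 C), each running 0.5 C above its own climatology.
clim = np.array([28.0, 2.0])
temp = clim + 0.5

print(np.mean(temp))                 # 15.5 -> raw average, full coverage
print(np.mean(temp - clim))          # 0.5  -> anomaly average, full coverage

# Drop the cold cell, as if its wartime observations were missing:
print(np.mean(temp[:1]))             # 28.5 -> raw average jumps 13 C from sampling alone
print(np.mean(temp[:1] - clim[:1]))  # 0.5  -> anomaly average is unaffected
```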
Nick,
Read first, comment later. Your comment is explained and discussed in paragraphs 2 & 3.
Andy,
In para 2 you correctly explain what ERSST and HADSST do. In para 3 you give spurious reasons for not doing it. You just don’t seem to understand about homogeneity. And as usual, you just get very wrong results.
This is spurious?
How exactly is it spurious? In each year and month the ships are different, the locations of the measurements are different, and the cells being averaged can be as large as 12,300 sq. km or more. The elevation changes are a few meters at most on the open ocean. You need to explain how making an anomaly improves things; to me it looks like it simply smears bad data over large areas and hides the truth.
Anomalies make sense on land where the stations don’t move about and the equipment stays the same for long periods, not so much on the ocean.
Yes, it is spurious. To get an anomaly, you subtract from each T the expected value. It doesn’t matter much how you get it, but an average of historic temperatures is pretty good. But you must take out the expected value, so the anomaly that you average is distributed about zero.
Each T is made up of T=E+A, where E is the climate value for that place and time, and A the anomaly. What you do just gives you an average of the E values where T was recorded that month, with all the vagaries of latitude and season. The variability of that swamps the average of A, without those vagaries.
You can actually test that. If you calculate the average as you did, but replacing each T with E (the 1961-90 mean or whatever), you’ll get the same spurious result that you do. But you have taken out all the weather information, which resides in A. Your result just reflects the interaction of missing values with climate E.
Three questions:
I guess I don’t see your point or reasoning. Why isn’t turning them into an anomaly simply masking the real raw measurements?
Andy,
We went through all this back here (more here).
1. You get the anomaly by subtracting the expected value, and that can be done in various reasonable ways. But you are subtracting 0, which is in no way anything like the expected value.
2-3. You are not talking about temperature trends; you are talking about averages over space and time.
There is no masking. Again, you have a field of raw:
T=E+A
where E is the climate for that month and place; all the weather variability is in A. But in averaging T you average a sample of E values, depending on where T was recorded. Because E has big variation, that dominates the result (again, you can test by omitting A) in a way that doesn’t return what you want.
You can see this in your top fig. Before and especially during WWII you get results all over the place. After WWII, when there is much less missing data, your results tend back into agreement with the correct calculation of HADSST and ERSST.
“There is no masking. Again, you have a field of raw:
T=E+A”
As usual, you just can’t seem to get it right.
The equation should be:
T +/- u_t = (E +/- u_e) + (A +/- u_a)
where u is the measurement uncertainty
This becomes A +/- u_a = (T +/- u_t) - (E +/- u_e)
from this you get
A = T - E and
u_a = u_t + u_e (uncertainties always add)
Thus the measurement uncertainty of the anomaly is GREATER THAN the measurement uncertainty of either the estimated long term average or the current short term average (i.e. a monthly average).
If you want the most accuracy you should use the raw data as it is measured. Changing to an anomaly DECREASES your measurement accuracy.
Shifting the stated values of a set of data along the x-axis to center it on zero changes neither the variance nor the standard deviation of the sample means of the distribution. Thus using the anomaly actually provides no benefit from the point of view of statistical analysis of the raw data; it only increases the measurement uncertainty.
Now come back and tell us that all measurement uncertainty is random, Gaussian, and cancels.
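For reference, a short sketch of the two standard ways the uncertainty of A = T - E can be combined: the linear sum used in the comment above (the worst case, appropriate for fully correlated inputs) and the root-sum-square rule usually applied to independent inputs. The numerical values here are placeholders, not figures from any SST dataset.

```python
import math

# Two standard ways to combine the uncertainties of T and E in A = T - E.
# Linear addition (used in the comment above) is the worst-case / fully correlated
# bound; root-sum-square is the usual GUM rule for independent inputs.
# u_t and u_e below are placeholder values.
u_t, u_e = 0.5, 0.3

u_a_linear = u_t + u_e                   # 0.8  (correlated / worst case)
u_a_rss = math.sqrt(u_t**2 + u_e**2)     # ~0.58 (independent inputs)

print(u_a_linear, u_a_rss)
# Either way u_a exceeds both u_t and u_e, which is the commenter's central point.
```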
From your preceding post :
From the post I clicked “Reply” on :
Your “expected”, or “E”, values are monthly averages over an arbitrarily selected 30-year (or longer, e.g. NCEI uses “the 20th century”) period.
Any “average of (a sample of) E values” that is a multiple of 12 months will be a constant.
.
“Mr” gave a good response in words, but I am a more “visual” person.
The BEST GMST reanalysis datasets can be downloaded from the following link :
https://berkeleyearth.org/data/
Expand the “Global Time Series Data” section, then download the “Monthly Global Average Temperature” text file for both “Air” and “Water” versions of their algorithm.
From the header section of that file :
Highlighted are the monthly “expected / E” values for the BEST (Air) time series of numbers.
Attached is a plot of the “anomalies / A” values and the “expected / E” values for the entire dataset (January 1850 to December 2024 at the time of posting), as well as a “zoomed in” plot from January 1975, AKA “The Modern Warming Period”.
When you wrote phrases along the lines of “an average of the E values” above, what notion(s) were you attempting to convey ?
.
.
PS : I find that when I am “blocked” on an idea it is often productive to “take a break” and either have a cup of coffee (with some chocolate biscuits, obviously …) or work on a different subject for a while.
When coming back to the original problem, the process of “getting back up to speed” is frequently interrupted by “profanatory and self-derogatory remarks” (hat-tip E. E. “Doc” Smith) … e.g. “Mark, you are a [ multiple expletives deleted ] idiot !” … as the “obvious” thing that I was missing suddenly became clear to a brain that had been “reset” while thinking about other concepts.
I added a post yesterday, in a “Reply” to you (!), to a now (almost ?) defunct WUWT comments section :
URL : https://wattsupwiththat.com/2025/04/14/lowering-energy-costs-in-america/#comment-4062425
If you need a “distraction” please consider responding to that post to “reset / unblock” your brain, and then returning here to read through this sub-thread again.
“When you wrote phrases along the lines of “an average of the E values” above, what notion(s) were you attempting to convey ?”
For goodness sake, I have said it over and over. T,E and A are numbers for a grid cell/month. Everything must be done before you get to the aggregated figures that you quote from BEST. These are cell level calculations – I cited the data source. I don’t think you have understood anything at all.
On your ps I have responded back there. Although I clearly stated that perplexity was using 2022 numbers, you quoted 2023 WA numbers in an attempt to contradict. The 2022 numbers match up very well.
I object to the “anything at all” qualifier, but you are essentially correct, I did not understand your “argument”.
That is why I decided to ask some questions, in order to seek clarification.
Re-reading my posts I noted that I had added the following highlighting to an extract from one of your posts :
“… you have a field of raw T=E+A where E is the climate for that month and place; all the weather variability is in A.”
With the benefit of hindsight my sub-conscious had picked up on that “detail”, but I was unable to clearly / coherently formulate my subsequent “request for clarification”.
I do not know if BEST calculates anomalies per weather station / grid cell first and then the global equivalent numbers I copied from their file header afterwards, or if they calculate the absolute monthly GMST values first before deciding what Reference Period to use.
In either case my point … or one of them, at least … is that your generic statements about “T, E and A” do not apply to that specific counter-example.
.
I have just come here after responding to your response.
Your inferences about my “motivations” are incorrect, but unlike far too many other posters here you took the time to dig out some actual data to show where my honest mistake had been made.
That effort, despite your obvious frustration with my lack of comprehension, is greatly appreciated.
Note that the final formula I ended up with for 2022 was :
“Renewables” = Wind + Solar + [ Hydro + Pumped Storage (PS) ] + Wood + Biomass + Geothermal
The attached screenshot may help you understand why I decided to add those particular elements to the mix in an effort to reduce the “residuals” as much as possible.
From the post:”…subtracting the expected value…”
Where do you get this expected value?
Wearily, by averaging over the years 1961-90. Andy’s choice – I would have used 1995-2024.
Andy, you’re getting sidetracked and complicating about how climate temps anomalies are arrived at.
The correct way is, as I think Nick is trying to explain –
you just subtract the “expected” number.
So to simplify –
you just make shit up.
(Peer reviewers in climate papers will never notice anyway, and even if one did, “The Cause” dictates that they just ignore it).
you just make shit up.
Yep, that is what they have done.
The data just DOES NOT EXIST to get anything remotely real in the way of “ocean” temperature before ARGO was in place.
That’s right, the data does not exist.
Don’t you love how they confidently predict temperatures when they have no data!
The whole Climate Change Scam is based on making stuff up.
From a statistical analysis point of view, the precision with which these “average” temperatures are determined is “undefined”, even with the Argo floats. Too few samples over too long of a time covering too large of an area.
As usual, the adherence to statistical analysis protocols is non-existent in climate science.
Andy has said that some of these measurements represent an area as large as 12,000 sqkm. Assume that one measurement per 1000 sqkm would be a representative population and would give a good idea of the actual average temperature of that 12,000 sqkm. Since we don’t have the population values in order to calculate a standard deviation for the population we can estimate the population standard deviation by using the sample standard deviation, i.e. a sample size of 1. Now to calculate the standard deviation estimate from a sample you divide by (n-1) – which means the standard deviation of the sample is undefined (divide by zero) and, therefore, the standard deviation of the population can’t be determined or estimated. If the standard deviation of either the population or of the sample can’t be determined then you can’t calculate the SEM of the average – meaning you have no idea if that one sample of size 1 represents the actual area or not.
But Nick and climate science just assume that an undefined SEM is equivalent to an SEM of zero – meaning that sample of 1 measurement is a 100% accurate estimate for the entire area.
A high school sophomore taking AP Statistics would catch this and question the legitimacy of the analysis.
BTW, on average Argo floats sample about 100,000 sqkm of the oceans. (Density is higher in the NH than the SH so this is just an average). I’m not sure of the time it takes to cover a sample area, but the floats only report every 10 days or so and I assume it takes a number of months to fully cover the area, so large areas still have a paucity of measurements. This means that, as above, how well the data actually determines “average” temperatures of the ocean is pretty much undefined – but climate science equates undefined with a zero SEM which, in turn, means a highly accurate estimate!
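A tiny sketch of the n = 1 point made above: with a single reading, the sample standard deviation, and hence any SEM built from it, is undefined. The reading value is an arbitrary placeholder.

```python
import numpy as np

# With one reading per cell the sample standard deviation (ddof=1) divides by
# n - 1 = 0, so NumPy warns and returns nan; no SEM can be formed from it.
# The value 20.0 is an arbitrary placeholder temperature.
one_reading = np.array([20.0])

print(np.std(one_reading, ddof=1))               # nan
print(np.std(one_reading, ddof=1) / np.sqrt(1))  # nan -> SEM undefined for n = 1
```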
I was going to try to comment on the idea of the expected number but now I don’t have to.
Let’s discuss measurement uncertainty for a minute using this equation. Rewriting it we end up with:
A = T – E
Now, if both T and E are measurements with a probability distribution, they have both a mean and an uncertainty value.
Looking at NIST TN 1900 as an example (note that it is not a complete assessment of uncertainty but is useable for an example), the monthly average has a standard deviation of 4.1°C and a standard deviation of the mean of 0.872°C. From my investigations, a 30-year baseline has a standard deviation of ~2.96°C and a standard deviation of the mean of ~0.55°C.
These are both random variables with probability distributions. When subtracting, “T – E”, the variances add. This gives a combined standard deviation of
√[(4.1)² + (2.96)²] ≈ 5.06°C
Converting to a standard deviation of the mean by dividing by √30 gives ±0.92°C.
Since these are not repeatable measurements of the same thing, nor are they using the same instruments, operators, etc., one should use the standard deviations and not the standard deviation of the mean.
Using error bars of even ±0.92°C would preclude any substantial conclusions about what the actual anomaly values are.
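A quick check of the arithmetic in this comment (the 4.1 and 2.96 inputs are the commenter's figures, taken at face value rather than verified against NIST TN 1900):

```python
import math

# Reproducing the combination and scaling steps described in the comment above.
s_month, s_baseline = 4.1, 2.96            # standard deviations, degrees C

s_combined = math.sqrt(s_month**2 + s_baseline**2)
print(round(s_combined, 2))                # 5.06 C

sem = s_combined / math.sqrt(30)
print(round(sem, 2))                       # 0.92 C, matching the comment
```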
raw temperatures are not homogeneous.
There are hot and cold places, and hot and cold times.
If some cold places have missing values, the average will go up, when it shouldn’t.
And there were lots of missing values in WWII, which is why you get that crazy behavior.
This is what sounds spurious to me. Then again, I’m not a mathematician.
Nick, the Earth is losing energy, about 44 TW.
That’s defined as cooling. You believe slow cooling is really increasing temperatures. You must be mad.
44TW, even if correct, is tiny compared to the size of the Earth.
Secondly, thanks to the fact that the Earth’s core is radioactive, the Earth is generating heat almost as fast as it is losing it.
Mark, the fact that the Earth is cooling means that it is getting colder. Very slowly.
You don’t seem to be disputing anything I wrote, just trying to imply that slow cooling really means “getting hotter”.
As you rightly point out, the radiogenic heat is not quite enough to replace the ongoing heat loss.
Or did you really mean to say that you think that adding CO2 to air makes it hotter? That would be rather silly, wouldn’t it?
So Nick, tell us about the Argo buoys being continually in the same time and place, as well as the ships. 😉
ERSST is computation of the anomaly? LOL.
They don’t need to be in the same place. Andy starts with an average temperature for a grid cell for a month. What you need is a climate estimate for that grid cell/month. So you use the average 1961-90 or whatever. But you subtract that estimate before you do anything else.
Before ARGO there is basically ZERO coherent whole-of-ocean data… period !
It is all meaningless fakery.
But the grid cells at 12300 sq km? That is not fit for the porpoise. 😉
The purpose is to get a global average. Not a problem.
A global average where the long term data is at the very least suspect. Sparse data cannot be infilled. 😉
Boy, do you have a lot to learn about climate science!
Or homogenized or interpolated.
Probably better than the area of some surface sites.
Still a complete and absolute joke.
There are very few readings in the Southern hemisphere or even the NH before about 1960.
Any values before then, or even after then, are pure guesswork, and basically meaningless.
Alarmist Climate Science is practically all guesswork.
It’s not good science.
I agree, although I don’t think the network was adequate for the task until the middle 1990s. And, since 70% of Earth is covered with ocean, this also applies to the global average surface temperature.
So if you don’t think that we can know the global surface temperature before the 1990s, does that mean that you believe in the Medieval Warm Period?
I’m not sure the two are related and how you can connect them is beyond my understanding. We do not know the global average surface temperature before the 1990s, that includes the Medieval Warm Period. However, glacial records show that Northern Hemisphere glaciers and glaciers in the Andes and in New Zealand retreated from around 800 to 1200 AD, thus the Medieval Warm Period did exist, but we do not know the global average temperature during it.
There is no doubt from biological evidence that the MWP was at least as warm as now.
Evidence from all around the world is available.
Well, Nick, you’re right about one thing: “It’s just bad maths.”
I have no confidence in corrections. Give us the data as measured; if you think there are problems with the measurements, tell us and explain why you think there are problems and what caused them. Don’t fool around with the measurements and let people believe your corrections represent what you measured; that is dishonest and not helpful.
The winter was Germany’s biggest enemy in the war with Russia.
Grok’s evaluation of this post. Either grok is biased or Andy May is hiding things deliberately.
Strengths
Weaknesses
Broader Context
Potential Counterpoints
TL;DR: The article highlights a WWII-era SST spike in raw ICOADS data, missing in ERSST and HadSST, and questions data processing methods. It provides historical context but is limited by a skeptical bias, lack of technical depth, and failure to address data correction rationales. Hosted on a climate denial blog, it aligns with critiques of mainstream climate science.
J K is hallucinating again.
GROK as a source of anything!
Is GROK an AI? What literature was used for its learning? Clearly, literature that claims WattsUpWithThat is a denier website.
The very fact that Grok states WUWT promotes “climate change denial” suggests Grok has an inherent bias itself.
Now go back and ask Grok to repeat the analysis without the bias.
I will do that next time
There shouldn’t be a “next time.”
Sure you will.
The same as, next time you’ll think for yourself.
Relying on Artificial Intelligence scraping the farcical anti-science of the web ??.
But where is your own intelligence?
Mr. 2000: He evidently chose not to use it when he got the desired result. Using JK’s “analysis”, his comment is posted at a wrong-think website (WUWT, ick) and therefore is wrong. But he put alotta lipstick on it!
If a true transcript then this just shows the (possibly sub-conscious ?) biases of Grok’s programmers.
These things about WUWT are “known” by whom ?
Those “descriptions” were written by whom ?
On the subject of “scientific consensus” one of the best rebuttals was that of Michael Crichton :
.
In a WUWT article / comment : 2 + 2 = 4
Grok : WUWT is a blog known for promoting climate change denial …
J K : Mathematicians ! You have to rewrite the laws of integer arithmetic ! ! !
Mathematicians (startled) : Errrrrrr … why, exactly ?
J K : Because WUWT is a denialist blog ! ! !
Mathematicians (+ World + Dog) : [ … backs away slowly … ]
.
Some time later, in a second WUWT article / comment : E = mc²
Grok : WUWT is a blog known for promoting climate change denial …
J K : Physicists ! You have to find an alternative to Special Relativity ! ! !
Physicists (startled) : Errrrrrr … why, exactly ?
J K : Because WUWT is a denialist blog ! ! !
Physicists (+ World + Dog) : [ … backs away slowly … ]
.
Please try again, but using your own thoughts and analytical skills this time.
But being told what to think and mindlessly parroting liberal talking points is so much easier. Now they even have “AI” to even prepare all their responses for them.
“Watts Up With That? Reputation: The blog is described as a leading climate change denial platform”
Described by whom? Does Grok have a source? Grok seems to have been fed climate alarmists propaganda and accepts it as truth.
Get brainwashed much, Grok? Your handlers are lying to you.
Described by Wiki.
’nuff said.
Mr. K: So which is it, grok biased or the author is hiding things? Didn’t you ask? The answer would be just as fascinating as your above comment. Unfortunately, I must discredit your comment due to reading it at this site. Right?
Reading your comment is a mistake I won’t make again.
Kreutz here (Kreutz, W. “Kohlensäure Gehalt der unteren Luftschichten in Abhängigkeit von Witterungsfaktoren,” Angewandte Botanik, vol. 2, 1941, pp. 89- ) measured several climate indicators (temperature, wind, precipitation, CO2, radiation) several times every day for 1.5 years. From his graphs it is clear the temperature precedes CO2, and CO2 levels were similar to the present-day level (about 400 ppm).
This paper by Dr Massen and Ernst-Georg Beck (Accurate estimation of CO2 background level from near ground measurements at non-mixed environments) looks at the CO2 results of Kreutz and measurements by others to show how wind influences results to give accurate measurement. The latter paper is available at ResearchGate https://www.researchgate.net/publication/234004309.
Ocean surface temperatures clearly have a measurement problem. E-G Beck also looked at the relation between OST and CO2. As well as I remember, there is a lag of atmospheric CO2 behind OST changes of up to 5 years. There is a longer lag in ice core data, even though there are doubts about the CO2 in ice cores. There is no evidence that CO2 has any effect on atmospheric temperature or on surface temperature.
The peak raw data anomaly is in 1944 (+2.14°C), yet the 1944 ERSST value is 0.091°C, thus the correction in 1944 is over 2°C. Is this realistic?
Yes. There is no physical mechanism that could warm the oceans by 2 degrees over a period of a couple of years without also warming the land temperatures by a much larger amount, given the relative heat capacities of water and air. Also, the temperature anomaly dropped by 2 degrees in less than a year in 1946, and again the question has to be: where did that energy go? That temperature anomaly is too big and over too short a time period to be physically reasonable.
I tend to agree that a two-degree shift in a few years is unrealistic. I just question the corrections; simply lowering the value to the existing trend looks bad. We have two choices: 1) the correction procedure is ad-hoc and useless, or 2) the raw data is so bad it shouldn’t be used.
Either way, the ERSST and HadSST records for the war (and probably before the war) should not be used. Thus, our useful global average temperature record only goes back to around 1950 at most. My personal opinion? Only back to the early 1990s.
“1) the correction procedure is ad-hoc and useless. 2) The raw data is so bad it shouldn’t be used.”
100%
“My personal opinion? Only back to the early 1990s.”
Even after the 90’s the data is so sparse that it is statistically unsound. The Argo floats cover an average area of around 100,000 sqkm each and take several months to do so. Jamming this data into one data set and pretending it is a representative sample is literally garbage.
SSTs near Scotland show something similar.
Many places in the NH, the only hemisphere where any widespread measurements exist, USED to have a similar pattern with a strong peak in or around 1940.
Eg , several places in the Arctic.
Maximum temperatures in the USA do the same thing.
Somebody should ask one or more of these AI to tell us how Phil Jones got his sea surface temperature data.
Phil Jones wouldn’t let us look at his methods because he said someone might criticize them, but maybe an AI can figure out how he bastardized the temperature record after the end of the Little Ice Age in 1850.
Phil Jones bastardization of the instrument-era temperature record erased the past warm high spots in the 1880’s and 1930’s, and erased the significant cooling of the 1970’s (Ice Age Cometh!), in his efforts to make the temperature record track the CO2 increases, thus making it appear that CO2 is causing the temperatures to get hotter and hotter and hotter and shows today as the hottest time in human history. Made to order for Climate Alarmists.
And it’s all a BIG LIE. It’s not any warmer today than it was in the recent past, and we have actual evidence to prove this in the regional, historic, original temperature records from around the world, which all show it was just as warm in the recent past as it is today.
This being the case, there is no evidence that CO2 has affected Earth’s temperatures because it is no warmer today with more CO2 in the air than it was in the recent past with less CO2 in the air. Therefore, CO2 has had no visible effect on the Earth’s temperatures.
This is what Phil Jones wanted to hide with his bastardization of the instrument-era temperature record. He wanted to hide the fact that CO2 cannot be shown to be heating up the planet.
Andy,
One might expect large land masses to have a wartime hot blip if the oceans were showing one around them.
A quick look at Australian land heatwave data shows a little evidence for 1939 and/or 1940 having very hot 10-day heatwaves at 4 of the 10 stations I studied.
Adelaide, Brisbane, Melbourne and Sydney had quite hot 10-day heatwaves in one or two of these years, while stations at Alice Springs, Darwin, Hobart, Longreach, Cape Leeuwin and Perth did not show similar anomalous warming in wartime.
Viewed another way, the average daily Sydney Tmax temperature for 1856-2024 is 21.8C, while for 1939-1945 it is 21.6C. For Melbourne, 19.9C versus 19.8C overall. Not much there to suggest a link to ocean temperatures.
This is no ideal way to study the matter. There is so much unexplained noise in the raw daily land data that all sorts of odd things could have happened, but cannot be linked to specific events. The same is the case for ocean temperatures.
Geoff S
Andy,
After study of questions like this for 30 years, my only reasonable conclusion is that the available data prior to 1945 is unfit for the purpose.
I welcome arguments to the contrary.
Geoff S
No argument from me! The only question is when does the data become good enough? 1995? 2007? After 2007?
Unfortunately for people like Nick and others, their argument is, “it’s the data we have so we have to use it.” regardless of whether it’s fit for purpose or not. Actually, it’s beneficial to their cause because they can point out the questionable data and therefore justify any number of questionable adjustments until the data “confesses” what they want it to.
The Oceans Govern Climate is a website by Arnd Bernaerts, oceanographer and naval war historian.
Man influences the ocean governor by means of an expanding fleet of motorized propeller-driven ships. Naval warfare in the two World Wars provides the most dramatic examples of the climate effects.
http://oceansgovernclimate.com/wp-content/uploads/2017/02/20_1.jpg
Neither I nor Dr. Bernaerts claim that shipping and naval activity are the only factors driving climate fluctuations. But it is disturbing that so much attention and money is spent on a bit player CO2, when a much more plausible human influence on climate is ignored and not investigated.
https://rclutz.com/2017/02/18/ocean-climate-ripples/
May I ask whether anybody has considered the convoy effect, where the ships behind are eating the front ships’ water that has been churned and warmed by the screw motions, not to mention containing oil and other contaminants such as exploded ship contents?
A useful exercise might be to analyse a convoy’s spatial distribution of readings and see if there is a gradient from leaders to laggards and how things change temporally as convoys speed up or slow down or mill about.
As I noted above, if you average raw temperatures, as Andy has done, you will get spurious results, which reflect not the weather, but the interaction of missing values with long term climate variation, both spatial and seasonal. If there are missing values in a month in cold places, the result is warmer, for example.
A test is to do that bad arithmetic, and then repeat it replacing the T data at each grid/month by a 1961-90 (say) average. Now there is no weather information. A grid cell in June has the same value for each year, except where T was missing.
So I did that with one of the datasets Andy is using – the HADSST actual temperatures. It is the orange curve in the graph here, from the predecessor to this post:
Here is my calc, with the average T calculated in the same way in black. Then the average calculated with the data held constant at average values, as described above, in red:
The pattern is the same, including the WWII spike. But the red curve has no weather. It just has fixed T with the same set of missing values. The spike at WWII occurs because of missing values in colder waters.
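For anyone who wants to reproduce the test described above, here is a sketch of the idea under assumed array shapes; it is not Nick Stokes' actual script, and area weighting is omitted.

```python
import numpy as np

# Sketch of the "no weather" test described above, under assumed shapes:
# sst is (n_years, 12, n_lat, n_lon) with NaN for missing cells.

def simple_global_mean(field):
    """Average over months and cells, skipping missing values (no area weighting)."""
    return np.nanmean(field, axis=(1, 2, 3))

def no_weather_test(sst, years, base=(1961, 1990)):
    in_base = (years >= base[0]) & (years <= base[1])
    clim = np.nanmean(sst[in_base], axis=0)          # per cell/month climatology

    black = simple_global_mean(sst)                  # average of the raw T field

    frozen = np.where(np.isnan(sst), np.nan, clim)   # same missing-value mask, but every
    red = simple_global_mean(frozen)                 # value replaced by its climatology

    # If 'red' reproduces the WWII spike, the spike reflects which cells were
    # sampled, not the weather recorded in them.
    return black, red
```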
Interesting exercise, but aren’t you just confirming my analysis? Your exercise shows that you can get two wildly different results depending upon the sample chosen. I can interpret this as meaning the SST record up to the end of the 1940s, at least, is meaningless. Remember the final cell size for HadSST is 5 degrees on a side or 308,025 sq km. This is almost the area of Poland for one cell!
They report in 5 deg cells but work with one-degree cells and they do this to get more apparent coverage.
“Your exercise shows that you can get two wildly different results depending upon the sample chosen.”
Yes, with bad arithmetic.
Again, you can write T as C+A, where C is the climate (say 1961-90 average for that month and grid) and A is the anomaly, which contains all the information you want about varying weather. The average of T is the sum of the averages of C and A. Now C is constant over years, so its average should be constant. But with missing data you are effectively sampling, and with C being so inhomogeneous (tropics, poles etc), that gives very variable results, which are telling you about your process rather than the state of the Earth. And this exercise returns the average of T and of C, and shows that the process result dominates. What you want to know is the average of A.
To get that, you have to identify C and subtract it, before averaging. That is what ERSST etc are doing, in your example, and free of the wild swings that you have. Those swings are just from the attempt to average C (where the average of C should be constant).
Temperature is not climate. If it was then Las Vegas and Miami would have the same climate.
“Now C is constant over years, so its average should be constant.”
What are the dimensions of “C”? Degrees celsius?
T, C and A have the same units, necessarily.
I seriously doubt we know C; we certainly do not know it for every one-degree cell or two-degree cell, and even if we did, what is its relationship to the measurements for each year? We don’t know that either; the areas are too large (at the equator a one-degree cell is over 12,000 sq km and a two-degree cell is nearly 50,000 sq. km) and the SST varies too much in each cell.
Your assumptions overreach your data and understanding, which is exactly my point.
“Your assumptions overreach your data”
You just have no idea about the arithmetic. Of course we know C. It is the mean over 30 years of the grid average. You could choose 1995-2024 if you like. With 30 years of observation it is ridiculous to say we have no idea of the climate average.
The basic point is that you are averaging T=C+A and C is constant over years for and grid. But C averaged, as in my exercise (red curve) and as you do it, is far from constant. And the error made in that bad calculation dominates your calculation and appears all the way through. It is just a very bad way to average C because of the missing values, and there need not be any. You know the missing values because of the constancy of C.
A proper way to average T would be to average C and A separately, and add. A you could do by your method, but with C you could infill and get a correctly constant result over years. Much better, but not very interesting, which is why people just talk about the anomaly average, which is where the information is. But you could do it.
Of course A should be infilled too, but that is another story. Failure to infill A isn’t as harmful, because of its homogeneity.
Arithmetic has nothing to do with the problem. The problem is that the climatology is incomplete and inadequate; further, it is not constant, since 30 years is only about half of the global climate cycles of around 65 years. So even the choice of the 30-year period matters.
Most cells will only have readings for part of the 30-year period anyway, and which years they have readings matters. This is probably the main problem with the WWII debacle.
When building the climatology, cells with no data must be infilled as you say, but the value isn’t correct. Further, as I’ve said repeatedly, the cells are too large and there is a lot of variation in temperature within each cell. Multiple thunderstorms can fit in one cell.
Basically, we don’t know anything about SST variation during WWII, it is just assumed.
Andy, it is just a matter of arithmetic, and your objections are spurious. Back to basics – T is in my case a 72x36x12x175 array. That is, lon, lat, month, year. You can split it any way you like into C and A, so that T=C+A. If you apply an averaging process to C and A separately, and add, you will get the same as averaging T by the same process.
But we can apply some constraints. C should be constant over years. Then you don’t need missing values any more; it is defined constant and can be exactly infilled. So how to choose constants for C? Just to make A small. Then the error of averaging A with NA’s will be much less than averaging T. Any reasonable approximation to the mean of T over years assigned to C will have that property. The A’s have a spread of a degree or so, and the error must be a fraction of that. Choosing a good mean will lead to better cancellation, but that is icing on the cake.
Note that I haven’t said anything about what anomalies should really mean, although by the definition they contain all the weather information, including AMO etc. I’ve just presented it as a better way to average T. It really is just arithmetic.
Here it is graphically. As before, black is T averaged your way. Red is C averaged your way, and the difference is the average of A. Green is the correctly infilled average of C. It is periodic over a year, because months are done separately. The blue curve is the sum of the correct average of C and the average of A=T-C. Note that it is very like the ERSST average in your graph, including being about 2C cooler. Your average was boosted because cold places have a lot of missing values. ERSST didn’t do any fancy adjustments. They just averaged C in the obvious way.
The red, blue and black curves have 60 month smoothing. The green is not smoothed, to emphasise the seasonality of the climatology.
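A sketch of the decomposition described in this comment (the "blue curve" procedure), again under assumed shapes and without area weighting: average the infilled climatology C and the anomaly A separately, then add.

```python
import numpy as np

# Sketch of the decomposition described above: T = C + A, where C is the per
# cell/month climatology (defined everywhere, so it can be averaged without gaps)
# and A is averaged over whatever observations exist.

def decomposed_global_mean(sst, years, base=(1961, 1990)):
    in_base = (years >= base[0]) & (years <= base[1])
    clim = np.nanmean(sst[in_base], axis=0)             # C: (12, n_lat, n_lon)

    mean_C = np.nanmean(clim)                           # one constant, same every year
    mean_A = np.nanmean(sst - clim, axis=(1, 2, 3))     # yearly anomaly average, NaNs skipped

    return mean_C + mean_A                              # absolute-temperature series per year
```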
“because of missing values in colder waters”
The whole ocean is 97%+ “missing values” before maybe the 1980s, or maybe even before 2005 and ARGO.
It is total idiocy to think anyone can calculate a realistic or meaningful “global mean” from it.
Precisely. Random ships and buoys crossing into a 1×1 or 2×2 deg cell do not provide measurements that can be compared to random ships passing through the same 12,000 or 50,000 sq km patch of ocean in 1961-1990 or part of that period.
“1961-1990″
Those years were your choice. You can use 1995-2024, say.
True you can use any period. The Hadley Centre convention is the 30-year period 1961-1990. Some use the most recent even decade 30-year period, so that would be 1991 to 2020. ERSST and UAH follow this convention. I actually disagree with 30 years since that is half of the critical AMO cycle of about 65 years or the stadium wave of about the same length, thus 30 years can be either the upswing of the AMO or the down swing, but I follow convention.
I hear that Phil Jones and John Kennedy stick to their original 30-year period because they can’t remember how they constructed the original 1961-1990 climatology and would never be able to repeat it. They go through a very complex process. See here:
https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2018JD029867
The process for ERSST and UAH is simpler and easier to move. More recent 30-year periods are better since they have more data in more locations.
Well it was presaging WW2-
Black Friday bushfires – Wikipedia
Which meant the nation had to put off establishing a serious rural firefighting response until after the war-
Black Friday 1939 | CFA (Country Fire Authority)
We learn by experience and as resources permit although some are curiously slow on the uptake it seems-
Electric vehicles are a DANGER to shipping | MGUY Australia