The U.S. National Temperature Index, is it based on data? Or corrections?

By Andy May

The United States has a very dense population of weather stations; data from them are collected and processed by NOAA/NCEI to compute the National Temperature Index. The index is an average temperature for the nation and is used to show whether the U.S. is warming. The data are stored by NOAA/NCEI in their GHCN, or “Global Historical Climatology Network,” database. GHCN-Daily contains the quality-controlled raw data, which is subsequently corrected and then used to populate GHCN-Monthly, a database of monthly averages, both raw and final. I downloaded version 4.0.1 of the GHCN-Monthly database on October 10, 2020. At that time, it had 27,519 stations globally, and 12,514 (45%) of them were in the United States, including Alaska and Hawaii. Of the 12,514 U.S. stations, 11,969 are in “CONUS,” the conterminous lower 48 states. The current station coverage is shown in Figure 1.

Figure 1. The GHCN weather station coverage in the United States is very good, except for northern Alaska. There are two stations in the western Pacific that are not shown.

We have several questions about the land-based temperature record, which dominates the long-term (~170-year) global surface temperature record. The land-based measurements dominate because sea-surface temperature measurements were very sparse until around 2004 to 2007, when the ARGO network of floats became complete enough to provide good data. Even in 2007, the sea-surface gridding error was larger than the detected ocean warming.

Ocean Warming

We have estimated that the oceans, which cover 71% of the Earth’s surface, are warming at a rate of 0.4°C per century, based on the least squares linear trend shown in Figure 2. This is a very rough estimate, based only on data from 2004 to 2019 and on temperatures from the upper 2,000 meters of the oceans. The data before 2004 are so sparse we didn’t want to use them. The error in this estimate is roughly ±0.26°C from the surface to 2,000 meters, and unknown below that.

Argo measurements of ocean temperature at 2,000 meters are a fairly constant 2.4°C. So, we assumed a temperature of 0.8°C at the average ocean depth of 3,688 meters (12,100 feet) and below. For context, the freezing point of seawater at 2,900 PSI (roughly 2,000 meters or 2,000 decibars) is -17°C. The value of 0.8°C is from deep Argo data as described by Gregory Johnson and colleagues (Johnson, Purkey, Zilberman, & Roemmich, 2019). There are very few measurements of deep ocean temperatures, and any estimate has considerable possible error (Gasparin, Hamon, Remy, & Traon, 2020). The anomalies in Figure 2 are based on those assumptions. The calculated temperatures were converted to anomalies from the mean of the ocean temperatures from 2004 through 2019. The data used to make Figure 2 are from JAMSTEC. An R program to read the JAMSTEC data and plot it can be downloaded here; the zip file also contains a spreadsheet with more details. Our calculations suggest an overall average 2004-2019 ocean temperature of 4.6°C.
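For readers who do not want to download the full R program, the trend arithmetic behind Figure 2 can be sketched in a few lines. The monthly series below is synthetic, built to mimic the published numbers; the real inputs are the area-weighted JAMSTEC 0-2,000 meter grid means produced by the linked program.

```r
# Minimal sketch of the Figure 2 trend computation, on a synthetic monthly
# series (the real series comes from the area-weighted JAMSTEC grid).
set.seed(2)
time <- seq(2004, 2019 + 11/12, by = 1/12)                           # monthly, 2004-2019
temp <- 4.6 + 0.004 * (time - 2004) + rnorm(length(time), 0, 0.01)   # built to warm ~0.4 C/century
anom <- temp - mean(temp)                                            # anomaly from the 2004-2019 mean

fit <- lm(anom ~ time)                 # ordinary least squares trend
unname(coef(fit)["time"] * 100)        # slope expressed in degrees C per century
```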

Figure 2. A plot of the global grid of ocean temperatures from JAMSTEC. It is built mostly from Argo float and Triton buoy data. JAMSTEC is the source of the grid used to compute these anomalies.

Observed ocean warming is not at all alarming and is quite linear, showing no sign of acceleration. The oceans contain 99.9% of the thermal energy (“heat”) on the surface of the Earth; the atmosphere contains most of the rest. This makes it hard for Earth’s surface to warm very much, since the oceans act as a thermal regulator. Various calculations and constants regarding the heat stored in the oceans and atmosphere are in a spreadsheet I’ve prepared here. References are in the spreadsheet. The oceans control warming with their high heat capacity, which is the amount of thermal energy required to raise the average ocean temperature one degree. The thermal energy required to raise the temperature of the atmosphere 1,000 degrees C would only raise the average ocean temperature one degree.
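The roughly 1,000-to-1 figure can be checked with a back-of-envelope calculation. The masses and specific heats below are rounded textbook values, not the exact numbers in the linked spreadsheet.

```r
# Rough check of the ocean/atmosphere heat capacity ratio, using rounded
# textbook values (the linked spreadsheet has the author's exact figures).
mass_ocean <- 1.4e21     # kg, total ocean mass
cp_ocean   <- 3990       # J/(kg K), specific heat of seawater
mass_atmos <- 5.15e18    # kg, total atmospheric mass
cp_atmos   <- 1005       # J/(kg K), dry air at constant pressure

(mass_ocean * cp_ocean) / (mass_atmos * cp_atmos)
# ~1,080: the energy that would warm the whole atmosphere about 1,000 C
# warms the whole ocean only about 1 C.
```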

I only mention this because, while the land-based weather stations provide us with valuable information regarding the weather, they tell us very little about climate change. Longer term changes in climate require much more information than we currently have on ocean warming. That said, let us examine the GHCN data collected in the United States.

The GHCN station data
In the U.S., and in the rest of the world, the land-based weather stations comprise most of the average temperature record in the 19th and 20th centuries. It is important to know how accurate they are, and how large the applied corrections are relative to the observed warming. A lot of work has been done to document problems with the land-based data. Anthony Watts and colleagues documented numerous problems with station siting and equipment in 2011 with their surface stations project. Important information on this study by John Nielsen-Gammon can be seen here and here. The Journal of Geophysical Research paper is here. Many of the radical changes in NOAA’s U.S. temperature index, and in the underlying database, in the period between 2009 and 2014 are due to the work done by Watts and his colleagues, as described by NOAA’s Matthew Menne in his introductory paper on version 2 of the U.S. Historical Climatology Network (USHCN):

“Moreover, there is evidence that a large fraction of HCN sites have poor ratings with respect to the site classification criteria used by the U.S. Climate Reference Network (A. Watts 2008 personal communication; refer also to www.surfacestations.org).” (Menne, Williams, & Vose, 2009)

Menne, et al. acknowledged Watts and colleagues in their introductory paper to the revised USHCN network of stations, which suggests that the surface stations project was an important reason for the revision. USHCN was a high-quality subset of the full NOAA Cooperative Observer Program (COOP) weather station network. The USHCN stations were chosen based upon their spatial coverage, record length, data completeness and historical stability, according to Matthew Menne. A set of quality control checks and corrections was developed to clean up the selected records, and these are described in Matthew Menne and colleagues’ publications. The main paper is cited above in the boxed quote, but he also wrote a paper to describe their pairwise homogenization algorithm, abbreviated “PHA” (Menne & Williams, 2009a). Stations with problems were removed from USHCN as they were found and documented by Watts, et al. As a result, the original 1,218 USHCN stations dwindled to ~832 by 2020. The dismantled stations were not replaced; the values were “infilled” statistically using data from neighboring stations.

In early 2014, the USHCN subset was abandoned as the source data for the National Temperature Index and replaced with a gridded instance of GHCN, but the corrections developed for USHCN were kept. They were simply applied to all 12,514 U.S. GHCN stations, rather than to the smaller 1,218-station (or fewer) USHCN subset.

NOAA appears to contradict this on another web page, one describing GHCN-Daily methods. On that page they say that GHCN-Daily does not contain adjustments for historical station changes or time-of-day bias, but they note that GHCN-Monthly does. Thus, it seems that the corrections are applied after extracting the daily data, while building the monthly dataset. NOAA does not tamper with the GHCN-Daily raw data, but when they extract it to build GHCN-Monthly, they apply some dramatic corrections, as we will see. Some NOAA web pages hint that the time-of-day bias corrections have been dropped for later releases of GHCN-Monthly, but most explicitly say they are still being used, so we assume they are still in use. One of the most worrying findings was how often, and how radically, NOAA appears to change their “correction” procedures.

The evolving U.S. Temperature Index
The current U.S. “National Temperature Index” draws data from five-kilometer grids of the GHCN-Monthly dataset. The monthly gridded dataset is called nClimGrid and is a set of map grids, not actual station data. The grids are constructed using “climatologically aided interpolation” (Willmott & Robeson, 1995). The grids are used to populate a monthly average temperature dataset, called nClimDiv. nClimDiv is used to create the index.

Currently, the NOAA base period for nClimDiv, USHCN, and USCRN anomalies is 1981-2010. We constructed our station anomalies, graphed below, using the same base period. We accepted all stations that had at least 12 monthly values during the base period and rejected stations with fewer. This reduced the number of CONUS stations from 11,969 to 9,307. No stations were interpolated or “infilled” in this study.
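The anomaly screen described above can be sketched in a few lines of R. The toy station table below stands in for the GHCN-M station data; the real files have a different layout, and the real screen keeps 9,307 of the 11,969 CONUS stations.

```r
# Sketch of the anomaly construction described above, on a toy station table.
# The real GHCN-M files are laid out differently; this only shows the logic:
# 1981-2010 monthly baselines, keep stations with >= 12 baseline values,
# no infilling and no gridding.
library(dplyr)

set.seed(7)
stations <- expand.grid(id = c("A", "B", "C"), year = 1950:2020, month = 1:12)
stations$tavg <- 10 + 8 * sin(stations$month * pi / 6) + rnorm(nrow(stations), 0, 1)
stations$tavg[stations$id == "C" & stations$year > 1975] <- NA   # station C stops reporting

base <- stations %>%
  filter(year >= 1981, year <= 2010) %>%
  group_by(id, month) %>%
  summarise(base_mean = mean(tavg, na.rm = TRUE),
            n_base    = sum(!is.na(tavg)), .groups = "drop")

keep <- base %>% group_by(id) %>%
  summarise(n = sum(n_base), .groups = "drop") %>%
  filter(n >= 12)                                   # at least 12 monthly values in 1981-2010

anoms <- stations %>%
  semi_join(keep, by = "id") %>%                    # station C is dropped, not infilled
  left_join(base, by = c("id", "month")) %>%
  mutate(anom = tavg - base_mean)
```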

Some sources have suggested data outside the GHCN-Daily dataset might be used to help build the nClimDiv monthly grids and temperature index, especially some nearby Canadian and Mexican monthly averages. But NOAA/NCEI barely mention this on their website. nClimDiv contains climate data, including precipitation, and a drought index, as well as average monthly temperature. As mentioned above, the same corrections are made to the GHCN station data as were used in the older USHCN dataset. From the NOAA website:

“The first (and most straightforward) improvement to the nClimDiv dataset involves updating the underlying network of stations, which now includes additional station records and contemporary bias adjustments (i.e., those used in the U.S. Historical Climatology Network version 2)” source of quote: here.

Besides the new fully corrected GHCN-Monthly dataset and the smaller USHCN set of corrected station data, there used to be a third dataset, the original NOAA climate divisional dataset. Like GHCN-Daily and nClimDiv, this older database used the full COOP network of stations. However, the COOP data used in the older Climate Division dataset (called “TCDD” in Fenimore, et al.) was uncorrected. This is explained in a white paper by Chris Fenimore and colleagues (Fenimore, Arndt, Gleason, & Heim, 2011). Further, the data in the older dataset was simply averaged by climate division and state; it was not gridded like nClimDiv and USHCN. There are some new stations in nClimDiv, but most are the same as in TCDD. The major differences between the two datasets are the corrections and the gridding. Data from this earlier database is plotted as a blue line in Figures 6 and 7 below.

The simple averages used to summarize TCDD ignored changes in elevation, station moves and other factors that introduced spurious internal trends (discontinuities) in many areas. The newer nClimDiv monthly database team claims to explicitly account for station density and elevation with their “climatologically aided interpolation” gridding method (Fenimore, Arndt, Gleason, & Heim, 2011). The methodology produces the fully corrected and gridded five-kilometer nClimGrid dataset.

nClimDiv is more useful internally, since the gradients in temperature, precipitation and drought within the United States are more accurate and contain fewer discontinuities. But, as we explained in previous posts, when nClimDiv is reduced to a yearly conterminous U.S. (CONUS) temperature record, it is very similar to the record created from the older, official USHCN dataset, when both are gridded the same way. This may be because, while nClimDiv has many more weather stations, the same corrections are applied to them as were applied to the USHCN stations. While USHCN has fewer stations, they are of higher quality and have longer records. The additional nClimDiv stations, when processed the same way as the USHCN stations, do not change things, at least on a national and yearly level. As noted in a previous post, stirring the manure faster, with more powerful computers and billions of dollars, doesn’t really matter for widespread averages.

There are good reasons for all the corrections that NOAA applies to the data. The gridding process undoubtedly improves the usefulness of the data internally. Artificial mapping discontinuities are smoothed over and trends will be clearer. But the corrections and the gridding process are statistical in nature; they do nothing to improve the accuracy of the National Temperature Index. If a specific problem with a specific thermometer is encountered and fixed, accuracy is improved. If the cause is not known and the readings are “adjusted” or “infilled” using neighboring thermometers or a statistical algorithm, the resulting maps will look better, but they are no more accurate.

The move from USHCN to nClimDiv for the National Temperature Index
How much of the National Temperature Index trend is due to actual warming, and how much is due to the corrections and the gridding method? How much error is in the final temperature anomaly estimates? Decades of criticism and NOAA’s revisions of the calculation have not answered these questions or changed the result. Figure 3 shows the National Temperature Index, extracted from the NOAA web site on November 18, 2020. Both the USHCN and the nClimDiv computations are plotted. Remember the slope of the least squares line, 1.5°C per century; it will be important later in the post.

Figure 3. The nClimDiv and USHCN climate anomalies from the 1981-2010 average. The data was downloaded from their web page. Both datasets plotted are from grids, not station data. CONUS is an abbreviation for the lower 48 states, the conterminous states.

It has long been known that the National Temperature Index does not follow the underlying published data. Anthony Watts has reported this, as have Jeff Masters, Christopher Burt, and Ken Towe. The problems exist in both the GHCN data and in the USHCN data as reported by Joseph D’Aleo. Brendan Godwin suspects that the “homogenization” algorithms (see the discussion of PHA above) in use today are to blame. When the “corrected” data has a very different trend than the raw data, one should be skeptical.

Anthony Watts does not believe that the underlying problems with the full COOP network of weather stations have been fixed, as he explained here last year. He believes that NOAA is “sweeping the problem under the rug.” The data plotted in Figure 3 is fully corrected and gridded; it is not a plot of station data. In Figure 4 we plot the fully corrected station data in blue and the raw station data in orange from the CONUS portion of GHCN-Monthly. This is the same data used to build the nClimDiv curve plotted in Figure 3, but Figure 4 is actual station data.

Figure 4. The orange line is the uncorrected monthly mean temperature, which is “qcu” in NOAA terminology. The blue line is corrected, or NOAA’s “qcf.”

Figure 4 shows the actual measurements from the stations; these are not anomalies and the data are not gridded. The raw data shows CONUS is cooling by 0.3°C per century, while the corrected data shows CONUS is warming by 0.3°C per century. These lines, like all the fitted lines in this post, are Excel least squares trend lines. The lines are merely meant to identify the most likely linear trend in the data, so the R² is irrelevant; we are not trying to demonstrate linearity.
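The per-century slopes quoted here, and the difference series discussed next, come from ordinary least squares fits of the kind sketched below. The two series are synthetic stand-ins built to mimic the published slopes, not the actual GHCN-M qcu and qcf monthly means.

```r
# Sketch of the Figure 4 / Figure 5 arithmetic on synthetic stand-ins for the
# GHCN-M raw (qcu) and corrected (qcf) CONUS monthly means.
set.seed(1)
time  <- seq(1895, 2020, by = 1/12)                  # monthly time axis in decimal years
noise <- rnorm(length(time), 0, 1)
raw   <- 11 - 0.003 * (time - 1895) + noise          # built to cool ~0.3 C/century
final <- 11 + 0.003 * (time - 1895) + noise          # built to warm ~0.3 C/century

trend_per_century <- function(y) unname(coef(lm(y ~ time))["time"] * 100)

trend_per_century(raw)            # ~ -0.3
trend_per_century(final)          # ~ +0.3
trend_per_century(final - raw)    # ~ +0.6, the part of the trend contributed by corrections
```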

The difference between the two curves in Figure 4 is shown in Figure 5. The slope of the difference is a warming trend of 0.57°C per century. This is the portion of the warming in Figure 3 directly due to the corrections to the measurements.

Figure 5. This plots the difference (Final-Raw) between the two actual station temperature curves in Figure 4. As can be seen, the difference between the final and raw curve trends, since 1890, is about 0.8°C, roughly the claimed warming of the world over that period.

To many readers Figure 4 will look familiar. Steven Goddard’s Real Science blog published a 1999 NASA GISS version of the CONUS raw data anomalies in 2012. The dataset he used has since been deleted from the NASA website, but a copy can be downloaded here and is plotted in Figure 6, along with the current (October 2020) GHCN-M raw data. We are switching from the actual temperature measurements in Figure 4 to weather station anomalies from the 1981-2010 mean in Figure 6.

Figure 6. The 1999 NASA GISS raw CONUS temperature anomalies compared to the 2020 GHCN-M raw CONUS anomalies. The 1999 NASA anomalies are shifted down 0.32°C so the means from 1890 to 1999 match; this compensates for the baseline differences. Notice the least squares trends match very closely. Hansen’s data shows a warming trend of 0.25°C per century and the modern data shows warming of 0.26°C per century. The equations for the lines are in the legend. See the text for the data sources.
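The 0.32°C shift is just a mean alignment over the overlap period. Here is a sketch of the operation on stand-in vectors; the real series are the 1999 GISS and 2020 GHCN-M annual anomalies.

```r
# Sketch of the baseline alignment in Figure 6: shift the 1999 GISS series so
# its 1890-1999 mean equals that of the 2020 GHCN-M series over the same years.
# The two vectors below are random stand-ins, not the actual anomaly series.
set.seed(5)
years     <- 1890:1999
giss_1999 <- rnorm(length(years), mean = 0.32, sd = 0.3)
ghcn_2020 <- rnorm(length(years), mean = 0.00, sd = 0.3)

offset       <- mean(giss_1999) - mean(ghcn_2020)   # ~0.32 C here by construction
giss_shifted <- giss_1999 - offset                  # the 1890-1999 means now match
```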

Both the current data and the 1999 data show about 0.25°C per century of warming. Figure 7 shows the same GISS 1999 raw data anomalies compared to the 2020 GHCN-M final temperature anomalies. All three plots suggest it was as warm or warmer in the conterminous U.S. states in 1931 and 1933 as it is today. The various corrections applied to the raw data, and turning the actual temperatures into anomalies, have the effect of lessening the difference between the 1930s and today, but they don’t eliminate it, at least not in the station data itself. When the data is gridded, as it was to make Figure 3, the trend is fully reversed, and modern temperatures are suddenly much warmer than in the 1930s. The 1999 data again shows warming of 0.25°C per century, but the corrected data shows warming of 0.6°C per century. This is very similar to the warming seen in Figure 5, that is, the warming due to the corrections alone.

Figure 7. The 2020 GHCN-M final and fully corrected station data is compared to the 1999 NASA/GISS CONUS anomalies. The equations for the lines are in the legend.

The blue 1999 GISS anomaly lines in Figures 6 and 7 are identical; the orange line in Figure 6 is raw data and the orange line in Figure 7 is final, corrected data. The largest corrections are applied to the earlier temperatures; the recent temperatures receive smaller corrections.

The WUWT resident wit, and all-around good guy, Dave Middleton, commented on this in 2016:

“I’m not saying that I know the adjustments are wrong; however anytime that an anomaly is entirely due to data adjustments, it raises a red flag with me.” Middleton, 2016

I agree, logic and common sense suggest Dave is correct to be skeptical.

James Hansen wrote about this issue in 1999:

“What’s happening to our climate? Was the heat wave and drought in the Eastern United States in 1999 a sign of global warming?

Empirical evidence does not lend much support to the notion that climate is headed precipitately toward more extreme heat and drought. The drought of 1999 covered a smaller area than the 1988 drought, when the Mississippi almost dried up. And 1988 was a temporary inconvenience as compared with repeated droughts during the 1930s “Dust Bowl” that caused an exodus from the prairies, as chronicled in Steinbeck’s Grapes of Wrath.” Source.

For once, I agree with James Hansen.

Zeke, at rankexploits.com, the “Blackboard,” tried to defend the corrections in 2014. Zeke tells us that USHCN and GHCN are first corrected for time-of-measurement bias (“TOB”), then the stations are compared to their neighbors, and a pairwise homogenization algorithm (PHA) is used to smooth out suspected anomalies. These are presumably due to station moves, changes in the station environment, or equipment changes. Finally, missing station data are filled in using neighboring stations as a guide. The last step to make nClimDiv is to grid the data.

Zeke notes that the TOB and PHA corrections are not really necessary since the gridding process alone will probably do the same thing. Not understanding all the details of all these statistical data smoothing operations, I won’t offer an opinion on Zeke’s comment. But, from a general mapping perspective he has a point. You want to map a dataset that is as close to the measurements as possible. When you apply three smoothing algorithms to the measurements before you contour them and grid them, what do you have? What does it mean?

We will not get into the details of the NOAA corrections here; they are statistical, not corrections to specific instruments for known problems. Thus, they are different flavors of smoothing operations applied sequentially to the measurements. The TOB correction is described by Thomas Karl and colleagues (Karl, Williams, Young, & Wendland, 1986). NOAA averages the minimum and maximum daily temperatures to derive the average daily temperature, so it matters whether the two readings are recorded from the min-max thermometer at midnight or at some other time of the day. When calculations are done using monthly averages this difference is very small. Some NOAA web pages suggest that the TOB correction has been dropped for more recent versions of GHCN-Monthly, others say it is still used. Either way, it probably doesn’t make much difference in GHCN-Monthly or nClimDiv.
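A toy illustration of why observation time matters for a min-max mean is below. The temperatures are synthetic, and the real TOB adjustment (Karl et al., 1986) is an empirical model, not this simple comparison.

```r
# Toy illustration of time-of-observation bias, using two synthetic days of
# hourly temperatures (the actual TOB adjustment is an empirical model).
temp <- c(20 + 8 * sin((0:23 - 9) * pi / 12),  # day 1: warm, peak at 15:00
          14 + 8 * sin((0:23 - 9) * pi / 12))  # day 2: cooler, same shape

# Reset the min-max thermometer at 17:00: day 1's warm afternoon leaks into
# the 24 hours attributed to day 2.
pm_window <- temp[18:41]                       # 17:00 day 1 through 16:00 day 2
pm_mean   <- (max(pm_window) + min(pm_window)) / 2

# Reset at midnight: day 2 stands on its own.
mid_mean  <- (max(temp[25:48]) + min(temp[25:48])) / 2

c(afternoon_reset = pm_mean, midnight_reset = mid_mean)  # the afternoon reset reads warmer
```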

The second correction is the pairwise homogenization algorithm or PHA. This algorithm compares each station to its neighbors to determine if there are unusual anomalies and then attempts to fix them. This process is purely a statistical smoothing algorithm. It is described by Matthew Menne and Claude Williams (Menne & Williams, 2009a). This process is definitely being used in the most recent version of GHCN-Monthly.
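The idea behind pairwise homogenization can be sketched as follows. This is illustrative only: the difference series between a station and a neighbor should be roughly flat because the shared regional signal cancels, so a persistent step suggests a non-climatic change. NOAA's PHA uses SNHT-type statistics across many station pairs, not this crude search.

```r
# Sketch of the idea behind pairwise homogenization (illustrative only; NOAA's
# PHA uses SNHT-type break tests across many station pairs).
set.seed(42)
n        <- 40                                  # years of annual anomalies
regional <- cumsum(rnorm(n, 0, 0.1))            # climate signal shared by neighbors
neighbor <- regional + rnorm(n, 0, 0.05)
station  <- regional + rnorm(n, 0, 0.05)
station[21:n] <- station[21:n] + 0.5            # artificial 0.5 C step in year 21

d <- station - neighbor                         # shared climate signal cancels here
# Crude breakpoint search: the split year that maximizes the shift in mean.
splits <- 5:(n - 5)
shift  <- sapply(splits, function(k) mean(d[(k + 1):n]) - mean(d[1:k]))
splits[which.max(abs(shift))]                   # estimated break year, near 20
```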

The final step in the smoothing process is the infilling of missing values using neighboring station data. This is done prior to gridding so more grid cells are populated. Infilling is probably still being done in the most recent version.
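As a minimal illustration of infilling, the sketch below fills one missing monthly anomaly with an inverse-distance-weighted average of neighbors. The weighting scheme is an assumption for illustration, not NOAA's actual infilling method.

```r
# Illustrative infill of one missing monthly anomaly from nearby stations,
# using simple inverse-distance weights (an assumption; NOAA's infilling
# weights and adjusts neighbors differently).
neighbor_anom <- c(0.8, 1.1, 0.6)     # same-month anomalies at three neighbors
neighbor_km   <- c(25, 60, 110)       # distances to the station with the gap

w <- 1 / neighbor_km
sum(w * neighbor_anom) / sum(w)       # infilled value, dominated by the nearest station
```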

Zeke makes the point that graphing actual temperatures, as we did in Figure 4, can be misleading. Over the course of the past 130 years, stations have moved, been added and removed, and the spatial distribution of stations has changed. The mean elevation of the stations has changed over time. These changes affect station anomalies less than the absolute temperatures. True enough, and this accounts for some of the difference between Figure 4 and Figures 6 and 7. Beyond a certain point the number of stations doesn’t matter, as can be seen in Figure 3. We start our plots in 1890 or 1895 because this is when we assume that sufficient stations existed in CONUS to get a meaningful average. The USHCN dataset has 143 stations in 1890 and 608 in 1895, and these are the stations with the longest records and the best placement.
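A quick way to see why anomalies are less sensitive to a changing station mix: drop a cold, high-elevation station from a toy network and compare the jump in the absolute-temperature average with the jump in the anomaly average. All numbers below are invented.

```r
# Toy demonstration that a changing station mix disturbs absolute-temperature
# averages far more than anomaly averages (all numbers invented).
normals <- c(15, 14, 16, 4)      # station climatologies; station 4 is high and cold
this_yr <- normals + 0.3         # every station is 0.3 C above its own normal

mean(this_yr) - mean(this_yr[1:3])                          # absolute average jumps ~ -2.8 C
mean(this_yr - normals) - mean((this_yr - normals)[1:3])    # anomaly average: no change
```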

Discussion and Conclusions
Zeke’s next point is that Goddard did not grid his data. Thus, he did not deal with the uneven distribution of stations and the changing distribution of stations over time. These are real problems and they do affect internal trends within CONUS, but gridding and the other corrections only smooth the data. None of these operations improve accuracy. In fact, they are more likely to reduce it. If we were using maps of CONUS data to identify trends within the country, I would agree with Zeke: smooth the data. But here we are concerned only with the National Temperature Index, which is a single average for the whole country, not a map of patterns inside CONUS. No statistical smoothing or gridding operation will improve that average. Using anomalies, rather than actual temperatures, is important; the rest is not.

An average of the station data anomalies is more appropriate than using a grid to produce a national average temperature trend. The average is as close to the real observations as you can get. The corrections and the gridding remove us from the measurements with several confounding steps.

If the corrections fixed known problems in the instruments, that would help accuracy. But they are statistical. They make the station measurements smoother when mapped and they smooth over discontinuities. In my opinion, NOAA has overdone it. TOB, PHA, infilling and gridding are overkill. This is easily seen in Figure 7 and by comparing Figure 3 to Figure 6 or Figure 5. Does the final trend in Figure 3 more closely resemble the measurements (Figure 6) or the net corrections in Figure 5? The century slope of the data is 0.25°C, the corrections add 0.35°C to this, and the “climatological gridding algorithm” adds another 0.9°C! It is worth saying again: the type of statistical operations we are discussing do nothing to improve the accuracy of the National Temperature Index, and they probably reduce it.

CONUS is a good area to use to check the viability of the “corrections” to the station data and the efficacy of the temperature gridding process. The current station coverage is very dense, as seen in Figure 1, and one would expect the gridded data to match the station data quite well. Figure 3 looks like the orange “final” curve in Figure 7, but it is steeper somehow, and that tells you all you need to know.

Dave Middleton and I have been (in my case “was”) in the oil and gas business for a long time. Between us we have seen more mapped BS than you could find in the Kansas City stockyards. My internal BS meter red-lines when I hear a laundry list of smoothing algorithms, correction algorithms, bias adjustments, etc. I want to scream “keep your &#$@ing maps and calculations as close to the real data as possible!”

In the first part of this post, I pointed out that to study climate change, we need to know more about ocean warming and the distribution and transport of thermal energy in the oceans. Land-based weather stations help predict the weather, but not climate. We argue a lot about relatively small differences in the land-surface temperatures. These arguments are interesting, but they don’t matter very much from the standpoint of climate change. The oceans control that, the atmosphere above land has little to do with it. Taking the raw data from GHCN-Daily and running it through four different smoothing algorithms (TOB, PHA, infilling and gridding) is, with all due respect, ridiculous. My recommendation? Don’t believe any of it, not that it matters much as far as climate is concerned.

A better indicator of climate change or global warming is the trend of ocean warming, shown in Figure 2. Notice the trend over the past 16 years is only 0.4°C per century. Compare this to the CONUS land-based measurements over the past 130 years: they give 0.25°C per century, as shown in Figure 6, but NOAA’s fully “corrected” value is 1.5°C per century, as shown in Figure 3. Truly, which do you believe?

I used R to do the calculations plotted in the figures, but Excel to make the graphs. If you want to check the details of my calculations, you can download my GHCN R source code here.

None of this is in my new book Politics and Climate Change: A History but buy it anyway.

You can download the bibliography here.

November 24, 2020 10:12 am

The oceans control warming with their high heat capacity, which is the amount of thermal energy required to raise the average ocean temperature one degree
Is potentially misleading. What you mean is the ‘ocean surface layer [the overturning part]’, not the average ocean down to the bottom.

Walter Horstingg
Reply to  Andy May
November 24, 2020 10:41 am

Andy, for all of us: the Little Ice Age was ending at the starting dates of the charts, Figures 3-7; globally, glaciers stopped advancing then.

Reply to  Andy May
November 24, 2020 11:42 am

Another way to look at it, the atmosphere contains 0.071% of the thermal energy on the surface
‘surface’? the surface is infinitely thin and contains nothing. Without specifying a depth, the statement has no meaning.
whole ocean?
including one inch above the bottom in the Challenger Deep?
I think not, as that doesn’t warm.
Sloppiness in expression is not good.

Reply to  Andy May
November 24, 2020 1:52 pm

The land can store energy through accumulation of water over relatively long time scales. For example, glaciation requires an immense amount of energy to transfer water from oceans to land surface. The ice has accumulated at quite high rates during periods of rapid glaciation; as much as 70mm/yr.

On a shorter term, a tropical cyclone can dump a massive amount of water over a wide area of land. That water came from the ocean and can hang around in aquifers for decades. Some lakes in Australia only exist for a couple of years then disappear for a decade or two. It takes a lot of energy to evaporate water from the oceans and transport it to land.

Reply to  Andy May
November 24, 2020 3:18 pm

Andy, some have considered increased irrigation of land areas for food to be a factor. Does recent irrigation of land surfaces make it more like the ocean than 100+ years ago ?

Reply to  Andy May
November 24, 2020 3:51 pm

My point is that energy being absorbed by an ocean surface can end up as energy in the atmosphere that eventually gets transferred to land. This is not trivial. The majority of rain and snow falling on land originated as evaporation from ocean surface. The ocean surface level can be changed year-by-year depending on the water, read ocean energy, that gets stored on land.

Oceans are net absorbers while land is a net loser. The storage of energy from the oceans onto land can occur over hundreds of thousands of years. Rain falling over land represents radiative heat loss above the land from latent heat absorbed at the ocean surface.

Richard M
Reply to  Leif Svalgaard
November 24, 2020 10:51 am

I mostly agree with Leif here. The mixed layer is the part that drives the atmospheric temperature. Looking at the entire ocean or even 2000 meter portion will probably be less accurate.

The warming of the mixed layer is the real cause of the warming we have seen. The cause is most likely due to increases in salinity over the past 400 years as seen in Thirumalai et al 2018.

Richard M
Reply to  Andy May
November 25, 2020 6:38 am

Andy, basic physics tells us that it takes more energy to evaporate water when it is saltier. That would lead to more saline water being warmer with all else being equal. Evaporation is a cooling effect.

If something else causes the oceans to warm (fewer clouds perhaps), then that would increase the water cycle and put more water into the atmosphere. That would lead to more evaporation and rain as well which are both cooling effects. Seems to me that would be a negative feedback. Not sure it would change the salinity of the oceans.

Yes, the two are related, but the relationship is not perfectly symmetrical.

Reply to  Richard M
November 24, 2020 12:42 pm

Any submariners able to comment on this?

Latitude
Reply to  Richard M
November 24, 2020 2:42 pm
Cordvag, Y
Reply to  Richard M
November 24, 2020 4:19 pm

I have been following Dr. Svalgaard for almost 30 years. I find him brilliant and thorough. However, I find that there is tendency to obtuseness in his comments about others’ work.

Best to all, and keep it civil.

Reply to  Cordvag, Y
November 25, 2020 6:21 am

Agreed!

Ron Long
Reply to  Leif Svalgaard
November 24, 2020 11:46 am

Leif, Andy, others: I am struggling to reconcile the ocean heat content switching back and forth to produce the intra- and inter-glacial phases of the Ice Age we are currently in. If the glacial phases are correlated with Milankovitch Cycles then it’s the solar incoming that is changing. How can this possibly heat up/down the ocean and produce such regular intra/inter glacial events?

meiggs
Reply to  Leif Svalgaard
November 24, 2020 5:46 pm

39F on the deck this morn, heat wafting up from the flats below. Frost on the grass a couple hundred yards away, same elevation…which was the anomaly, the frost or the temperature on my deck?

November 24, 2020 10:31 am

Quote:
“The United States has a very dense population of weather stations”
Is that good or bad.
My anecdote…
Just a few days ago, scattered frost was forecast for evening then warming up through the night

About 7pm, my local Wunderground (very rural like me and about 2 miles away) said 0.6 Celsius

My datalogger, 6ft above ground hanging out of a now leafless tree in my garden, agreed perfectly.

About 10pm, Wunderground was reporting (5 minute updates) 2.6 Celsius
My datalogger (updates itself every 15mins) was saying 0.7 Celsius
What gives?

A new one on me is that my logger (an EliTech – half the price of the Lascars and reports to 0.1C rather than 0.5C *and* epic battery life) gives an output called average ‘Mean Kinetic Temperature’ MKT

It fascinates me, something to do with the actual heating effect of any given temp
Is *that* what Climate Science should be measuring?

Reply to  Andy May
November 24, 2020 10:39 pm

Until this virus nonsense came along I spent some years, twice a week, collecting office mail from two post offices no more than half a dozen miles apart. One particularly hot day I looked at the car’s outside thermometer reading when at the first post office. At the second post office I saw it was 6 degrees F higher.

I know the car thermometer isn’t of climate quality but I have had a few opportunities to check it against other thermometers, such as the sign board displays at places like banks and pharmacies. It agreed exactly each time (digital readout to whole degrees only) so I think it isn’t so totally wrong as to be meaningless.

I began to frequently note the temperatures at the two post offices. In hot weather they were usually 6 degrees difference, sometimes 10 degrees, the lower and higher temperatures always at the same places. In cooler spring and fall weather the differences, while still in the same direction, were only 2 or 3 degrees. In the cold of winter, reading were the same in both places.

Then I began observing the temperature reading while traveling from the first to the second and from the second to the office, the latter covering much of the same route as between the two post offices, but in the opposite direction. Time after time, during the warmer seasons, I saw the temperature rising and falling and rising again on each trip, with the change-over points from rising to falling and falling to rising being at the same places consistently. Viewing the surroundings, I could never make sense of what might be influencing the temperature differences along the route. I could not see anything very different from place to place that should account for it. Maybe someone more knowledgeable than I would have seen more.

Earlier, when I lived outside of town, in the hills, there was a stretch of a few hundred yards on the two mile route I normally took from the freeway to home where the temperature suddenly dropped very noticeably. I never had a thermometer to measure how much but when running or walking during the day that section was a major relief in summer, and extra cold in winter.

Looking up forest area, I read that forest covers about 30% of the land surface. Anyone out walking on summer days can’t help but notice the coolness of shade from even one tree. Again based on what I read, stepping from the open only a couple of feet into forest can see as much as 20 degrees F drop and deeper into the forest can be as much as 60 degrees F lower than in the open.

My point is two fold. First, the differences, apart from forest settings, that I wrote about above are just a couple of examples of something that occurs in many places. It seems very likely that using weather stations located in sites designed to provide closely similar conditions can not hope to capture the true average over many larger areas (even assuming that an average has any significant meaning), because there are too many temperature differences that will not be included in the calculations.

Second, by defining the ideal characteristics of sites, which immediately excludes 30% of the surface, not to mention many other non-forest places that may be completely free of human influences to the local temperature, a model fantasy world is created that can’t hope to be a good representation of reality. One might reasonably say that these idealized spots are warming or cooling over time but can they really be a good model of the world at large? Perhaps more inclusion, especially of temperatures in forests, might produce a very different slope, even a cooling rather than a warming one.

Rud Istvan
November 24, 2020 10:46 am

I did a less sophisticated, more example oriented, analysis of the CONUS temperature data in essay When Data Isn’t in my 2014 ebook Blowing Smoke. Came to similar conclusions, summarized as:
1. The data adjustments are biased toward showing global warming, and
2. The data isn’t fit for climate purpose.

At that time, there was not enough ARGO data to do a reliable analysis such as in Figure 2. That is very helpful, but may understate the ocean surface warming, which should be mostly above the halocline and thermocline, both at most at a depth of 700 meters depending on latitude. Including the ARGO sampling below that (from 700 meters down to 2,000 meters) likely understates what is going on over centennial time frames.

November 24, 2020 10:49 am

The methodology of climate data adjustment is a long established one, best illustrated by the following figure:

https://images.app.goo.gl/ghXxwF6ou7AoJKEG8

DonK31
November 24, 2020 10:52 am

It would seem that there is one more comparison that could be made. How does the last decade of GHCN, both raw and adjusted, compare to the USCRN, which is spatially accurate and needs no adjustment?

If USCRN agrees with neither, then both are wrong. If it agrees with one but not the other, it is more likely that the one it agrees with is correct.

a_scientist
Reply to  Andy May
November 24, 2020 12:07 pm

Yes, that is the beauty of USCRN…NO adjustments !

So they can no longer fudge the numbers to create trends over time. After 2005, they must agree.
DonK31 is correct, if it disagrees with USCRN, it is in error.

The sites are not contaminated by urban effects, the temperature measurements (triple redundant PRT) are the best possible. I do worry if it is flat or cooling, will the data be tampered with?

Who watches the raw data and its collection and collation? Will it messed with by a Biden administration?

fred250
Reply to  a_scientist
November 24, 2020 12:34 pm

Here is USCRN vs UAH USA48.. As expected , some trends are pretty much the same, surface seems to have bigger range than satellite data.. some discrepancies as you would expect from different measuring systems

comment image

On the other hand USCRN and nClimDiv match almost perfectly….. TOO perfectly

…. a sure sign that one is being manipulated to match the other..

comment image

CordvagyC
Reply to  a_scientist
November 24, 2020 8:39 pm

Election 2020, we have a winner!

Data will be totaled and reported by SmartMatic and Dominion, and disseminated by our inerrant main stream press.

Reply to  a_scientist
November 24, 2020 10:42 pm

Yes, that is the beauty of USCRN…NO adjustments !

And no one can hack the voting networks either.

DonK31
Reply to  Andy May
November 24, 2020 12:08 pm

Thank you for the correction.

fred250
Reply to  Andy May
November 24, 2020 12:27 pm

Yep. the advent of USCRN means NO MORE ADJUSTMENTS.

You can see that nClimDiv is being manipulated to match USCRN….. the match between them is too perfect to be anything else.

Anything before USCRN.. take with a grain of salt.

November 24, 2020 11:46 am

Our wonderful oceans prove that there is so very much more we need to learn. We need to do this before we listen to the alarmist know-all climate experts and are disturbed by their opinions.

John F Hultquist
November 24, 2020 11:52 am

In your quote from James Hansen, 1999, he mentions Steinbeck’s Grapes of Wrath.

I’ll suggest a better reference for anyone interested in “The Dust Bowl” years:
Timothy Egan’s . . .
The Worst Hard Time: The Untold Story of Those Who Survived the Great American Dust Bowl

Toward the end he provides an update.

Meab
Reply to  John F Hultquist
November 24, 2020 12:25 pm

I recommend both. While “The Worst Hard Time” is non-fiction and well referenced, some dishonest “climate crisis” nuts claim that it doesn’t fairly represent the actual conditions in the 1930’s. That’s where “The Grapes of Wrath” is useful. It’s just not credible for these dishonest Alarmists to claim that Steinbeck fabricated the conditions that drove the dust bowl era Okies out of their farms to uncertain futures in California way back in the year he wrote the book, 1939. While some of the choking dust is rightfully to be blamed on poor farming (plowing) practice and the lack of soil conservation back in the 30’s, it’s also clear that heat waves and droughts were widespread in the 1930s as documented contemporaneously.

Reply to  Meab
November 24, 2020 10:50 pm

There was a paper, a year or so ago, that used lake bottom sediments from somewhere in the Midwest to gather information on rainfall over the past 3 or 4 thousand years. As is the case on the west coast, the data showed that extended droughts were common over that period. It commented that the 1930s’ drought was a rather small one compared to many in the record.

Reply to  AndyHce
November 25, 2020 5:34 am

It is widely accepted that the Anasazi civilization in the Southwest US was destroyed by a prolonged and severe drought around 1300-1400.

Reply to  Meab
November 25, 2020 7:07 pm

It’s why much of the central plains are classified as semi-arid deserts. Even semi-arid deserts have prolonged droughts, it doesn’t have to be a full-on desert like the Sahara.

Reply to  John F Hultquist
November 24, 2020 12:33 pm

John, thanks for the book reference.

I have found journals written nearly 200 years ago about two areas in countries where I have lived with enlightening comments about floods and drought, heat and cold (with actual temperature measurements) that cause me to question claims of climate change. I wonder if we had an accurate record over 2000 years that we would be surprised to find that all these changes are simply the great weather variations within these climate zones. In a semi-desert area I have seen dunes forming in a year and disappearing the next.

I am curious about how accurate the thermometers were during the 19th century and whether the range between maximum and minimum was perhaps more accurate than the actual max and min. Perhaps you or Andy could refer me to some interesting reading on this.

CheshireRed
November 24, 2020 11:52 am

The answer will be ‘whatever gives the best case for alarmism’.

Sadly the US is now as corrupt a country as anyone could wish (or not) to find. I can’t trust anything from the US anymore, but then again I can’t trust anything from the UK or EU either. What a mess.

Reply to  CheshireRed
November 24, 2020 12:11 pm

I recall a comment that the historian Norman Davies made with reference to his Europe A History – but have not been able to trace it – to the effect that history is the record more of the mess people make than of their achievements. Every historian needs to recognize this as does every journalist. Perhaps it would introduce some humility into their writing and reporting.

Mickey Reno
Reply to  CheshireRed
November 25, 2020 7:29 am

But looking on the less than bright side, we cannot trust Russia, China, Australia, Brazil, Africa, or Antarctica, either. Everyone’s got an agenda these days. There is a special place in hell for those who are trying to turn science into push-polling.

Our salvation may come from groups like John Christy and Roy Spencer’s UAH remote sensing science. Just as long as they continue to work at their own quality assurance issues, and continue to compare the modeled “measurements” from satellites with multiple, coordinated measuring systems like balloons, etc. The prospect for alarmists interfering with remote sensing data is already established, and worrying. I’m looking at you Josh Willis.

Harry Passfield
November 24, 2020 12:03 pm

Do you ever wonder if these alarmists are using Dominion and/or Smarttec systems? (Like, is it a surprise that Biden has polled a RECORD 80M votes!!??)

Graham
November 24, 2020 12:03 pm

Thank you Andy May,
I have skimmed through this and it looks like a very thorough assessment of what has been going on to push global warming.
I am sure that temperature data manipulation is rife in many countries around the world, and I know that it has happened in New Zealand and Australia.
Global warming is being pushed as fast as the doomsayers can manage, to the benefit of nobody anywhere in the world, when there are much bigger problems for civilization to grapple with.
Yes, the world is extracting coal, oil and gas, and limestone for cement manufacture, but in reality the amount of CO2 and CH4 released will make little difference to the global temperature, as the atmosphere is almost saturated with CO2 and CH4, and H2O has a far greater effect on the earth’s temperature than those minor trace gasses.
If the debate was only about science there would be a very minor problem of a few tenths of a degree rise, but things changed when activists decided they should ride this climate scare to change the world.
They climbed on board at the Kyoto Accord and introduced clauses that were adopted without any scientific scrutiny, and this climate scare is supposed to be about science.
These activists pushed for, and the Kyoto Accord adopted, the inclusion of plantation logging and methane from farmed animals in countries’ emissions profiles.
Both of these are cycles and cannot and will never increase the levels of CO2 or methane in the world’s atmosphere.
When you understand this, and then see that temperature records around the world are tampered with, you become very skeptical.
You then travel to Glacier Bay in Alaska and find that when it was first discovered in the 1700s the mighty glacier was calving into the North Pacific Ocean, and the glacier rapidly retreated many miles till the early 1900s and has slowly retreated since then.
Greenland is another example, where it was a lot warmer when the Vikings settled there than it is now.
Going back even further, the Medieval Warm Period was at least as warm as the present and most probably warmer, and the Roman Warm Period 2,000 years ago was warmer than the present.
There is far too much politics in climate change, and it is not about climate but about CHANGE.

Galen
Reply to  Graham
November 24, 2020 4:39 pm

4000 years ago there were no glaciers in Yosemite.

Rick C PE
Reply to  Graham
November 24, 2020 5:19 pm

Here’s my question for “climate experts”. If it could be proven that the earth will warm 2-3 C over the next 80 years entirely due to natural causes and unrelated to human activity, would it still be dangerous? If so, would massive untested geoengineering projects be advocated to prevent the natural warming from occurring? If your answer is yes to these questions, you are not an environmentalist. This is such a strong argument, IMO, that Mann, et. al. thought it necessary to use tricks to try and erase the MWP and Little Ice Age from the historical record.

I find the amount of effort and energy put into trying to measure “global temperature change” quite fascinating. Even if the adjustments and homogenization are justified, it does not mean that CO2 or human activity is the cause (or, perhaps more precisely, more than a minor contributor). I became a skeptic back in the 80s when I saw ice core reconstructions that showed numerous periods of warming and cooling with poor correlation to CO2 concentration. My conclusion was that temperatures change - both up and down - over time due to natural forces. I do not accept the proposition that around 1940-1960 natural forces stopped affecting climate and we humans took over control with our CO2-producing fossil fuel use.

Fred Hubler
November 24, 2020 12:04 pm

It’s not based on corrections, it’s based on adjustments.

Lawrence E Todd
November 24, 2020 12:07 pm

“The final step in the smoothing process is the infilling of missing values using neighboring station data”. So they are making up temperature records and probably using bad neighboring station data to do it. My opinion is that the whole temperature record only shows that the temperatures are getting higher because of UHI effects and that increase in air traffic has raised the temperatures at airports.

Leonard
November 24, 2020 12:12 pm

It seems to me that if the “standardized and homogenized” data show a gradual temperature increase and the raw data show a gradual temperature decline, then the standardized and homogenized data are being corrupted to show climate warming. I do not see a logical way to interpret the data any other way.
Also, to me changing and corrupting the temperature is a crime as it is done to reach a certain political result rather than trying to show the real result.

Now, if their data manipulation showed no change from the raw data why do it? Have we ever seen a report showing that the manipulation of the raw data produced a more negative trend than for the raw data? No we have not.

Finally, if someone flipped a coin a hundred times and it always showed heads more than tails, would you be willing to gamble that this is a fair coin?

Thomas Gasloli
Reply to  Leonard
November 25, 2020 1:08 pm

It has always bothered me that this field thinks changing data is anything other than data fraud. You just can’t do this in any other field, science or otherwise. It isn’t “correction”, it isn’t “adjustment”, it is fraud. That you have a written SOP for it doesn’t change that it is fraud.

Tom Abbott
Reply to  Leonard
November 26, 2020 7:56 am

“It seems to me that if the “standardized and homogenized” data show a gradual temperature increase and the raw data show a gradual temperature decline, then the standardized and homogenized data are being corrupted to show climate warming. I do not see a logical way to interpret the data any other way.”

I’m with you, Leonard. I believe data corruption, for political purposes, is exactly what is going on.

Not only is the US in a temperature downtrend since the 1930’s, but the rest of the world is also in a temperature downtrend, if you go by the actual temperature readings.

The actual temperature readings destroy the Human-caused climate change claims. That’s why the Climate Data Manipulators don’t use the actual temperature readings. They put the actual temperature readings into their computers and out comes a False Reality, that causes politicians to waste Trillions of dollars on trying to fix a problem that doesn’t exist.

Bastardizing the temperature records of the world has led to the biggest fraud in history perpetrated on the human race. One of these days it will be seen as such.

Neo
November 24, 2020 12:15 pm

I believe in “The Cult of Climate Change” about as much as I believe the Election of 2020 was “Fraud Free”

Bruce Cobb
Reply to  Neo
November 24, 2020 12:35 pm

Actually, the Climate Cult and the Trump Cult are quite similar, relying primarily on emotionalism rather than facts.

Reply to  Bruce Cobb
November 24, 2020 2:22 pm

So, what does the demrat cult…the Bideno cult…the left wing cult rely on? Just the facts, mam.

Tom Abbott
Reply to  Bruce Cobb
November 26, 2020 8:04 am

“relying primarily on emotionalism”

Do you have any examples of that, Bruce? I think I base my assessment of Trump on facts. Show me where I’m being emotional.

I think you are projecting your own sentiments into the picture. I think it is you that judges Trump on using emotions. If you went by the facts, you would have to praise Trump for all the good things he has done for the United States.

Can you name something that was bad for the United States that Trump has done?

fred250
November 24, 2020 12:17 pm

A bit OT,

…. but did you know that North America in October had its highest October snow cover since Rutgers records began in 1967

https://climate.rutgers.edu/snowcover/chart_anom.php?ui_set=1&ui_region=namgnld&ui_month=10

Reply to  fred250
November 24, 2020 1:06 pm

Yet according to UAH, at +0.71C above the 1981-2010 average, October 2020 was the warmest October in their northern hemisphere record (starts 1979): https://www.nsstc.uah.edu/data/msu/v6.0/tlt/uahncdc_lt_6.0.txt

Wonder if there is some relationship between NH warming and increased snowfall?

fred250
Reply to  TheFinalNail
November 24, 2020 1:46 pm

Don’t know where you get your data from….. incompetence , perhaps?

Snow data in North AMERICA..

We all know that northern Siberia had a strange little very warm WEATHER anomaly

According to UAH USA48 October 2020 was +1.1ºC, and 6th warmest October.

Last two months USCRN and UAH USA48 have been going totally opposite directions

USCRN has October 2020 at +0.18ºC and in 9th place since 2005

Please try to stay on topic (north America not NH).

Otherwise you will keep making a fool of yourself.

Reply to  fred250
November 25, 2020 12:33 am

fred250

“Don’t know where you get your data from….. incompetence , perhaps?”
______________

That’s a little confusing because I stated exactly where I got the data from and even provided a link to it. All you needed to do was read what was written and follow the link, but apparently this modest task was beyond you. What was that you were saying about ‘competence’?

fred250
Reply to  TheFinalNail
November 25, 2020 2:42 am

Yes, YOU were confused. I said North America..

You stupidly went and gave data for the whole NH.

Try not to do stupid things like that.. makes you look very stupid.

November 24, 2020 12:56 pm

Andy,
“but gridding and the other corrections only smooth the data. None of these operations improve accuracy.”
No, the function of gridding is not to smooth the data. It is to counter the effects of inhomogeneity. Spatial inhomogeneity, primarily. It ensures that your average is not dependent on whether you chose the right proportion of warm places vs cool places to represent the country, and will not change as that distribution changes as different stations report in successive months.

The “other corrections” (PHA) are also not done to smooth the data. They are to remove biases caused by non-climatic changes, of which a classic is change of time of observation. If there is a net trend in these changes, as there was in the USA when observers shifted over years from evening to morning observation, then that introduces a trend which has nothing to do with climate, and should be removed. Your graphs of raw versus adjusted shows that that happened.

“An average of the station data anomalies is more appropriate than using a grid to produce a national average temperature trend.”
A gridded average of anomalies is appropriate.

Reply to  Andy May
November 24, 2020 2:16 pm

Andy
“You are trying to tell me that interpolating measurements over an area you don’t know anything about creates accuracy that isn’t in the measurements??”

The whole purpose of measuring here is to deduce something about the unmeasured area. You don’t just want to know about the temperature inside those boxes. The same is true, say, for exploratory drilling, for oil or minerals. In both cases, you are sampling, and you need to make inferences about the rest of the orebody or whatever from what you have sampled. And to do that, you have to decide how representative the samples are.

That is what gridding does. It converts just a random set of numbers into an estimate per unit volume (or area). Then you can add the estimates according to their volume to get the whole body evaluation.

Reply to  Andy May
November 24, 2020 3:50 pm

Andy,
“…which is more accurate? The average of the measurements or the average of the grid created from the measurements? I would say, in almost every case, it is the average of the measurements.”
No, always the grid. Just averaging the measurements puts you at the mercy of what points happened to give you data. If there were a whole lot of points in Florida, you’ll get a warm average. If in NY, it will show as a cool month, regardless of the true climate.

The point of giving an estimated average is that it should be reproducible and independent of what points you measured it at. It is a property of the US which any proper method should reproduce.

The underlying issue is homogeneity. If there weren’t known underlying variabilities (eg latitude, altitude) then it wouldn’t matter where you sampled, and the ordinary average would be fine. But there are. When you cut into grid cells, there is much better homogeneity within a cell, so you can average. Then you can combine the cell averages, since they represent area fractions.

An example of what goes wrong with ordinary averages is the “Goddard spike”. Here Tony Heller showed just an average of all USHCN raw readings for Jan-Apr 2014, and compared it with the same for all final readings. He found a huge discrepancy which was much discussed. But it was an artefact. For final, each station had an estimate for each month. But “raw” included only the subset of stations that had reported, and because of lag, they were predominantly winter. He later averaged each month separately and put them together, and the spike went away. Gridding in time, but the same idea. More here.

Michael Jankowski
Reply to  Andy May
November 24, 2020 4:54 pm

“…If there were a whole lot of points in Florida, you’ll get a warm average. If in NY, it will show as a cool month, regardless of the true climate…”

Extremely pathetic, Nick, especially since you are such a proponent of anomalies.

Read again what you’re addressing: “An average of the station data anomalies is more appropriate than using a grid to produce a national average temperature trend.”

Florida doesn’t always produce warm anomalies, nor does New York produce cold anomalies, “regardless of the true climate.”

Why would you of all people intentionally try to confuse the issue?

Reply to  Andy May
November 24, 2020 5:32 pm

“Why would you of all people intentionally try to confuse the issue?”

You are confusing the issue. Andy is talking about USHCN and nClimDiv, which average temperatures, not anomalies.

Michael Jankowski
Reply to  Andy May
November 24, 2020 5:57 pm

“You are confusing the issue. Andy is talking about USHCN and nClimDiv, which average temperatures, not anomalies.”

Doubling-down, are we?

Again, read what you responded to. My caps for emphasis:
“An average of the station data ANOMALIES is more appropriate than using a grid to produce a national average temperature trend.”

I mean, YOU even responded with the following (my caps for emphasis):
“A gridded average of ANOMALIES is appropriate.”

Both of you were talking anomalies. Then you supported gridding by using actual temperatures in Florida and New York.

Reply to  Andy May
November 24, 2020 7:00 pm

“Again, read what you responded to.”
No, read again. Explicitly, it is at the top of the comment:
“The average of the measurements or the average of the grid created from the measurements?”
The measurements are the temperatures.

Reply to  Andy May
November 24, 2020 7:55 pm

Andy,
“So the problem is reduced to: do we grid the anomalies first, then average them?”
Yes, you should grid the anomalies. Again the issue is homogeneity. Anomalies are much more homogeneous, so simply averaging them is not as bad. But it is still not good enough. You still have to worry about whether you are evenly sampling the regions of high and low anomaly. Gridding (or equivalent) is always better.

Marcus
Reply to  Andy May
November 25, 2020 8:11 am

Assume a case where you have carefully placed 1 measurement station in each 100 km grid box across the US. But then you add a million measurement stations in Barrow, Alaska. Step one: convert all stations to anomalies. Step two: choose to either use the “average everything” or the “grid approach”. Which is better?

The average everything approach will give you back the anomaly of Barrow, Alaska, whose 1 million stations will overwhelm all your other data. Given that Alaska is generally warming faster than the rest of the country, it will make it look like the US is getting hot, fast!

The grid everything approach will give the 1 million Barrow stations the exact same weight as any other individual station. That seems like a much better approach to determining the average temperature change (or the average absolute temperature if you don’t use anomalies) of the country.

Michael S. Kelly
Reply to  Andy May
November 25, 2020 6:49 pm

I’ve asked this question on this blog a number of times over the years, and have yet to have anyone even attempt to answer it: Why don’t we put a weather station at the exact geographic location of each of these “grid points”, and see if what they measure bears any relation to all of our highly processed, homogenized, interpolated, and “corrected” calculated temperatures?

Or is that too difficult a concept?

slow to follow
Reply to  Nick Stokes
November 24, 2020 3:05 pm

Nick:

“The whole purpose of measuring here is to deduce something about the unmeasured area.”

Is this true? Isn’t this unnecessary? We have a body, distributed over which we have a set of thermometers. We want to understand what is happening to the temperature of the body over time, and we are seeking an aggregate view. Can’t we form a view of this based on the measurements as recorded at points? If not, why not? And if so, why would gridding and assigning temperatures across areas give us a better view? Where is the additional information coming from?

LdB
Reply to  slow to follow
November 24, 2020 4:32 pm

Because they want to pretend there is something hiding in gaps 🙂

In the hard sciences we used to get this argument from crackpots about the laws of gravity. The argument goes: unless you have measured every point, there could be a point where the laws are different, and so they too wanted to “GRID” the system up. Sounds similar to the garbage Nick the troll is dribbling.

The problem is the system is so large that you need to build a compelling case that you could get a discontinuity, and how such a discontinuity could arise. Your instinct is correct: you are better off using the existing points and not introducing sample quantization noise. Nick hasn’t a clue what he is doing, and if you ask for background publications all he will do is cite circular references to the stupidity of Climate Science (TM) and other stupids.

fred250
Reply to  slow to follow
November 25, 2020 2:50 am

Gridding, homogenisation, in-filling, kriging, etc are just methodologies to ADJUST data and remove the real information from the data.

They are ALL based on ASSUMPTIONS that may or may NOT be true for that particular set of data.

In “climate science™” they are often used to turn a set of decreasing-temperature sites into a set of increasing-temperature sites.

slow to follow
Reply to  slow to follow
November 25, 2020 3:39 am

FWIW I don’t see that evaluating average temperature change with time is the same problem as reserve estimation. The latter is a problem where kriging etc. adds analytical value. I think reserve estimation is a case of identifying the boundaries of concentrations within a particular spatial domain, rather than evaluating changes with time in an average value across an entire surface – but I’m open to persuasion otherwise by those who know more.

Reply to  slow to follow
November 25, 2020 5:52 am

fred250

“Gridding, homogenisation, in-filling, kriging, etc are just methodologies to ADJUST data and remove the real information from the data.”

Per Nick’s earlier reference to the hydrocarbon extraction biz, David Middleton should be taking serious umbrage at your ignorance w.r.t. the efficacy of these methods. Geoscientists have used geostatistics for at least the 40 years that I know of to recommend trillions in oil and gas investments. Quite successfully. I personally remember laughing at “nuggets” and “sills” until I got basic geostatistical training and later saw how well the reservoir models that used geostatistical outputs as inputs worked to add value to the PRIVATE E&P’s that employed us…

fred250
Reply to  slow to follow
November 25, 2020 12:12 pm

Yawn.. Yep they are great for having a GUESS. !! Sometimes.

Try not to be SO DUMB as to think they are anything else.

I had noticed that about you.

You pretend to know far more than you actually do, big oily blob.

Ian W
November 24, 2020 1:05 pm

Put simply, there is a lack of coherent validation of the correction/homogenization algorithms. Yet it is relatively easy to validation-test the outputs of these algorithms. First, someone must specify the required accuracy of the correction/homogenization algorithm output. That is, if there were an operating and accurate sensor at this geographic point, what is an acceptable difference between the real atmospheric measurements and the invented ones? It would appear from the Y axes of the graphs that 0.1°C is the desired accuracy, which may be pushing limits when anomalies are being calculated against human observations from the 1930s.
So the requirement is that, if a temperature (or other value) is to be generated or corrected, it should be within +/- 0.05°C of the value that would have been reported by an actual accurate observation system at that point.
The test for the algorithm is simple to automate. Take a set of good, quality-assured observation stations. Remove the data for one of the stations from the input file. Run the homogenization/correction algorithms to generate the missing observation, then compare the result from these algorithms to the actual high-quality observation that was removed.
IF the output from the algorithms is out of spec, that is, more than 0.05°C different from the actual observed value, THEN the algorithms are out of spec and need to be rewritten.
This test can be done automatically for the entire CONUS but in reality one failure is sufficient to require re-examination of the algorithms.
In the case of observation site moves, which are relatively rare (a thousand a year would only require three or so a day to manage), each and every one should have separately assessed algorithms/parameters to correct the old values to the new values, and these should be tested against parallel-run data. Then that site should be signed off manually by the data manager and a senior meteorologist from the observation site, and countersigned by the software programmer, describing what was done, what corrections needed to be applied, and why. This allows full data governance and audit, with an understanding of why particular changes were made and who was responsible for making them.
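
A minimal sketch of that leave-one-out test in R (the station table and the inverse-distance infill are stand-ins of my own, not NOAA’s PHA; the real algorithm under test would be dropped in where indicated):

    # Withhold each station in turn, infill it from the others, and flag estimates
    # that miss the withheld observation by more than the tolerance.
    # 'stations' is a hypothetical data frame with columns id, lat, lon, tavg.
    loo_check <- function(stations, tolerance = 0.05) {
      errors <- sapply(seq_len(nrow(stations)), function(i) {
        target <- stations[i, ]
        others <- stations[-i, ]
        # Stand-in infill: inverse-distance weighting. Replace with the actual
        # homogenization/correction code to test it.
        d   <- sqrt((others$lat - target$lat)^2 + (others$lon - target$lon)^2)
        est <- sum(others$tavg / d) / sum(1 / d)
        est - target$tavg                      # signed error at the withheld station
      })
      data.frame(id = stations$id, error = errors, in_spec = abs(errors) <= tolerance)
    }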

Ideally, there should be an independent Data Management organization outside NASA/NOAA etc. with the task of maintaining pristine raw data, and approving any new datasets based on that data.

I still find it odd that we take the mean of the highest and lowest atmospheric temperatures and call it the average, when it is NOT the mean of the daily temperature profile, and in any case averaging an intensive variable is not physics. The process then iteratively averages all these arithmetic means over wide temporal and spatial distances, taking no account of huge enthalpy differences.
The output of Stevenson Screen observations 90 years ago, taken hourly or at special observation times by a met observer with sleet blowing down his neck to perhaps the closest degree F, is ‘averaged’ with SAMOS output that is automated, can be taken every several seconds, and is recorded to a very precise value in degrees C. The mix of statistical sampling times and accuracies gives misleading results, which are then hidden by repeated ‘homogenizations and gridding’.

However, the first action is to apply governance and validation testing of homogenization and correction algorithms at each and every observation site. And have formal QA sign off for changes at individual sites where siting has changed.

Reply to  Ian W
November 24, 2020 1:38 pm

“Take a set of good, quality-assured observation stations. Remove the data for one of the stations from the input file. Run the homogenization/correction algorithms to generate the missing observation, then compare the result from these algorithms to the actual high-quality observation that was removed.”

This is a common misconception of the purpose of homogenisation. It isn’t to improve on the number that was measured at that place and time, with that equipment. That is assumed accurate. The purpose is to estimate what would have been measured at that time with the present equipment and location, so it can be compared with what is measured with the same setup at other times, including the present. That comparison tells you about climate; the raw data mixes in differences due to location changes, etc.

November 24, 2020 1:16 pm

Tropical ocean temperature cannot exceed 32°C. The shutters start going up when the temperature reaches 27°C, with almost full blackout on a daily basis before 32°C is reached. Tropical oceans can reject so much heat that they can cool under the midday sun:
https://1drv.ms/b/s!Aq1iAj8Yo7jNg3qPDHvnq-L6w5-5

Sea ice prevents heat loss from oceans to set the lower limit of -1.7C. Tropical oceans reject heat input to limit equatorial temperature to 32C. Everything else is noise.
https://1drv.ms/b/s!Aq1iAj8Yo7jNg3j-MHBpf4wRGuhf

If you see data showing a long term warming trend then suspect the measurement system before trusting the reading.

Reply to  RickWill
November 24, 2020 1:30 pm

I can easily envisage a warming bias on the Argo buoys simply due to fouling and their cycling. The salinity measurement is known to drift due to fouling:
https://www.researchgate.net/publication/226107755_Long-term_Sensor_Drift_Found_in_Recovered_Argo_Profiling_Floats

They say the temperature is stable, but if fouling occurs it will alter the temperature response and could bias the reading if the settling time for the reading is not long enough. It would be interesting to compare data collected during descent with data collected during ascent.

Loren C Wilson
Reply to  RickWill
November 24, 2020 3:59 pm

Platinum resistance thermometers usually read higher as they age. For those who haven’t used one, a platinum resistance thermometer is simply a resistor made from nearly pure platinum wire. Usually the wire, which is less than the diameter of a hair, is wrapped around a solid core to hold the strands in place. The unit is encapsulated in metal, quartz, or ceramic, depending on the sensor style and accuracy. The resistance of platinum increases in a regular and well-measured way as a function of temperature, by about 0.39 ohms per °C near room temperature for a standard 100-ohm sensor. Measure the resistance and you can convert it to a temperature.
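
As a concrete sketch of that conversion (the standard IEC 60751 coefficients for a 100-ohm sensor, quadratic form, valid above 0°C; the function name is mine):

    # Invert R = R0*(1 + A*T + B*T^2) for T, using the standard Pt100 coefficients.
    pt100_to_temp <- function(R, R0 = 100) {
      A <- 3.9083e-3
      B <- -5.775e-7
      (-A + sqrt(A^2 - 4 * B * (1 - R / R0))) / (2 * B)
    }
    pt100_to_temp(107.79)   # about 20 deg C; the slope is ~0.39 ohm per deg C here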

Fully-annealed and strain-free platinum wire has the lowest resistance at a given temperature. Rough handling and vibration induce strain in the wire and its resistance slowly increases. This is checked in the lab by measuring the temperature of a known fixed point that is more accurate than the thermometer. The triple point of water at 0.010°C is the gold standard. I used an ice point, which is only accurate to perhaps ±0.01°C, but I was using mostly industrial quality thermometers. When the resistance at the fixed point increases over what it was before, the thermometer has drifted.

Have any of the ARGO thermometers been retrieved to check the calibration?

Don K
Reply to  Loren C Wilson
November 25, 2020 3:23 am

“Have any of the ARGO thermometers been retrieved to check the calibration?”

I think the answer is probably yes. There’s a paper at
https://www.frontiersin.org/articles/10.3389/fmars.2020.00700/full which is remarkable if nothing else for the sheer number of authors (around 100 give or take). It discusses the system and its problems in considerable detail. The section on temperature sensors seems to say that they have recovered a few floats and done sensor analysis on them. But my sense is that they don’t do that routinely. Searching within the paper for “Long-Term Sensor Stability” should get you to the right section. Odds are that it’ll make more sense to you than it does to me.

Keep in mind that the pressure sensors on some floats have exhibited several kinds of problems. I think that if the pressure is wrong, the temperature — even if correct — will be ascribed to the wrong depth. But I could be wrong.

Loren C Wilson
Reply to  Don K
November 25, 2020 9:32 am

Good point.

Reply to  Don K
November 25, 2020 1:59 pm

This points out how naive it is to ascribe the term “warming” to a 0.07C change in a number measured by a large number of ascending and descending buoys operating in the real world ocean.

To anyone who has tried to measure anything, resolving a change of 0.07°C is incomprehensibly ambitious. Even sillier when considering what happens in the ocean in a matter of days, let alone a few years.

One or maybe two barnacles in the right place could be responsible for the so-called “warming”.

Michael S. Kelly
Reply to  Loren C Wilson
November 26, 2020 6:23 pm

“Have any of the ARGO thermometers been retrieved to check the calibration?”

I’ve read fairly extensively on the ARGO buoys, and yes, a number of the PRTs have been recovered and have been shown to maintain their factory calibration remarkably well (without exception). They are very high quality PRTs, but, as RickWill notes below, measuring any temperature to within 0.07 C is exceedingly difficult. Despite the fact that the ARGO PRT specs would say that it’s possible, I’m not convinced that the thermal time constant for the PRTs is compatible with the descent and ascent rates of the probes. I’m not convinced because I’ve never found any reference to it having been measured or accounted for in the data from the buoys.

Nonetheless, these buoys are a gigantic step forward in sea temperature measurement from the practice of dunking a thermometer in a bucket of bilge water aboard a merchant ship whose location is only approximately known at the reported time of the measurement. It’s a pity that the high quality information is then put in a meat grinder of gridding and interpolations, destroying much of the original information.

Dudley Horscroft
Reply to  Michael S. Kelly
November 27, 2020 12:21 am

Michael Kelly remarked:
“the practice of dunking a thermometer in a bucket of bilge water aboard a merchant ship whose location is only approximately known at the reported time of the measurement.”

As one whose job when a cadet on watch on a weather observing ship was to sling the canvas bucket overside on a long rope to get a substantial sample of surface water in which the thermometer could be immersed, and then to read the temperature off the thermometer, I take exception to the use of “bilge water”. Any cadet who went to the engine room to access the bilges there would have been very severely reprimanded (a) by the duty engineer officers for invading the engine room, (b) by the navigation officer of the watch for disobeying the instructions to get a sea water sample, and for getting the bucket soiled with oil, and (c) by the Master for producing incorrect data for the radio message. Likely penalty, withdrawal of shore leave at next few ports. And contrary to TV shows, access to a ship’s holds is normally not practical, (the ER would be the only place where there could be easy access to the bilges) nor is there normally any bilge water – soundings are regularly taken and any accumulation of water is regularly pumped out.

Finally, the ship’s position at the time of measurement is much more accurately known than what would end up as a position in the centre of a grid in the Met Office.

Observing ships and staffs prided themselves on the accuracy of their observations.

Rich Davis
November 24, 2020 1:42 pm

Nitpick of the day, but CONUS is a military acronym meaning continental United States. As opposed to OCONUS (outside the continental United States).

It is often referred to as the “contiguous” US, or the “lower 48”, and I only heard it referred to as conterminous for the first time today. I guess either is logical, but technically CONUS doesn’t mean either one.

CONUS doesn’t make much sense, since Alaska is on the same continent as the lower 48. Not sure how we categorize Hawaii, since it is on the same tectonic plate as Los Angeles. Maybe LA is not part of CONUS. I find that believable.

Rich Davis
Reply to  Andy May
November 24, 2020 3:33 pm

Not officially

Reply to  Rich Davis
November 24, 2020 6:21 pm

I too had never heard “conterminous” before and looked it up. I found this first:
https://www.acronymfinder.com/Military-and-Government/CONUS.html

CONUS Continental United States
CONUS Contiguous United States
CONUS Conterminous United States

Further search found this:
https://www.jcs.mil/Portals/36/Documents/Doctrine/pubs/jp1_ch1.pdf?ver=2019-02-11-174350-967

Joint Publication 1, Doctrine for the Armed Forces of the United States

continental United States. United States territory, including the adjacent territorial waters, located within North America between Canada and Mexico. Also called CONUS. (Approved for incorporation into JP 1-02 with JP 1 as the source JP.)

Seems that all are appropriate, and I learned a new word today.

Steve Z.
Reply to  Andy May
November 25, 2020 4:42 am

Before reading your essay, I had never seen the word “conterminous.”

“Coterminous” – the word I have always used – has the same meaning, according to Google.

Kevin A
November 24, 2020 1:58 pm

Can I argue that “The oceans contain 99.9% of the thermal energy (“heat”) on the surface of the Earth” and from 2004 until 2019 they have warmed 0.07°C, which indicates that the land-based temperatures are being massaged for political reasons just as Biden votes were?

Reply to  Kevin A
November 24, 2020 4:20 pm

I am doubtful of the 0.07°C, as that could easily be a bias caused by fouling. Operators have already determined that the salinity reading is biased due to fouling. Reduced heat transfer to the sensing probe could bias the readings of a buoy that is ascending and descending while taking measurements.

Two things are certain. (1) The ocean surface becomes solid below -1.7°C, which slows heat loss. (2) Daily cloud bursts occur when the precipitable water reaches 38mm, typically at an ocean surface temperature of 27°C, such that the ocean surface temperature can never exceed 32°C; the shutters go up and the sea surface can actually cool under the midday sun.
https://1drv.ms/b/s!Aq1iAj8Yo7jNg3qPDHvnq-L6w5-5

Tropical ocean surface temperature shows no trend:
https://1drv.ms/b/s!Aq1iAj8Yo7jNg3j-MHBpf4wRGuhf

The rest is just noise. If you see a long-term temperature trend that is not zero, look for the error in the measurement system. Earth has hung around for a long time despite volcanoes, very high levels of CO2 and asteroid impacts, but the powerful negative feedbacks, embodied by water over the surface and in the atmosphere, bring it back to the average of the two limits, (-1.7 + 32)/2 ≈ 15°C (288 K). No “Greenhouse Effect” required.

Chris Hanley
November 24, 2020 2:01 pm

The up-adjustments of linear temperature trends over the past century or so in the NASA and BEST global surface records for individual stations seem far more common than the opposite; in fact, I don’t recall seeing any adjusted falling trends, although there may be some.
Merely eyeballing, I’ve noticed that the year-to-year anomalies remain; only the overall century-plus linear trend is ‘corrected’.
Why would that be?
Given that the compilers long ago committed to ‘the cause’, to a lay observer the many adjustments (aka corrections) are becoming comical.

November 24, 2020 2:27 pm

Wow, that’s a lot of comments in four hours. Anyway, here’s a graph that illustrates the changes made to GISTEMP’s Land Ocean Temperature Index since 1997:

comment image

Here’s the number of changes GISTEMP has made to their Land Ocean Temperature Index so far in 2020:

Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec
319 240 313 340 298 404 319 370 303

October 2020 is probably out, I’ll have to update my file.

Reply to  Steve Case
November 24, 2020 3:45 pm

If my memory serves me – for a short period of years early in the new millennium – GISS actually produced UHI corrections to raw station data.

November 24, 2020 2:34 pm

All those US temperature stations are not useful for climate study. A few people have gone through and discarded the stations that are suspect due to environmental changes nearby, and the remaining stations do not indicate any global warming. One of the main points of statistics is that you choose a sample to represent the whole; there is a known degree of accuracy associated with the sample, and a large savings of work. All those stations are not needed even if they were all accurate.

November 24, 2020 2:46 pm

The Australian BoM has generated the ACORN series of corrected temperature data, which warms more than the raw data. What Andy is finding for US temperature data looks to have undergone a similar process.

Reply to  wazz
November 24, 2020 4:24 pm

It is known as “World’s Best Practice Temperature Homogenisation”. It is the standard means of creating a warming trend where there is none and cannot be one.

November 24, 2020 2:49 pm

Update to my previous post: there were 389 changes for October made in GISTEMP’s most recent update to their Land Ocean Temperature Index. It looks like this:

Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec
319 240 313 340 298 404 319 370 303 289

Reply to  Steve Case
November 24, 2020 8:40 pm

Oops, 389 changes made to the October 2020 LOTI data.

Robert of Texas
November 24, 2020 3:13 pm

Excellent article, I really enjoyed this.

Adding data to infill “grids” makes sense if you are looking for things to investigate further, but when it is used to create an area-average temperature to support a hypothesis it is completely open to bias and outright corruption. The assumptions made to create the infill are arbitrary and up to the person creating the algorithms – which is why it is so important that they be clearly listed, debated, and agreed upon. Once this process is hijacked by politics, the resulting output is just propaganda – no longer of any scientific use.

Thank goodness for people who archive the old raw data – it allows such bias and corruption to be easily spotted.

November 24, 2020 3:49 pm

After enjoying every article here excerpted from Mr. May’s book, and thinking we were eventually going to get the whole book for free here, this article disappointed me. I realize it was not from the book.

The subject was important, and I tried to read the whole article, but I got a headache, and lost my mind, again.
The writing is tedious, and the material is not organized in an easy to follow format. And the use of linear trend lines for non-linear data, makes no sense, and is deceptive. I tried reading the article in a mirror, but that did not help. So my conclusion is that the author should be horsewhipped, tarred and feathered, and run out of town on a rail. Or on a railroad.

observa
Reply to  Richard Greene
November 24, 2020 4:42 pm

“the material is not organized in an easy to follow format. And the use of linear trend lines for non-linear data, makes no sense, and is deceptive.”

You mean it was just like the goddam weather and climate, and you don’t think there’s any linear trend to be had, like the climate changers reckon there is and want to reverse? Yep, that was pretty much it for me too, but I know who needs to be horsewhipped etc. and run out of town for cooking it up in the first place.

Reply to  Andy May
November 25, 2020 10:12 am

I only posted a comment after reading Robert’s favorable review of the article, and comparing what I TRIED to read with the well-written excerpts here from your book. This here article is a great example of how NOT to write.

This time I skipped to the article conclusion and that was pretty good.
I especially liked:
“My internal BS meter red-lines when I hear a laundry list of smoothing algorithms, correction algorithms, bias adjustments, etc. I want to scream “keep your &#$@ing maps and calculations as close to the real data as possible!” ”

I also read there, but DO NOT agree with:
“A better indicator of climate change or global warming is the trend of ocean warming, shown in Figure 2. Notice the trend over the past 16 years is only 0.4°C per century.”

Because 16 years is too short for determining a long term trend.

A better indicator of climate change might be the percentage of old fogies complaining about the weather. A slightly better indicator would be using long term tide gauge records.

My other conclusions:
— Robert from Texas is not really from Texas, nor is his real name Robert. Also, he was paid off at least $2.75 to compliment your article. His real name is Eaton N. Faartz.

— You hired a ghostwriter for your book, and he did a very good job, judging from the articles here that were derived from your book. Everybody should buy your book to pay for your needed writing class. Write as you talk, Mr. Maybe, unless you mumble, stumble and stutter, like our next President.

Note: This comment was NOT influenced by adult beverages.

Mr.
November 24, 2020 5:11 pm

Can I just leave a question here (asking for a friend of course) –
when a cheeky bunch of clouds wanders across the sun for an hour or so, do the weather station thermometers report “TILT” beside that period’s readings?

Dudley Horscroft
Reply to  Mr.
November 27, 2020 12:31 am

Don’t know about that, but my brother showed me the graphs of power from his solar cells when the clouds did that. And in the Northern Territory, the solar farm suddenly shut down, and as the reserve power supplies could not start up soon enough, the whole Northern Territory had a blackout! Certainly a loud shout of “TILT”!

Doug Proctor
November 24, 2020 5:41 pm

I mapped, by hand, data whose distribution was not statistically valid for decades in the oil and gas business. Then computer gridding and mapping came along, and it aligned with lazy geologists and geophysicists. I noted they always dropped the data points from their maps, and I insisted they keep them. What I found was that the algorithms did not honor a lot of the data points. Mathematics rules.

In the real world, trends violate mathematical rules regularly. A cold spot exists between two hot spots due to peculiarities of air flow, especially in or near mountainous or hilly areas.

Has anyone looked at the computer maps with the datapoints remaining?

Bill Rocks
November 24, 2020 6:04 pm

Andy May,

I appreciate your work to acquire, understand and analyze the near-surface earth temperature data and the derivations thereof, and to communicate your work. Good detective work and data analysis. I will not complain about your writing – the subject is a convoluted and tedious mess, and you have provided some clarity.

I also agree that gridding geographic data cannot improve the original data, but gridding is commonly done to facilitate grid-to-grid mathematical operations in search of a different answer, such as in-situ minerals or fluids.

Finally, it is difficult for me to understand how sketchy temperature data, rather than enthalpy, has been used to drive the global warming train. It looks to me that you have revealed the subject to be a slow-motion train wreck, whether intentional or not. Hope to read more of your work.

November 24, 2020 6:53 pm

“Zeke’s next point is that Goddard did not grid his data. Thus, he did not deal with the uneven distribution of stations and the changing distribution of stations over time. These are real problems and they do affect internal trends within CONUS”

Perhaps they should revert to just graphing one station at a time… Forget the whole CONUS or Global deal!

Oh wait, JoNova and Wood for Trees already do that, and the single stations show cooling temperatures long term, and stable rather than rising temperatures short term.

Good analysis Andy!
Don’t get dragged into the morass of dubious adjustments by those engaged in the global warming religion.

November 24, 2020 8:56 pm

Ocean Warming
We have estimated that the oceans, which cover 71% of the Earth’s surface, are warming at a rate of 0.4°C per century, based on the least squares linear trend shown in Figure 2. This is a very rough estimate and based only on data from 2004 to 2019 and temperatures from the upper 2,000 meters of the oceans. The data before 2004 is so sparse we didn’t want to use it. The error in this estimate is roughly ±0.26°C, from the surface to 2,000 meters and unknown below that.

Really, that didn’t stop the IPCC. They say in the first two sentences of Chapter 5 of their AR5 Assessment Report:

                            Executive Summary
The oceans are warming. Over the period 1961 to 2003,
global ocean temperature has risen by 0.10°C from the
surface to a depth of 700 m

If they really meant that trailing zero then they’re saying not 0.09 and not 0.11 but 0.10°C.

Clyde Spencer
November 24, 2020 9:32 pm

Andy,
You remarked, “… thus the R2 is irrelevant.” I beg to differ. A useful interpretation of the R^2 value is that it gives the proportion of the variance in the dependent variable (temperature) that is explained by changes in the independent variable (time). Thus, a correlation coefficient (R) of less than about 0.71 implies that only about 50% of the variance in the dependent variable is associated with changes in the independent variable. That means you’re getting into coin-toss territory. Or, to put it another way, a small R means there is little or no linear correlation between the two variables. Plotting the data, as you have done, might indicate whether there is high correlation with some other functional transform of the independent variable. That doesn’t seem to be the case here. So, presenting the R^2 value might be instructive in showing just how futile it would be to try to determine cause and effect.

I think it would be interesting to see the R^2 values for the raw and adjusted trends in Figure 4 to judge whether the adjustments have improved or degraded the predictability of temperature over time.
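
For anyone who wants to try it, a minimal R sketch of the comparison (made-up monthly anomalies standing in for the Figure 4 series):

    # A weak trend buried in noise, then the least-squares slope and its R^2.
    set.seed(1)
    yr   <- seq(1890, 2020, by = 1/12)
    anom <- 0.005 * (yr - 1890) + rnorm(length(yr), sd = 0.5)
    fit  <- lm(anom ~ yr)
    coef(fit)["yr"] * 100          # trend in degrees per century
    summary(fit)$r.squared         # small R^2: time explains little of the variance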

Geoff Sherrington
November 25, 2020 1:58 am

Nick Stokes has sadly let belief overwhelm correct science.
You cannot improve accuracy by making adjustments, except in special cases.
Example: you have a stream of numbers that includes an outlier that, in conventional statistical terms, is ten standard deviations above the mean. Conventionally, one is compelled to reject the very high value, but one should not do so unless there is an explanatory cause. For example, with ambient air temperatures a 10-sd anomaly might justifiably be rejected because the thermometer would break before that reading could exist. Or there might be another explanation that allows you to reject it. It is a wrong belief system that lets people reject values as outliers solely on the statistical reasoning that all values outside +/- 3 sd, or whatever, can and should be deleted.
We used to find an occasional new mine from exploration geochemistry where we welcomed values many times outside a few standard deviations. Belief systems operate in climate change work to try to get rid of what we valued in geochemistry.
Years ago in CSIRO I cut my teeth on statistics by doing analysis of variance manually, pencil and paper and eraser, using Fisher’s seminal method. We analysed the growth of plants that had several different levels of several different fertilizers. This was a lovely way to learn about co-variables and then variables that were acting non-linear, then variables that interacted with each other ….
One of the big failures in climate work is lack of knowledge of, and lack of control over, co-variables, and seldom much attention to non-linear and interacting variables. Like with plants, where added Mo does a lot or a little depending on the Ca level, but with different magnitudes when the soil has a high or a low pH. Life in general is a sea of interacting variables, much of which we have failed to quantify, particularly in the climate change scene with its newcomers to science and its L-plate drivers, who time and again have been advised to engage and listen to professional statisticians.
Nick Stokes seems to have a liking for the “anomaly” method of showing temperatures. You select a lengthy period from some time-series data like temperatures, then subtract its average from all values to get an anomaly (sic) that shows how the temperatures have changed relative to each other rather than relative to an absolute mark, like degrees on the Kelvin scale. I think I have read Nick saying that regional temperature estimates are more accurate when you use this anomaly method. Correct me if I am wrong, Nick. But, you cannot improve the accuracy of a set of data like temperatures by making adjustments that you believe will give improvement. To be correct, one should not adjust values unless there is a clear, physical reason to do so. The NOAA-type global temperature sets are in flagrant violation of established, firm principles. It is ludicrous to read of hundreds of adjustments a month being made to some sets, sometimes a century or more after the original reading, with little to no means available to see what physical error was made that so needs adjustment. Rather than devising mathematics to do a Time of Observation correction, for example, the actual observations should be retained and an error envelope applied according to the time of day they were taken. You cannot know how the temperature changed between the actual and the desired observing time; you have to guess, and once you guess you are into belief that you are right, and so you have disqualified yourself.
In my Australia, I have studied the national temperature data sets since about 1992. The latest adjusted set named ACORN-SAT version 2.1 has numerous changes from its predecessors Acorn 2, Acorn 1, the “High Quality” set a few years earlier, the set named AWAP, the raw data as written by the observer and more lately, as in the USA, the gridded data set(s). The latter, as Andy notes, rely on interpolation to infill data missing from grids. Interpolation is an estimate and every estimate is required, in correct science, to have a proper description of its errors and uncertainties.
The best that I can think of for these various Australian sets is to make spaghetti from them, draw an error envelope that encloses 95% of the values and call that the MINIMUM mathematical uncertainty in the data. It comes out at something like +/- 1.2 degrees C. But, as stated already, this is no more than an aid to visualisation, because of the rule that no raw data can be adjusted in the absence of an observed physical effect. Geoff S

Reply to  Geoff Sherrington
November 25, 2020 11:42 am

Geoff,
“Correct me if I am wrong, Nick. But, you cannot improve the accuracy of a set of data like temperatures by making adjustments that you believe will give improvement.”
You are not trying to improve the accuracy of the data. You are trying to improve the accuracy of a deduced quantity, in this case the spatial average.

Taking anomaly improves because it enables you to average a more homogeneous data set, reducing sampling uncertainty.

Homogenisation improves deduction from the average over time, because it removes effects that are not due to climate but to changes in your way of measuring (including location).
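
A minimal sketch of that sampling argument (two invented stations with identical warming, one of which drops out; the numbers are mine, not from any real record):

    # One warm and one cool hypothetical station, same trend, different climates.
    set.seed(2)
    yrs  <- 1950:2019
    warm <- 20 + 0.01 * (yrs - 1950) + rnorm(length(yrs), sd = 0.3)
    cool <-  2 + 0.01 * (yrs - 1950) + rnorm(length(yrs), sd = 0.3)
    both <- yrs <= 2000                         # the cool station stops reporting in 2001
    abs_mean <- ifelse(both, (warm + cool) / 2, warm)        # jumps ~9 C at the dropout
    base <- yrs >= 1961 & yrs <= 1990
    anom_mean <- ifelse(both,
                        ((warm - mean(warm[base])) + (cool - mean(cool[base]))) / 2,
                        warm - mean(warm[base]))             # no jump, trend preserved

The average of absolute temperatures inherits a step change from the changing station mix; the average of anomalies does not, which is the extra homogeneity being claimed.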

Geoff Sherrington
Reply to  Nick Stokes
November 25, 2020 1:33 pm

Nick,
That is what I mean. Who defines “more homogeneous”?
Someone overcome by belief?

I have this mental image of the first Apollo lander nearing the surface of the Moon.
Command says “There will be a delay while we homogenize your altitude readings and discuss among ourselves whose homogenization is best”.
Geoff S

Reply to  Nick Stokes
November 25, 2020 7:24 pm

Using anomalies does not improve the accuracy of the data in any manner whatsoever. Subtracting 10°C (an average baseline) from a temperature of 20°C +/- 0.5°C doesn’t make the result, 10°C, any more accurate than +/- 0.5°C. This is especially true when the baseline average has an uncertainty of at least +/- 0.5°C, if not much larger.

Deducing something within the uncertainty range is impossible. Only by ignoring the uncertainty, and assuming that all measurements, and therefore all averages, are 100% accurate, can such a deduction be made.

Take a look at Figure 2. Almost all of the anomalies are within the +/- 0.26°C uncertainty claimed for the data. How do you discern a trend when you don’t know the actual true values? If you blacked out (whited out?) all of the trend line within the +/- 0.26°C uncertainty interval, you wouldn’t be left with much of a trend line to look at.
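
For what it is worth, the standard propagation for independent errors makes the same point: subtracting a baseline adds its uncertainty in quadrature rather than removing anything (illustrative numbers only).

    # Anomaly = reading - baseline; for independent errors the uncertainties add
    # in quadrature, so the anomaly is no more certain than the raw reading.
    u_reading  <- 0.5                       # deg C, per the example above
    u_baseline <- 0.5                       # deg C, uncertainty of the baseline mean
    sqrt(u_reading^2 + u_baseline^2)        # ~0.71 deg C for the anomaly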

Geoff Sherrington
November 25, 2020 2:03 am

Oh, and I forgot to add that Rud Istvan was correct in calling the USA raw data not fit for purpose, when the purpose is to estimate national warming on century scales. Same with Australia. Geoff S

A C Osborn
Reply to  Geoff Sherrington
November 25, 2020 10:18 am

Never designed for it.

Mark Pawelek
November 25, 2020 4:36 am

US NTI is an anomaly. So are they all. They use anomalies because it’s easier to say the temperature increased without a firm empirical baseline. A book originally published in 1853 (and republished) says the global surface temperature average was 14.6C in 1889: “Distribution of heat on the surface of the globe“, by Heinrich Wilhelm Dove.

JohnA
November 25, 2020 11:59 am

I thought this was an excellent overview of the problems with the new temperature statistical metric for the CONUS.

Figure 5 demands a physical explanation as to what is consistently causing a non-climatic cooling in the dataset. If climatology were an ordinary science, someone would have demanded an answer that made physical sense, and it would be the focus of a lot of papers.

I struggle to work out what the explanation for the slope would be: did the Stevenson screens become more reflective to heat over time?

M Seward
November 25, 2020 3:31 pm

Fig. 3 looks like it has a long-period component in it, starting on a trough and finishing on a peak. You will get a linear uptrend from such data even if it is a pure sinusoid. The so-called linear regression looks like monstrous mathematical and statistical incompetence to me. Fitting a line to data that clearly has a component suggesting a seriously non-linear mathematical form is below the level of a freshman wannabe, IMHO.
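
A quick illustration of that trap (a made-up 60-year cycle with no underlying trend at all):

    # Fit a straight line to one half-cycle of a pure sinusoid, trough to peak.
    t <- seq(0, 30, by = 0.1)                 # years
    y <- sin(2 * pi * t / 60 - pi / 2)        # 60-year oscillation, zero trend
    coef(lm(y ~ t))["t"]                      # positive "trend" from the oscillation alone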

Tom Abbott
November 26, 2020 9:10 am

From the article: “stirring the manure faster”

I like the comparison! 🙂

From the article: “But the corrections and the gridding process are statistical in nature, they do nothing to improve the accuracy of the National Temperature Index.”

That’s correct. The only reason to use this process is to get a global average and a temperature trend. The reason not to use it is when the computer-generated global average and the trend do not agree with the actual temperature readings and trend.

The actual temperature readings show the world is in a temperature downtrend since the Early Twentieth Century, which shows there is no need for CO2 mitigation. The computer-generated global average we have today says we need to spend Trillions of dollars on CO2 mitigation. It would be in our interests to go with the actual temperature readings, and save ourselves a lot of money and a lot of worry on the part of people who don’t know any better.

From the article: “Remember the slope of the least squares line, 1.5°C per century, it will be important later in the post.”

Yes, the Data Manipulators have changed a cooling trend into a warming trend with their computers.

From the article: “When the “corrected” data has a very different trend than the raw data, one should be skeptical.”

Yes, especially when the “corrected” data means we have to spend Trillions of dollars mitigating CO2’s alleged dangers. The actual temperature readings say there is no CO2 danger to worry about.

From the article: “The raw data shows CONUS is cooling by 0.3°C per century.”

That’s right. That also applies to every other nation on Earth for which we have data. All regional, unmodified surface temperature charts show the same cooling trend from the Early Twentieth Century. The only thing that doesn’t show a cooling trend is the computer-generated global temperature chart. It’s all by itself in the world. An outliar.

From the article: “Figure 5. This plots the difference (Final-Raw) between the two actual station temperature curves in Figure 4. As you can visually see, the difference between the final and raw curve trends, since 1890, is about 0.8°C, roughly the claimed warming of the world over that period.”

So all the “warming” that has taken place over the decades, has taken place inside a computer, not in the real world.

From the article: “All three plots suggest it was as warm or warmer in 1931 and 1933 in the conterminous U.S. states as today.”

Correct again. Hansen says 1934 was the hottest year in the US, and that it was 0.5C warmer than 1998, which would make it 0.4C warmer than 2016, the so-called “hottest year evah!”

So when Gavin Newsom, California governor says global warming is causing his forest fires, what is he referring to? It’s cooler now in California than in the past, not hotter. The same goes for the rest of the world, it is cooler now than in the recent past. Computer-generated Science Fiction like we have with climate science is leading a lot of people astray. To get them back on the “straight and narrow” we need to hit them over the head with the actual temperature readings until they start to sink in.

From the article: “The century slope of the data is 0.25°, the corrections add 0.35° to this and the “climatological gridding algorithm” adds 0.9°!”

It’s just pure fraud on the part of the Data Manipulators. The only warmth we are experiencing is in the minds of these climate fraudsters.

From the article: “We argue a lot about relatively small differences in the land-surface temperatures. These arguments are interesting, but they don’t matter very much from the standpoint of climate change.”

Well, actually, they do matter a lot when it comes to spending money on climate change. Deciding which temperature charts to accept as reality means the difference between wasting Trillions of dollars trying to mitigate CO2 “dangers” that don’t exist, or not doing so and spending that money on something productive.

I thought you wrote a good article, Andy, but I think the focus should be on what I laid out in that last paragraph. We need to show that the computer-generated global surface temperature chart is a Fraud, and we can do that by emphasizing unmodified, regional temperature charts, which tell a completely different story than the one the Bogus Hockey Stick global temperature chart tells.

Let’s save ourselves some money and trouble by declaring the regional temperature records the official temperature records of the Earth, and throw the Bogus Hockey Stick chart in the trash, where it belongs.

Tom Abbott
Reply to  Tom Abbott
November 27, 2020 5:35 am

And let me add some regional surface temperature charts from around the world that show CO2 is not a problem. What the regional surface temperature charts show is that it was just as warm in the Early Twentieth Century as it is today.

What does it mean if it was just as warm in the Early Twentieth Century as it is today? What it means is CO2 is a minor player in the temperatures of the Earth’s atmosphere.

Since the 1930’s, human-caused CO2 has increased, yet the temperatures cooled for decades after the CO2 started increasing, cooling down from the hot 1930’s to the cold 1970’s, where at one point some climate scientists were warning that the Earth might be descending into a new Ice Age (human-caused, of course), but that didn’t happen.

Instead, the temperatures started to warm in the 1980’s (keep in mind that CO2 has been constantly rising over this entire period of decades) and the temperatures warmed up to 2016, which is described by NOAA/NASA as the “hottest year evah!”, yet the temperature high point in 2016 did not exceed the high point in 1998 (a statistical tie), and 2016 was cooler than 1934 by 0.4°C.

CO2 has increased for 90 years since the 1930’s, yet the Earth’s temperatures went into decline from 1940 to 1980 and then warmed from 1980 to the present, and the warmth of today has never exceeded the warmth of the 1930’s, when all this drama began. So obviously, CO2 has had very little effect on atmospheric temperatures. CO2 could not stop the temperatures from declining for decades from 1940 to 1980, and when warming resumed, CO2 could not push the temperatures higher than they were in the 1930’s. And now, today, the temperatures have declined by 0.3°C since the year 2016.

Here are some regional charts of actual temperatures from all over the world. They all show that it was just as warm in the Early Twentieth Century as it is today. They all show that CO2 is a minor player in the Earth’s atmosphere and that there is no need to spend Trillions of dollars to fix this non-problem.

Tmax charts

US chart:

comment image

China chart:

comment image

India chart:

comment image

Norway chart:

comment image

Australia chart:

comment image