Crowd Sourcing A Crucible

Last month we introduced you to the new reference site EverythingClimate.org

We would like to take advantage of the brain trust in our audience to refine and harden the articles on the site, one article at a time.

We want input to improve and tighten up both the Pro and Con sections (calling on Nick Stokes, et al.).

We will start with one article at a time, and if this works well, it will become a regular feature.

So here’s the first one. Please give us your input. If you wish to email marked-up Word or PDF documents, use the information on this page to submit them.

https://everythingclimate.org/measuring-the-earths-global-average-temperature-is-a-scientific-and-objective-process/

Measuring the Earth’s Global Average Temperature is a Scientific and Objective Process

Earth Thermometer from 123rf.com

Pro: Surface Temperature Measurements are Accurate

A new assessment of NASA’s record of global temperatures revealed that the agency’s estimate of Earth’s long-term temperature rise in recent decades is accurate to within less than a tenth of a degree Fahrenheit, providing confidence that past and future research is correctly capturing rising surface temperatures.

….

Another recent study evaluated NASA Goddard’s Global Surface Temperature Analysis (GISTEMP) in a different way that also added confidence to its estimate of long-term warming. The paper, published in March 2019 and led by Joel Susskind of NASA’s Goddard Space Flight Center, compared GISTEMP data with that of the Atmospheric Infrared Sounder (AIRS) onboard NASA’s Aqua satellite.

GISTEMP uses air temperature recorded with thermometers slightly above the ground or sea, while AIRS uses infrared sensing to measure the temperature right at the Earth’s surface (or “skin temperature”) from space. The AIRS record of temperature change since 2003 (which begins when Aqua launched) closely matched the GISTEMP record.

Comparing two measurements that were similar but recorded in very different ways ensured that they were independent of each other, said Gavin Schmidt, director of NASA’s Goddard Institute for Space Studies (GISS). One difference was that AIRS showed more warming in the northernmost latitudes.

“The Arctic is one of the places we already detected was warming the most. The AIRS data suggests that it’s warming even faster than we thought,” said Schmidt, who was also a co-author on the Susskind paper.

Taken together, Schmidt said, the two studies help establish GISTEMP as a reliable index for current and future climate research.

“Each of those is a way in which you can try and provide evidence that what you’re doing is real,” Schmidt said. “We’re testing the robustness of the method itself, the robustness of the assumptions, and of the final result against a totally independent data set.”

https://climate.nasa.gov/news/2876/new-studies-increase-confidence-in-nasas-measure-of-earths-temperature/
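
A minimal sketch of the kind of cross-check described above, assuming two overlapping monthly global-anomaly series; the arrays below are random placeholders, not the actual GISTEMP or AIRS data.

import numpy as np

# Placeholder monthly anomaly series over a shared 20-year window (2003 onward),
# one standing in for a surface record (GISTEMP-like), one for a satellite record (AIRS-like).
rng = np.random.default_rng(0)
months = np.arange(240)
surface = 0.02 * (months / 12) + rng.normal(0, 0.08, months.size)
satellite = 0.02 * (months / 12) + rng.normal(0, 0.08, months.size)

# Agreement between independent records is usually summarized by their correlation
# and by how closely their linear trends match.
corr = np.corrcoef(surface, satellite)[0, 1]
trend_surface = np.polyfit(months / 12, surface, 1)[0]      # deg C per year
trend_satellite = np.polyfit(months / 12, satellite, 1)[0]  # deg C per year

print(f"correlation: {corr:.2f}")
print(f"trend difference: {abs(trend_surface - trend_satellite):.3f} C/yr")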

Con: Surface Temperature Records are Distorted

The global warming record is made artificially warmer by manufacturing climate data where there isn’t any.

The following quotes are from the [peer-reviewed] research paper A Critical Review of Global Surface Temperature Data, published on the Social Science Research Network (SSRN) by Ross McKitrick, Ph.D., Professor of Economics at the University of Guelph, Guelph, Ontario, Canada.

“There are three main global temperature histories: the United Kingdom’s University of East Anglia’s Climate Research Unit (CRU)-Hadley record (HADCRU), the US NASA Goddard’s Global Surface Temperature Analysis (GISTEMP) record, and the US National Oceanic and Atmospheric Administration (NOAA) record. All three global averages depend on the same underlying land data archive, the US Global Historical Climatology Network (GHCN). CRU and GISS supplement it with a small amount of additional data. Because of this reliance on GHCN, its quality deficiencies will constrain the quality of all derived products.”

As you can imagine, there were very few air temperature monitoring stations around the world in 1880. In fact, prior to 1950, the US had by far the most comprehensive set of temperature stations. Europe, Southern Canada, the coast of China, the coast of Australia and Southern Canada had a considerable number of stations prior to 1950. Vast land regions of the world had virtually no air temperature stations. To this day, Antarctica, Greenland, Siberia, the Sahara, the Amazon, Northern Canada and the Himalayas have extremely sparse, if not virtually non-existent, air temperature stations and records.

“While GHCN v2 has at least some data from most places in the world, continuous coverage for the whole of the 20th century is largely limited to the US, southern Canada, Europe and a few other locations.”

Panels Above:
Top panel: locations with at least partial mean temperature records in GHCN v2 available in 2010.
Bottom panel: locations with a mean temperature record available in 1900.

With respect to the oceans, seas and lakes of the world, covering 71% of the surface area of the globe, there are only inconsistent and poor-quality air temperature and sea surface temperature (SST) data, collected as ships plied mostly established sea lanes. These temperature readings were made at differing times of day, using disparate equipment and methods. Air temperature measurements were taken at inconsistent heights above sea level, and SSTs were taken at varying depths. GHCN uses SSTs to extrapolate air temperatures. Scientists must make literally millions of adjustments to calibrate all of these records so that they can be combined and used to determine the GHCN data set. These records and adjustments cannot possibly provide the quality of measurements needed to determine an accurate historical record of average global temperature. The potential errors in interpreting this data far exceed the amount of temperature variance.

https://en.wikipedia.org/wiki/International_Comprehensive_Ocean-Atmosphere_Data_Set

“Oceanic data are based on sea surface temperature (SST) rather than marine air temperature (MAT). All three global products rely on SST series derived from the International Comprehensive Ocean-Atmosphere Data Set (ICOADS) archive, though the Hadley Centre switched to a real time network source after 1998, which may have caused a jump in that series. ICOADS observations were primarily obtained from ships that voluntarily monitored sea surface temperatures (SST). Prior to the post-war era, coverage of the southern oceans and polar regions was very thin.”

“The shipping data upon which ICOADS relied exclusively until the late 1970s, and continues to use for about 10 percent of its observations, are bedeviled by the fact that two different types of data are mixed together. The older method for measuring SST was to draw a bucket of water from the sea surface to the deck of the ship and insert a thermometer. Different kinds of buckets (wooden or Met Office-issued canvas buckets, for instance) could generate different readings, and were often biased cool relative to the actual temperature (Thompson et al. 2008).”

“Beginning in the 20th century, as wind-propulsion gave way to engines, readings began to come from sensors monitoring the temperature of water drawn into the engine cooling system. These readings typically have a warm bias compared to the actual SST (Thompson et al. 2008). US vessels are believed to have switched to engine intake readings fairly quickly, whereas UK ships retained the bucket approach much longer. More recently some ships have reported temperatures using hull sensors. In addition, changing ship size introduced artificial trends into ICOADS data (Kent et al. 2007).”
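
A simplified sketch of the kind of method-dependent bias adjustment described in these passages. The offsets below are invented placeholders; the corrections actually applied to ICOADS-derived series are far more elaborate and vary by bucket type, era and ship.

# Invented, illustration-only bias offsets in degrees C for each measurement method.
ASSUMED_BIAS = {
    "canvas_bucket": -0.30,   # buckets tend to read cool relative to true SST
    "wooden_bucket": -0.10,
    "engine_intake": +0.15,   # intake readings tend to read warm
}

def adjust_sst(reading_c: float, method: str) -> float:
    """Remove the assumed method-dependent bias from a raw SST reading."""
    return reading_c - ASSUMED_BIAS[method]

print(adjust_sst(18.2, "canvas_bucket"))   # 18.5: bucket reading nudged upward
print(adjust_sst(18.9, "engine_intake"))   # 18.75: intake reading nudged downward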

More recently, the set of stations providing measurements to the GHCN has undergone dramatic changes.

“The number of weather stations providing data to GHCN plunged in 1990 and again in 2005. The sample size has fallen by over 75% from its peak in the early 1970s, and is now smaller than at any time since 1919. The collapse in sample size has not been spatially uniform. It has increased the relative fraction of data coming from airports to about 50 percent (up from about 30 percent in the 1970s). It has also reduced the average latitude of source data and removed relatively more high-altitude monitoring sites. GHCN applies adjustments to try and correct for sampling discontinuities. These have tended to increase the warming trend over the 20th century. After 1990 the magnitude of the adjustments (positive and negative) gets implausibly large. CRU has stated that about 98 percent of its input data are from GHCN. GISS also relies on GHCN with some additional US data from the USHCN network, and some additional Antarctic data sources. NOAA relies entirely on the GHCN network.”

Figure Above: Number of complete or partial weather station records in GHCN v2. Solid line: mean temperature records. Dashed line: Max/min temperature records. Source: Peterson and Vose (1997).

To compensate for this tremendous lack of air temperature data and still produce a global temperature average, scientists interpolate data from surrounding areas that do have data. When such interpolation is done, the computed global average temperature actually increases.

The NASA Goddard Institute for Space Studies (GISS) is regarded as the world’s authority on climate change data. Yet much of its warming signal is manufactured by statistical methods visible on its own website, as illustrated by the way data smoothing creates a warming signal where there is no temperature data at all.

When station data is used to extrapolate over distance, any errors in the source data will get magnified and spread over a large area [2]. For example, in Africa there is very little climate data. Say the nearest active station to the center of the African Savannah is 400 miles (644 km) away, at an airport in a city. To cover that area without data, the city’s temperature record is used to extrapolate for the African Savannah. In doing so, the Urban Heat Island of the city is added to a wide area of the Savannah through the interpolation process, and in turn that raises the global temperature average.

As an illustration, NASA GISS published a July 2019 temperature map with a 250 km ‘smoothing radius’ and another with a 1200 km ‘smoothing radius’ [3]. The first map does not extrapolate temperature data over the Savannah (where no real data exist) and results in a global temperature anomaly of 0.88 C. The second, which extends over the Savannah, results in a warmer global temperature anomaly of 0.92 C.
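
As a rough editorial illustration of how a larger smoothing radius can pull a distant, urban-biased anomaly into an otherwise empty grid cell, here is a toy distance-weighted infill in Python. The numbers and the weighting scheme are invented for illustration; this is not GISS’s actual algorithm.

import numpy as np

def infill(station_anomalies, distances_km, radius_km):
    """Toy linear distance-weighted average of station anomalies within radius_km.
    Returns None when no station lies inside the radius (cell left blank)."""
    w = np.clip(1.0 - np.asarray(distances_km, dtype=float) / radius_km, 0.0, None)
    if w.sum() == 0:
        return None
    return float(np.dot(w, station_anomalies) / w.sum())

# One airport station with an (invented) urban-warmed anomaly of +1.2 C,
# 644 km away from a savannah grid cell that has no station of its own.
anoms, dists = [1.2], [644]

print(infill(anoms, dists, radius_km=250))    # None: cell stays empty at 250 km
print(infill(anoms, dists, radius_km=1200))   # 1.2: urban anomaly fills the cell at 1200 km

With the small radius the empty cell simply drops out of the average; with the large one it inherits the warm urban anomaly, nudging the computed global mean upward.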

This kind of statistically induced warming is not real.

References:

  1. Systematic Error in Climate Measurements: The surface air temperature record. Pat Frank, April 19, 2016. https://wattsupwiththat.com/2016/04/19/systematic-error-in-climate-measurements-the-surface-air-temperature-record/
  2. A Critical Review of Global Surface Temperature Data, published in Social Science Research Network (SSRN) by Ross McKitrick, Ph.D. Professor of Economics at the University of Guelph, Guelph Ontario Canada.  https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1653928
  3. NASA GISS Surface Temperature Analysis (v4) – Global Maps https://data.giss.nasa.gov/gistemp/maps/index_v4.html
Tom Halla
February 24, 2021 6:17 pm

I think it is manifestly ludicrous to make claims of changes of temperature of a tenth or hundredth of a degree in a record kept to the nearest whole degree. This is not measuring the same thing multiple times, where averaging is reasonable, but measurements of different things multiple times, where the dice have no memory.

nw sage
Reply to  Tom Halla
February 24, 2021 6:38 pm

I agree, and would add the following observation: the calibration of the thermometers used, both accuracy and precision, needs to be taken into account. For example, in the 1800s little is known about the thermometers actually used in the bucket measurements of seawater temperature, thus a potential error of several degrees must be added to the data unless both precision and accuracy can be demonstrated (i.e. calibration records traceable back to the standards of the UK or other national standards). If we don’t KNOW the ‘exact’ temperature to within one degree it is very hard to make an argument that the average is ‘better’ [see Tom’s comment], and it is impossible to use that data to argue that the temperature has risen – or fallen – unless the uncertainty figure is exceeded.

Graemethecat
Reply to  nw sage
February 25, 2021 12:33 am

This is blindingly obvious to any physical scientist, but apparently not to Climate “Scientists”.

tonyb
Editor
Reply to  Graemethecat
February 25, 2021 1:00 am

We might be better off using the ‘Koppen’ climate classification, whereby those areas of the world with similar climate are grouped together and not with climatic regions that have little relevance to them.

Köppen Climate Classification System | National Geographic Society

tonyb

Graemethecat
Reply to  tonyb
February 25, 2021 5:48 am

I couldn’t agree more. The Koppen scheme (sorry, can’t render the umlaut) is simple, intuitive, and rooted (pun!) in the real, observable world of plants and Earth, rather than in fictions like global temperature.

Right-Handed Shark
Reply to  nw sage
February 25, 2021 3:17 am

Glass tube thermometers made today, with what I would assume would be tighter manufacturing tolerances than were available 150+ years ago, are not accurate enough to give anything more than an indication of temperature. The “calibration” method shown in this video only requires that the distance between the hot and cold marks falls within the range of a pre-printed scale, of which there are many. Add in the complication of different manufacturers, hand-written records and the possible misreading of those records by a dyslexic collator, and I’m pretty sure that any claim of historic records being accurate to 1/10th of a degree is dubious to say the least.

Reply to  nw sage
February 25, 2021 10:03 am

nw sage.
I agree.
I know that I and my colleagues were trying our best when we took sea water surface temperatures from ships, but the instruments [calibrated by the Met Office on the ships I sailed on, and later was responsible for] were used in a bucket of sea water that may have come from anywhere in the top meter or so of the ocean.
And the readings were sometimes taken in the dark, or in rougher weather.

Also it is important to be aware of the size of the oceans of the world.
They cover some 120 million square miles.
So, even 4,000 Argo floats are each covering about 30,000 square miles – a quarter more than West Virginia, or about the same area as Armenia – for each reading.
The volume of the world’s oceans can be approximated to 1000 million cubic kilometres [not precise, but of the close order].
Satellites, I understand, measure the surface temperature. Argo floats – one per quarter of a million cubic kilometres – do give a sense of temperatures below the surface.
But when calculating the mass of an oil cargo, a tank of ten thousand cubic meters will have two or three temperatures taken – more these days on modern ships – yet the oceans are assumed to be accurately measured with one temperature profile, on one occasion, for twenty-five billion times that.
It all suggests to me that conclusions are based on little data, some of it, perhaps, not terribly good.

Auto

C. Earl Jantzi
Reply to  Tom Halla
February 25, 2021 7:17 am

I think William Shakespeare had the right thoughts about this whole discussion, in the title of his play, “Much Ado about Nothing”. What difference does it make when WE can’t do anything about it? The only thing we can do is screw it up with a “nuclear winter”, and hopefully we won’t be that stupid. “Glow bull warming” is just another scam to move money around, from the poor to the rich. Mankind is just a speck on the “elephant’s butt” in relation to the planet Earth. The greening of the Earth from more CO2 is a benefit, not a problem.

Tom Halla
Reply to  C. Earl Jantzi
February 25, 2021 7:31 am

I am old enough to remember when the Turco, Toon, Ackerman, Pollack, and Sagan study came out, and as it was one of the first climate computer models, it was largely a crock, but highly influential.

C. Earl Jantzi
Reply to  Tom Halla
February 25, 2021 7:33 am

As I sit here writing this I look out over the Blue Ridge mountains of Virginia, which look pretty and soft from the trees covering them. Get out the binoculars and look at the bare spots where nothing grows, and you realize that you are looking at a rock pile of limestone that once lay on the bottom of an ocean and, eons later, has been pushed up into mountains covered with a thin patina of trees. The whole idea that mankind can control the climate of the Earth is as absurd as trying to change the tectonic forces that made those mountains.

Greg
Reply to  C. Earl Jantzi
February 25, 2021 3:54 pm

Don’t worry. Once we have stopped climate from changing we will have gained enough experience to move to the next level and stop tectonic change.

Jim G
Reply to  Greg
February 27, 2021 12:28 pm

A great line from the movie Jurassic Park:

Malcolm:
“Yeah, but your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.”

Richard Brimage
February 24, 2021 6:27 pm

Why are we trying to calculate an average temperature in order to study climate? Memory and processing power should be sufficient to study daily highs and lows. We could still calculate anomalies from the highs and lows and do global averages of the anomalies. I am more interested in my regional climate, and I define that climate by the range in this region. Just imagine: an average of 25C could mean a range of 20C to 30C or a range of 0C to 50C. Two very different climates with the same average. Also, by using daily highs and lows there should be fewer corrections needed.

Clyde Spencer
Reply to  Richard Brimage
February 24, 2021 8:21 pm

Instead of presenting averages of mid-range values, the anomalies should be calculated for the highs and lows, and then averages of the high/low monthly values presented. Averages can hide a multitude of sins, especially when the highs and lows have different long-term behaviors.
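
A minimal sketch of that suggestion, with invented daily values and an invented baseline, just to show the bookkeeping:

import numpy as np

def monthly_anomaly(daily_values, baseline_mean):
    """Monthly anomaly relative to a pre-computed baseline mean for that month."""
    return float(np.mean(daily_values) - baseline_mean)

# Invented daily highs and lows for one month, with invented 1961-1990 baseline means.
tmax = np.array([31.0, 30.5, 32.1, 29.8])
tmin = np.array([18.2, 17.9, 19.0, 18.5])

anom_max = monthly_anomaly(tmax, baseline_mean=30.0)
anom_min = monthly_anomaly(tmin, baseline_mean=17.0)

# Reporting Tmax and Tmin anomalies separately preserves information that
# averaging them into a single mid-range anomaly hides.
print(f"Tmax anomaly: {anom_max:+.2f} C, Tmin anomaly: {anom_min:+.2f} C")
print(f"mid-range anomaly: {(anom_max + anom_min) / 2:+.2f} C")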

Dr. Deanster
Reply to  Richard Brimage
February 25, 2021 10:48 am

Absolutely!! Global is a meaningless term. We should strictly view climate changes in defined regions. For example, how has the climate changed in the NE United States, the SE United States, the SW United States, the Great Plains, the Arctic, the Sahara Desert, etc.? We all know that the “global” average is being driven by Arctic changes. Changes in the Arctic with no change somewhere else is meaningless to somewhere else.

Michael Moon
February 24, 2021 6:34 pm

Completely absurd. There is no such thing as “Adjusted Data.” Read the instrument, write it down, end of story.

Izaak Walton
Reply to  Michael Moon
February 24, 2021 8:21 pm

Michael,
The opposite is true. There is no such thing as raw data. Everything is processed or adjusted to some extent. Take a simple thermometer: is the raw data the height of the mercury in the tube, or is it the adjusted data that converts that height into Celsius? More modern instruments count the flow of electrons across a resistor but then adjust that so that all you see on the screen is a temperature.

fred250
Reply to  Izaak Walton
February 24, 2021 8:53 pm

Thanks for CONFIRMING that surface measurements and all the calculations are RIDDLED with errors of MANY TYPES, and anyone thinking they can “average” them to even 1/10 degree is talking absolute BS

You know, like YOU do all the time.

Rory Forbes
Reply to  Izaak Walton
February 24, 2021 11:26 pm

The opposite is true. There is no such thing as raw data. Everything is processed or adjusted to some extent.

What utter nonsense. The fact that you’d even write such a thing to present company is an indication of just how out of touch you are with reality. It’s a question of accuracy and precision. The trick is to make consistent, reliable tools able to measure to the true value with the greatest accuracy … then have everyone use the same instrument in the same way (precision).

You really don’t understand the scientific method, do you?

Jim G
Reply to  Rory Forbes
February 27, 2021 12:41 pm

Don’t forget repeatability.

Accuracy, precision (scale increment), and repeatability.
If an instrument has high repeatability, even though it is out of cal and not accurate, by knowing the offset, you can reliably adjust the data.

If it has low repeatability, it will read a different value for the same value of temperature (within the accuracy tolerance). In this case the actual temperature should be shown with error bars.

It does seem that folks are trying to ascribe greater precision, 1/10th of a degree, when the thermometers had a scale increment of 1 or 2 degrees.

In the Navy we were allowed to record measurements no smaller than 1/2 the scale increment. Most gages were big enough that this could be done.
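
A small sketch of the point about a repeatable but out-of-calibration instrument, with invented numbers: characterize the offset against a known reference, then remove it from field readings.

# Repeated readings of a known 0 C reference (e.g. an ice bath) by an instrument
# that is out of calibration but highly repeatable. Values are invented.
reference_readings = [0.42, 0.41, 0.43, 0.42]
offset = sum(reference_readings) / len(reference_readings)   # characterized offset, ~0.42

def correct(field_reading: float) -> float:
    """Remove the characterized offset from a field reading."""
    return field_reading - offset

print(round(correct(15.87), 2))   # 15.45: field reading adjusted by the known offset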

Graemethecat
Reply to  Izaak Walton
February 25, 2021 12:36 am

We all knew you were an idiot, but thanks so much for confirming it.

Derg
Reply to  Izaak Walton
February 25, 2021 4:45 am

You get dumber and dumber with each post.

Pamela Matlack-Klein
Reply to  Izaak Walton
February 25, 2021 4:49 am

What? No such thing as raw data? How totally ignorant to say such a thing. As someone who has participated in the collection of raw data on sea levels, I know whereof I speak! You, sir, are a buffoon, to make such a foolish statement.

Paul Penrose
Reply to  Izaak Walton
February 25, 2021 11:37 am

True conversion is not adjustment. To conflate the two shows your ignorance. But then maybe you are a post-modernist, in which case you aren’t worth debating.

Izaak Walton
Reply to  Paul Penrose
February 25, 2021 12:33 pm

Paul,
I do not think you realise the amount of data processing and conversion that goes on to make even a simple measurement. Suppose for example I want to measure the length of a room. I can go out and buy a laser distance meter for about $100 that has an accuracy of about 10^-4 and a digital display that gives an almost instant reading.

Now ask what is the device actually doing? Well, it works by sending out a frequency modulated laser beam and then comparing the frequency of the returned signal to the current frequency. It does this by interfering the two on a photodetector and looking at the instantaneous frequency of the resulting current. A similar issue arises with how the device measures time — there is a quartz crystal inside vibrating at some frequency which again is recorded as a voltage across a resistor. Finally, because all the signals are noisy there is a lot of averaging and data processing inside the device to produce a sensible signal.

So what is the raw data? Is it the frequency of the returned laser signal? Is it the number of electrons passing through a particular resistor? Is it the difference frequency between the returned beam and the current one? Or are you going to say that all of that is irrelevant since it happens inside a cheap piece of kit, and the raw data is whatever is output on the screen?

Carlo, Monte
Reply to  Izaak Walton
February 25, 2021 4:31 pm

Go read the GUM and attempt to educate yourself.

Jim G
Reply to  Izaak Walton
February 27, 2021 12:52 pm

Mr. Walton;
Even though there are the details of how an electronic measurement is taken, such devices are calibrated so that the final output reads what is intended. They have a resolution (or precision), and ratings for accuracy and repeatability. In some cases the accuracy is less than the precision.

For instance: Calipers that measure length to +/- .0005 are a classic example. The accuracy is stated as +/- .001. However, when you use them to measure the same gage block (accuracy to .0002 or better) you will find that the calipers read the same measurement each time you take a reading (repeatability).

The “raw data”, as you describe it, is the value output by the instrument.
If an instrument is out of cal the next time it is calibrated, that calls into question all measurements between this calibration and the previous one. You don’t really know when the instrument began to drift from the accurate measurement. If the instrument is impacted somehow and there is a step jump in the record where you can tell when the data was corrupted, then you may be able to adjust the data with a larger degree of confidence.

Tim Gorman
Reply to  Jim G
February 27, 2021 1:28 pm

Think of a micrometer. How “tight” do you screw it against what is being measured? Not quite touching? So tight you can’t move anything (e.g. around a journal)? That affects accuracy and repeatability but not precision. Some of the high-priced micrometers have a torque break in them so the tightness is more consistent. This helps accuracy and repeatability but doesn’t impact precision. (It helps accuracy and repeatability but doesn’t cure them; the torque trigger can wear, for instance.)

bigoilbob
Reply to  Izaak Walton
February 25, 2021 12:41 pm

Right over their heads, Mr. Walton…..

Clyde Spencer
Reply to  Izaak Walton
February 27, 2021 7:37 pm

Does one “count the flow of electrons across a resistor,” or measure the electromotive force between the ends of the resistor?

Often there is no way to make a direct measurement of a physical parameter, so a proxy measurement is used and converted to the desired one with a mathematical transform, such as Ohm’s Law. The point is, the ‘raw data’ are what can be obtained with some instrument that is immediately sensing or measuring the parameter of interest. Even if it is a proxy or indirect measurement, it has not experienced subsequent adjustments beyond the necessary mathematical transform, which may be displayed in the desired units.

It strikes me that the people who make really ignorant claims are the very ones who think that skeptics are fools. What does that indicate?

Nick Stokes
Reply to  Michael Moon
February 24, 2021 9:16 pm

“Read the instrument, write it down, end of story.”
It isn’t the end of the story. The topic here is the global average temperature. There is no instrument you can read to tell you that. Instead you have to, from the station data, estimate the temperature of various regions of the Earth, and then put them together in an unbiased way. That is where the adjustment comes in.

Carlo, Monte
Reply to  Nick Stokes
February 24, 2021 9:41 pm

The real story is that the “global average temperature” is a completely meaningless and ill-defined quantity, regardless of whatever input data are used.

Oh, and “estimation” really means extrapolation.

TheLastDemocrat
Reply to  Carlo, Monte
February 25, 2021 6:54 am

I don’t believe in the man-made global warming story, but I think it is bad to throw out this lame concept that there is no such thing as global average temperature.

A “mean” is the value you would expect for some observation, given no other information. Given no other information, if you were to guess the height of the next adult male and female to walk through the door, your best guess – meaning the guess with the smallest error trial after trial – would be the mean.

If you go on a cruise to the Caribbean, I can tell you how to pack. That will be different from how I would tell you to pack if you are taking a cruise to Alaska.

An alien deciding whether to vacation on Earth or Venus could benefit from knowing the mean planet temperature.

Let’s quit leaning on this one dumb idea that “there is no such thing as average global temp.”

Hoyt Clagwell
Reply to  TheLastDemocrat
February 25, 2021 8:58 am

The “global average temperature” exists purely as a mathematical calculation. It isn’t the measurement of an actual thing. People pretend to predict the effect that changes in the global average temperature will have on living things, even though no living thing on Earth can actually sense the “global average temperature” and therefore has no means of responding to it. Living things can only respond to actual local temperatures that they can sense. Knowing where the GAT is going tells one nothing about what might happen locally. It is no more meaningful than calculating the global average length of rope, which can also be done in a “scientific and objective process.”

Meab
Reply to  TheLastDemocrat
February 25, 2021 10:13 am

You remind me of the guy with one hand in boiling water and the other in ice water. On average, his hands feel fine. Seriously, the global average temperature doesn’t tell you anything about whether there is a problem or not.

In addition, the computed average is entirely dependent on the distribution of temperature stations; you get a different answer if the distribution changes. Are more stations on top of mountains or in low-lying deserts? Are more in populated areas or out on the ocean? Are more in the tropics or in Antarctica? Since the global coverage cannot be uniform, the average relies on extrapolation, interpolation, or area averages – all with unavoidable gross errors.

MarkW
Reply to  Carlo, Monte
February 25, 2021 8:11 am

If we could measure the exact energy content of every single molecule in the atmosphere, then we could calculate a highly accurate average temperature for the entire atmosphere with complete precision.

If we could measure the energy content of every single molecule in the atmosphere with a small margin of error, then we could calculate the average temperature for the entire atmosphere with that same margin of error.

If we could measure the energy content of every other molecule in the atmosphere with a small margin of error, then we could still calculate an average, but to the margin of error from the measurements, we would have to add an extra margin of error to account for the fact that we can only estimate what the energy in the unmeasured molecules is.

As the percentage of molecules that we are measuring goes down, the instrument error stays the same, but the factor to account for the unmeasured molecules goes up.

When we get to the point where we only have a few hundred, non-uniformly distributed instruments trying to measure the temperature of the atmosphere, the second error gets huge.

With multiple measurements, assuming your errors are randomly distributed around the true temperature, you can reduce the portion of the error that comes from instrument error. But no amount of remeasuring will reduce the error that comes from the low sampling rate.

Generating data, by assuming that the unmeasured point has a relationship with surrounding points, may make it easier for your algorithms to process the data, but it does nothing to reduce these errors.
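
A rough Monte Carlo sketch of that distinction, with an invented “true” temperature field: repeated readings at the same stations beat down instrument noise, but the error from sampling only a small, fixed set of points barely moves.

import numpy as np

rng = np.random.default_rng(1)
true_field = 15 + 10 * np.sin(np.linspace(0, 2 * np.pi, 10_000))   # invented "atmosphere"
true_mean = true_field.mean()

def estimate(n_stations, instrument_sigma, n_repeats):
    """Average of n_stations sampled points, each measured n_repeats times with noise."""
    idx = rng.choice(true_field.size, n_stations, replace=False)
    obs = true_field[idx][:, None] + rng.normal(0, instrument_sigma, (n_stations, n_repeats))
    return obs.mean()

for repeats in (1, 100):
    errors = [abs(estimate(300, 0.5, repeats) - true_mean) for _ in range(200)]
    print(f"{repeats:>3} repeats per station -> typical error {np.mean(errors):.3f}")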

Tim Gorman
Reply to  MarkW
February 25, 2021 9:07 am

Nice explanation, Mark. I would only say that temperature alone means nothing when measuring the atmosphere. When you speak of “energy” you are speaking of the heat content of the atmosphere – enthalpy. And the enthalpy depends on more factors than temperature. Pressure (altitude) and specific humidity must also be known to calculate enthalpy.

The use of temperature by so-called climate scientists as a proxy for enthalpy is a basic mistake in physics. Since around 1980 most modern weather stations give you enough data to calculate enthalpy – that’s 40 years’ worth of data. Why don’t the so-called climate scientists, modelers, and mathematicians convert to using enthalpy in their studies and GCM models?

(hint: money)

Climate Detective
Reply to  Tim Gorman
February 27, 2021 7:09 am

Knowing the enthalpy of the atmosphere tells us nothing about the thermal equilibrium of the planet with outer space. Knowing the temperature does.
For gases the temperature is directly related to enthalpy. ∆H = Cp∆T
We know the heat capacities of all the gases so we can easily work out the enthalpy of the atmosphere if we want to (which we don’t). Moreover the change in T tells us the change in enthalpy. So stop trying to over-complicate it. Using temperature is far easier and more informative.

Tim Gorman
Reply to  Climate Detective
February 27, 2021 12:07 pm

Temperature tells you nothing about the thermal equilibrium of the planet either. It is *heat content* that establishes that equilibrium and heat content is enthalpy, not temperature.

Your equation is based on ∆H and ∆T, not on H and T.

∆T can be from any temp to any temp.

For moist air, enthalpy is

h = h_a + (m_v/m_a)h_g

and h_a = (C_pa)T
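
A small numeric sketch of the moist-air enthalpy idea quoted above, using the common engineering approximation h ≈ cp_dry·T + w·(L_vap + cp_vapour·T); the constants are standard approximate values.

def moist_air_enthalpy(t_c: float, mixing_ratio: float) -> float:
    """Approximate specific enthalpy of moist air, kJ per kg of dry air.
    t_c: air temperature in deg C; mixing_ratio: kg water vapour per kg dry air."""
    return 1.006 * t_c + mixing_ratio * (2501.0 + 1.86 * t_c)

# Same 30 C air temperature, very different heat content depending on humidity.
print(round(moist_air_enthalpy(30.0, 0.005), 1))   # ~43 kJ/kg: dry-ish air
print(round(moist_air_enthalpy(30.0, 0.025), 1))   # ~94 kJ/kg: humid tropical air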

Climate Detective
Reply to  Tim Gorman
February 27, 2021 2:43 pm

Utter garbage again from Tim. Go away and learn some thermodynamics.

Zeroth law of thermodynamics:
“If two thermodynamic systems are each in thermal equilibrium with a third one, then they are in thermal equilibrium with each other.”
What determines thermal equilibrium? The TEMPERATURE DIFFERENCE. Not enthalpy or specific enthalpy or even the Gibbs free energy. Heat flows from hot objects to cold ones. The hot ones are not the ones with the most enthalpy, they are the ones with the highest temperature.

What determines the amount of heat the planet radiates? TEMPERATURE.
In particular the Stefan-Boltzmann equation which depends ONLY on temperature.

Tim Gorman
Reply to  Climate Detective
February 27, 2021 5:13 pm

“What determines thermal equilibrium? The TEMPERATURE DIFFERENCE.”

As with so many in climate science, the enthalpy of the air is ignored, as is the latent heat. Those determine much of the thermal equilibrium of the atmosphere and therefore Earth. You talked about thermal heat transferred to space. That really makes no sense. The Zeroth Law speaks to *thermal* equilibrium. Space is nothing; you cannot transfer heat to space via conduction, so how does the atmosphere get in thermal equilibrium with space?

Heat flows from hot to cold *thermally*, but not if there is a vacuum in between.

Radiative equilibrium is NOT thermal equilibrium. You mentioned only thermal equilibrium. Stop moving the goal posts.

I’m not surprised you have now moved the goal post to radiative equilibrium.

Reply to  Tim Gorman
February 27, 2021 6:16 pm

Do you have any qualifications in physics?

If you do then you should know that outer space has a temperature of 3K even though it is a vacuum. Ever heard of the cosmic microwave background (CMB)?

“…how does the atmosphere get in thermal equilibrium with space? “
It gives off infrared radiation and absorbs solar radiation from the Sun.

“Heat flows from hot to cold *thermally*, but not if there is a vacuum in between. “
How do you think heat/energy gets from the Sun to Earth?

Nicholas McGinley
Reply to  Nick Stokes
February 24, 2021 9:50 pm

If it is unbiased, why is it that every alteration makes the imagined “problem” worse?
Why does it never happen that some work is done, and the result is that “it is nowhere near as bad as we thought”?

Dave Fair
Reply to  Nick Stokes
February 24, 2021 10:40 pm

Use satellite and radiosonde-estimated temperatures and get away from all this bickering about the past.

fred250
Reply to  Nick Stokes
February 24, 2021 10:47 pm

Yes, Nick, the combining and whim-driven “adjustments”, infilling of missing data, and smearing of urban temperatures over huge areas where they don’t belong…

You KNOW the surface data fabrications are a complete and absolute FARCE,

STOP trying to defend the idiotically indefensible.. !

fred250
Reply to  Nick Stokes
February 24, 2021 10:49 pm

“and then put them together in an unbiased way”
.

ROFLMAO..

Do you REALLY BELIEVE that is what happens, Nick

WOW, you really are living in a little fantasy land of your own, aren’t you , Nick !!

Is it those tablets you take to keep you awake ???

Alastair Brickell
Reply to  fred250
February 25, 2021 12:02 am

Fred, these sorts of comments don’t help…they just make WUWT look silly. We may well not agree with everything Nick says, but let him have his say. It just makes us look childish to ridicule him in the manner you have done in these two comments. WUWT has higher standards than that.

fred250
Reply to  Alastair Brickell
February 25, 2021 1:36 am

Alastair, if you want to protect and namby-pamby those that want to destroy western society…….. That’s up to you.

Nick deserves absolute ridicule for supporting this scam.

Do YOU really believe the scammers that put together the farcical “global land temperatures ” are NOT extremely biased ? !!!

MarkW
Reply to  Alastair Brickell
February 25, 2021 8:13 am

Nick could end the ridicule by stopping the ridiculous claims.

Rory Forbes
Reply to  Nick Stokes
February 24, 2021 11:22 pm

It isn’t the end of the story.

Mmmmm … well, yes it is the end of the story. It’s a question of accuracy and precision. The trick is to make consistent, reliable tools able to measure to the true value with the greatest accuracy … then have everyone use the same instrument in the same way (precision). The Victorian scientists, engineers and tool makers accomplished that, often with great beauty. That’s why older measurements are often superior to the modern data sets … too much post-measurement manipulation of the data.

In any event, “the global average temperature” really has no importance at all. It’s an artificial number intended to be used politically. It has no more to do with science than discussing the Earth’s climate. There is no “climate” in the singular.

gbaikie
Reply to  Rory Forbes
February 25, 2021 12:35 am

“It has no more to do with science than discussing the Earth’s climate. There is no “climate” in the singular.”

Yeah there is, we are in an icehouse climate.
And we are in an Ice Age due to our cold ocean.
The average temperature of the entire ocean is one number that indicates global climate. The present average volume temperature of our ocean is about 3.5 C.
Our ocean temperature would have to exceed 5 C before one could imagine we might be leaving our Ice Age.
And before Earth entered our icehouse climate, the Earth’s oceans were somewhere around 10 C.

Alasdair Fairbairn
Reply to  gbaikie
February 25, 2021 2:37 am

Would love to know where you got those temperatures. All I know is that the surface sea temperatures rarely if ever get above 30C and there are good thermodynamic reasons for that when you delve into the properties of water. How one measures the volume temperature beats me.

gbaikie
Reply to  Alasdair Fairbairn
February 25, 2021 9:38 am

It’s commonly said that 90% of the ocean is 3 C or colder. Here’s something:
“The deep ocean is all the seawater that is colder (generally 0-3°C or 32-37.4°F), and thus more dense, than mixed layer waters. Here, waters are deep enough to be away from the influence of winds. In general, deep ocean waters, which make up approximately 90% of the waters in the ocean, are homogenous (they are relatively constant in temperature and salinity from place to place) and non-turbulent.”
https://timescavengers.blog/climate-change/ocean-layers-mixing/
But the 3.5 C number I have seen a few times. I just googled “average volume temperature of Earth ocean 3.5 C” and got:
Jan 4, 2018 — The study determined that the average global ocean temperature at the peak of the most recent ice age was 0.9 degrees Celsius and the modern ocean’s average temperature is 3.5 degrees Celsius.
https://economictimes.indiatimes.com/news/science/oceans-average-temperature-is-3-5-degree-celsius/articleshow/62363696.cms
And next is:
https://manoa.hawaii.edu/exploringourfluidearth/physical/density-effects/ocean-temperature-profiles


Rory Forbes
Reply to  gbaikie
February 25, 2021 9:51 am

Yeah there is, we in an icehouse climate.

Nope. The planet is enjoying a warm, interglacial period of the present Quaternary ice age known as the Holocene.

The rest of your post has nothing to do with my statement (non sequitur).

gbaikie
Reply to  Rory Forbes
February 25, 2021 5:44 pm

This Quaternary Ice Age has been the coldest Ice Age within the last 34 million years of our global Icehouse Climate.

Rory Forbes
Reply to  gbaikie
February 25, 2021 6:01 pm

our global Icehouse Climate.

Climate is the average weather at a particular location over time. What part of LOCATION do you not understand? Our planet doesn’t have a climate in any meaningful way. Even when the planet was mostly frozen there were always zones different from the others.

Graemethecat
Reply to  Rory Forbes
February 25, 2021 12:41 am

In Victorian times, clowns like Michael Mann would have been drummed out of polite society for their dishonesty and incompetence. Can you imagine the great engineers and scientists of that era like Brunel and Kelvin fiddling their data?

Rory Forbes
Reply to  Graemethecat
February 25, 2021 9:53 am

Can you imagine the great engineers and scientists of that era like Brunel and Kelvin fiddling their data?

There would have been letters written to The Times, by gawd!

Pariah Dog
Reply to  Nick Stokes
February 25, 2021 1:10 am

I’m sorry, did you just say “estimate”? I’m hoping that was a slip of the keyboard, but otherwise, I’ll take direct measurements over “estimates” any day of the week.

Nick Stokes
Reply to  Pariah Dog
February 25, 2021 2:16 am

Wearily, again, there is no instrument that measures global surface temperature. That doesn’t mean we know nothing about it.

Graemethecat
Reply to  Nick Stokes
February 25, 2021 3:15 am

You still genuinely believe one can average an intensive variable like temperature?

Paul Penrose
Reply to  Graemethecat
February 25, 2021 11:49 am

You can carry out the calculation, but the result has very limited usefulness. It certainly can’t tell you anything useful about the energy gain/loss of a large volume of gas.

Rory Forbes
Reply to  Paul Penrose
February 25, 2021 1:16 pm

It’s rather like averaging phone numbers or, more appropriately, street addresses. Sure, you get a number, but what does it mean? It certainly fools the masses, though.

Mike Haseler (aka Scottish Sceptic)
Reply to  Nick Stokes
February 25, 2021 1:36 am

If your goal is to look at change of temperature – then there is absolutely no need to adjust the data from a homogenous data series. The only reason anyone would adjust the data and average it, is if they can’t get the biased result they want from doing it properly.

Tim Gorman
Reply to  Mike Haseler (aka Scottish Sceptic)
February 25, 2021 2:01 pm

Winner, Winner, Chicken Dinner!

MarkW
Reply to  Nick Stokes
February 25, 2021 8:00 am

The problem is that climate scientists believe that making up missing data, makes all of the data better.

Tom Abbott
Reply to  Nick Stokes
February 25, 2021 9:43 am

“It isn’t the end of the story. The topic here is the global average temperature. There is no instrument you can read to tell you that. Instead you have to, from the station data, estimate the temperature of various regions of the Earth, and then put them together in an unbiased way. That is where the adjustment comes in.”

Why do we need an “average” global temperature at all?

The only benefit it seems is to allow unscrupulous Data Manipulators to create a temperature profile made up out of thin air that promotes the Human-caused Climate Change scam, but does not match any regional surface temperature profile anywhere on Earth.

In fact, the global temperature profile is the opposite of the actual regional surface temperature readings. The global temperature profile shows a climate that is getting hotter and hotter and is now the hottest in human history.

But the regional surface temperature charts show the opposite. They show that it was just as warm in the Early Twentieth Century as it is today and there is no unprecedented warming, which means CO2 is a minor player in the Earth’s atmosphere.

Quite a contrast of views of reality, I would say. One, the global temperature profile shows we are in danger of overheating. The other, the regional surface temperature charts show we are *not* in danger of overheating.

I’ll go with the numerous regional surface temperature charts which show the same temperature profile for nations around the globe, and show the Earth is not in danger from CO2, and will reject the global surface temperature profile as Science Fiction created by computer manipulation for political and personal gain.

Tim Gorman
Reply to  Tom Abbott
February 25, 2021 1:08 pm

Regions shouldn’t make decisions based on some kind of a “global” metric that is uncertain at best, decidedly wrong at worst. Each region should make decisions based on what they know about their own region. If there is a region that can show it is warming and can show, without question, why it is so then they can work with the other regions to address the issue.

Paul Penrose
Reply to  Nick Stokes
February 25, 2021 11:44 am

Nick,
The “global average temperature” is a worthless value. If you want to show that the planet is accumulating extra energy in the atmosphere due to CO2, then what you want is the total energy content of the atmosphere. That is not possible to measure directly, and the relationship between total energy of a volume of gas and the average temperature is too complex in a system as large as the Earth’s atmosphere to use temperature as a proxy. And it doesn’t matter if the temperature is all we’ve got, it isn’t suitable for the purpose. Sometimes, if one is honest, one has to just admit that they don’t know.

Rory Forbes
Reply to  Paul Penrose
February 25, 2021 1:18 pm

Sometimes, if one is honest, one has to just admit that they don’t know.

And that statement is the winner!

Nick Stokes
Reply to  Paul Penrose
February 25, 2021 10:05 pm

Paul,
” If you want to show that the planet is accumulating extra energy in the atmosphere due to CO2, then what you want is the total energy content of the atmosphere.”
You should take that up with your local TV station. They report and forecast temperatures, and lots of people think that is what they want to know. They never report the energy content of the air, and no-one complains of the lack. Temperature is what affects us. It scorches crops, freezes pipes (and toes), enrages bushfires. It is the consequence of heat that matters to us. The reason is that it is the potential. It determines how much of the heat will be transferred to us or things we care about.


Graemethecat
Reply to  Nick Stokes
February 26, 2021 12:39 am

TV stations make weather reports for ordinary people, not meteorologists or atmospheric physicists. Internal energy content (enthalpy) doesn’t mean much to most people – temperatures are much easier to grasp. The calculations used to make the forecasts certainly use enthalpy and not just temperature, though.

Here’s a question for you: a blacksmith’s anvil is in thermal equilibrium at 300K. Its “average” temperature is therefore 300K. A white-hot drop of molten iron lands on it. What is its “average” temperature now?

Tim Gorman
Reply to  Nick Stokes
February 26, 2021 8:42 am

They are forecasting LOCAL WEATHER, not global temperature averages!

“It is the consequence of heat that matters to us. ”

Temperature is *NOT* heat! They are two different things!

BTW, the temperature in a bushfire is not driven by the atmosphere or the earth itself, it is driven by the fuel being consumed in the fire.

Reply to  Nick Stokes
February 28, 2021 11:20 am

“There is no instrument you can read to tell you that. Instead you have to, from the station data, estimate the temperature of various regions of the Earth, and then put them together in an unbiased way.”

That is the BIG issue : bias. Everyone is prone to bias. That is why you should always test your experimental processes against a set of controls. You can only address this if you try different methodologies as I have suggested to you here.
https://wattsupwiththat.com/2021/02/24/crowd-sourcing-a-crucible/#comment-3195430
Given the subjective nature of the appropriateness of so many of the statistical methods used, these all need to be tested against the control method of not using them. Where is that data or evidence?

“That is where the adjustment comes in.”
So far nobody on this comment site has really addressed the issue of adjustments, but they are the real problem. If you want an example just look at Texas. See
https://climatescienceinvestigations.blogspot.com/2021/02/52-texas-temperature-trends.html
If you average the temperature anomalies without breakpoint (or changepoint) adjustments, there is no warming. The trend is flat since 1840. But when Berkeley Earth add adjustments to the data, that changes the trend to a positive warming trend of +0.6 °C per century (see Fig. 52.4 on my blog page link above). And when you look at the individual stations the situation is even worse.
Of the 220 longest temperature records in Texas, which all have over 720 months of data, 70% have stable temperatures or a negative trend. For the 60 or so stations with over 1200 months of data this rises to 80%. After adjustment, most of these trends change to strong positive ones.
So how can anyone justify adjusting 70% of the data to give a result that is totally at odds with that produced by the mean of the raw data? I can accept making adjustments to a few outlier temperature readings in an individual station record, and I can accept the need to make adjustments to one or two rogue stations trends that are in complete disagreement with all their neighbours, but not 70%. That is the problem I have with the GAT and climate science.
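
A minimal sketch of the kind of comparison being made here, assuming one raw and one adjusted yearly anomaly series as plain arrays; the numbers are placeholders, not the Texas data.

import numpy as np

years = np.arange(1900, 2000)
rng = np.random.default_rng(2)
raw_anom = rng.normal(0.0, 0.3, years.size)            # flat, noisy placeholder series
adjusted_anom = raw_anom + 0.006 * (years - years[0])  # placeholder with an imposed +0.6 C/century

def trend_per_century(anomalies, t):
    """Ordinary least-squares linear trend, expressed per 100 years."""
    return 100 * np.polyfit(t, anomalies, 1)[0]

print(f"raw placeholder trend:      {trend_per_century(raw_anom, years):+.2f} C/century")
print(f"adjusted placeholder trend: {trend_per_century(adjusted_anom, years):+.2f} C/century")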

Robert of Texas
February 24, 2021 6:48 pm

How do they calibrate the satellite sensing, and how often? Obviously if you keep calibrating a tool to match your land temperature data, then the satellite is going to agree with a “high degree of precision”.

Also, if I remember right, the satellites are not measuring “near to the ground” but some distance above it. They then probably “adjust the data” to take that into account. Every adjustment is another chance to bias the data.

Tim Gorman
Reply to  Robert of Texas
February 25, 2021 5:20 am

The satellites don’t actually measure temperature. They measure radiance at different wavelengths. Temperature is then inferred from those readings. All kinds of things add into the uncertainty of these readings. The readings are not all taken by the same satellite; they are a conglomeration of readings from several satellites. Thus the sensors in each satellite may be different, with different aging drift. The time consistency between the satellites can differ. Anything that changes the transparency of the atmosphere at a certain wavelength can affect the radiance reading, e.g. clouds (thick or thin, perhaps even too thin to see), dust, etc.

Therefore there are a *LOT* of adjustments, calculations, and calculation methods that get used to turn this into a temperature. It’s your guess what the uncertainty of those temperature readings is; I’ve never seen it stated (but I’ve never actually gone looking for it either).
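
A rough sketch of the most basic step only: inverting the Planck function to turn a measured spectral radiance into a brightness temperature at one wavelength. The real retrievals layer many corrections (atmospheric absorption, inter-satellite calibration, drift) on top of this.

import math

H = 6.626e-34   # Planck constant, J s
C = 2.998e8     # speed of light, m/s
K = 1.381e-23   # Boltzmann constant, J/K

def planck_radiance(t_kelvin, wavelength_m):
    """Spectral radiance B(lambda, T) in W m^-2 sr^-1 m^-1."""
    return (2.0 * H * C**2 / wavelength_m**5) / (math.exp(H * C / (wavelength_m * K * t_kelvin)) - 1.0)

def brightness_temperature(radiance, wavelength_m):
    """Invert Planck's law: T = hc / (lambda k ln(1 + 2hc^2 / (lambda^5 B)))."""
    return (H * C / (wavelength_m * K)) / math.log(1.0 + 2.0 * H * C**2 / (wavelength_m**5 * radiance))

# Round-trip check at 10.8 micrometres, a typical thermal-infrared channel.
wl = 10.8e-6
print(round(brightness_temperature(planck_radiance(288.0, wl), wl), 2))   # ~288.0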

Tom Abbott
Reply to  Tim Gorman
February 25, 2021 9:57 am

In the case of the UAH satellite measurements, they have been matched and confirmed with the weather balloon data, both of which measure from the ground to the upper atmosphere.

u.k.(us)
February 24, 2021 6:58 pm

“We would like to take advantage of the brain trust in our audience…”
=========
I guess you can count me out.
My brain doesn’t trust me anymore 🙂 

Nicholas McGinley
Reply to  u.k.(us)
February 24, 2021 9:52 pm

I cannot hear this phrase (or read it) without thinking of the movie “O Brother, Where Art Thou?”
From now on, these here boys are gonna be my brain trust!

February 24, 2021 7:02 pm

I don’t have additional information to help with the above subject; however, I’ve often thought of creating a list of “accredited” colleges that practice rational climate science. As an alum of PSU, I keep telling them that I’ll only donate after hockey-stick Michael Mann leaves — so they’re off the list. Same goes for Texas A&M and climate alarmist Andrew Dessler — especially after he told Texans to “get used to it” regarding rapidly increasing global warming. I’m sure there are other alums out there with knowledge about their schools.

Alastair gray
February 24, 2021 7:02 pm

How do AIRS, RSS, and UAH compare with each other, and with terrestrial measurements? What about Argo data?

Nick Stokes
Reply to  Alastair gray
February 24, 2021 8:40 pm

I show an interactive graph here where you can compare many global indices, including satellite (but not reanalysis):
https://moyhu.blogspot.com/p/latest-ice-and-temperature-data.html#Drag

They are on a common anomaly baseline.

Dave Fair
Reply to  Nick Stokes
February 24, 2021 10:49 pm

And the world is cooling as CO2 levels continue to rise. While the estimates cover only a short period of time, the world has not warmed appreciably during the 21st Century, contrary to UN IPCC computer models. People need to be reaching to check if their wallets are still there.

fred250
Reply to  Nick Stokes
February 24, 2021 10:55 pm

ROFMLAO

That propaganda moe-who ?

Moe was one of the 3 Stooges, you do know that don’t you. !!

Why doesn’t your little graph start in 1979 ?

A 3 year period is MEANINGLESS.

Nick Stokes
Reply to  fred250
February 25, 2021 2:14 am

Read the instructions. It is an interactive graph, and starts in 1850.

Tim Gorman
Reply to  Alastair gray
February 25, 2021 5:21 am

The stated uncertainty in Argo temperature readings is +/- 0.5C. Take it for what it’s worth.

Doonman
Reply to  Tim Gorman
February 25, 2021 9:27 am

No no, it’s much better now that it was adjusted to agree with ships’ intake water measurements.

Tom Abbott
Reply to  Alastair gray
February 25, 2021 10:05 am

“How do Airs , R S S, UAH compare with each other, and with terrestrial measurements.”

VERIFYING THE ACCURACY OF MSU MEASUREMENTS

http://www.cgd.ucar.edu/cas/catalog/satellite/msu/comments.html

“A recent comparison (1) of temperature readings from two major climate monitoring systems – microwave sounding units on satellites and thermometers suspended below helium balloons – found a “remarkable” level of agreement between the two.

To verify the accuracy of temperature data collected by microwave sounding units, John Christy compared temperature readings recorded by “radiosonde” thermometers to temperatures reported by the satellites as they orbited over the balloon launch sites.

He found a 97 percent correlation over the 16-year period of the study. The overall composite temperature trends at those sites agreed to within 0.03 degrees Celsius (about 0.054° Fahrenheit) per decade. The same results were found when considering only stations in the polar or arctic regions.”

billtoo
February 24, 2021 7:18 pm

the problem with a red team blue team approach has always been: who picks the teams?

Clyde Spencer
Reply to  billtoo
February 24, 2021 8:10 pm

Like choosing jurors for a court case, let both sides have input on what team members are acceptable. Both sides should be able to dismiss a potential team member without stating cause.

Reply to  billtoo
February 25, 2021 1:42 am

Now let’s have a red-blue team debate on when Trump last hit his wife.

The whole question is skewed by asking about “global average temperature”! That presupposes a way of handling the data and looking at the issue. It also conveniently ignores all the data we have showing how climate changed before whatever arbitrary date they set to start the “global temperature”, based on picking the height of the Little Ice Age.

Instead the question should be: “what indicators are there, and how reliable are they, that there has been long-term temperature change?” Now the focus is on proving or indicating long-term change, not creating a totally incredible “global temperature” and then claiming that shows change (when all it shows is change in the adjustments used to fabricate the bogus global temperature).

Pat from kerbob
February 24, 2021 7:28 pm

Small quibble
McKitrick mentioned southern Canada twice in the same paragraph listing areas with good thermometer coverage, probably because he is a hoser.

Other than that, this post is all about global average “mean” temperature and isn’t that all about hiding what is really happening?

If the meme is runaway heat then we should be discussing Tmax

And Tmin, and then use both to clarify where the supposed increasing average mean comes from
And why it’s not a problem

Tom Abbott
Reply to  Pat from kerbob
February 25, 2021 10:10 am

Regional Tmax charts all show CO2 is not a problem because today’s temperatures are no warmer than they were in the recent past.

That’s why climate alarmists don’t use Tmax charts. Tmax charts put the lie to the climate alarmists’ claims that the Earth is overheating to unprecedented temperatures.

tmatsi
February 24, 2021 7:44 pm

I agree totally with Tom Halla. I have spent most of my working life in research and testing laboratories where decisions are made on the suitability of materials for construction. A major problem often arises when two laboratories are asked to test the same material: it is usually found that there will be differences in the test results. This is irrespective of the fact that each laboratory is following the same standard procedure and using calibration materials from the same source. Indeed, in one of my early jobs I was tasked to analyse the results of an annual coordination test in which samples of a material were homogenised by extensive mixing and then sent to 12 or so laboratories for physical testing and chemical analysis. The results of this testing were forwarded to me and I tabulated and analysed the differences. While most test results for each test tended to be around a consensus mean, there was always a wide range of results around the mean, and always one or two outliers well outside the 2 sigma limit. These results were obtained under well-controlled conditions where one would expect reasonable agreement, but such agreement was not always found.

For climate temperature measurements the situation is most unclear. Not only are the measurements over time subject to inconsistencies in the location of the sensors, changes in sensor response time and the accuracy of results, but the temperatures are not necessarily measured under the same conditions. For example, I believe that most of the temperatures are the maximum and minimum measured at a location during one 24-hour period. However, I have seen one dataset from Australia that measured the temperature at 4 times during the day. So taking these matters together (leaving aside the arrogance of attempting to correct measurements that may have been made a century ago), I fail to see how it is possible to align historic measurements with those of today, particularly to an accuracy of a fraction of a degree.

I also wonder if any of those “correcting” the temperature have actually been outside to see just how much the temperature can vary from place to place a very short distance apart. A walk on a cool morning will soon show that temperatures are very much dependent on locale with small hollows in the ground often being cooler than surrounding higher ground. The assumption in temperature homogenisation that there will be a smooth change in temperature from one location to another so it is possible to fill in the gaps is quite ludicrous in my view if only because the terrain is not smooth. Temperature is not homogenous to that degree and when the gap between measurement points may be hundreds of kilometres surely even the time of day becomes important because there may be time differences as well as distance between the locations. It was recently reported that temperature had been interpolated in Northern Tasmania using temperature data from southern Australia a distance of about 350km over open ocean. How accurate would this be?

To me it is not possible to determine temperature trends by measuring temperatures to 1 degree and calculating averages to fractions of a degree, even assuming that the temperature set is determined in a standard fashion. I am also suspicious that there seem to be no publicly available procedures for the recording of temperatures that can be reviewed. I would be surprised if all of the publishers of temperature data use the same protocols for gathering their data, or indeed if their protocols have remained the same over time.
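A small simulation of the kind of round-robin described above (one homogenised sample, a dozen labs, results flagged beyond 2 sigma). All numbers are invented; this is only a sketch of how such outliers show up even under nominally identical procedures.

```python
import random
import statistics

# Toy round-robin: one homogenised sample sent to 12 labs, each reporting a
# result with its own random scatter plus the occasional gross error.
# Numbers are invented; the point is only how 2-sigma outliers are flagged.

random.seed(1)
true_value = 50.0
results = [random.gauss(true_value, 1.0) for _ in range(11)]
results.append(true_value + 4.5)          # one lab with a gross error

mean = statistics.mean(results)
sigma = statistics.stdev(results)

for i, x in enumerate(results, start=1):
    flag = "OUTLIER (>2 sigma)" if abs(x - mean) > 2 * sigma else ""
    print(f"Lab {i:2d}: {x:6.2f}  {flag}")
```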

Clyde Spencer
Reply to  tmatsi
February 24, 2021 8:16 pm

The assumption in temperature homogenisation that there will be a smooth change in temperature from one location to another so it is possible to fill in the gaps …

That is most evidently not true when a cold front is moving across the continent.

MarkW
Reply to  Clyde Spencer
February 25, 2021 10:02 am

Low lying areas where snow or water may accumulate can differ from the surrounding areas for weeks or months at a time. Then when the snow has melted or the water has dried, that area will start to track more closely to the surrounding areas.

Clyde Spencer
Reply to  MarkW
February 25, 2021 10:20 am

The Sacramento Valley in California is renowned for its Tule Fogs. It can be socked in with dense fog just above 32 deg F, with an inversion at about 1,500′ elevation. One can drive through the valley in January with temperatures close to freezing, climb up into the Mother Lode foothills and look out onto a sea of grey fog that looks like the Pacific Ocean, while basking in sunlight and 70 degrees F! The temperature boundary is at the elevation of the inversion, and is quite sharp. Smooth interpolation between Sacramento and most of the Mother Lode towns along Highway 49 would result in almost all temperatures being wrong except the two end points!

noaaprogrammer
Reply to  tmatsi
February 24, 2021 9:35 pm

Before I retired, I always walked to work. On many still mornings in the Fall and Spring, it would be in the low 40’s F at our home, situated on a 70 ft. high bluff. Two-thirds of the way along the trail that goes down the bluff, frost would appear on the grass and weeds: a drop of around 10 degrees F within a drop of 45 feet in elevation.

Curious George
Reply to  noaaprogrammer
February 25, 2021 7:01 am

The effect you describe is real, but .. did you measure the ground temperature at your home?

MarkW
Reply to  Curious George
February 25, 2021 10:03 am

It could be from cold air settling on a still night.

noaaprogrammer
Reply to  Curious George
February 25, 2021 10:19 am

On these occasions, my thermometer registers around 42 degrees F at 5-feet above the grass and in shade. I have not measured the temperature at ground level, but there is no frost on the ground — just dew. Next time I will measure the temp around 2-inches above the ground to see what the difference is.

Rory Forbes
Reply to  tmatsi
February 24, 2021 11:32 pm

The assumption in temperature homogenisation that there will be a smooth change in temperature from one location to another so it is possible to fill in the gaps …

I have watched as the snow fell in the park across the street, while 100 feet away on my front porch it was raining cats and dogs. There are far too many wrong but convenient assumptions made in the name of the AGW fraud. All these little tricks and ‘sciency’-sounding methods are intended to confuse the masses.

Derge
Reply to  Rory Forbes
February 25, 2021 3:42 am

I have driven through a torrential downpour only to cross an intersection and find it dry as a bone.

Rory Forbes
Reply to  Derge
February 25, 2021 9:55 am

Bingo!

Mike Dubrasich
February 24, 2021 8:06 pm

Dear CR,

Some thoughts. This EC article is about measuring global temps. It discusses first the dearth of stations both today and historically. Then it discusses the unreliability of ocean air temps (proxied by SSTs). It does not mention the unreliability of land temps, which are discussed in another article:

https://everythingclimate.org/the-us-surface-temperature-record-is-unreliable/

Then it discusses some problems with interpolation.

I agree that tightening is needed. An opening paragraph that summarizes the rest of the Con section would help. The McKitrick quote does not do that. First state what you are going to say, then say it, then tell the reader what you said.

Emphasize the unreliability of the measurements, land and sea: not enough stations, poor station siting, error-filled measurements at the stations, manipulation of older data. If interpolation is to be discussed, state the problems clearly (a different McKitrick quote or citation would work there). Note also that the globe does not have a single temperature. The Pro section discusses various climes; counter that directly, i.e. rebut the Pro point by point.

N.B. — All the articles have a “like this” button at the bottom. Only registered WP users can click it. Almost nobody is doing so. That’s a poor signal. Probably best to drop that feature.

PS — Schmidt is quoted in the Pro section without giving his first name or association. Fair is fair. Give the man his due or else don’t mention him.

Last edited 5 months ago by Mike Dubrasich
Clyde Spencer
February 24, 2021 8:07 pm

A new assessment of NASA’s record of global temperatures revealed that the agency’s estimate of Earth’s long-term temperature rise in recent decades is accurate to within less than a tenth of a degree Fahrenheit, …

Then why does NASA imply a precision of +/- 0.0005 degrees in its tabulations of monthly anomalies when “long-term temperature rise” is only accurate to about 0.1 degree? The basic rules of handling calculations with different significant figures are routinely ignored, and error bars seem to be a novel concept to NASA.

The AIRS record of temperature change since 2003 (which begins when Aqua launched) closely matched the GISTEMP record.

This is surprising because it is obvious that air temperatures can be below freezing at the elevation of a weather station, yet snow and ice can be melting, if in the sun. Dark pavement can sometimes be hot enough to literally fry an egg, yet the air being breathed by the ‘cook’ is not nearly that hot! As is too often the case, “closely” is not defined or quantified.

Comparing two measurements that were similar but recorded in very different ways ensured that they were independent of each other, Schmidt said.

Recording temperatures in very different ways almost guarantees that they will be different, even if they are similar. What one wants, in order to justify the claims of precision, is that the temperatures be nearly identical, that is, virtually indistinguishable.

The Arctic is one of the places we already detected was warming the most.

What is happening in the other Köppen climate regions? To really understand what is happening on Earth, and be able to make defensible statements about any particular region, one should be able to characterize all the regions and recognize differences and similarities.

Nick Stokes
Reply to  Clyde Spencer
February 24, 2021 9:13 pm

“Then why does NASA imply a precision of +/- 0.0005 degrees in its tabulations of monthly anomalies”
It doesn’t.

Carlo, Monte
Reply to  Nick Stokes
February 24, 2021 9:53 pm

Do a formal uncertainty analysis of a single “anomaly” point, I dare you.

fred250
Reply to  Nick Stokes
February 24, 2021 10:57 pm

Three decimal places imply a precision of +/- 0.0005 degrees.

So yes, it does.

Nick Stokes
Reply to  fred250
February 25, 2021 2:12 am

No, it doesn’t. GISS does not post 3 decimal points.

Derge
Reply to  Nick Stokes
February 25, 2021 3:44 am

Fred250 never mentioned GISS.

MarkW
Reply to  Derge
February 25, 2021 10:11 am

Nick has perfected the art of distraction.

bigoilbob
Reply to  Derge
February 25, 2021 12:52 pm

“Fred250 never mentioned GISS.”

Fred250 mentioned NASA. They use GISS data.

Rory Forbes
Reply to  Nick Stokes
February 24, 2021 11:37 pm

It does something even more absurd. In locations with few reporting stations (like polar regions and N. Canada, Russia, etc) it averages the product of interpolated numbers and pretends it’s “data”. Needless to say they get to choose which empirical data they “average”.

Clyde Spencer
Reply to  Nick Stokes
February 25, 2021 10:35 am

Stokes
I have seen the tables of monthly anomalies with my own eyes on the NASA website, showing averages with three significant figures to the right of the decimal point.

See the table of NASA anomalies under the heading Warmest Decades at
https://en.wikipedia.org/wiki/Instrumental_temperature_record

Clyde Spencer
Reply to  Clyde Spencer
February 25, 2021 10:47 am

Stokes
Incidentally, the table I linked to does not have any explicit uncertainties or standard deviations associated with the anomalies. Therefore, the implied precision is as I stated.

That reinforces my claim that error bars are a novelty to NASA.

You might want to read my analysis at
https://wattsupwiththat.com/2017/04/23/the-meaning-and-utility-of-averages-as-it-applies-to-climate/

Nick Stokes
Reply to  Clyde Spencer
February 25, 2021 11:18 am

Here is the regular table of anomalies produced by GISS

https://data.giss.nasa.gov/gistemp/tabledata_v4/GLB.Ts+dSST.txt

It is given in units of 0.01C, to whole numbers.

The table you link from Wiki is somebody else doing calculations from GISS data.

Tim Gorman
Reply to  Nick Stokes
February 25, 2021 1:51 pm

Right there is the problem. How do you get anomalies calculated to the hundredths digit when the temperatures from which the anomalies are calculated are only good to the tenths digit?

You have artificially expanded the accuracy of the anomalies by using a baseline that has had its accuracy artificially expanded past the tenths digit through unrounded or unterminated average calculations.

This kind of so-called “science” totally ignores the tenets of physical science. It’s what is done by mathematicians and computer programmers that have no concept of uncertainty, acceptable magnitudes of stated values, and propagation of significant digits through calculations. To them a repeating decimal is infinitely accurate!

Nick Stokes
Reply to  Tim Gorman
February 25, 2021 9:57 pm

“How do you get anomalies calculated to the hundredths digit when the temperatures from which the anomalies are calculated are only good to the tenths digit?”
You don’t. No-one claims the individual anomalies are more accurate than the readings. What can be stated with greater precision is the global average. A quite different thing.

Tim Gorman
Reply to  Nick Stokes
February 26, 2021 8:40 am

Averages should have their last significant digit of the same magnitude as the components of the average.

YOU CANNOT EXTEND ACCURACY THROUGH ARTIFICIAL MEANS – e.g. calculating averages!

If the anomalies are calculated to the tenths digit then the averages, including the global average, should only be stated out to the tenths digit.

You just violated every tenet of the use of data in physical science!

Clyde Spencer
Reply to  Nick Stokes
February 25, 2021 3:53 pm

Stokes
It does appear that the Wiki article presents a derivative of NASA data without making that clear. The links to the original data are not working.

In an article I previously wrote here, I stated that the NASA table of anomalies reported three significant figures to the right of the decimal point. I would not have made that claim if it wasn’t the case. It appears to me that the table [ https://data.giss.nasa.gov/gistemp/tabledata_v3/GLB.Ts+dSST.txt ] has been changed. There is no embedded metadata to reflect when it was created or last updated. You are correct that the current v3 and v4 are only showing 2 significant figures.

Clyde Spencer
Reply to  Clyde Spencer
February 25, 2021 4:03 pm

Stokes
The article I wrote with the link to NASA anomaly temperatures was written in 2017.

Doing some searching, I discovered that NASA has written some R-code to do an updated uncertainty analysis. That apparently took place in 2019.
https://data.giss.nasa.gov/gistemp/uncertainty/

Last edited 5 months ago by Clyde Spencer
Nick Stokes
Reply to  Clyde Spencer
February 25, 2021 4:13 pm

The GISS table has been the same format since forever. Here is a Wayback version from 2005
https://web.archive.org/web/20051227031241/https://data.giss.nasa.gov/gistemp/tabledata/GLB.Ts+dSST.txt

Clyde Spencer
Reply to  Nick Stokes
February 25, 2021 8:04 pm

Stokes
Since I can’t produce evidence supporting my claim, and you have shown evidence that I was apparently wrong, I’ll have to concede and acknowledge that NASA only reports anomalies to 2 significant figures to the right of the decimal point.

Clyde Spencer
Reply to  Clyde Spencer
February 25, 2021 8:19 pm

Stokes
However, with respect to my opinion that NASA is cavalier with the use of significant figures, the link that you provided shows that ALL the anomaly temperatures are displayed with two significant figures, including the older data, which reaches an uncertainty 3X that of modern data.

The link on uncertainty analysis I provided declares that the 2 sigma uncertainty for the last 50 years is +/- 0.05 deg C, implying that anomalies should be reported as 0.x +/- 0.05 (which is implicit and not necessary to explicitly state), not 0.xx!

Older data should be reported as 0.x +/- 0.2 deg C.

Last edited 5 months ago by Clyde Spencer
Tim Gorman
Reply to  Clyde Spencer
February 25, 2021 6:33 pm

Two significant figures is *still* one too many. When temperature uncertainties are only significant to the tenths digit then the temperatures should only be stated to the tenths digit. That also means that anything using those temperatures should only be stated to the tenths digit.

MarkW
Reply to  Clyde Spencer
February 25, 2021 10:09 am

If CO2 is driving climate, then all dry places should be warming more than all places with lots of water in the air.

Clyde Spencer
Reply to  MarkW
February 25, 2021 10:39 am

Yes, that is the implication. However, I haven’t seen any research confirming that. I suppose part of the reason is that people are likely to become more emotional about threats to polar bears than to the Saharan silver ant, so the Arctic is emphasized.

February 24, 2021 8:41 pm

“Building a crucible”. If only the climastrologists were so interested in honesty! In the end, it is all moot unless everybody starts using the same instrument with the same methodology at the same standard. While this here quest is indeed a noble one, we are trying to argue with people who “adjust” data; it is like arguing politics with a Bolshevik: every time you start winning, they will just change the rules.
In other words, the best we can do is criticize their methodologies, which is not really constructive, whereas their methodologies are positively destructive.

RickWill
February 24, 2021 8:53 pm

All current measurements are a waste of time, money and effort. All way too noisy to be useful.

The best indicators of global temperature are the latitude of the sea ice/water interface and the persistence (extent and time) of the tropical warm pools that regulate to 30C. Right now the Atlantic is energy deficient, with no warm pool regulating at 30C.

The tell-tale for global warming is the sea ice/water interface moving to higher latitudes and greater persistence or extent of the tropical warm pools.

The ice is observable by satellite. The warm pools are measurable with moored buoys. These are reliable measures of the Earth’s temperature because both of these extremes are thermostatically controlled. You have more or less sea ice, with the interface at -2C, and more or less warm pool area, controlled to 30C.

Most humans live on land that to the best of our knowledge averages 14C but the range is from 6C to 20C in any year. Trying to discern a trend amongst this mess is hopeless.

The use of anomalies is just a way of hiding reality.

Last edited 5 months ago by RickWill
Geoff Sherrington
Reply to  RickWill
February 24, 2021 10:22 pm

In my humble opinion, the work that Rick Will is developing is well worth the consideration of all WUWT denizens. Geoff S

Curious George
Reply to  RickWill
February 25, 2021 7:05 am

“Most humans live on land that to the best of our knowledge averages 14C but the range is from 6C to 20C in any year.” That favors China over Indonesia.

Rick C
February 24, 2021 9:02 pm

I would start with the question: Is there any single measurement that can be made that actually measures “climate”? Temperature (local, regional, global, maximum, minimum, average, daily, monthly, annual) seems far from adequate to characterize climate. What about rain, snow, wind (speed, direction), storms, humidity, heating degree days, cooling degree days, UV index, cloudiness, etc.? I can go to various sources and look up weather data for all of these parameters for a specific location. Traditional climate zones are defined primarily by what species of plants will survive in a specific region. Can climate change prognosticators tell me if and when I’ll be able to grow oranges in Wisconsin? If climate change means more useful plants will be able to grow over more area, can that be a bad thing? If the average global temperature goes up 2 or 3 or 5 C, but mainly because it doesn’t get as cold in the winter and stays warmer overnight, so what? It won’t make anyone less comfortable.

Frankly, I find “global average annual temperature anomaly” to be a completely useless metric. In the practice of metrology we always start with a clear definition of exactly what we are going to measure (the “measurand”). Once that is defined, we then select appropriate instruments and create a detailed procedure to carry out the measurement. That would typically include a sampling plan and an appropriate measurement frequency if we’re dealing with a large system and looking to determine change over time. But climate science seems to have a fixation on trying to construct a single-number global temperature metric based on an undefined, nebulous concept using poor quality data and ad hoc procedures. No one who has at least a basic understanding of scientific measurement procedures would accept the claim that the difference between global temperature measurements made 50 to 100 years ago and measurements made in the last 30 years is a meaningful measure of a real change. It cannot even be claimed that the older measurements are of the same measurand as the more recent measurements, because there is no clear definition of what was being measured in either period. It’s all just Cargo Cult science.

To be taken seriously, the climate change alarmists should be able to clearly answer two questions:

  1. What exactly does global temperature measure?
  2. What is the uncertainty of that measurement?

If they can’t answer the first, there’s no point in asking the second.

Reply to  Rick C
February 24, 2021 11:59 pm

I don’t think they really want to measure the temperature of the surface of the earth, as such a concept is absurd for a factor that changes continuously over multiple spatial and topographical influences. What we really want is a proxy for the earth’s temperature (much like a Stevenson screen temperature is a proxy for ground level air temperature) that is highly reproducible and so capable of detecting small changes. The main point is not that it accurately represent the Earth’s surface temperature, whatever that means, but that it needs to be reproducible and to behave consistently from day to day and over a wide range of weather conditions.

Numerous people have now demonstrated that using a subset of the best sites gives a different number than what comes from adjusted data from larger arrays of sites. Since there is an almost infinite variety of ways that sites can be compromised, it is unlikely that any complex array of adjustments will be able to create consistency out of variation that derives from poor siting (one day it’s a jet engine blowing on the thermometer, the next day it’s the exhaust from a bus parked by the station). As in all complex sampling problems, the goal needs to be to select multiple subsets of reliable sites and use those to determine multiple temperature anomalies, so that they can be compared to each other to determine trends that are consistent across all datasets.

Similarly with historical data, it is not possible to adjust for poorly documented influences that happened in the past, so the best that can be done is to use the longest running samples with the most standard protocols. Trying to develop a good proxy for Earth surface temperature from the period of early thermometer records seems to be more of a political than scientific goal. Since theory suggests that temperatures could not have been influenced by man until the recent past, we need to let the early thermometer record rest in peace and rely on other types of proxies for that period.

Tim Gorman
Reply to  Rick C
February 25, 2021 5:39 am

The actual quantity of interest is the enthalpy of the atmosphere just above the surface of the earth. The climate scientists use temperature as a proxy for enthalpy but it is a *poor*, *poor* proxy. It totally ignores things like pressure (altitude) and specific humidity.

There *is* a reason why the same temperature in Death Valley and Kansas City feels so different. Altitudes are different and so are the humidities. It’s a matter of the heat content in each location, i.e. the enthalpy. Because of these differences the CLIMATE in each location is different even though they might have exactly the same temperature.

Since the altitude of each sensor is (or should be) known, and since most modern weather instruments can provide data on pressure and specific humidity, there is absolutely no reason why enthalpy cannot be calculated. This data has probably been available since 1980 or so, giving a full 40 years of capability for calculating enthalpy instead of using the poor, poor proxy of temperature.

Why don’t the so-called climate scientists move to using enthalpy? Because it takes a little more calculation? Or because they are afraid it won’t show what they want?
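As a minimal sketch of the calculation described above, assuming the commonly used psychrometric approximation for moist-air enthalpy (pressure would enter when converting relative humidity to specific humidity, a step omitted here), with station values invented purely for illustration:

```python
# Sketch: specific enthalpy of near-surface moist air from temperature and
# specific humidity, using the common approximation
#   h ≈ 1.006*T + q*(2501 + 1.86*T)   [kJ/kg, T in deg C, q in kg/kg]
# (specific humidity is treated as approximately equal to the mixing ratio).
# Station values below are invented for illustration only.

def moist_air_enthalpy(temp_c: float, specific_humidity: float) -> float:
    """Approximate specific enthalpy of moist air, kJ per kg of dry air."""
    return 1.006 * temp_c + specific_humidity * (2501.0 + 1.86 * temp_c)

# Two hypothetical sites with the same thermometer reading but different humidity.
sites = {
    "dry_desert_site":   {"temp_c": 35.0, "q": 0.004},   # low specific humidity
    "humid_plains_site": {"temp_c": 35.0, "q": 0.018},   # high specific humidity
}

for name, s in sites.items():
    h = moist_air_enthalpy(s["temp_c"], s["q"])
    print(f"{name}: T = {s['temp_c']} C, q = {s['q']} kg/kg, h ≈ {h:.1f} kJ/kg")
```

The two sites report identical temperatures but very different heat content, which is the point being made about temperature as a proxy.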

Carlo, Monte
Reply to  Tim Gorman
February 25, 2021 12:13 pm

They need hockey stick graphs that can be used to make dire extrapolations about the future.

Jeff Alberts
February 24, 2021 9:03 pm

I would submit, as I often do, that “global temperature” is a fantasy, a chimera, whatever you want to call it.

https://www.semanticscholar.org/paper/Does-a-Global-Temperature-Exist-Essex-McKitrick/ffb072fc01d2f2ae906e4bb31c3b3a5361ca3e18

H.R.
February 24, 2021 9:04 pm

The average telephone number is 555-555-5555.

The high end of telephone numbers is 999-999-9999.

The low end of telephone numbers is 000-000-0000.

All climate is local/regional, or micro for that matter. The average temperature of the Earth is meaningless as a measure of climate, just as the average telephone number is meaningless.

If you want to assign a global climate, then I’d suggest that the current global climate is Ice Age and Interglacial. The global climate can be expected to change to Ice Age, Glacial at some point yet to be nailed down to the nearest millennium.

Nicholas McGinley
Reply to  H.R.
February 24, 2021 9:56 pm

Call me. You have my average number.

noaaprogrammer
Reply to  H.R.
February 24, 2021 9:58 pm

Summing up all 10^10 numbers from zero to 9999999999 gives:
9999999999x(9999999999+1)/2 = 49999999995000000000
The average of that sum of 10^10 numbers is:
49999999995000000000/10^10 = 4999999999.5
which is not a standard phone number. So if we round it up, we get:
500-000-0000 as the “average” phone number.

Nicholas McGinley
Reply to  noaaprogrammer
February 24, 2021 11:14 pm

I think we have the opening line of a great comedy here!

Rory Forbes
Reply to  H.R.
February 25, 2021 2:49 pm

Have you noticed, when you make a non-refutable statement like that, none of the AGW true believers go near it? They’ll debate until the cows come home some complex point about the so called “science” of climate change, but won’t go near a statement that falsifies their entire philosophy.

Michael S. Kelly
Reply to  H.R.
February 26, 2021 8:53 pm

“The average human has one Fallopian tube.”

  • Demetri Martin
Juan Slayton
February 24, 2021 9:17 pm

I understand that your primary need here is to sharpen the dialectic of the opposing parties. Unfortunately, I can’t be of much help with that, because the content very quickly goes right over my head.

On the other hand, I have years of experience correcting, editing, and occasionally re-writing compositions from both elementary and high school students. To attract a larger audience, we must make the dialogue readable. I might be able to help with that; I’ll check your mark-up reference.

I will resist the temptation to make general comments here–with one exception. The first thing to do to enhance readability (and intelligibility) is to break up overly long sentences. Think Hemingway. I offer an example, rewriting this sentence:

“With respect to the oceans, seas and lakes of the world, covering 71% of the surface area of the globe, there are only inconsistent and poor-quality air temperature and sea surface temperature (SST) data collected as ships plied mostly established sea lanes across all the oceans, seas and lakes of the world.”

Change to:

“Oceans, seas and lakes cover 71% of the surface of the globe. Ships collect air and sea surface temperature data as they cross these bodies of water on established traffic lanes. But this data is of poor quality and gives inconsistent measurements….”

saveenergy
Reply to  Juan Slayton
February 25, 2021 4:13 am

Absolutely !

Comprehension & Précis = Clarity

DMA
February 24, 2021 9:18 pm

This is the question that started me on this strange journey of studying the alarm over climate change.
My question is still: what is the formula to get the average temperature of the globe from average temperatures of widely separated spots on it taken at different times? If it is a simple average of all the stations’ average daily temps averaged for a year, the result is certainly not a true global average temp but an approximation of it. If the reporting stations are all maintained and measurements are consistent in quality and timing, this approximation could be reasonably compared over time. My understanding is that this is not the process, because the average temperature for 1936 (or any other year) keeps changing with new editions of the global temperature. If you can’t give the data to different analysis teams with your formula and have them come up with the same answer, your comparisons are not very meaningful. At least the UAH set compares the same thing from one year to the next. The surface data, in my opinion, is only good for station or at most local regional climate analysis.

Pflashgordon
February 24, 2021 9:35 pm

A bit OT, but I have been puzzling over the comment threads here at WUWT. Since the moderators do not block commenters except for language or behavior, why is it that there are rarely any credible opposing viewpoints? All we get is a regular diet of griff and company, mainly serving as target drones. Is this an indicator of the reach of the message? Rather than stay in our safe echo chamber, what can be done to change that and mainstream our messages and influence?

Pflashgordon
Reply to  Pflashgordon
February 24, 2021 9:44 pm

How about a process to flood media editorial pages and comment sections with some of our better missives and collective comments? We cannot change the minds of those who have never heard.

Tom Abbott
Reply to  Pflashgordon
February 25, 2021 11:33 am

How about an advertising campaign to bring the people, including the media, here to WUWT.

That might be worth a contribution.

Lead those horses to water!

Pflashgordon
Reply to  Pflashgordon
February 24, 2021 9:53 pm

A marketing and communications plan / department? To government officials? The public? We have the story to tell, but limited circulation. Maybe that is what Anthony has in mind with the revised “war footing.”

Nicholas McGinley
Reply to  Pflashgordon
February 24, 2021 10:00 pm

There are no credible opposing views.
Only the most stonily obtuse and hardheaded warmistas bother to comment here.
It is impossible to lie, smarm, dissemble, BS, fib, prevaricate, fast talk, or otherwise bafflegab thissa here crowd.

Tom Abbott
Reply to  Nicholas McGinley
February 25, 2021 11:35 am

Well said, Nicholas! 🙂

MarkW
Reply to  Pflashgordon
February 25, 2021 10:18 am

To engage in debate implies that there is something worth debating. The big names in climate “science” have declared that the science is settled, therefore they refuse to debate with anyone who disagrees with them.

Tom Abbott
Reply to  Pflashgordon
February 25, 2021 11:29 am

“why is it that there are rarely any credible opposing viewpoints?”

I don’t think there are many credible opposing viewpoints. In fact, I can’t think of any.

There are not too many opposing views here because when there is, commenters here request that the view be accompanied by evidence, and that throws the alarmists off because they don’t have any evidence to back up their claims.

That results in not many opposing views here. They get shot down too easily. All you have to do is say, “Where’s your evidence”, and that’s the last you hear from the alarmists.

Jim Gorman
Reply to  Tom Abbott
February 25, 2021 3:53 pm

Part of the problem is that the CAGW crowd have a meme that CO2 is the driver of temperature but have no proof.

They insist that regressions against temperature and CO2 ppm show a remarkable correlation and claim this is evidence.

They refuse to do any time series analysis, such as detrending, to see if there are spurious conditions causing the correlation, or if other periodic trends such as ENSO or AMO are causing part of the trends.

A buddy of mine has been doing exactly this and is finding that many of the ocean cycles are very instrumental. Most lead CO2 increases which pretty much destroys the CO2 connection.

I am surprised that so many “Climate scientists” and supposed mathematicians aren’t pursuing this area of research to find underlying connections. I am probably making it sound very easy but it is not. However, the connections could easily revolutionize the way models are assembled.

At the very least it will put the lie to plain old linear regressions being used to justify correlation as evidence.

Graemethecat
Reply to  Jim Gorman
February 26, 2021 4:02 am

There is a Warmist YouTuber called Potholer 54 who has at least tried to explain away the Inconvenient Fact that temperature changes precede CO2 changes in the ice-core record. Strangely enough, he hasn’t dared come on this forum to defend his ideas. Can’t think why.

Nick Stokes
February 24, 2021 9:44 pm

“Top panel: locations with at least partial mean temperature records in GHCN v2 available in 2010.”
Why on Earth is this article talking about GHCN v2 as available in 2010? We now have v4, with about four times as many stations.

“GHCN uses SSTs to extrapolate air temperatures. Scientist literally must make millions of adjustments to this data to calibrate all of these records so that they can be combined and used to determine the GHCN data set.”
Total muddle here. GHCN does nothing with SSTs. It is a database of monthly land station temperatures.

“Figure Above: Number of complete or partial weather station records in GHCN v2.”
Again, just hopeless. That was a decade ago, and the station dropoff was a consequence of the limitations then placed on v2 to handle rapid monthly entry of temperatures; they had to restrict the number of stations they could handle. That has long gone.

“When such interpolation is done, the measured global temperature actually increases.”
No, it doesn’t. There is no global temperature without interpolation. There is only ever a finite number of stations; temperatures on the rest of the surface must be inferred. There is no reason why that should increase temperature, but the statement is meaningless. Relative to what? There is no estimate of global temperature without interpolation.
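To make concrete what “inferring” temperatures for unsampled locations involves, here is a toy inverse-distance-weighting infill. It is not GISTEMP’s or any index’s actual method, and the station locations and anomalies are invented:

```python
import numpy as np

# Toy illustration of infilling: estimate the anomaly at an unsampled point
# as an inverse-distance-weighted average of nearby station anomalies.
# This is NOT any temperature index's actual method; values are invented.

stations = np.array([      # lon, lat (degrees), anomaly (deg C)
    [ 10.0, 60.0,  0.8],
    [ 25.0, 55.0,  0.3],
    [ -5.0, 50.0,  0.5],
])

def idw_anomaly(lon, lat, stations, power=2.0):
    """Inverse-distance-weighted estimate at (lon, lat) from station anomalies."""
    d = np.hypot(stations[:, 0] - lon, stations[:, 1] - lat)  # crude flat-map distance
    if np.any(d < 1e-9):                  # exactly on a station
        return float(stations[d.argmin(), 2])
    w = 1.0 / d**power
    return float(np.sum(w * stations[:, 2]) / np.sum(w))

print(idw_anomaly(15.0, 57.0, stations))   # estimate for an unsampled grid point
```

Whether such an estimate is meaningful for points hundreds of kilometres from any station is, of course, the question being argued in this thread.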

Carlo, Monte
Reply to  Nick Stokes
February 24, 2021 9:55 pm

The distances are so huge it is actually an exercise in extrapolation.

fred250
Reply to  Nick Stokes
February 24, 2021 11:01 pm

So everywhere between 2 urban affected sites with the same temperature, gets given that temperature.

NOT REALITY, and you know it !

Rory Forbes
Reply to  Nick Stokes
February 25, 2021 2:56 pm

There is no global temperature without interpolation.

At last you finally managed to have an epiphany. You’re 100% right for a change. Now doesn’t that feel better to get that fact off your chest?

You’ve just discovered the flaw in most climate science. Temperatures are unique, therefore inferring them results in nothing useful.

Tim Gorman
Reply to  Nick Stokes
February 25, 2021 6:28 pm

This is why most climate scientists appear to be applied mathematicians or computer programmers and not physical scientists.

You state only what you can measure. You do *not* make up data to fill in what is unknown. If what you can measure changes then start with a new data set and come up with a new result.

Under your logic I could say that a metal bar I am heating from one end is the same temperature everywhere as what I am measuring at the heated end. I just “interpolate” the temp at one end to the other end.

Reply to  Tim Gorman
February 27, 2021 5:15 pm

“This is why most climate scientists appear to be applied mathematicians or computer programmers and not physical scientists.”

They aren’t. Most of them have geography and marine biology degrees, or degrees from universities outside the global top 10. If there were more physicists from the top global universities then there might be more rigour to the subject. If they really were all mathematicians and computer scientists then some of their models might actually work.

Climate Detective
Reply to  Nick Stokes
February 27, 2021 5:49 pm

“Why on Earth is this article talking about GHCN v2 available in 2010. WE now have V4, with about four times as many stations.”

So what?
If you want to measure global warming since 1850 you need to measure the temperature difference since 1850. Adding more stations to the dataset now does not help that much because, while it might improve the accuracy of the measured temperature today, it cannot improve the accuracy of the measured temperature in 1850. So the accuracy of the temperature change cannot improve significantly. The biggest hurdle to accurate determinations of global warming is the lack of data before 1960, particularly in the Southern Hemisphere.

As for interpolation, there are multiple alternatives that could be used instead which could at least be used to corroborate the interpolation-based results. One is to do local averages for the temperature in various countries or regions and use weighted averages of those means based on land area to get a global mean. I did this for Australia here.

https://climatescienceinvestigations.blogspot.com/2020/07/26-temperature-trend-in-australia-since.html

This method reproduced the Berkeley Earth data pretty well (compare Fig. 26.2 and Fig. 26.3) even though I only used datasets with over 480 months of data.
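A minimal sketch of the area-weighting step described above (regional mean anomalies combined by land area); the regions and values are invented for illustration:

```python
# Sketch of combining regional mean anomalies into a single figure by
# weighting each region by its land area. Regions and values are invented.

regions = [
    # (name, land area in 10^6 km^2, mean anomaly in deg C)
    ("Region A",  7.7, 0.9),
    ("Region B",  9.8, 0.6),
    ("Region C", 30.4, 0.7),
]

total_area = sum(area for _, area, _ in regions)
weighted_mean = sum(area * anom for _, area, anom in regions) / total_area
print(f"Area-weighted mean anomaly: {weighted_mean:.2f} C")
```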

The issue I and I suspect many climate sceptics have with the various GAT datasets is the lack of apparent benchmarking using different calculation methods. The only group that has shown any real transparency in what they do that I am aware of is Berkeley Earth. Unfortunately, when I look at their methods I don’t like them.

Nick Stokes
Reply to  Climate Detective
February 28, 2021 12:04 pm

“The only group that has shown any real transparency in what they do that I am aware of is Berkeley Earth.”
Well, I’m transparent. I publish a temperature calculation every month, eg
https://moyhu.blogspot.com/2021/02/january-global-surface-templs-same-as.html
I describe the methods in detail here
https://moyhu.blogspot.com/p/a-guide-to-global-temperature-program.html
with links to the complete code.

I get similar results to other groups, using unadjusted GHCN data. The only one out of line was Hadcrut, which failed to infill properly. But they have now fixed that.

Climate Detective
Reply to  Nick Stokes
February 28, 2021 7:11 pm

But from what I can tell, you appear to be using similar methods to those of the other groups. Is that true? And how far back in time does your temperature calculation go?

The problem with global-scale meshed simulations is that they look too much like black-box solutions, irrespective of how much of the code you publish. So it is difficult for people like me to relate the data input to the data output, particularly on a local level.

What I like about Berkeley Earth is that you can select a station and see exactly what the original data is, and what the BE data analysis produces. The problem is, they are often not the same, as I showed for Texas.
https://climatescienceinvestigations.blogspot.com/2021/02/52-texas-temperature-trends.html
So what I am looking for are explanations and justifications for these differences.

Nick Stokes
Reply to  Climate Detective
February 28, 2021 7:50 pm

TempLS goes back to 1900. All the methods are just spatial integration. I think mine are more sophisticated, but it doesn’t make much difference.

You can find the unadjusted land data here (click radio button “GHCN V4”)
https://moyhu.blogspot.com/p/blog-page_12.html
You can’t find any modification by TempLS; there isn’t any. It just integrates the raw data, after subtracting a local average to make the anomaly. Monthly plots are here
https://moyhu.blogspot.com/p/blog-page_24.html
You can click to find the anomaly for that month.
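A toy version of the two steps described above (subtract each station’s own baseline mean to form an anomaly, then average with a crude area weight). This is not TempLS itself, and the numbers are invented:

```python
import numpy as np

# Toy version of the two steps described above: (1) convert each station's
# reading to an anomaly by subtracting that station's own baseline mean for
# the month, (2) average the anomalies with a simple cos(latitude) area weight.
# This is not TempLS; the station values are invented.

stations = [
    # lat (deg), baseline mean for this month (C), this month's reading (C)
    ( 60.0, -5.0, -3.8),
    ( 35.0, 12.0, 12.4),
    (-10.0, 26.0, 26.1),
]

lats      = np.array([s[0] for s in stations])
anomalies = np.array([s[2] - s[1] for s in stations])   # reading minus local baseline
weights   = np.cos(np.radians(lats))                    # crude area weighting

print(f"Weighted mean anomaly: {np.average(anomalies, weights=weights):+.2f} C")
```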

“The problem with global-scale meshed simulations is that they look too much like black-box solutions, irrespective of how much of the code you publish.”
Well, it’s hard to overcome that one.

Carlo, Monte
February 24, 2021 9:48 pm

A new assessment of NASA’s record of global temperatures revealed that the agency’s estimate of Earth’s long-term temperature rise in recent decades is accurate to within less than a tenth of a degree Fahrenheit, providing confidence that past and future research is correctly capturing rising surface temperatures.

Someone needs to show their work in support of this highly dubious claim.

Tim Gorman
Reply to  Carlo, Monte
February 25, 2021 5:48 am

The federal standard for weather stations is +/- 0.6C uncertainty. You can find this in the Federal Meteorological Handbook No. 1.

The Argo floats are considered to have an uncertainty of +/- 0.5C.

Since uncertainty grows as root sum square as you add independent, uncorrelated data together, the uncertainty of the global temperature simply can’t be less than a tenth of a degree. In fact, the uncertainty interval gets so wide that when you add even 100 stations together you simply don’t know the true value of the temperature to within +/- 5C.
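For reference, here is what the root-sum-square combination looks like for a sum of N independent readings, together with what standard propagation then gives for the mean of those readings; which of the two applies to a real network of non-identical, possibly biased stations is exactly what is argued about in this thread. The 0.6C figure is the per-station value quoted above.

```python
import math

# Root-sum-square (RSS) combination of independent, uncorrelated uncertainties.
# For a SUM of N readings each with standard uncertainty u, RSS gives
#   u_sum  = sqrt(N) * u
# and standard propagation for the MEAN (sum / N) then gives
#   u_mean = u_sum / N = u / sqrt(N)

u_station = 0.6          # per-station uncertainty quoted above, deg C
for n in (1, 100, 10_000):
    u_sum = math.sqrt(n) * u_station
    u_mean = u_sum / n
    print(f"N = {n:6d}:  u_sum = {u_sum:7.2f} C   u_mean = {u_mean:.3f} C")
```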

Carlo, Monte
Reply to  Tim Gorman
February 25, 2021 7:39 am

And this why the IPCC puts a lot of handwaving about uncertainty and error into the committee reports instead of hard numbers—it would plainly show they don’t know what the past and present is, and their extrapolations into the future are utterly without any statistical support.

Yet they want scores upon scores of trillions of dollars wasted trying to reduce a number they cannot measure.

DMA
Reply to  Tim Gorman
February 25, 2021 10:15 am

Their adjustments exceed 0.6C.

noaaprogrammer
February 24, 2021 10:18 pm

Rather than measuring Earth’s air temperature, which has a wide standard deviation, why not create a set of standards for measuring the Earth’s ground temperature at set intervals down to the depth where there is little seasonal variation?

Measuring and controlling for different kinds of soil, surrounding terrain and elevation, amount of urbanization, flux of geothermal sources, etc. would have to be considered. But once in place (and effectively protected), the readings would be subject to fewer man-caused disturbances like exhausts from passing automobiles and airplanes.

But this would cover roughly only 30% of the Earth. The other 70% is water, which has moving currents in all 3 dimensions, further making it difficult to get the elusive average Earth temperature.

And if you are a purist and want the average temperature of Earth’s entire sphere — well, it does become rather hot.

Dave Fair
February 24, 2021 10:34 pm

Recent and projected CO2 induced warming is what is of concern for AGW. Recent warming is shown by satellites/radiosondes (40+ years) for the atmosphere and ARGO (about 16 years) for the oceans from about 2,000 meters in depth to the surface.

The satellite period covers much of man’s exponential CO2 additions to the atmosphere. During that time, the atmosphere has been warming at a rate of less than 0.12 C/decade. Since the GHE occurs in the atmosphere, there is no AGW concern above the surface. UN IPCC climate models assert there is a “hot spot” in the troposphere caused by water vapor enhancement of the small theoretical CO2 GHE effect of about 1.2 C/doubling of concentrations. Satellites and radiosondes have never detected a hot spot.

The best I could get at NOAA’s ARGO site is a cheesy miniature graph of temperature trends vs depth and my Mk 1 eyeballs have large error ranges. Keeping in mind the trends end on the 2015-16 Super El Nino, the Nino 3-4 region at the surface has a trend of about 0.16 C/decade and the overall trend at the surface is about 0.18 C/decade. The trend at 200 meters depth is about 0.05 C/decade. No AGW concern below the sea surface either.
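For anyone who wants to go beyond the Mk 1 eyeball, a least-squares trend in C/decade can be computed directly from a monthly series; the series below is synthetic (a built-in 0.15 C/decade trend plus noise), purely to show the mechanics:

```python
import numpy as np

# Fit a least-squares linear trend to a monthly temperature series and
# express the slope in deg C per decade. The series is synthetic.

rng = np.random.default_rng(0)
years = 1979 + np.arange(40 * 12) / 12.0            # 40 years of monthly steps
series = 0.015 * (years - years[0]) + rng.normal(0.0, 0.15, years.size)

slope_per_year = np.polyfit(years, series, 1)[0]
print(f"Trend: {slope_per_year * 10:.3f} C/decade")
```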

In the atmosphere and the oceans CAGW fails and there is no need to fundamentally alter our society, economy and energy systems.


Last edited 5 months ago by Dave Fair
Dave Burton
February 24, 2021 10:54 pm

Less interesting than estimates of the average temperature for the whole Earth is how that temperature changes.

Over the long term, especially if you begin your measurement record in a chilly period, like the 1800s or the 1960s & 1970s, there’s been a (generally beneficial) warming trend. But it is so slight that it’s hard to measure, with the result that the amount of warming reported by major temperature indexes differs by as much as a factor of two, as you can see in these graphs:

It troubles me that the institutions producing most of the temperature indexes are so severely politicized. They actively encourage misunderstanding of the context of recent temperature increases, because they encourage people to believe the myth that the recent warming trend is unusually rapid, and the myth that recent temperatures are unusually high, even though many of them surely know better. They never mention the dramatically more rapid Dansgaard-Oeschger warming events recorded in ice cores (which nearly all existing species somehow survived), they rarely admit that past climate optimums, like the peak of the Eemian Interglacial, were certainly much warmer than our current warm period, and they never, ever acknowledge the compelling evidence that warming and eCO2 are beneficial.

It’s also worth noting that the surface temperature measurements are unsettlingly malleable, with large and mysterious revisions often being made to old measurements. (Worse yet are the pre-measurement proxy-based paleoclimate reconstructions.)

The predictability of the direction of those revisions invites suspicion that the temperature indexes produced by organizations like GISS, BEST, etc. are exaggerated: perhaps not through conscious fraud, but simply because the organizations are run by activists, and people tend to be better at finding what they are looking for than what they aren’t. It is much, much too easy to discount or overlook that which is inconsistent with one’s prejudices, when surrounded exclusively by people who share those prejudices.

Analyses even using the probably-inflated temperature indexes from organizations like GISS and BEST indicate that the IPCC’s climate sensitivity estimates are much too high. Yet the activists who run the leading climate organizations seem to be uniformly uninterested in such evidence. How can anyone trust the work of people who don’t seem to care at all about evidence suggesting their assumptions are off the mark?

Exhibit A, this conversation with the folks at BEST:
https://twitter.com/ncdave4life/status/1336701210468945923
https://twitter.com/ncdave4life/status/1336707443750035459
https://twitter.com/ncdave4life/status/1364577840021258241

Last edited 5 months ago by Dave Burton
Nick Stokes
Reply to  Dave Burton
February 25, 2021 2:24 am

You won’t get far until you try to use the terms correctly. There is an important E in ECS. It is equilibrium climate sensitivity. The rise in temperature due to added CO2 when it has all settled down, which takes centuries. You can’t calculate it from a running graph correlation.

The IPCC 3C is for ECS. E. 

Joseph Zorzin
Reply to  Nick Stokes
February 25, 2021 4:43 am

“when it has all settled down, which takes centuries”

And that time frame is understood precisely, because it’s settled science, right?

Carlo, Monte
Reply to  Nick Stokes
February 25, 2021 7:42 am

You can’t calculate it from a running graph correlation.

Nor from meaningless extrapolations into the future of computer models.

Tom Abbott
Reply to  Nick Stokes
February 25, 2021 12:27 pm

“The rise in temperature due to added CO2 when it has all settled down, which takes centuries.”

Ole Bill Gates made a similar bogus (without evidence) claim about CO2 the other day. He said CO2 we put in the Earth’s atmosphere stays there for a thousand years.

Nick, do you have any evidence that CO2 stays in the atmosphere for centuries?

Tom Abbott
Reply to  Tom Abbott
February 27, 2021 5:51 am

Well, I guess Nick doesn’t have any evidence for claiming CO2, once put in the atmosphere, stays there for centuries.

See what I mean about asking alarmists for evidence? Next thing you know, they disappear and you never hear from them again.

They disappear because they can’t back up their claims with evidence. They think all they have to do is throw something out there and everyone is going to get on board. Nope, skeptics need evidence. Just saying something is so, doesn’t make it so.

Graemethecat
Reply to  Nick Stokes
February 26, 2021 4:10 am

Explain why the historical temperature record has been altered to cool the past and warm the present.

Dave Burton
Reply to  Nick Stokes
February 26, 2021 3:15 pm

You really should read the conversation before you critique it, Nick. You obviously didn’t bother.

The gist is that comparing temperature and CO2 trends gives a practical estimate of climate sensitivity, which is roughly midway between TCR (which assumes a much quicker CO2 rise than reality) and ECS (which assumes an equally unrealistic hundreds of years of unchanging CO2 level). So, if we also have an estimate of the ratio of ECS/TCR (which we do), we can calculate estimates for both ECS and TCR from that “practical estimate of climate sensitivity.” That’s what I did.

Here’s the tweetstorm, minimally reformatted for WUWT. I numbered the tweets 1-11, and here I’ve turned each tweet number into a link to the original tweet.
 ‍‍‍‍‍‍ ‍‍

[1/11] BEST’s figures imply MUCH LOWER climate sensitivity than IPCC claims.

You show, “About 2.3°C of warming per doubling of CO2 (ignoring the role of other greenhouse gases and forcings).”

But to deduce climate sensitivity (to CO2), you CANNOT ignore other GHGs.
[cont’d]
 ‍‍‍‍‍‍ ‍‍

[2/11] Even if we assume that none of the warming is natural, if 30% of the warming is due to increases in minor GHGs like O3, CH4, N2O & CFCs, then “climate sensitivity” from a doubling of CO2, according to BEST’s figures, is only 0.7 × 2.3 = 1.6°C.
[cont’d]
 ‍‍‍‍‍‍ ‍‍

[3/11] That’s a “practical estimate” of climate sensitivity, from surface station measurements. However, if the best satellite data were used, instead of BEST’s surface temperatures, sensitivity would be almost 30% lower:
https://woodfortrees.org/plot/best/from:1979/mean:12/plot/uah6/from:1979/mean:12/plot/best/from:1979/trend/plot/uah6/from:1979/trend
https://sealevel.info/BEST_vs_UAH_2020-06-14h_digitization_notes.txt
[cont’d]
 ‍‍‍‍‍‍ ‍‍

[4/11] That makes climate sensitivity to a doubling of CO2 only 1.23°C.

Since that’s based on real-world forcing (instead of the faster rise used for the TCR definition), the 1.6°C or 1.23°C per doubling is “between TCR & ECS” (probably about an average of TCR & ECS).
[cont’d]
 ‍‍‍‍‍‍ ‍‍

[5/11] ECS is usually estimated at 1.25× to 1.6× TCR. So:

If 1.6°C is avg of TCR & ECS, it means TCR is 1.23 to 1.42°C, and ECS is 1.77 to 1.97°C.

If 1.23°C is avg of TCR & ECS, it means TCR is 0.95 to 1.09°C, and ECS is 1.37 to 1.51°C.

See: https://sealevel.info/BEST_vs_UAH_2020-06-14h_digitization_notes.txt
[cont’d]
 ‍‍‍‍‍‍ ‍‍

[6/11] That gives an overall TCR range of 0.95 to 1.42°C, and an overall ECS range of 1.37 to 1.97°C.

Those sensitivities are obviously FAR BELOW the assumptions baked into most CMIP6 models and IPCC reports, which means their warming projections are much too large.
[cont’d]
 ‍‍‍‍‍‍ ‍‍

[7/11] You say, “About 1.1°C of warming has already occurred,” and “if carbon dioxide concentrations keep rising at historical rates, global warming could more than triple this century.”

That’s wrong, for two reasons.
[cont’d]
 ‍‍‍‍‍‍ ‍‍

[8/11] 1. It assumes WILDLY accelerated warming, from an approx linear continuation of forcing, for which there’s no basis. Even BEST’s 0.192°C/decade yields only 1.536°C of add’l warming by 2100. UAH6’s 0.134°C/decade yields only 1.072°C by 2100.
https://www.sealevel.info/co2.html?co2scale=2
[cont’d]
 ‍‍‍‍‍‍ ‍‍

[9/11] 2. It assumes an implausible continuation of exponentially increasing CO2 level growth (necessary for continuation of the linear trend in forcing). But resource constraints ensure the forcing trend will fall below linear long before 2100.
https://www.researchgate.net/publication/303621100_The_implications_of_fossil_fuel_supply_constraints_on_climate_change_projetions-A_supply-side_analysis
[cont’d]
 ‍‍‍‍‍‍ ‍‍

[10/11] Also, negative feedbacks (mainly terrestrial “greening,” and oceans) are removing CO2 from the air at an accelerating rate.
https://sealevel.info/feedbacks.html#greening
So (unfortunately!) it’s unlikely that mankind’s use of fossil fuels can ever drive CO2 level above 700 ppmv.
[cont’d]
 ‍‍‍‍‍‍ ‍‍

[11/11] Since CO2 forcing trend log(level) is almost certain to fall below linear later this century, rate of temperature increase, which is already too slow to reach the temperatures you project, should slow BELOW even the current slow 0.134°C to 0.192°C/decade linear trend.
###
 ‍‍‍‍‍‍ ‍‍

[12/11] @BerkeleyEarth, do you not have have any comment on the fact that your data implies a much lower climate sensitivity to rising CO2 levels than the IPCC claims?
 ‍‍‍‍‍‍ ‍‍

[13/11] @BerkeleyEarth, are you there?

@RichardAMuller @stevenmosher @RARohde @hausfath @JudithSissener @BerkeleyPhysics
 ‍‍‍‍‍‍ ‍‍

[14/11] @BerkeleyEarth, will you please reply?

The temperature measurements imply TCR between 0.95 & 1.42°C, and ECS between 1.37 & 1.97°C. Will you at least acknowledge that your measurements imply climate sensitivity well below IPCC estimates?
 ‍‍‍‍‍‍ ‍‍

[15/11] @BerkeleyEarth team: @RichardAMuller, @stevenmosher, @RARohde, @hausfath, @JudithSissener —

Will you please acknowledge that your data shows ECS climate sensitivity to a doubling of CO2 is <2°C, and TCR is <1.5°C?

@BerkeleyPhysics, how about a response?

https://sealevel.info/learnmore.html
 ‍‍‍‍‍‍  ‍‍‍‍‍‍ ‍‍

They never replied.
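For readers who want to check the arithmetic, here is the calculation in tweets [2/11]-[6/11] reproduced as stated, with the commenter’s own assumptions (30% of warming from other GHGs, the ~30% lower satellite trend, an ECS/TCR ratio of 1.25 to 1.6) taken at face value rather than as established values:

```python
# Reproduces the arithmetic in tweets [2/11]-[6/11] above, taking the
# commenter's own assumptions at face value (they are his, not settled values).

best_per_doubling = 2.3      # deg C per doubling of CO2, BEST figure quoted in [1/11]
co2_fraction      = 0.7      # assume 30% of the warming is from other GHGs   [2/11]
ecs_over_tcr      = (1.25, 1.6)   # assumed range of the ECS/TCR ratio        [5/11]

practical = {
    "surface (BEST)":   round(best_per_doubling * co2_fraction, 1),  # -> 1.6 C [2/11]
    "satellite (UAH6)": 1.23,   # stated value after the ~30% lower UAH6 trend [4/11]
}

for label, s in practical.items():
    # Tweet [4/11] treats s as roughly the average of TCR and ECS:
    #   s = (TCR + ECS)/2 = TCR*(1 + r)/2,  with r = ECS/TCR
    tcrs = [2 * s / (1 + r) for r in ecs_over_tcr]
    ecss = [t * r for t, r in zip(tcrs, ecs_over_tcr)]
    print(f"{label}: practical ≈ {s:.2f} C, "
          f"TCR ≈ {min(tcrs):.2f}-{max(tcrs):.2f} C, "
          f"ECS ≈ {min(ecss):.2f}-{max(ecss):.2f} C")
```

Running this reproduces the TCR range of roughly 0.95-1.42 C and ECS range of roughly 1.37-1.97 C claimed in tweet [6/11], under those assumptions.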

Last edited 5 months ago by Dave Burton
Michael S. Kelly
Reply to  Nick Stokes
February 26, 2021 9:33 pm

“There is an important E in ECS. It is equilibrium climate sensitivity.”

This is where I depart from your generally useful commentary (and I mean that sincerely). The climate is a non-equilibrium system, and we haven’t yet determined how far from equilibrium it is. To ascribe a figure of merit to a system to which it absolutely doesn’t apply is…well, not useful. It also isn’t informative, and should not be used as the basis for policy actions that would return humanity to prehistoric subsistence levels, killing off huge segments of the human race.

Tom Abbott
Reply to  Dave Burton
February 25, 2021 12:11 pm

I like that comparison of the changes the Data Manipulators have made to the US surface temperature record.

This demonstrates why I use Hansen 1999, rather than a later chart. It’s the closest thing to an unmodified US chart that I can get.

Hansen 1999 shows the 1930’s as being warmer than 1998. Specifically, Hansen says 1934 was 0.5C warmer than 1998.

Then, the 2011 US chart was bastardized to make it appear that 1998 was warmer than 1934. This way the Data Manipulators could claim that 1998 was the hottest year ever.

And the 2020 US chart has been further modified to downgrade 1998, in order to allow the Data Manipulators to claim that years subsequent to 1998 were the “hottest year evah!”

If you look at the UAH satellite chart (below) you can see that 1998 and 2016 (the last “hottest year evah!”) are statistically tied for the warmest year since the 1930’s, which makes the Data Manipulators’ claims of “hottest year evah!” all a bunch of lies, because none of those years were warmer than 1998, if you go by the UAH satellite record, the only record that can currently be relied upon to be honest.

How many years after 1998 did the Data Manipulators designate as being the “hottest year evah!”? Half a dozen? Something like that. And every claim is a lie according to the UAH satellite chart, because none of those years were warmer than 1998.

And, as you can see, at least in the United States, the temperatures have been in a decline since the 1930’s. CO2 concentrations don’t seem to have any effect on the temperatures in the U.S.

And all unmodified, regional surface temperature charts have the very same profile the US chart has, i.e., they all show it was just as warm in the early twentieth century as it is today, which demonstrates that CO2 is *not* the control knob of the atmosphere, is nothing to worry about, nor does it need to be regulated.

None of the unmodified, regional surface temperature charts resemble the “hotter and hotter” temperature profile of the bogus, bastardized, instrument era, Hockey Stick chart. The Hockey Stick chart is an outliar.

UAH satellite chart:


And here is a comparison of the Hansen 1999 US surface temperature chart (on the left) with a bogus, bastardized, instrument-era Hockey Stick chart (on the right).

http://www.giss.nasa.gov/research/briefs/hansen_07/

What the data manipulators did was change the temperature profile of the chart on the left into the temperature profile of the chart on the right. Normally, temperatures get warmer for a few decades, and then they get cooler for a few decades, as can be seen in the Hansen 1999 chart.

But the bogus, bastardized, instrument-era Hockey Stick chart artificially cooled the warm 1930s with their computers and turned the temperature profile into a hotter and hotter and hotter profile where we are now at the hottest temperatures in human history. Perfect for selling the Human-caused Climate Change scam. But it’s all a Big Lie, which is proven by looking at the regional temperature charts from around the world, which all agree it is not any warmer now than in the recent past.

Last edited 5 months ago by Tom Abbott
To bed B
February 25, 2021 12:31 am

Using an average of the 24 hourly temperature readings to compute daily average temperature has been shown to provide a more precise and representative estimate of a given day’s temperature. This study assesses the spatial variability of the differences in these two methods of daily temperature averaging [i.e., (Tmax + Tmin)/2; average of 24 hourly temperature values] for 215 first-order weather stations across the conterminous United States (CONUS) over the 30-yr period 1981–2010. A statistically significant difference is shown between the two methods, as well as consistent overestimation of temperature by the traditional method [(Tmax + Tmin)/2], …There was a monthly averaged difference between the methods of up to 1.5°C, with strong seasonality exhibited.

LaMagna, C. (2018). A Comparison of Daily Temperature-Averaging Methods: Spatial Variability and Recent Change for the CONUS, Journal of Climate, 31(3), 979-996. Retrieved Feb 25, 2021, from https://journals.ametsoc.org/view/journals/clim/31/3/jcli-d-17-0089.1.xml

A paper less than 3 years old.
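To make the size of that methodological difference concrete, here is a minimal sketch on a synthetic, deliberately skewed diurnal cycle. The numbers are purely illustrative (not the station data used in the paper), and both the size and the sign of the difference depend on the shape of the real diurnal cycle, which is exactly the spatial variability the paper examines:

```python
import numpy as np

# Synthetic 24-hour temperature profile (deg C): a long cool night with a
# fairly brief warm afternoon, i.e. a deliberately skewed diurnal cycle.
hours = np.arange(24)
profile = 8.0 + 10.0 * np.exp(-((hours - 15) / 4.5) ** 2)

t_max, t_min = profile.max(), profile.min()
traditional = (t_max + t_min) / 2.0     # historical method: (Tmax + Tmin) / 2
hourly_mean = profile.mean()            # mean of the 24 hourly readings

print(f"(Tmax + Tmin)/2      = {traditional:.2f} C")
print(f"mean of 24 readings  = {hourly_mean:.2f} C")
print(f"traditional - hourly = {traditional - hourly_mean:+.2f} C")
```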

It’s not just that hoping averaging will cancel out random variations, even if there were no changes to sites and they were spread evenly around the globe, is amateurish. They essentially reconstruct an ideal record from a proxy for an intensive property, using a method borrowed from mining that is meant for a real intensive property.

It’s not just that the mean of min/max differs from a mean of continuous measurements randomly, but not perfectly randomly, by a range larger than the warming of the past 100 years; even the latter is the result of the effects of not only changing sunlight but moving air. The minimum temperature is that of a packet of air long gone by the time the maximum is reached. It’s not an intensive property. Your thermometers would need to move with the air for that. So the mean of min/max, or of hourly readings, is a proxy.

I would expect a good record, i.e. one with no site changes and an even spread around the globe, to give a useful average of how the global climate has changed, but one barely better than qualitative.

Fifteen years ago, sceptics argued that global temperatures warmed 0.6 degrees in 100 years and that most of it was before 1940. A few tenths of a degree of adjustments and it’s no longer an issue?

I think AR4 had a line about a consensus that at least half the warming since 1950 was due to human emissions. That’s 0.3°C out of a degree since the start of the industrial revolution, give or take a few tenths.

Nick Stokes
Reply to  To bed B
February 25, 2021 2:29 am

“A paper less than 3 years old.”
I posted such an analysis nearly 7 years ago here
https://moyhu.blogspot.com/2014/07/tobs-pictured.html

There is a considerable difference, but mainly it depends on the time of day the min/max is read. It can be higher or lower than the continuous measurement.

The fact is, of course, that most of the data we have available is min/max.

Tim Gorman
Reply to  Nick Stokes
February 25, 2021 5:58 am

“The fact is, of course, that most of the data we have available is min/max.”

Woah! Another whopper!

There are all kinds of stations around the world that have multiple temperature readings per day clear back to 2000, ranging from 1 minute intervals to 5 minute intervals!

Since climate is dependent on the *entire* temperature profile, an integral of the daily temperature profile at each station would give significantly better indications of the actual climate at each station. This is commonly known as a degree-day.

Since this data *is* available and is a far better metric than max/min why aren’t the so-called climate scientists using it?
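For readers unfamiliar with the term, here is a minimal sketch of a degree-day calculation from sub-daily readings versus one from the daily midrange. The profile and the conventional 65°F base are assumptions for illustration only:

```python
import numpy as np

BASE = 65.0  # conventional US base temperature for degree-days, deg F

# Assumed 5-minute readings for one day (deg F): a synthetic diurnal cycle.
t = np.arange(0, 24, 5 / 60)                        # hours of the day
temps = 60.0 + 15.0 * np.sin(np.pi * (t - 7) / 24)  # illustrative profile

# Cooling degree-days from the full profile: time-average of the excess over
# the base.  With uniform sampling this is just the mean of the clipped excess.
excess = np.clip(temps - BASE, 0.0, None)
cdd_profile = excess.mean()

# Cooling degree-days computed only from the daily midrange (Tmax + Tmin)/2.
midrange = (temps.max() + temps.min()) / 2.0
cdd_midrange = max(0.0, midrange - BASE)

print(f"CDD from the full 5-minute profile : {cdd_profile:.2f}")
print(f"CDD from (Tmax + Tmin)/2 only      : {cdd_midrange:.2f}")
```

With a profile like this one, the brief warm afternoon contributes cooling degree-days that the midrange method misses entirely.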

Carlo, Monte
Reply to  Tim Gorman
February 25, 2021 7:43 am

They failed second-year calculus?

Tim Gorman
Reply to  Carlo, Monte
February 25, 2021 9:29 am

I suspect that most of them are not physical scientists but applied mathematicians and computer programmers. They don’t even know the right questions to ask let alone to put in their models.

MarkW
Reply to  Nick Stokes
February 25, 2021 10:24 am

If the best data available is not fit for purpose, then it isn’t fit for purpose.
Using ever more fanciful algorithms to tease the answer you want from poor data isn’t science.

To bed B
Reply to  Nick Stokes
February 25, 2021 1:12 pm

Needed to be done in 1989.

griff
February 25, 2021 12:55 am

Berkeley Earth already examined tens of thousands of surface temp readings and demonstrated that there is no UHI effect biasing the trend.

any article which chooses not to include Berkeley results would be dishonest.

GregB
Reply to  griff
February 25, 2021 4:42 am

I would agree, and I’ve seen plenty of other studies showing a large impact of UHI on surface temp readings.

I can’t remember where it was, but I really liked the study where someone mounted a thermometer to the top of their car and drove through a city. I recall that the temperature climbed about 3-5 degC, peaking in the center of the city, and then dropped to the baseline on the other side of the city. Quite a striking example of UHI.

Tombstone Gabby
Reply to  GregB
February 25, 2021 8:28 pm

G’day GregB,

Could well have been about 50 years ago, late 60’s early 70’s. Forrest Mims III in one of the “Archer” booklets sold by Radio Shack. He was investigating the use of thermistors for measuring temperature. He drove his daughter to and from school daily.

[Back in the days of the 741 Op Amp, discrete transistors, and the digital wonderland of the 7400 series of DIP chips. (Dual Inline Plastic)]

And of course Anthony, our ‘get out and test it’ host, did something similar – with modern equipment – in Reno if I remember correctly.

Reply to  griff
February 25, 2021 6:06 am

Not again, griff, are you really not able to learn the lessons you get here?
No, Berkeley Earth demonstrated nothing at all.

“any article which chooses not to include Berkeley results would be dishonest.”

MarkW
Reply to  griff
February 25, 2021 10:25 am

When griff gets hold of a good lie, he never lets go.
Berkeley Earth’s analysis has been shredded to the point where only a true believer would still quote them.

February 25, 2021 1:23 am

There are two credible and reliable long term measures of temperature. The first is the Central England Temperature series which is a proxy for global temperature … a far far far better proxy for global temperature than some tree-ring. This not only indicates the long term trend, but also the scale of variability and unless you discuss long term variation, there is no point talking about temperature.

The second less reliable record is the yearly maximum (or minimum) temperature from the very few permanent min/max thermometers situated in areas which have not changed their vegetative cover. The reason it needs to be yearly, is that the yearly max temperature (or min if taking a year as summer to summer) cannot be double counted so there is no reason to apply any adjustment nor any debate about adjustments.

And on the other side … no temperature measurement relying on NASA is credible.

Alasdair Fairbairn
February 25, 2021 3:14 am

For my part, there are two major issues at stake here:

1) For a trend to have any validity, the data collection and recording methods must be CONSTANT over the period. That has proved not to be the case in practical terms where the climate is concerned, since changes in the technology, locations and volume of deployment have regularly occurred. The statistical assumptions and manipulations of variable measurements should therefore be treated with a good deal of scepticism.

2) Temperature is but one of many factors of state which influence the enthalpy of a system. It gives no information on that aspect unless the other factors are accounted for. As an example: two parcels of air of equal mass could have the same enthalpy, but if one of the parcels is moving it will have a lower temperature. Hence, whereas the enthalpy remains constant, the temperature changes. It is the enthalpy that determines the climate, not necessarily the temperature, which is merely an indicator.
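One simple way to make the "temperature is not enthalpy" point concrete is moist enthalpy, h ≈ cp·T + Lv·q. This is a different illustration from the moving-parcel example above (it ignores kinetic energy and uses humidity instead), and the numbers are purely illustrative:

```python
# Two air parcels with (nearly) the same specific moist enthalpy but different
# temperatures, because one carries more water vapour.  h = cp*T + Lv*q is the
# usual first-order expression; the kinetic-energy term of the moving-parcel
# example above is deliberately left out.  All values are illustrative.
CP = 1005.0   # J/(kg K), specific heat of dry air at constant pressure
LV = 2.5e6    # J/kg, latent heat of vaporisation of water

def moist_enthalpy(temp_c, q):
    """Specific moist enthalpy (J/kg) for temperature in deg C and
    specific humidity q in kg of water vapour per kg of air."""
    return CP * temp_c + LV * q

hot_dry    = moist_enthalpy(30.0, 0.005)   # hot, fairly dry parcel
cool_humid = moist_enthalpy(22.5, 0.008)   # cooler but more humid parcel

print(f"hot/dry parcel    : {hot_dry / 1000:.1f} kJ/kg")
print(f"cool/humid parcel : {cool_humid / 1000:.1f} kJ/kg")
```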

IMO the concept of a global mean temperature is just that: a concept, with little value in practical terms; but I will leave that to others to argue about.

Derge
February 25, 2021 3:38 am

One aspect of temperature readings that is being absolutely overlooked is this:

There is a latency with liquid in glass (LIG) thermometers versus thermistor temperature sensors (TTS).

A transient spike in temperature from a passing hot wind (say from over asphalt) will quickly register with a TTS (well under a minute), while there’s a latency with an LIG (well over a minute). One will record “record setting temperatures” while the other will not.

This alone can explain modern versus historical temperature discrepancies.

In 1920 that “record setting temperature” needed to be sustained long enough for the LIG to register. Not so much for a TTS sensor in 2014.
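A minimal sketch of the response-time effect being described, using a simple first-order lag model. The time constants are assumptions for illustration, not the published specification of any particular instrument:

```python
import numpy as np

# First-order sensor response dTs/dt = (T_air - Ts) / tau, stepped explicitly.
# The time constants are assumptions for illustration only.
dt = 1.0                                   # seconds
t = np.arange(0, 600, dt)                  # ten minutes
air = np.full_like(t, 30.0)                # steady 30 C air ...
air[(t >= 200) & (t < 230)] = 36.0         # ... with a 30-second, +6 C gust

def sensor_reading(air_temp, tau):
    temp = np.empty_like(air_temp)
    temp[0] = air_temp[0]
    for i in range(1, len(air_temp)):
        temp[i] = temp[i - 1] + dt * (air_temp[i] - temp[i - 1]) / tau
    return temp

fast = sensor_reading(air, tau=10.0)       # thermistor-like sensor
slow = sensor_reading(air, tau=80.0)       # liquid-in-glass-like sensor

print(f"peak recorded by the fast sensor: {fast.max():.2f} C")
print(f"peak recorded by the slow sensor: {slow.max():.2f} C")
```

The fast sensor records nearly the full +6 C gust; the slow one records only a fraction of it.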

MarkW
Reply to  Derge
February 25, 2021 10:27 am

There are more things that can cause brief warm excursions than can cause cold excursions.
Up to and including jet exhausts.

Bill T
February 25, 2021 4:18 am

The main problem is average temperature. It is well known that nighttime temps have been coming up but daytime high temps have been going down, so the average can be increasing, but only because of the increased nighttime temps, all because of increased water vapor as well as the urban heat effect. This February at my home in Maine, there were 18 days of below-average daytime highs, but on those same days there were 10 days of above-average nighttime highs.

Joseph Zorzin
February 25, 2021 5:03 am

“Last month we introduced you to the new reference site EverythingClimate.org”

Even now, it’s pretty good. It can of course be improved but I suggest the focus should be on making it readable to the average person who is never going to dig into the issues deeply. It should be, I think, a “Climate science for Dummies”.

Look at the following: https://www.amazon.com/Cranky-Uncle-vs-Climate-Change/dp/0806540273/

John Cook has made a cartoon book making fun of climate “deniers”. How about a cartoon book making fun of climate alarmists? Or- having “EverythingClimate” do that? I know that some will think that’s not professional and won’t change the minds of alarmists. I doubt anything will- so why try to do it with serious writing when cartoons catch the attention of “the masses”. Just a thought- probably a bad idea. But if I were wealthy- I’d pay someone to do the cartoon book of climate alarmists. Some of the cartoons would be of Cook!

Tom Abbott
Reply to  Joseph Zorzin
February 25, 2021 12:40 pm

“I know that some will think that’s not professional and won’t change the minds of alarmists.”

I don’t think we should focus on dedicated alarmists. Rather, focus on the undecided, which would probably be made up of a lot of younger people, not set in their ways yet, and open to taking in information.

The “lack of science evidence” is on the skeptics’ side. I think it is very effective to challenge an alarmist to produce evidence and then have no evidence produced. Sensible people can put two and two together.

bigoilbob
Reply to  Joseph Zorzin
February 26, 2021 6:09 am

“John Cook has made a cartoon book making fun of climate “deniers”. How about a cartoon book making fun of climate alarmists?”

The skill sets to do so successfully just aren’t there. You laugh at each other’s non jokes, but they fall flat in superterranea. It’s why “Conservative Comedy” is an oxymoron….

Sorry, ‘cept for P.J. O’Rourke. Where did he go? I had to look on my book shelf to even remember his name….

Last edited 5 months ago by bigoilbob
Bellman
February 25, 2021 5:56 am

Assume I’m a skeptical pursuer of truth, and searching on the internet for answers I happen to find Everything Climate. At first glance I might think this is going to be a good source of information, but I don’t think it would take me long to get suspicious, not least because of the complete anonymity of the place. There’s no indication of who any of the articles are by, and the About page leaves me none the wiser. Being skeptical, the claim “EC is a website covering both sides of the climate debate – factually” sets off alarm bells. This is exactly what someone wants me to think, so I’m going to assume the opposite. Maybe this is a fanatical green organization, maybe big oil, who knows? But my initial assumption is that anyone claiming to represent both sides of a supposed debate, and insisting they will be only using facts, almost certainly won’t be.

Taking this article as an example, it starts by making the statement “Measuring the Earth’s Global Average Temperature is a Scientific and Objective Process”, which already feels like a loaded question. This is followed by what I assume is meant to be “both sides” of the “debate”. But neither addresses the headline statement. The question really seems to be “how accurate are the global temperature sets?”.
At this point it soon becomes clear that it’s not exactly covering both sides equally. On the Pro side, a few paragraphs from a NASA press release, with no link to the paper they are referring to. On the Con side, a much more extensive set of quotes from what we are assured is a peer-reviewed paper.

But reading through the Con part, I get increasingly confused as to how much is from the McKitrick paper and how much is the personal opinion of the anonymous blog post writer. It starts, sure enough, with a blockquote, and there are a few other passages in quotes, but much of it is not quoted and doesn’t appear to come from the paper at all. This includes the claim that “Global warming is made artificially warmer by manufacturing climate data where there isn’t any” and the conclusion “This kind of statistically induced warming is not real.” Either these are quotes from the McKitrick paper, or someone’s interpretation of the claims, or original research.

Last edited 5 months ago by Bellman
MarkW
Reply to  Bellman
February 25, 2021 10:30 am

In a world in which disagreeing with a progressive can, and often does, get one fired, demanding that actual names be used is no different from demanding that only government-approved opinions be allowed.

Bellman
Reply to  MarkW
February 25, 2021 10:58 am

Then use pseudonyms. I’m not asking that everyone uses their real names – that would be hypocritical. What I’m saying is that it looks fishy if something claiming to offer objective, non-partisan analysis has zero indication of who’s behind the website. How do I know they really are being objective and are not a cover for some organization or individual with an agenda to prove?

Joseph Zorzin
Reply to  Bellman
February 25, 2021 1:07 pm

You make excellent points- however, if the logic is solid, it makes no difference who wrote it or who paid for it.

Tim Gorman
February 25, 2021 6:20 am

Just two rules for analyzing data in physical science are needed to refute current climate science estimates.

The last significant digit in any stated answer should usually be of the same order of magnitude (in the same decimal position) as the uncertainty (John R. Taylor, “An Introduction to Error Analysis,” 2nd ed.).

The uncertainty of independent, non-correlated data points increases as they are added together in root sum square.

Since the uncertainty recognized by the federal government for land stations is +/- 0.6C, any stated temperature value from such a station should show no more than an entry in the tenths digit. Any average calculated from multiple such measurements should show nothing past the tenths digit. Thus trying to identify differences in the hundredths digit to show warming is a violation of physical science tenets.

If you add 100 independent, non-correlated temperature values together to calculate an average and each value has an uncertainty of +/- 0.5C then the resulting uncertainty is [ +/- 0.5 x sqrt(100)] = +/- 0.5 x 10 = +/- 5C.

Thus your average becomes useless in trying to identify differences in the tenths or hundredths digit.

Far too many so-called climate scientists want to view these independent, non-correlated temperatures as a common population that can be lumped together in a data set, assumed to be part of a normal probability distribution which is then subject to statistical analysis – i.e. the uncertainty of the total goes down as the number of items in the data set goes up.

That is just a flat-out violation of the tenets of physical science.

Bellman
Reply to  Tim Gorman
February 25, 2021 6:41 am

If you add 100 independent, non-correlated temperature values together to calculate an average and each value has an uncertainty of +/- 0.5C then the resulting uncertainty is [ +/- 0.5 x sqrt(100)] = +/- 0.5 x 10 = +/- 5C.

Which means the uncertainty in the average is ±0.05°C.
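For readers following the arithmetic, here is the calculation behind both numbers, under the textbook assumption of independent, random uncertainties; whether that assumption holds for real station data is the substance of the rest of the disagreement:

```python
import math

n = 100          # number of independent readings
u_single = 0.5   # assumed uncertainty of each reading, deg C

# Root-sum-square uncertainty of the SUM of the 100 readings:
u_sum = u_single * math.sqrt(n)     # 0.5 * 10 = 5.0 C

# The average divides that sum by n, so its uncertainty scales down with it:
u_mean = u_sum / n                  # 5.0 / 100 = 0.05 C, i.e. u_single / sqrt(n)

print(f"uncertainty of the sum of {n} readings : +/- {u_sum:.2f} C")
print(f"uncertainty of their average           : +/- {u_mean:.2f} C")
```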

Carlo, Monte
Reply to  Bellman
February 25, 2021 7:45 am

Which means the uncertainty in the average is ±0.05°C.

BZZZZT. Try again.

Bellman
Reply to  Carlo, Monte
February 25, 2021 8:09 am

OK, 5 / 100 = 0.05. Do you want me to try yet again or do you want to explain why I’m wrong.

Bellman
Reply to  Bellman
February 25, 2021 8:26 am

Granted, I’m sure that’s not the exact formula, but the general principle is that as you average more data the uncertainties reduce, rather than increase.

Bellman
Reply to  Carlo, Monte
February 25, 2021 9:47 am

I’m more confused than ever now. According to Tim Gorman the problem is the data is Independent and non-correlated, but your link says the problem is the data is correlated. Of course, if the noise isn’t random you cannot simply divide by √N, but your link only says that at most you are left with the same amount of uncertainty as you would have for an individual station, not as is being claimed that it ALWAYS goes up.

Tim Gorman
Reply to  Bellman
February 25, 2021 10:21 am

Tell me how temp data in Thailand can be correlated with temp data in Niger?

I don’t agree with Climate Detective about correlation, even for stations only 1 mile apart. Clouds, rain, etc. can cause the temp data even for such stations to be vastly different. Depending on terrain, they may even be at quite different altitudes.

Temperature data is highly correlated to the angle of the sun in the sky, both north-south and east-west, and to seasonal variation. It is not nearly as correlated with distance. For some stations there may be a high correlation with close stations; for other stations the correlation may be almost zero. And this doesn’t even include the station calibration or the station’s uncertainty. The readings from a newly calibrated station may be different than the readings from a similar station located only feet away, let alone miles away.

Unless you can quantify *all* those various factors among all the stations in your “average” that impact correlation, it is far better to assume no correlation.

Think about it, how correlated are the temp measurements taken in the middle of a 3 acre asphalt parking lot compared to a rural station ten miles upwind in a pristine environment? Can you venture a correlation factor at all?

Climate Detective
Reply to  Tim Gorman
February 25, 2021 12:01 pm

Nobody is saying Thailand is correlated with Niger. They are over 3000 km apart so the correlation is zero. But Washington DC is strongly correlated with Baltimore.
The way to test correlation is to measure it. I have here.
https://climatescienceinvestigations.blogspot.com/2020/06/11-correlation-between-station.html
If you disagree then go and measure the correlations yourself, and remember, these are the correlations between anomalies not actual raw temperature readings. The correlations between raw readings are even higher due to the dominance of seasonal variations.

“Think about it, how correlated are the temp measurements taken in the middle of a 3 acre asphalt parking lot compared to a rural station ten miles upwind in a pristine environment?”
You clearly don’t understand correlations. They measure the synchronous movement of two datasets. They do not compare the means or the relative sizes. The rural and urban stations can have different means but if they are close their weather will be the same, so when the temperature of one goes up, so will the temperature of the other. Even if the amounts are different the correlation will be strong.
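A minimal numerical illustration of that point, using synthetic data with an assumed shared "weather" signal rather than real station records:

```python
import numpy as np

rng = np.random.default_rng(0)

# A year of daily "regional weather" (deg C) shared by two nearby sites.
days = np.arange(365)
weather = 12 + 10 * np.sin(2 * np.pi * (days - 100) / 365) + rng.normal(0, 2, 365)

rural = weather + rng.normal(0, 0.5, 365)                # rural site
urban = 0.8 * weather + 4.0 + rng.normal(0, 0.5, 365)    # warmer site, damped swings

print(f"mean rural {rural.mean():.1f} C, mean urban {urban.mean():.1f} C")
print(f"correlation coefficient: {np.corrcoef(rural, urban)[0, 1]:.3f}")
```

The correlation stays close to 1 even though the two series have different means and different amplitudes, because correlation responds only to co-movement.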

Tim Gorman
Reply to  Climate Detective
February 25, 2021 1:33 pm

Are Pikes Peak temperatures correlated with Denver temperatures? They are only 100 km apart. Is the temperature in Valley Falls, KS correlated with the temperature in Berryton, KS? They are on opposite sides of the Kansas River Valley and have significant differences in their weather and temperatures. Or how about San Diego and Ramona, CA? They are only 30 miles apart and have vastly different temperatures throughout the year.

Correlation in anomalies is meaningless. The anomalies on Pikes Peak are vastly different than those in Denver. Same with Valley Falls and Berryton. San Diego can be at 65F while Ramona is at 100F, far above the baseline temperature for that area of CA.

Any two geographical points that are far enough apart that sunrise and sunset occurs at different times of the day simply cannot have either their temperatures or anomalies moving in sync. It’s impossible.

What you are trying to do is ignore the time factor that the temperatures are truly correlated to. You can certainly overlay the temperature curve for one location on top of the temperature curve for another location by ignoring the time factor and make them look like they are in sync – but it’s a false correlation. They simply do not go up and down at the same time. Plot them on a time axis and you will find that one will start up while the other is still going down. Then you will find that one starts down before the other one.

It’s like saying two sine waves that are out-of-phase are correlated. The correlation factor is less than one. Depending on terrain, altitude, and humidity the correlation may be significantly less than one.

Climate Detective
Reply to  Tim Gorman
February 25, 2021 4:24 pm

Do you understand anomalies? They are calculated relative to each monthly mean of that particular temperature record, not some baseline for the region.
Your point about the time difference is irrelevant because the time scale for the correlation is months not hours or minutes. So there is no time difference.

Tim Gorman
Reply to  Climate Detective
February 25, 2021 6:39 pm

So what? Anomalies carry a greater uncertainty than either the individual absolute temperature or the calculated mean. The uncertainty of the absolute temp and the baseline temp add by root sum square.
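The propagation rule being invoked, as a minimal sketch with placeholder numbers (the baseline uncertainty here is an assumption; in practice it depends on how the baseline is constructed):

```python
import math

# Propagation sketch for anomaly = T - baseline, assuming the two
# uncertainties are independent.  The numbers are placeholders.
u_temperature = 0.5   # deg C, uncertainty of the individual reading
u_baseline = 0.2      # deg C, assumed uncertainty of the 30-year baseline mean

u_anomaly = math.sqrt(u_temperature ** 2 + u_baseline ** 2)
print(f"anomaly uncertainty: +/- {u_anomaly:.2f} C")   # about 0.54 C
```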

Months? Really? The monthly figures are not made up from daily temperatures? Where do you go to measure a “monthly” temperature?

Correlation has to go all the way down or it’s no good!

Climate Detective
Reply to  Tim Gorman
February 26, 2021 5:56 am

Even the daily mean is derived from Tmin and Tmax. These are time independent. They are not measured at the same time or relative time everywhere so the time lag between two stations is irrelevant.

Tim Gorman
Reply to  Climate Detective
February 26, 2021 9:07 am

OMG!

Let’s split your statement up.

“They are not measured at the same time or relative time everywhere”

True!

“so the time lag between two stations is irrelevant.”

Wrong!

You measure the correlation between two data sets by taking their dot product. The temperature profile at a station is close to being a sine wave. You determine the correlation between two sine waves by taking their dot product.

Asin(x) * Bsin(x+p), where p is the phase difference between the two. They are only perfectly correlated when p = 0. When p = 1.57 radians the correlation is zero.

You simply cannot ignore this basic physical fact.
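A quick numerical check of the phase claim, sampling two sine waves over one full period; the amplitudes are arbitrary (they cancel out of the correlation):

```python
import numpy as np

# Pearson correlation of two sampled sine waves as a function of their phase
# difference p.  Over a full period it works out to cos(p), regardless of the
# amplitudes A and B (they cancel out of the correlation).
x = np.linspace(0, 2 * np.pi, 10_000, endpoint=False)
A, B = 1.0, 3.0   # arbitrary amplitudes

for p in (0.0, 0.5, 1.0, 1.57, 2.5, np.pi):
    r = np.corrcoef(A * np.sin(x), B * np.sin(x + p))[0, 1]
    print(f"phase difference {p:4.2f} rad: correlation {r:+.3f}  (cos p = {np.cos(p):+.3f})")
```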

And this is only the *time* dependence of correlation. It doesn’t even begin to address the geography and terrain dependence of correlation.

But ignoring seems to be a major meme among climate scientists today – ignore uncertainty, ignore how to properly handle time series, ignore that individual temperature measurements are independent and not part of the probability distribution of a random variable, ignore the fact that the earth is not a billiard ball!

It’s willful ignorance from the top to the bottom!

Climate Detective
Reply to  Tim Gorman
February 26, 2021 12:35 pm

“The temperature profile at a station is close to being a sine wave.”
Over what time period?
24 hours? Yes.
365 days? Yes.
Time differences of a few minutes matter for the first set of 24 hour data but not for the second set which is recorded daily. The problem is all the actual historical data was recorded daily not by the second. So it is the 365 day periodicity that matters and the small time difference between neighbouring stations doesn’t. It has no measurable impact.

Tim Gorman
Reply to  Climate Detective
February 27, 2021 10:25 am

I’m sorry. I missed this.

“So it is the 365 day periodicity that matters and the small time difference between neighbouring stations doesn’t. It has no measurable impact.”

If it is a time series at the start then it is a time series all the way through. Daily temps are a time series. They get grouped into monthly temps as a time series. Those monthly temps get grouped into an annual temp which becomes part of an annual time series.

Reply to  Tim Gorman
February 27, 2021 2:58 pm

So what? The issue is, does the difference in longitude affect correlations? The answer is no because all daily max and min temperature measurements are made relative to local time (i.e the position of the Sun) not GMT.

Clyde Spencer
Reply to  Climate Detective
February 26, 2021 12:02 pm

More properly, what you are calling a mean (which is also the median) should be called the “mid-range” value resulting from Tmin and Tmax.

Reply to  Clyde Spencer
February 26, 2021 12:16 pm

Agreed. But these values are all we have for most station data before 2000.

Clyde Spencer
Reply to  Climate Detective
February 26, 2021 11:58 am

However, you can only potentially get a correlation of 1.00 if the values of the two variables have a 1:1 correspondence in value.

Tim Gorman
Reply to  Clyde Spencer
February 27, 2021 10:21 am

And for temperature, if they have a 1:1 correspondence in time.

Climate Detective
Reply to  Clyde Spencer
February 27, 2021 2:52 pm

No.
Two sine waves with the same period and phase difference will have a correlation coefficient of 1.00 even if their amplitudes are completely different, e.g. even if one amplitude is a factor of 1000 greater than the other.
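This particular claim is easy to check numerically; a two-line sketch with an arbitrary scale factor:

```python
import numpy as np

# Scaling one series by a large factor does not change the Pearson correlation:
# the covariance and the standard deviation scale by the same factor and cancel.
x = np.linspace(0, 2 * np.pi, 10_000, endpoint=False)
a = np.sin(x)
b = 1000.0 * np.sin(x)   # same period and phase, amplitude 1000x larger

print(np.corrcoef(a, b)[0, 1])   # 1.0 to floating-point precision
```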

Tim Gorman
Reply to  Climate Detective
February 27, 2021 5:16 pm

“same period and phase difference”

NO KIDDING? But time differences *ARE* phase differences. Two temperature sine waves with a time difference are OUT OF PHASE!

Clyde Spencer
Reply to  Climate Detective
February 27, 2021 7:57 pm

I don’t think that you are right because the formula for the correlation coefficient has xy co-variance in the numerator and the product of the x standard deviation and y standard deviation in the denominator, and those will differ by a factor of the square root of 1,000.

Basically, a correlation coefficient of 1.00 means that the independent variable is a perfect predictor of the dependent variable. That is, they are equal! They can’t be equal if one is different by a factor of 1,000.

Clyde Spencer
Reply to  Tim Gorman
February 26, 2021 11:56 am

Tim
There can be weak correlation for stations in the same hemisphere and at the same longitude, declining as the difference in longitude increases, until one achieves anti-correlation on opposite sides of the world. There can also be short-term auto-correlation in a time series because of thermal mass, although it is not a given. Trying to untangle that would try the patience of a saint.

Tim Gorman
Reply to  Clyde Spencer
February 27, 2021 10:19 am

Oh, I agree. but *weak* correlation means that has to be accounted for. And it doesn’t appear to be so in climate science.

Untangling things is supposed to be the reason for science. Ignoring factors because it is too hard to untangle them just leads to incorrect science.

Tim Gorman
Reply to  Bellman
February 25, 2021 10:09 am

This only applies when you have multiple measurements of the SAME THING using the SAME DEVICE. This allows you to build a probability distribution for the results which can be statistically analyzed.

It is a far different thing when you have single measurements of different things by different devices, i.e. independent populations of size one.

I would also point out that the uncertainty of the mean is what is usually calculated statistically. That is based on the assumption that the mean is the true value of a Gaussian distribution. But if the distribution deviates from Gaussian in any way, e.g. the mean and median are not the same, then that assumption may not apply.

Clyde Spencer
Reply to  Bellman
February 26, 2021 11:46 am

Bellman

You mistakenly claimed, “… as you average more data the uncertainties reduce, rather than increase.”

The reduction of the standard error of the mean only applies to random errors. To be sure that one is only dealing with random errors, the property being measured must be time-invariant, it must be measured with the same instrument, and if recorded by a human, by the same observer. Different observers are liable to introduce systematic errors. Therefore, while the precision may be improved by many observers, they will introduce systematic errors that reduce the accuracy, which isn’t the case with a single observer.

If you try to average measurements that vary with time, you are averaging the trend as well as the random error. If the trend is increasing strongly, the apparent error will increase over time. If you use different instruments with different calibrations, you can actually increase the error because you are adding systematic changes and not just random changes. There are many caveats that have to be observed to be able to legitimately claim that the precision is improved with multiple measurements.

Now why would you freely admit that you don’t think you provided the correct formula, and yet be so adamant that you understand the “general principle?” Think about it!
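A quick simulation of the distinction being drawn here, assuming a ±0.5°C-scale random error component and, optionally, a shared systematic bias; all values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)
n_stations = 100
n_trials = 20_000
sigma = 0.5    # standard deviation of the random error, deg C
bias = 0.3     # a systematic error shared by every reading, deg C

# Case 1: purely random, independent errors at each of the 100 stations.
random_errors = rng.normal(0.0, sigma, size=(n_trials, n_stations))
mean_error_random = random_errors.mean(axis=1)

# Case 2: the same random errors plus one shared systematic offset.
mean_error_biased = (random_errors + bias).mean(axis=1)

print(f"spread of the mean, random errors only: {mean_error_random.std():.3f} C "
      f"(sigma/sqrt(N) = {sigma / np.sqrt(n_stations):.3f})")
print(f"offset of the mean with a shared bias : {mean_error_biased.mean():+.3f} C "
      f"(the bias does not average away)")
```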

Last edited 5 months ago by Clyde Spencer
Bellman
Reply to  Clyde Spencer
February 26, 2021 12:44 pm

The reduction of the standard error of the mean only applies to random errors.

I’m assuming any errors are random, but even if they are not and all go in the same direction the uncertainty of the average cannot be greater than the individual uncertainties.

To be sure that one is only dealing with random errors, the property being measured must be time-invariant, it must be measured with the same instrument, and if recorded by a human, by the same observer.

That doesn’t follow at all. If you use the same instrument or observer you are more likely to get a systematic error. What if your instrument is badly calibrated or your observer always rounds up when they should be rounding down.

Different observers are liable to introduce systematic errors.

Again, no idea why you’d think that. A single person is more likely to introduce systematic errors than multiple observers.

It you try to average measurements that vary with time, you are averaging the trend as well as the error resulting from random error.

We haven’t got on to time series, at present I’m just talking about the uncertainties of averaging 100 independent thermometers.

If the trend is increasing strongly, the apparent error will increase over time.

Doesn’t follow, unless the uncertainty is a percentage, which we are assuming isn’t the case.

There are many caveats that have to be observed to be able to legitimately claim that the precision is improved with multiple measurement.

Yes, it’s always possible that all errors go the same way, but what I’m trying to establish is why some here think the uncertainties inevitably increase with multiple independent measurements.

Now why would you freely admit that you don’t think you provided the correct formula, and yet be so adamant that you understand the “general principle?”

Because I didn’t want to get too hung up on the exact formula, because I tend to doubt myself, and because I know how many factors might impact on the formula. And because I wanted to be open to the idea that someone might explain something I hadn’t previously known. General principles are more likely to be correct than abstract formulae, but I like to give people the chance to demonstrate why I might be wrong – remembering the adage about you being the easiest person to fool.

Clyde Spencer
Reply to  Bellman
February 26, 2021 4:15 pm

Bellman,

That doesn’t follow at all. If you use the same instrument or observer you are more likely to get a systematic error. What if your instrument is badly calibrated or your observer always rounds up when they should be rounding down.

Yes, an observer will almost always create a systematic error from things such as parallax, and a different mental model for rounding off. That will affect the accuracy, but should provide for the highest precision because they will be consistent. However, with multiple observers, the various systematic errors characteristic of the observer may result in some cancellation of accuracy errors, but will result in less precision. It becomes a trade-off. However, without some standard to compare to, it is problematic that the accuracy can be assessed properly. The trick is to be sure that the instrument is calibrated carefully and correctly. Offhand, I’d expect that instrument calibration is going to be done more carefully in advanced countries than in Third-World countries.

Bellman
Reply to  Clyde Spencer
February 26, 2021 6:36 pm

OK, I assume it was just an error in your original comment, when you said:

Therefore, while the precision may be improved by many observers, they will introduce systematic errors that reduce the accuracy, which isn’t the case with a single observer.

Clyde Spencer
Reply to  Bellman
February 27, 2021 8:26 pm

I will stand by my recent statement,

That will affect the accuracy, but should provide for the highest precision because they will be consistent. However, with multiple observers, the various systematic errors characteristic of the observer may result in some cancellation of accuracy errors, but will result in less precision.

Tim Gorman
Reply to  Bellman
February 27, 2021 10:06 am

“I’m assuming any errors are random, but even if they are not and all go in the same direction the uncertainty of the average cannot be greater than the individual uncertainties.”

I gave you the answer to this before. Did you not bother to read it? You measure three sticks using three different devices, each with a different uncertainty specification. Say you come up with 8′ +/- u1, 16′ +/- u2, and 24′ +/- u3 where u1<u2<u3. Now you glue them end to end.

What is the overall uncertainty of the overall length with them glued together? u1? u2? u3? u1+u2+u3? root sum square of u1/u2/u3? (u1+u2+u3)/3?

I am assuming you would say u3, the largest. Right?

“That doesn’t follow at all. If you use the same instrument or observer you are more likely to get a systematic error. What if your instrument is badly calibrated or your observer always rounds up when they should be rounding down.”

Every instrument has systematic error. It can’t be avoided. There is no such thing as perfect accuracy with infinite precision. If someone is doing it wrong then you correct the process and redo the measurements. But you see that only works for time-invariant measurands. How do you go back and remeasure temperature? With temperature you get the uncertainty you get at that point in time, and you must do the best you can to identify what that uncertainty is.

“Again, no idea why you’d think that. A single person is more likely to introduce systematic errors than multiple observers.”

You are a mathematician, right? Did you ever take a chemistry lab or physics lab or any kind of an engineering lab? If you had you would understand this. Everyone is different and everyone reads and interprets measurements differently. Some have better eyesight or reflexes, some have worse. Some will scootch to the left to read a figure under a meter needle, some to the right – and each gets a slightly different reading, even if a parallax mirror is provided. Some will tighten down a connection screw more tightly than others, changing the resistance between measurand and the measuring device. This is *especially* true when using micrometers, where “how tight to screw it down” is a purely subjective decision.

If the same person does the same process over and over all these subjective things will be at least consistent. Some may implement the process better than others but if one person does it then it will still be consistent. It will be far easier for someone else to replicate the measurement consistently even if the difference in measurement is consistently different for each person.

“We haven’t got on to time series, at present I’m just talking about the uncertainties of averaging 100 independent thermometers.”

Every temperature is part of a time series. Why would you think otherwise. What do you think time zones are for? If those 100 independent thermometers are located apart then they are measuring different things at different times! Not only that but each individual station is building a time series when it takes multiple measurements, e.g. Tmax and Tmin occur at different times.

“Doesn’t follow, unless the uncertainty is a percentage, which we are assuming isn’t the case.”

Again, you don’t live in the material world much do you? What happens when you are using a sensor to measure the diameter of a wire being pulled through a die? What happens to that sensor as miles of wire are pulled through it? The apparent error will increase (the wire will appear to narrow) over time as the sensor wears away.

“Yes, it’s always possible that all errors go the same way, but what I’m trying to establish is why some here think the uncertainties inevitably increase with multiple independent measurements.”

For one thing, you have your definition wrong. It’s multiple independent measurements of DIFFERENT THINGS!

After all this time and debate you still haven’t even gotten the definitions correct. You are stubbornly clinging to the idea that all measurements generate random errors that create a probability distribution around a mean – whether it is measurements of the same thing or different things.

“General principles are more likely to be correct than abstract formulae, but I like to give people the chance to demonstrate why I might be wrong – remembering the adage about you being the easiest person to fool.”

General principles are most likely explained correctly by formulas. Like Gauss’ Law. Or Newton’s Law of Gravity. Or the General Equation for the Propagation of Uncertainty.

Bellman
Reply to  Tim Gorman
February 27, 2021 10:53 am

I gave you the answer to this before. Did you not bother to read it?

I keep reading your answers and they keep avoiding the issue. We are discussing the mean not the sum, hence your example of measuring three different sticks is only relevant if we want to know the combined length of the three sticks, not if you want to know the uncertainty in the mean length of the stick.

Every temperature is part of a time series. Why would you think otherwise.

Because it’s possible to take a single reading? Of course, we are talking about time series to see how temperatures are changing, but at any point we are only interested in the average on a specific day, or time. My point was that the issue we are discussing is how uncertain a specific single average is, and I don’t want to add needless complications. That’s why it’s better to see how this argument works with heights or other static data.

The apparent error will increase (the wire will appear to narrow) over time as the sensor wears away.

Yes, but the point I was questioning was the claim that if a series is increasing rapidly the errors would increase.

You are stubbornly clinging to the idea that all measurements generate random errors that create a probability distribution around a mean – whether it is measurements of the same thing or different things.

Firstly, I’m not sure how it’s possible not to have a probability distribution, regardless of what you are measuring. Secondly, I’m not claiming anything about what the distribution is. As a thought experiment, I can assume that all errors are positive and equal to the full uncertainty. The average cannot be bigger than the maximum uncertainty, and this doesn’t matter if you are measuring the same thing with the same instrument, or completely different things with different instruments.

Tim Gorman
Reply to  Bellman
February 27, 2021 1:22 pm

“We are discussing the mean not the sum,”

How do you calculate the mean if you don’t know the length? And if the length has an uncertainty then the mean will also! And remember, like temperatures, you only get one shot at measuring.

“Because it’s possible to take a single reading? ”

And if you take a reading at t0, t1, t2, t3 ….. that doesn’t define a time series?

“but at any point we are only interested in the average on a specific day, or time”

If each measurement is a single reading at a point in time then how do you take its average? The average would be the reading!

You can certainly average all the values over time by integrating the entire temperature profile – but that is not the mid-range of Tmax and Tmin. For the positive part of the curve it is .637Tmax and for the bottom part it is .637Tmin. And I don’t think anyone in climate science is doing that integration.

“My point was that the issue we are discussing is how uncertain a specific single average,”

The uncertainty of a single average with independent components is the combined uncertainty of its components as root sum square.

“Yes, but the point I was questioning was the claim that if a series is increasing rapidly the errors would increase.”

The series is the readings of the sensor. The more time moves on, the larger the errors it produces get as the sensor wears away. How rapidly the errors grow depends on how fast the sensor wears away. The problem with time series like temperatures is how fast the variances and uncertainties *are* growing over time. Mapping the averages of time-separated readings won’t tell you that. The averages hide the variances. That’s why linear regressions on time series, especially those concocted from averages, are so misleading most of the time.

“Firstly, I’m not sure how it’s possible not to have a probability distribution, regardless of what you are measuring.”

And this is why you don’t understand uncertainty. How did you get a physics PhD without understanding this?

“As a thought experiment, I can assume that all errors are positive”

And you continue to show your total lack of understanding concerning error and uncertainty. They are not the same!

“The average cannot be bigger than the maximum uncertainty, and this doesn’t matter if you are measuring the same thing with the same instrument, or completely different things with different instruments.”

The average of *what*?

One more time: If I take a thousand readings of a crankshaft journal with a micrometer then some readings will occur more often than others. That means there is a probability distribution associated with those readings. The one that happens most often, if the probability distribution is Gaussian, is most likely the true value. In this case the uncertainties *can* be minimized by statistics, not eliminated mind you, but minimized. No matter how many measurements you take and how carefully you calculate the standard deviation of the mean, that standard deviation will never go to zero.

Now, if one thousand of us take a single, independent reading of one thousand crankshaft journals using different devices and someone asks me what the size of a crankshaft journal is what do I tell them? I don’t even know if all the crankshafts were from the same type of engine. There is no guarantee that the average of all those readings is the true value or even if there is a true value.

The best I can do is to take an average, along with the uncertainty estimate for each measurement, and tell them that the average size of the journals that were measured is such and such. And the uncertainty in that average is the root sum square of the individual uncertainties. (The more journals I measure, the greater the uncertainty gets, because the measurements are not of the same thing.)

Now, for an iterative CGM. We assume that the output of the CGM has some kind of relationship with its inputs. So if we do multiple runs of that CGM it should provide a set of outputs that can be averaged to get a more certain output.

The problem is that if the CGM has an uncertainty in iteration 0 then that uncertainty gets put into iteration 1 and it gets larger. So after 100 runs the uncertainty of the final output can actually be bigger than the average of the outputs. And that is the problem with the CGM’s today.

Bellman
Reply to  Tim Gorman
February 27, 2021 2:05 pm

How do you calculate the mean if you don’t know the length? And if the length has an uncertainty then the mean will also!

Yes, and I say the uncertainty of the mean is equal to the uncertainty of the length divided by the sample size.

And if you take a reading at t0, t1, t2, t3 ….. that doesn’t define a time series?

This is getting increasingly weird. Yes of course if you make a time series you have a time series.

If each measurement is a single reading at a point in time then how do you take its average? The average would be the reading!

I’m really not sure if you are not getting this or just pretending at this point. The average we are talking about is the average of the 100 thermometers.

You can certainly average all the values over time by integrating the entire temperature profile – but that is not the mid-range of Tmax and Tmin.

Agreed. But that’s not what I’m doing in this thought experiment. I have 100 thermometers giving me a single reading, each with an uncertainty of ±0.5°C. I’m adding up all the readings and dividing by 100 and I expect the uncertainty of the average to be less than ±0.5°C, whilst you think it will be ±5°C. I don’t care if each thermometer is recording the temperature at the exact same time, or if they are giving me the max or min or mid point value. I’m just interested in what you think happens when you average anything.

The uncertainty of a single average with independent components is the combined uncertainty of its components as root sum square.

Except your own equations suggest you are wrong.

And this is why you don’t understand uncertainty. How did you get a physics PhD without understanding this?

I didn’t. I think you’re confusing me with Climate Detective. But could you explain how it’s possible to measure anything and not have a probability distribution? You might not know what the distribution is, but it has to exist.

And you continue to show your total lack of understanding concerning error and uncertainty. They are not the same!

I take uncertainty to mean the bounds of probable errors. I’m not saying errors are the same as uncertainties, but it’s possible to imagine the worst case where every error equals its bounds.

The average of *what*?

The average of the measurements.

No matter how many measurements you take and how carefully you calculate the standard deviation of the mean, that standard deviation will never go to zero.

This is a straw man argument. I’ve never said the uncertainties will go to zero.

Now, if one thousand of us take a single, independent reading of one thousand crankshaft journals using different devices and someone asks me what the size of a crankshaft journal is what do I tell them? I don’t even know if all the crankshafts were from the same type of engine. There is no guarantee that the average of all those readings is the true value or even if there is a true value.

Which I’m sure is true of crankshaft journals. But nobody is saying the average of 100 thermometer readings tell us what any individual thermometer reads. The average is the goal. It’s what we are trying to estimate by averaging.

And the uncertainty in that average is the root sum square of sum of the uncertainties. (the more journals I measure the greater the uncertainty gets because the measurements are not of the same thing).

This still seems absurd. What if you were measuring millions of the same type of, but different, crankshafts, each with an uncertainty of 1 mm? You cannot tell me the average to within a meter, but if you only averaged 100, you could tell me to within 10 mm?

It would help with the rest of the comment if I knew what a CGM was. Maybe you mean GCM, but if so it has nothing to do with this discussion.

Tim Gorman
Reply to  Bellman
February 27, 2021 4:12 pm

“Yes, and I say the uncertainty of the mean is equal to the uncertainty of the length divided by the sample size.”

You only have one sample. The measurements you took. You can’t go back and remeasure a temperature nor can you make multiple measurements of it. You’ve got one chance at it and no more.

So what is your sample size? (hint: 1)

“This is getting increasingly weird. Yes of course if you make a time series you have a time series.”

It isn’t weird. Taking temperatures *has* to be a time series. You can’t stop time, so taking sequential measurements means you are creating a time series. What’s weird about that?

“I’m really not sure if you are not getting this or just pretending at this point. The average we are talking about is the average of the 100 thermometers.”

Each one of those 100 thermometers are measuring different things at different times. Each one is creating its own time series with sequential measurements of different things. Again, you can’t stop time and measure the same temperature multiple times. Nor can two different stations measure the same thing, you can’t transport the atmosphere being measured at one to another one.

The correct average would be one that uses the temperature at each 100 stations AT THE SAME PRECISE TIME. If they are offset in time then when one is at Tmax the other one won’t be. So the true average will be something less than Tmax. It will be Tmax_1 + (Tmax_2 – x) where x>0. If you use Tmax at both locations, disregarding the time differential then the average temp indicates a higher average than it should if it is trying to represent the condition of the atmosphere at a specific time.

“I expect the uncertainty of the average to be less than ±0.5°C”

You expect the uncertainty to be less than +/- 0.5C because you look at the temperatures as a probability distribution, typically Gaussian, where you can use the standard deviation of the mean to get the mean more accurately – which implies that the mean is the true value and the more accurately you can calculate the mean the more you decrease the uncertainty of what you are measuring. Going along with that is that the measurand must be the same for all measurements and therefore you are building a probability distribution around that measurand.

If the measurand is *not* the same thing then the temperatures do *not* describe a probability distribution. If they don’t describe a probability distribution then the standard deviation of the mean is not applicable. When you combine all of the elements garnered from measuring different things you must use a different method to combine the uncertainties. That method is the root sum square of the uncertainties. And that does not include dividing by the number of measurements – that is used with a probability distribution.

Tim Gorman
Reply to  Bellman
February 27, 2021 4:41 pm

“Except your own equations suggest you are wrong.”

No, they do not. The documentation on the internet is legion about how to do this. Taylor’s “An Introduction to Error Analysis” goes into it in detail. So does Bevington’s “Data Reduction and Error Analysis”. When you say *I* am doing things wrong you are saying *they* are doing things wrong. My equations are right out of their books.

“I didn’t. I think you’re confusing me with Climate Detective. But could you explain how it’s possible to measure anything and not have a probability distribution? You might not know what the distribution is, but it has to exist.”

How can different things generate a probability distribution? We aren’t talking about multiple measurements of different things, we are talking about single, independent measurements of different things. How does that generate a probability distribution? The value at one station is not dependent on the value at another station. You aren’t studying a random variable.

“I take uncertainty to mean the bounds of probable errors.”

You are still stuck on believing that uncertainty is error. It isn’t. An uncertainty interval has no probability. It doesn’t give you any indication of where the true value actually is, only where it may be. A probability distribution *will* give you an indication of where the true value might be. They are two different things.

“The average of the measurements.”

What does an average of the measurements of different things tell you? When you are looking for an average you are looking for a metric that will tell you something. You can certainly calculate an average of the measurements of different things but that doesn’t mean that average will tell you anything. Think about it. One temperature is 72F and the other is 32F. The average is 52F. So exactly what does that tell you about anything? It doesn’t tell you anything about the 72F temp or the 32F temp. It doesn’t even tell you anything about the climate in between!

“This is a straw man argument. I’ve never said the uncertainties will go to zero.”

You keep talking about the standard deviation of the mean and that is what that process is meant to do. Of course it would require an infinite number of measurements so it’s improbable but that doesn’t have anything to do with the intent.


“Which I’m sure is true of crankshaft journals. But nobody is saying the average of 100 thermometer readings tell us what any individual thermometer reads. The average is the goal. It’s what we are trying to estimate by averaging.”

Again, what does the average tell you? Does it tell you anything about the temperature at each individual station? Does it tell you anything about the temperature at *any* station. You are putting some kind of faith in the average because you view the temps as a probability distribution. But the uncertainty of measurements taken from different things at different times doesn’t create a probability distribution. The average doesn’t tell you that it is the most likely temperature you will encounter.

“This still seems absurd. What if you were measuring millions of the *same type*” (emphasis mine).

The operative word is *SAME*. But temperatures at different stations are *not* measurements of the SAME temperature, not even the same type of temperature, especially if you are doing a global average. In the case of the same type you can build a probability distribution — IF that same type is made in the same production run. Change the cast, the milling machine, or the milling head and you won’t be measuring the same thing.

Sorry about the acronym mixup, senior moment. Of course I am talking about GCMs. But the GCMs *are* built to calculate a global average. And they are supposed to be validated against measurements of the global average. But the global average is so uncertain that it’s wasted effort. Throw a dart at a board loaded with temperatures and you would probably get closer than the global average being measured today.

Bellman
Reply to  Tim Gorman
February 27, 2021 6:30 pm

“When you say *I* am doing things wrong you are saying *they* are doing things wrong.”

No I’m saying I suspect you are misinterpreting what they say.

“We aren’t talking about multiple measurements of different things, we are talking about single, independent measurements of different things.”

I think we are talking about different things here. When I say a measurement has a probability distribution, I mean there exists an implied distribution, from which the measurement will be taken. You are talking about how you can use multiple measurements to estimate the distribution.

Obviously if you only have one value and no other information it will be impossible to know what the PDF is but it still exists.

“An uncertainty interval has no probability. It doesn’t give you any indication of where the true value actually is, only where it may be. A probability distribution *will* give you an indication of where the true value might be. They are two different things.”

And it really doesn’t matter for this argument what the PDF of the uncertainty is. It’s sufficient to know it’s a bound on the error. If it isn’t a reasonable bound and you don’t know the PDF, then I’m not sure how useful the concept of an uncertainty measure is. Why say the uncertainty is ±0.5°C if there’s an unknown chance that the true value could be way outside the uncertainty bound?

“But the global average is so uncertain that it’s wasted effort. Throw a dart at a board loaded with temperatures and you would probably get closer than the global average being measured today.”

Yet for all your formulae saying the global average is meaningless, and the uncertainty could be tens of degrees out, all the actual global estimates show very consistent values. If it was actually nothing more than a random set of averages, why do monthly temperatures never change by 5°C in a single month?

Clyde Spencer
Reply to  Tim Gorman
February 27, 2021 8:14 pm

Tim
I have a copy of Taylor’s An Introduction to Error Analysis, which I have been re-reading.

Jim Gorman
Reply to  Clyde Spencer
February 28, 2021 8:47 am

That is an excellent beginning text on uncertainty. It is important to keep in mind the difference between error and uncertainty. The GUM is certainly NOT light reading but it is also enlightening.

I wish I knew where the mistaken assumption originated that the error of the mean defines the precision of the mean (average). It really defines an interval in which the mean may lie, and that is all. Somehow that interval has come to be treated as the precision of the value itself.

Bellman
Reply to  Tim Gorman
February 28, 2021 4:48 am

“You are still stuck on believing that uncertainty is error. It isn’t.”

From Bevington and Robinson’s Data Reduction and Error Analysis for the Physical Sciences.

“Our interest is in uncertainties introduced by random fluctuations in our measurements, and systematic errors that limit the precision and accuracy of our results in more or less well defined ways. Generally, we refer to the uncertainties as the errors in our results, and the procedure for estimating them as error analysis.”

Bellman
Reply to  Bellman
February 28, 2021 5:45 am

Also, from Taylor’s Introduction to Error Analysis.

“Most textbooks introduce additional definitions of error, and these are discussed later. For now, error is used exclusively in the sense of uncertainty, and the two words are used interchangeably.”

Tim Gorman
Reply to  Bellman
February 25, 2021 8:59 am

You are wrong because when you have INDEPENDENT, NON-CORRELATED data you do *NOT* divide by N to calculate uncertainty.

When combining independent, non-correlated data, each with an uncertainty interval, the uncertainty *ALWAYS* goes up, never down.

You only divide by N when you have dependent, correlated data that forms a probability distribution. Independent, non-correlated data does not represent a probability distribution.

Bellman
Reply to  Tim Gorman
February 25, 2021 9:24 am

I’m always prepared to accept I might be wrong, so could you point me to an explanation of this? Because at first glance it seems obviously wrong. Why does being NON-CORRELATED mean uncertainty will increase as sample size increases? Are you really saying that if I take 100 independent temperature readings, each with an independent error of ±0.5°C, the average could be out by 5°C?

Maybe I’m just misunderstanding what you are trying to say, but even if every measurement was out by the same amount in the same direction, the average would only be out by the same amount.

Tim Gorman
Reply to  Bellman
February 25, 2021 10:02 am

That is EXACTLY what I am saying.

First, uncertainty is not error, especially not random error that can cancel. Uncertainty has no probability distribution.

If you have one ruler with an uncertainty to measure one board and a second ruler with a different uncertainty to measure a second board, which uncertainty would you use to describe the uncertainty of their sum? Why would you expect the uncertainty to go *down* when you add the two measurement results together? In fact, the total uncertainty will be some kind of sum of the two separate uncertainties.

Because of the recognition that some of the uncertainties in a large number of independent measurements of different things may cancel, the uncertainties are usually added as root sum square instead of directly added. If you have a situation where you believe this condition to not be true then direct addition of the uncertainties is certainly legitimate.

If you must think of it in statistical terms, then ask yourself why variances add as root sum square. Just remember that uncertainties don’t have a probability distribution, so they don’t have a variance or standard deviation. This is merely an analogy.

Say you have a function f = a + b. Let k be the partial derivative of f with respect to a, let m be the partial derivative of f with respect to b, and let u_a be the uncertainty for a and u_b the uncertainty for b.

The standard formula for propagation of error is

u_total^2 = (k^2)(u_a^2) + (m^2)(u_b^2) + 2(k)(m)(u_a)(u_b)(r_ab)

where r_ab is the correlation between a and b.

If a and b are truly independent then r_ab = 0; there is no correlation between them. And the partial derivatives of f with respect to a and b are both one.

Thus the formula is reduced to u_total = sqrt(u_a^2 + u_b^2).

And you can extend that out to however many measurements you have. Thus, if u_a = u_b = … = u_n you wind up with:

u_total = (+/-u)(sqrt(n)), and for n = 100, u_total = +/-10u.

If u is +/-0.5C then for 100 stations your final uncertainty is +/-5C.

QED.
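A quick numeric check of the arithmetic above, assuming Python and the same illustrative values (u = +/-0.5C, n = 100 stations):

    import math

    u = 0.5
    n = 100
    # root sum square of n equal, independent uncertainties
    u_total = math.sqrt(sum(u**2 for _ in range(n)))
    print(u_total)   # 5.0, i.e. u * sqrt(n)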

Bellman
Reply to  Tim Gorman
February 25, 2021 10:32 am

But everything you are saying there is about summing the data. I’m talking about what happens when you take the average.

Carlo, Monte
Reply to  Bellman
February 25, 2021 11:32 am

Umm, the average is the sum, divided by a constant.

Bellman
Reply to  Carlo, Monte
February 25, 2021 12:53 pm

Yes, and the difference between me and Tim Gorman (and, I assume, you) is that I think you also have to divide the uncertainty by the same constant. Gorman seems to think that the uncertainty of the total sum will also be the uncertainty of the average. This makes no sense to me and I think it is easily refuted.

In Gorman’s example, 100 non-correlated thermometers, each with an uncertainty of ±0.5°C, are added together to give a combined uncertainty of ±5°C, which is correct; but then he wants to divide the result by 100 to get an average whilst saying the uncertainty of the average is still ±5°C. The absurdity of this can be seen if, instead of 100 thermometers, you had 1,000,000. This would result in an uncertainty on the average of ±500°C. It’s claiming that it’s possible for the average of the million thermometers to be far larger than any individual thermometer reading.
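A sketch of the division being argued for in this reply, assuming Python and equal +/-0.5C uncertainties: under this view the uncertainty of the sum grows as sqrt(N) while the uncertainty of the average shrinks as 1/sqrt(N).

    import math

    u = 0.5
    for n in (100, 1_000_000):
        u_sum = u * math.sqrt(n)   # combined uncertainty of the sum (root sum square)
        u_avg = u_sum / n          # divide the sum's uncertainty by N, as argued here
        print(n, u_sum, u_avg)     # 100 -> 5.0 and 0.05; 1,000,000 -> 500.0 and 0.0005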

Tim Gorman
Reply to  Bellman
February 25, 2021 1:39 pm

Wow! You just hit the jackpot for why the “global average temperature” is so meaningless.

As you add more and more stations the uncertainty keeps growing. I gave you the standard equation for the propagation of uncertainty. Unless you can show mathematically how that equation is wrong, you are just whining.

Remember, the uncertainty only gives you the interval in which the true value could lie. It doesn’t tell you the true value. If the interval becomes wider than the result you are looking for then you need to redesign your experimental process, there is something wrong with it.

You may not like that but it *is* the truth. It’s the way physical science works.

Tim Gorman
Reply to  Bellman
February 25, 2021 5:52 pm

Can you even quote the standard equation for propagation of uncertainty?

If you have q = Ab where A is a constant, e.g. your number of stations, then the uncertainty terms include the partial derivative of each member on the right side with respect to q.

The partial derivative of a constant is ZERO. So the contribution of the constant to the uncertainty is ZERO.

Why is this so hard to understand?
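The two derivative facts being invoked in this exchange can be checked symbolically; a minimal sketch assuming Python with the sympy package (neither fact is in dispute here, only how each feeds into the propagation formula):

    import sympy as sp

    A, b = sp.symbols('A b')
    q = A * b

    print(sp.diff(q, b))   # A: the sensitivity of q to the variable b is the constant A
    print(sp.diff(A, b))   # 0: the derivative of the constant A with respect to b is zero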

Climate Detective
Reply to  Tim Gorman
February 25, 2021 8:22 pm

“Why is this so hard to understand?”
Because it is wrong!

Climate Detective
Reply to  Climate Detective
February 25, 2021 8:29 pm

q = Ab

So ∆q = A.∆b + b.∆A
∆A = 0
So ∆q = A.∆b
If A = N (number of stations) and b is the mean of q, then the uncertainty in the mean is N times less than the uncertainty in the sum of the readings q.

So the constant A scales the uncertainty.

QED!

Tim Gorman
Reply to  Climate Detective
February 26, 2021 7:52 am

Go look up how to propagate uncertainty of a product.

If f = Ab, then you can either use fractional uncertainty or the standard equation for propagation of error.

Let’s do fractional uncertainty.

(∆f/f)^2 = (∆b/b)^2 + (∆A/A)^2

Since (as you admit) ∆A = 0
the ∆A element disappears leaving:

∆f/f = (∆b/b)

You will get the same result using the standard equation for uncertainty since the partial derivative of f with respect to A is zero, the derivative of a constant is zero.

b is a variable, not a constant. Assuming it is a mean leads to b being a constant. The ∆ of a constant is zero. This would lead to the result that f has zero uncertainty – an obvious fallacy.

You *need* to keep your definitions constant throughout your math.
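A numeric check of the fractional-uncertainty identity above, assuming Python and purely illustrative values (A = 100 stations, b = 20 with an assumed uncertainty of 0.5). Note that an unchanged fractional uncertainty still means the absolute uncertainty of f is A times that of b, which is the point the reply below picks up.

    A = 100      # assumed constant (number of stations in the example)
    b = 20.0     # assumed value of the variable b
    db = 0.5     # assumed uncertainty in b

    f = A * b
    df_over_f = db / b    # fractional uncertainty of f equals that of b (since dA = 0)
    df = f * df_over_f    # absolute uncertainty of f, numerically equal to A * db
    print(df_over_f, df)  # 0.025 and 50.0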

Climate Detective
Reply to  Tim Gorman
February 26, 2021 10:54 am

“b is a variable, not a constant.”
Yes!
“Assuming it is a mean leads to b being a constant.”
NO!!! It is a set of means, one for each month. Each one is the sum of N different measurements from N different stations. It is still a variable (or vector); one that is derived from adding other variables (or vectors).

Let’s set A=N and b = m.
f = Nm
m is a variable for the mean temperature each month.
f is a variable for the total of the temperature readings from N different sites each month.
N is a constant.

So the uncertainty in f is
∆f = N.∆m + m.∆N
∆N = 0 because it is a constant.
So ∆f = N.∆m
This means that the uncertainty (and standard deviation) in the set of mean values in a temperature trend is N times less than the uncertainty ∆f in the sum of the individual records f used to calculate the mean value each month. This is because the mean values are all N times less than the f values. In other words the ratios of error to variable are equal for f and m.
∆f/f = (∆m/m)
Well at least you got that bit right!
This means that the uncertainty in each mean value is N times less than the uncertainty in the sum of the values used to calculate the mean, just as the mean m is N times less than f.
But the uncertainty in the sum is √N times more than the uncertainty in a typical data value in the sum because the variances of these datasets add, not the standard deviations. So the uncertainty in the mean will be √N times LESS than the uncertainty in any one of the variables used to calculate the mean (assuming those individual uncertainties are all approximately of the same magnitude) – unless of course the original datasets used in the mean are correlated.
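One way to probe this disagreement empirically is a small simulation; a minimal sketch assuming Python with NumPy and, crucially, assuming the measurement errors are independent random draws within +/-0.5C (that independence assumption is itself part of what is being disputed):

    import numpy as np

    rng = np.random.default_rng(0)
    n_stations = 100
    n_trials = 10_000

    true_temps = rng.uniform(-30, 40, size=n_stations)            # arbitrary "true" station values
    errors = rng.uniform(-0.5, 0.5, size=(n_trials, n_stations))  # independent per-station errors
    measured_means = (true_temps + errors).mean(axis=1)

    spread = measured_means - true_temps.mean()
    print(spread.std())   # roughly 0.5/sqrt(3)/sqrt(100), about 0.03, under these assumptions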

Tim Gorman
Reply to  Climate Detective
February 27, 2021 8:49 am

“NO!!! It is a set of means, one for each month. Each one is the sum of N different measurements from N different stations.”

You have (N1 +/- u1, N2 +/- u2, …, N12 +/- u12).

When you sum N1 through N12 you add the uncertainties by root sum square. N is *not* a probability distribution. It is 12 independent, uncorrelated data sets, each of population 1. N1 is not dependent on and does not drive N2, and so on through N12, so they *are* independent. Single *values* have no correlation. N1 = {72} is not correlated with N12 = {30}. The covariance between two values is zero – no correlation.

Let me repeat, N1->N12 do *not* represent a probability distribution. They are twelve separate, individual, independent data values taken from separate, individual, independent populations of temperatures. Each has its own uncertainty interval. They are each separate in time. Each population has its own variance and represents an independent time series.

Your equations are wrong.

Exactly what is the mean of month 12 (N) multiplied by 12 (the month) supposed to represent anyway?

When you sum the means to find an average mean you get
f = N1 + N2 +…. +N12

What you have are 12 uncertainties associated with those means, u1 to u12. When you combine those by root sum square:

∆f = sqrt(u1^2 + u2^2 + … + u12^2)
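Carrying that root sum square out numerically, assuming Python and, purely for illustration, equal monthly uncertainties u1 = … = u12 = 0.5:

    import math

    u_monthly = [0.5] * 12                        # assumed equal uncertainties for the 12 means
    df = math.sqrt(sum(u**2 for u in u_monthly))  # root sum square
    print(df)                                     # about 1.73, i.e. 0.5 * sqrt(12)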

—————————–
∆f = N.∆m + m.∆N
∆N = 0 because it is a constant.
So ∆f = N.∆m

——————————-

There is no uncertainty with m, the number of t