Why We Need an Independent Global Climate Temperature Database

Ever since the beginning of the global warming debate, now labeled “climate change,” there has been one immutable yet little-known fact: All of the temperature data stations used to make determinations about the state of Earth’s temperature are controlled by governments.

In June 1988, when Dr. James Hansen, then-director of NASA’s Institute for Space Studies in Manhattan, went before the Senate Energy and Natural Resources Committee to say that “global warming has begun,” he was using temperature data collected by governments worldwide from a weather station network that was never intended to detect a “global warming signal.”

In fact, Dr. Hansen had to develop novel statistical techniques to tease that global warming signal out of the data. The problem is, these weather station networks were never designed to detect such a signal in the first place. They were actually designed for weather forecast verification, to determine if forecasts issued by agencies such as the U.S. Weather Bureau (now the National Weather Service) were accurate. If you make temperature and precipitation forecasts for a location, and there is no feedback of the actual temperatures reached and the rainfall recorded, then it is impossible to improve the skill of forecasting.

The original network of weather stations, called the Cooperative Observer Program (COOP), was established in 1891 to formalize an ad hoc weather observation network operated by the U.S. Army Signal Service since 1873. It was only later that the COOP network began to be used for climate monitoring, because climate observations require at least 30 years of data from a weather station before a baseline “normal climate” for a location can be established. Once the Cooperative Observer Program was established in the United States, other countries soon followed, duplicating the U.S. network’s design on a global scale.

However, the COOP network has several serious problems for use in detecting climate change on a national and global scale. The U.S. temperature readings made by volunteer COOP observers are rounded to the nearest whole degree Fahrenheit when the data are recorded on a paper form called a B-91. When such coarsely recorded, nearest-whole-degree temperature data are compared to the claims of global warming, which are said to amount to about 1.8°F (1.0°C) since the late 1800s, obvious questions about the accuracy and precision of the COOP temperature data arise.
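
As a rough illustration of the precision question, rounding a reading to the nearest whole degree Fahrenheit introduces a quantization error of up to ±0.5°F; treated as a uniform error (the usual GUM Type B assumption), that is a standard uncertainty of roughly 0.29°F per individual reading. How much of that washes out in large averages is exactly the accuracy-and-precision question raised here. The sketch below is purely illustrative and assumes nothing about any particular station’s data.

```python
# Illustrative only: the standard uncertainty introduced by rounding a single
# reading to the nearest whole degree Fahrenheit (uniform error, GUM Type B).
import math

half_width = 0.5                            # rounding to nearest whole degree => +/-0.5 F
u_quantization = half_width / math.sqrt(3)  # standard uncertainty of a uniform error
claimed_signal = 1.8                        # claimed warming since the late 1800s, in F

print(f"Per-reading quantization uncertainty: {u_quantization:.2f} F")
print(f"Claimed long-term signal:             {claimed_signal:.1f} F")
```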

Even more concerning is that more than 90% of the COOP stations in the United States used to record climate data have been found to be corrupted by local urbanization or contamination effects over time, with hundreds of COOP stations severely compromised by placement next to air conditioner exhausts, jet exhaust at airports, and the concrete, asphalt, and buildings that have sprung up near stations. All of these heat sources and heat sinks do one thing and one thing only: bias the recorded temperatures upward.

The crux of the problem is this: the NWS publication “Requirements and Standards for Climate Observations” instructs that temperature data instruments must be “over level terrain (earth or sod) typical of the area around the station and at least 100 feet from any extensive concrete or paved surface,” and that “all attempts will be made to avoid areas where rough terrain or air drainage are proven to result in non-representative temperature data.” However, as detailed in this report, these instructions are regularly violated, not just in the U.S. network, but also in the Global Historical Climate Network.

This isn’t just a U.S. problem; it is a global problem. Examples exist of similarly compromised stations throughout the world, including Italy, the United Kingdom, China, Africa, and Australia.

With such broad corruption of the measurement environment, “The temperature records cannot be relied on as indicators of global change,” said John Christy, professor of atmospheric science at the University of Alabama in Huntsville, a former lead author on the Intergovernmental Panel on Climate Change.

The fact is that all global temperature data are recorded and compiled by government agencies, and the data are questionable due to corruption issues, rounding, and other adjustments that are applied to them. In essence, the reported global surface temperatures are a mishmash of rounded, adjusted, and compromised readings rather than an accurate representation of Earth’s temperature. While scholars may claim the data are accurate, any layman can surmise that, with all the problems that have been pointed out, they cannot possibly be accurate; they are only an estimate with high uncertainty.

Only one global temperature dataset exists that is independent of government compilation and reporting methods: the satellite-derived global temperature data from the University of Alabama in Huntsville (UAH), curated by Dr. John Christy and Dr. Roy Spencer.

But, even the UAH satellite dataset doesn’t give a full and accurate picture of global surface temperature because of limitations of the satellite system. At present, the system measures atmospheric temperature of the lower troposphere at about 26,000 feet (8 kilometers) altitude.

To date, there is only one network of climate-capable weather stations that is accurate enough to fully detect a climate change signal. This is the U.S. Climate Reference Network (USCRN), a state-of-the-art automated system designed specifically to accurately measure climate trends at the surface. Since going into operation in 2005, it has not found any significant warming trend in the United States that can be attributed to climate change.

Unfortunately, the data from the USCRN network are buried by the U.S. government, and are not publicly reported in monthly or yearly global climate reports. It has also not been deployed worldwide.

Given the government monopoly on use of corrupted temperature data, questionable accuracy, and a clear reticence to make highly accurate temperature data from the USCRN available to the public, it is time for a truly independent global temperature record to be produced.

This isn’t “rocket science.” Given that governments are spending billions of taxpayer dollars on climate mitigation programs, doesn’t it make sense to get the most important thing – the actual temperature – as accurate as possible? Given today’s technology, shouldn’t we confirm and validate the need for that spending with a network and data collection system that does not rely on a 100-year-old design? After all, if we can send a man to the moon, surely we can measure the temperature of our own planet accurately.

Anthony Watts (awatts@heartland.org) is a senior fellow for environment and climate at The Heartland Institute.

This post originally ran in Issues & Insights.

E. Schaffer
April 6, 2023 6:04 am

I just did an article on how climate science (erroneously) got to one of its most basic parameters. I think it is fun and revealing.

https://greenhousedefect.com/basic-greenhouse-defects/the-anatomy-of-a-climate-science-disaster


Reply to  E. Schaffer
April 6, 2023 6:57 am

That earth energy budget has issues.
Incoming 341w-m²(correct)
Reflected 101.9w-m²(solar heat in stratosphere) correct
Clouds and Atmosphere 79w-m² (49w-m² compression heating+ 30w-m² clouds)
Reflected by surface 23w-m² (351(summer)-328(winter))=23w-m²
161w-m² Absorbed by surface (161×2=322w-m² (winter))
17w-m² Thermals
Evaporation 80w-m²
80+17+23+40 =160w-m2+30w-m²+49w-m²=239w-m² outgoing. (160w-m² on the low side).
175w-m² summer (350w-m²), 164w-m² x 2 328w-m² (winter).
396w-m² is incorrect as based on flat earth measurements.
396/2=198/4=49w-m² compression heating.
333w-m² – 198(compression heating) +135(CO2)(incorrect) as clouds that reach stratosphere near tropics removes 30w-m².

Compression heating : 1.4 x 1.27kg x 334.4m/s squared (111,823) = 198957w-m²/1007/w-m²

[attachment: earthsenergybudget6.png]
Sweet Old Bob
Reply to  slindsayyulegmailcom
April 6, 2023 7:44 am

” Compression heating : 1.4 x 1.27kg x 334.4m/s squared (111,823) = 198957w-m²/1007/w-m² ”

Heating occurs during compression . After compression is complete , heating stops .
Compression of the atmosphere stopped ages ago …..

😉

E. Schaffer
Reply to  slindsayyulegmailcom
April 6, 2023 8:14 am

396w-m² is incorrect as based on flat earth measurements

No, that is not the problem..

prjndigo
Reply to  E. Schaffer
April 6, 2023 6:22 pm

It’s fairly close though, since the energy per fixed measured volume at a specific altitude does not change, because the controlling factor is gravity regulating pressure. There it is.

bdgwx
Reply to  slindsayyulegmailcom
April 6, 2023 11:18 am

slindsayyulegmailcom: “That earth energy budget has issues.”

Hmm…

slindsayyulegmailcom: “396w-m² is incorrect as based on flat earth measurements.”

Well there’s your problem.

Reply to  slindsayyulegmailcom
April 6, 2023 7:52 pm

wrong

Reply to  E. Schaffer
April 6, 2023 7:17 am

ES,
in your “fun and revealing” analysis you made the assumption that the ocean exhibits Fresnel reflection. But this is not the case, mostly due to the angle of waves, ripples, and foam from horizontal.
As evidence, I point out that the ocean horizon is a dark line in the far distance, completely unlike a mirror reflector as the Fresnel equation predicts for a perfectly flat surface. An internet search will provide a few papers with correct test results, and unfortunately many who incorrectly assume the Fresnel curve for the purposes they are investigating. But you are investigating the emissivity of the planet, so you should use the “right stuff”.
BTW, I do appreciate the work you have put into your site, sharing your calcs with others. I have been too lazy to do one out of fear of critics, I suppose.

E. Schaffer
Reply to  DMacKenzie
April 6, 2023 8:12 am

You confuse two things.

One is doing all this in infinite resolution, considering cold skin effects, wave and wind speeds on a global scale, different ocean temperatures which will affect emissivities a little bit (unless below freezing), salinity, measurements from all possible angles, and so on. If you can do all that, go ahead.

The other thing is completely messing up at the basics. That is what the article is about.

Reply to  DMacKenzie
April 6, 2023 8:24 am

The point seems to be that there *is* some level of reflection not accounted for. How much there is can be debated but it shouldn’t be ignored.

Reply to  DMacKenzie
April 7, 2023 1:26 pm

fear of being wrong

Richard Greene
Reply to  E. Schaffer
April 6, 2023 7:26 am

Your “conclusion” is almost entirely a character attack on modern climate science and also one specific scientist, rather than an actual conclusion.

E. Schaffer
Reply to  Richard Greene
April 6, 2023 8:06 am

Yeah, I was not quite happy with it. Maybe I should edit it. Yet it is not an attack on Gavin Schmidt. If it is an attack, it is on Wilber et al., because they deserve it. Also, Trenberth et al. is a joke. They just baselessly assume a surface emissivity of one, double down on it, and then (ab)use Wilber et al. as a reference. And while this completely screws up the whole “science,” Gavin Schmidt sits on top of it and is like “hey, where is the problem.”

Of course this is comedy!

Richard Greene
Reply to  E. Schaffer
April 6, 2023 10:53 am

I recommend:
Write a plain English summary and put that paragraph at the start of the article. Delete the last paragraph. Focus on your science, or focus on your politics, but not both in the same article.

This climate change scaremongering junk science is not comedy to me.
It is a tragedy that the best climate in 5000 years — now — has been spun as the beginning of an imaginary climate emergency since the early 1970s, and many people believe that.

It’s easy to criticize the climate predictions because enough climate science knowledge does not exist (not even close) to have any hope of making a correct long term climate prediction, except with a lucky guess. And that is assuming it will ever be possible to predict the climate in 50 to 100 years — I have no logical reason to believe that will ever be possible.

Climate scaremongering is politics, not science.

We climate realists have been arguing that the consensus climate science is wrong since the late 1970s, but that strategy has no effect.

The (junk) science of the IPCC is just right to scare people and allow leftist governments to control them. The IPCC does not care if their climate predictions are right. They only care about scaring people with their predictions.

E. Schaffer
Reply to  Richard Greene
April 6, 2023 12:39 pm

We climate realists have been arguing that the consensus climate science is wrong since the late 1970s, but that strategy has no effect.

Many people would make a distinction between the physics of WG1 and what the rest of the IPCC is doing. Then there is the politics, the activists and so on. I have no interest in discussing these because there is no point and it is boring.

The issues of WG1 however are interesting and it is right at the core of things. It is the rabbit hole worth exploring, but no one, or barely anyone is doing it. At least not properly.

Maybe too many on the critical side believe this would be a “pet project” 😉

Reply to  Richard Greene
April 7, 2023 1:35 pm

long term climate prediction, 

  1. air temperature will vary with latitude; it will be cold at the poles
  2. winter will be colder than summer in the northern hemisphere
  3. IF it warms, ice will melt
  4. IF it warms, sea levels will increase.

We climate realists have been arguing that the consensus climate science is wrong since the late 1970s, but that strategy has no effect.
The (junk) science of the IPCC is just right to scare people and allow leftist governments to control them. The IPCC does not care if their climate predictions are right. They only care about scaring people with their predictions.

climate realists are a recent fabrication, they have no agreed upon positions
the strategy of declaring others wrong, never has any long effect

Reply to  Steven Mosher
April 8, 2023 1:08 pm

“the strategy of declaring others wrong, never has any long effect” Then why do you constantly do it?

Reply to  Nansar07
April 11, 2023 1:27 pm

To illustrate the point: nobody is convinced by assumptive closes.

Reply to  E. Schaffer
April 7, 2023 1:30 pm

They just baselessly assume a surface emissivity of one

use the figures from Spencer, you know the ones used in UAH

Reply to  E. Schaffer
April 6, 2023 8:22 am

Don’t understand why you got down checks. I read your article, at least most of it, and I could find nothing wrong with it. Seems like you stepped on some climate alarmist’s toes!

Reply to  E. Schaffer
April 6, 2023 8:43 am

It’s a total trainwreck innit.
Just very quickly and looking at your ‘fool-you’ emissivities table, I took myself off and asked ’emissivity of sand’
Sand being a major part of what makes a desert and desert covers 10% of Earth’s total surface

Google told me this: Firstly, emissivity of the samples increases gradually with temperature. For biogenic crust, the emissivity increases from 0.9660 at 25°C to 0.9729 at 45°C. For sand it changes from 0.9388 to 0.9552.

In the ‘fool-you’ table I don’t see any figures of less than 0.97.
So straight off we see why deserts can be Hot Places.

prjndigo
Reply to  Peta of Newark
April 6, 2023 6:24 pm

wait until they find out that we can’t actually see more than half the photons that are entering the atmosphere and cannot detect a little more than 1/3 of them…

JCM
Reply to  E. Schaffer
April 6, 2023 8:53 am

The ‘surface’ emissivity is not a primary issue.

The effective radiating surface observed from space is primarily composed of the cloud layer, i.e. the condensed full-spectrum radiating surfaces of liquid and ice. The equilibrium cloud cover comprises about two-thirds of the effective radiating surface. The terrestrial/oceanic surface below the cloud deck does not contribute to radiative balance. The ‘surface’ emissivity issue, if there is one, can only be a minor issue. If one is concerned with emissivity, it is the two-thirds of the radiating surface composed of clouds where one must focus.

Such diagrams depicting only 30 Wm-2 full spectrum IR radiating from the cloud deck to space appear to be erroneous.

E. Schaffer
Reply to  JCM
April 6, 2023 9:20 am

The article already addressed your fallacy.

JCM
Reply to  E. Schaffer
April 6, 2023 9:53 am

The article in no way addresses the issue, I’m afraid.

The albedo is a consequence of the cloud fraction, of course. The cloud fraction is a consequence of the radiative equilibrium process in the dynamic condensing atmosphere.

The albedo is not an external ad-hoc parameter. It is coupled to the thermodynamic equilibrium process.

The all sky spectral emissivity observed from space is about 0.7, representing an average effective radiating surface temperature of 278K.

This value is perfectly consistent with the astronomical factors of Solar Luminosity and distance from sun, irrespective of albedo.

E. Schaffer
Reply to  JCM
April 6, 2023 10:17 am

Ok, then once again..

  1. The magnitude of the GHE = emissions surface – emissions TOA. Surface emissions are not measured, but estimated based on temperature and surface emissivity. Thus the magnitude of the GHE depends on surface emissivity
  2. The whole of the GHE gets attributed to GHGs and clouds (they forget aerosols). Thus surface emissivity impacts the significance of individual GHGs.
  3. The same is true for GHG forcings as well as feedbacks. Overestimating surface emissivity necessarily causes large error margins in these sensitivities.
  4. The only negligible impact is the direct effect on emissions TOA.
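
A minimal numeric sketch of point 1 above, under illustrative assumptions (a mean surface temperature near 288 K, OLR near 239 W/m², and two candidate broadband surface emissivities); the exact figures are placeholders, not values taken from the linked article.

```python
# Sketch: how the inferred greenhouse effect (surface emission minus OLR)
# depends on the assumed surface emissivity. All inputs are illustrative.
SIGMA = 5.670374419e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
T_SURFACE = 288.0         # assumed global mean surface temperature, K
OLR = 239.0               # assumed outgoing longwave radiation at TOA, W m^-2

for emissivity in (1.00, 0.94):
    surface_emission = emissivity * SIGMA * T_SURFACE**4
    ghe = surface_emission - OLR
    print(f"emissivity = {emissivity:.2f}: surface emission = {surface_emission:.0f} W/m^2, "
          f"inferred GHE = {ghe:.0f} W/m^2")
```
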
JCM
Reply to  E. Schaffer
April 6, 2023 10:36 am

Ah, you are constraining yourself to the Ramanathan style greenhouse definition.

Which is obviously absurd, but it may help you to communicate with Dr Schmidt. How is that going, anyway?

In reality, the optical depth in the gap between the Earth and Cloud is irrelevant.

So it’s all a bit silly, but nibbling around with surface emissivity is an interesting curiosity.

Reply to  JCM
April 6, 2023 12:24 pm

JCM,

Only weighing in here to say that ES’ point 1 is the IPCC’s greenhouse definition, as well. From AR6, that would be 398 – 239 = 159 W/m^2. (Grain of salt optional).

E. Schaffer
Reply to  JCM
April 6, 2023 12:25 pm

Is there any other definition?

DWM
Reply to  E. Schaffer
April 6, 2023 1:02 pm

The definition would be the surface temperature with and without greenhouse gases. The question would be how to handle water vapor.

DWM
Reply to  E. Schaffer
April 6, 2023 11:04 am

so the GHE = 390 W/m2 – 240 W/m2
= 150 W/m2, and almost constant

JCM
Reply to  DWM
April 6, 2023 11:21 am

I’d say the Ramanathan ratio of surface flux and OLR might appear to be constant, not absolutes.

But it’s ridiculous. In the diagram 40 goes straight out the window, so there is nothing greenhousy about that stuff.

Now it’s only 350-240 (+ and – Schaffer’s emissivity nibbling).

Reply to  E. Schaffer
April 6, 2023 3:16 pm

If you are looking at radiative balances you are not looking at something that controls surface temperature.

The top image in the attachment depicts the ToA radiation imbalance over the past 16 years. The middle image is the temperature change globally over that period, and the bottom chart plots the temperature change against energy absorbed (positive) or released (negative). Temperature change is uncorrelated with the energy uptake anywhere on the globe.

There is no reason why incoming radiation energy and outgoing radiation energy have to be in balance. In fact modern human civilisation is built on the energy that biological processes have accumulated over the millennia. Hansen’s missing heat could very well be on the ocean floor, peat bogs and coral reefs.

The only way to understand the surface temperature is to understand how energy is transferred, dissipated, and stored in the Earth system. These processes have far more impact on surface temperature than anything to do with radiation. Earth is not static like a black body globe, and its orientation to its primary energy source is constantly changing, as are its internal energy transfer, energy dissipation, and energy storage.

[attachment: Net_Images.png]
JCM
Reply to  RickWill
April 6, 2023 4:47 pm

Hansen’s missing heat could very well be on the ocean floor, peat bogs and coral reefs.

RickWill is correct.

everyone, and I mean everyone, misses that such energy diagrams show the energy imbalance exclusively exists in the surface budget. There is zero zilch nada imbalance in the atmosphere.

Net 161 SW – 396 upward LW + 333 downward LW = 98

The balancing flux of H + LE = 97.

There is a net flux into the surface of +1.

The atmosphere shows no positive imbalance, whatsoever.

It’s so ridiculous it’s comical.

No anomalous atmospheric “heat trapping”.

No mid tropospheric hot spot.

The “heat trapping” is happening below the terrestrial/ocean surface. Full stop. Net in > Net out of the surface.

JCM
Reply to  JCM
April 6, 2023 5:28 pm

Any wonder why the officially presented TMT observations are not matching model output?

DUH. It is because that is NOT where the “heat trapping” is occurring.


The atmosphere is perfectly balanced:

239 SW – 169 “atmospheric emission” – 30 “cloud emission” – 40 “window flux” = 0

Helllooooo??? anybody home????

DWM
Reply to  JCM
April 6, 2023 8:41 pm

NASA puts out the same diagram with numbers that balance everywhere. That chart at the top is wrong.

JCM
Reply to  DWM
April 6, 2023 9:23 pm

the chart you’re referencing is only meant to depict an equilibrium state.

bdgwx
Reply to  DWM
April 7, 2023 5:35 am

It’s balanced everywhere in the sense that it complies with the 1LOT. However, it is not balanced at TOA since 341.3 W/m2 – (101.9 W/m2 + 238.5 W/m2) = 0.9 W/m2. But the 0.9 W/m2 shows up at the surface as a retention component.
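
For readers keeping score, a small arithmetic check of the closure described above, using only the rounded figures quoted in this thread (341.3 incoming, 101.9 reflected, 238.5 OLR at TOA; 161 SW absorbed, 333 down LW, 396 up LW, 17 thermals, 80 evaporation at the surface). It is a sketch of the sums only, not an endorsement of any particular diagram.

```python
# Arithmetic check of the energy-budget closure discussed in this thread,
# using the rounded Trenberth-style figures quoted above (all in W/m^2).
toa_in, toa_reflected, toa_olr = 341.3, 101.9, 238.5
sw_absorbed, lw_down, lw_up, thermals, evaporation = 161, 333, 396, 17, 80

toa_imbalance = toa_in - (toa_reflected + toa_olr)                     # ~0.9 W/m^2
surface_net = sw_absorbed + lw_down - lw_up - thermals - evaporation   # ~1 W/m^2

print(f"TOA imbalance:      {toa_imbalance:.1f} W/m^2")
print(f"Surface net uptake: {surface_net:.1f} W/m^2")
```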

Reply to  E. Schaffer
April 6, 2023 12:03 pm

Mr. Schaffer, you seem to CONSTANTLY spam threads here with your off-topic pet projects. Stop it, or I’ll put you on moderation. You’ve been warned more than once. No further warnings will be issued.

Anthony

E. Schaffer
Reply to  Anthony Watts
April 6, 2023 12:27 pm

Sorry for that, I will try to not do it again. Although I am not aware of any warnings.

Reply to  E. Schaffer
April 6, 2023 1:14 pm

Thank you. Comments were left, but they may not have been seen. It’s fine to post these things on topic related threads.

Reply to  Anthony Watts
April 6, 2023 3:42 pm

Dear Anthony,

I thought this post was about temperature measurements not the surface energy balance, which for the most part is an artificial construct made up of lots of moving parts.

I believe that in relation to temperature measurements, that rather than building a massive new database, there is a need for a set of protocols that allow any dataset to be investigated from first principles. This has been the focus of my work at http://www.bomwatch.com.au.

Although an increasing number of individual investigations have been published and more are in the pipeline, over the last decade I have looked closely at some 500 long- and medium-term datasets from all climate regions in Australia. With the exception of some sites where Stevenson screens were over-exposed to gale-force winds, sea-spray, fog, driving rain, etc. that caused thermometers to be wet much of the time, the methods were found to be robust and replicable.

The methodology has been outlined as case studies in several reports (e.g. https://www.bomwatch.com.au/climate-data/climate-of-the-great-barrier-reef-queensland-climate-change-at-gladstone-a-case-study/), and expanded on in my latest series on homogenisation of Australian temperature records (https://www.bomwatch.com.au/data-homogenisation/).

It would be interesting to test the same protocols using maximum temperature and rainfall data from sites in the US, Europe and other places. If you or anyone else are interested I can be contacted at scientist@bomwatch.com.au.

Yours sincerely,

Dr Bill Johnston

simonsays
Reply to  Anthony Watts
April 6, 2023 6:27 pm

It may be off topic, but the article and subsequent discussion is an interesting one and not discussed nearly enough.

What is of concern is what Mr Greene said “We climate realists have been arguing that the consensus climate science is wrong since the late 1970s, but that strategy has no effect”

I don’t agree it has no effect; it is only by questioning the basic science that AGW can be held to account.

Like many I see that something is OFF, with the whole AGW science. It clearly goes back to its inception but I can’t get to the bottom of it as we only ever see fleeting debates before they are buried in the plethora of new articles.

The debate around the so-called consensus should be a highlighted constant. Like this article or Mr Schaffer’s contribution, they might be best promoted by being regularly served up as their own category each month, a bit like how we look forward to Lord Monckton’s monthly update about the pause.

Richard Greene
Reply to  simonsays
April 6, 2023 8:39 pm

I don’t agree it has no effect and it is only the questioning of the basic science that the AGW can be held to account.

40 years of CAGW scaremongering
40 years of increasingly hysterical climate scaremongering
So what effect did a science debate actually have?
Not that there were two sides debating much.

The basic science of AGW is fine except for false claims to know the exact effects of CO2 in the atmosphere, beyond the likely truth of “small and harmless.” I know some commenters here reject almost all consensus climate science. I reject the CAGW scaremongering (90%) but accept the AGW science (10%) as being in the ballpark of reality.

Of course AGW is more than just CO2 emissions, such as:

Air pollution

All greenhouse gases

Water vapor positive feedback

Albedo changes from land use changes, UHI, and dark soot (pollution) deposited on Arctic ice and snow

Errors in historical temperature data that may overstate prior warming, deliberately or unintentionally

Unknown manmade causes of climate change.

E. Schaffer
Reply to  Richard Greene
April 7, 2023 6:24 am

Unknown manmade causes of climate change.

There is a difference between unknown and ignored. We know we are heating the planet with aviation-induced cirrus. It has been around in the literature for a while.

Tom Halla
April 6, 2023 6:07 am

I seriously doubt the current administration would have any interest.

Richard Greene
Reply to  Tom Halla
April 6, 2023 7:17 am

If we apply Biden’s Build Back Better philosophy to the weather station network, it will get worse. (I call it Bidet’s Build Back Baloney.)

Ancient Wrench
Reply to  Tom Halla
April 6, 2023 10:29 am

The House of Representatives might be interested.

Nick Stokes
April 6, 2023 6:31 am

“Since going into operation in 2005, it has not found any significant warming trend in the United States that can be attributed to climate change.”

The NOAA does make USCRN available – WUWT displays the NOAA graph on the front page. But it gives just the same results as the larger group of ClimDiv stations:

[image: NOAA graph comparing USCRN and ClimDiv anomalies]

tinny
Reply to  Nick Stokes
April 6, 2023 7:42 am

Yes.
Both show no warming.

Reply to  tinny
April 6, 2023 8:15 am

I make the warming rate of USCRN to be 0.31°C / decade.

ClimDiv over the same period shows 0.23°C / decade.

Of course it isn’t significant because it’s only 17 years of variable local data, but that does not mean it shows no warming. It amounts to 0.96°F over the last 17 years.

[attachment: 20230406wuwt2.png]
Reply to  Bellman
April 6, 2023 10:11 am

Look at Nick’s USCRN graph. How in frick do you come up with +0.31 per decade instead of -0.05 per decade? Or how about: the meandering data is not predictive AT ALL, so it is insignificantly different from ZERO temperature change?

Reply to  DMacKenzie
April 6, 2023 10:26 am

I feed the data into an lm function and it tells me. In case you hadn’t noticed, it’s the same data as in the USCRN graph. If you want an indication of why there is a rise, try comparing the first half of the data with the second.

Average anomaly :
2005-2013 = –0.12°C
2014-2022 = +0.37°C
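
For anyone wanting to reproduce this kind of figure, a minimal sketch of the calculation Bellman describes, written in Python rather than R’s lm. The file name and column names are hypothetical placeholders; the input would be a monthly anomaly series such as the USCRN data NOAA publishes.

```python
# Minimal sketch: ordinary least-squares trend of a monthly anomaly series.
# "uscrn_monthly.csv" and its column names are hypothetical placeholders.
import numpy as np
import pandas as pd

df = pd.read_csv("uscrn_monthly.csv")          # assumed columns: year, month, anomaly_c
t = df["year"] + (df["month"] - 0.5) / 12.0    # decimal time at mid-month
slope, intercept = np.polyfit(t, df["anomaly_c"], 1)

print(f"Trend: {slope * 10:+.2f} C / decade")
```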

bdgwx
Reply to  DMacKenzie
April 6, 2023 11:25 am

I can corroborate Bellman’s calculations. I get the exact same result. And when using the monthly data I get +0.26 C/decade and +0.34 C/decade for nClimDiv and USCRN respectively.

Richard Greene
Reply to  bdgwx
April 6, 2023 8:42 pm

I’d say both 0.26 and 0.34 round to 0.3, which makes both averages similar, in my interpretation. Thanks for the specific trends.

Reply to  Bellman
April 8, 2023 4:36 am

An absolutely perfect example of why the NIST TN1900 EX. 2 methods should be followed.

Take a close look at all these graphs. What do you think the variance in the data distribution of the anomalies should be? The anomalies alone have a large variance at the scale used. That variance, along with the average (mean), describes the range of values the anomalies could take.

Three other issues.

1) These anomaly values are computed from the difference of two random variables. The variance of the difference in those random variables is the sum of each of the variances. It will be larger than the variance in the anomaly distribution simply because of the scaling. However, the anomaly should carry the variation of the data distributions used to calculate it.

2) Why is it everyone in climate science ignores basic statistical and scientific practice? An average (mean) should never be quoted without also quoting the variance or standard deviation of the data distribution used to calculate the mean!

3) The shaded error distribution in Bellman’s plot is only for the errors in the linear regression line. It tells you how accurate the line itself is in portraying the data. It IS NOT the variance of the distribution which is what affects the average value.
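
On point 1) above: the textbook identity is that, for independent random variables, Var(X − Y) = Var(X) + Var(Y); whether independence actually holds between a reading and its baseline is a separate question. A quick numerical check of the identity, with made-up standard deviations:

```python
# Numerical check: for independent X and Y, Var(X - Y) = Var(X) + Var(Y).
# The means and standard deviations below are made up for illustration only.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=15.0, scale=0.5, size=1_000_000)   # e.g. a monthly value
y = rng.normal(loc=14.0, scale=0.3, size=1_000_000)   # e.g. a baseline value

print(f"Var(X) + Var(Y) = {x.var() + y.var():.3f}")   # ~0.25 + 0.09 = 0.34
print(f"Var(X - Y)      = {(x - y).var():.3f}")       # ~0.34
```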

Reply to  Jim Gorman
April 8, 2023 7:59 am

” The anomalies alone have a large variance at the scale used. ”

That’s why the trend is not significant. That’s why the confidence interval is so large. The uncertainty of the trend depends on the size of the variation, just as the uncertainty of an average depends on the variation in the data.

It’s the same reason why there is so much uncertainty in the pause. If you think of each monthly value as a random variable around the trend, there is a huge range of plausible trends over the last 8 years or so.

Reply to  Jim Gorman
April 8, 2023 8:08 am

“These anomaly values are computed from the difference of two random variables”

Which as I keep trying to explain to you is irrelevant when talking about the rate of change. The base value is fixed. It doesn’t matter how much of an error it has, that error will be the same for every data point. It will not affect the trend.

“An average (mean) should never be quoted without also quoting the variance or standard deviation of the data distribution used to calculate the mean!”

Tell that to Monckton and Spencer. But don’t confuse the standard deviation of the data with the uncertainty of the mean.

bdgwx
Reply to  Jim Gorman
April 8, 2023 1:03 pm

JG said: “An absolutely perfect example of why the NIST TN1900 EX. 2 methods should be followed.”

Are you saying that you now accept type A evaluations and the 1/sqrt(N) rule?

Reply to  bdgwx
April 8, 2023 2:36 pm

Have you *read* 1900? Where does the 1/sqrt(N) come from? From taking an average? Or from finding the standard deviation of the sample means?

bdgwx
Reply to  Tim Gorman
April 8, 2023 3:26 pm

TG said: “Have you *read* 1900?”

I’m the one that notified you about it.

TG said: “Where does the 1/sqrt(N) come from?”

It tells you exactly where it comes from.

For example, proceeding as in the GUM (4.2.3, 4.4.3, G.3.2), the average of the m = 22 daily readings is t̄ = 25.6 °C, and the standard deviation is s = 4.1 °C. Therefore, the standard uncertainty associated with the average is u(τ) = s/√m = 0.872 °C.
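
For readers following along, a sketch of that arithmetic in the style of TN1900 Example E2, using made-up daily maxima (not the NIST values) purely to show the steps: mean, sample standard deviation, s/√m, and a Student-t expanded uncertainty.

```python
# Sketch of a TN1900 E2 style calculation with made-up daily maxima (deg C).
# These are NOT the NIST data; they only illustrate the steps.
import numpy as np
from scipy import stats

tmax = np.array([22.1, 25.3, 27.8, 24.6, 26.0, 23.4, 28.9, 25.7, 21.8, 26.5,
                 24.9, 27.2, 23.0, 25.1, 26.8, 22.7, 28.1, 24.2, 25.9, 27.5,
                 23.8, 26.3])

m = tmax.size
mean = tmax.mean()
s = tmax.std(ddof=1)               # sample standard deviation
u = s / np.sqrt(m)                 # standard uncertainty of the average (Type A)
k = stats.t.ppf(0.975, df=m - 1)   # coverage factor, ~95 %, m-1 degrees of freedom

print(f"mean = {mean:.1f} C, s = {s:.1f} C, u = s/sqrt(m) = {u:.2f} C")
print(f"95 % interval: {mean:.1f} C +/- {k * u:.2f} C")
```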

Reply to  bdgwx
April 8, 2023 4:28 pm

I’m the one that notified you about it.”

That doesn’t mean you read it and understood it. Please list out *ALL* of the assumptions Possolo used in analyzing the maximum temperature at one location.

My guess is that you can’t. Or won’t!

Therefore, the standard uncertainty associated with the average is u(r) = s/√m = 0.872 C.”

And EXACTLY what is that parameter? How closely the average you calculated approaches the population mean? Or is it the measurement uncertainty associated with the stated values?

My guess is that you don’t have a clue and don’t care to have a clue.

bdgwx
Reply to  Tim Gorman
April 8, 2023 7:22 pm

TG said: “Please list out *ALL* of the assumptions Possolo used in analyzing the maximum temperature at one location.”

Possolo says ti = T + Ei where ti is a measurement, T is the measurand, and Ei is an independent random variable with a mean of 0 and gaussian.

TG said: “And EXACTLY what is that parameter?”

Possolo says that it is the uncertainty of the measurand (the monthly mean temperature) and includes the uncertainty contributed by 1) natural variability 2) time of observation variability and 3) the components associated with the calibration and reading of the instrument.

What is interesting about this example is that the model defines a single measurand T being the monthly average maximum temperature and uses ti measurements to quantify it. That is interesting because the individual measurements are clearly of different things due to the differing environmental conditions from day-to-day, but yet are still treated in the model as if they were for the single measurand T.

This is an unequivocal refutation of 1) the argument that the GUM only works when the measurements are of the same thing and 2) that you cannot use the 1/sqrt(N) rule on measurements of different things. It is unequivocal because each of the ti are of different things and yet 1/sqrt(N) was still used.

Reply to  bdgwx
April 9, 2023 7:31 am

Why do you *always* leave out the context?

“This so-called measurement error model (Freedman et al., 2007) may be specialized further by assuming that E1, . . . , Em are modeled independent random variables with the same Gaussian distribution with mean 0 and standard deviation σ. In these circumstances, the {ti} will be like a sample from a Gaussian distribution with mean τ and standard deviation σ (both unknown).”

“The {Ei} capture three sources of uncertainty: natural variability of temperature from day to day, variability attributable to differences in the time of day when the thermometer was read, and the components of uncertainty associated with the calibration of the thermometer and with reading the scale inscribed on the thermometer. Assuming that the calibration uncertainty is negligible by comparison with the other uncertainty components, and that no other significant sources of uncertainty are in play, then the common end-point of several alternative analyses is a scaled and shifted Student’s t distribution as full characterization of the uncertainty associated with τ.”

The basic assumption is to ignore measurement uncertainty and use the variation in the stated values as a measure of the uncertainty of the average.

It’s what you and the rest of the climate alarmists ALWAYS do. Assume the measurement uncertainty is random, Gaussian, and cancels – or that it is negligible.

Neither are appropriate assumptions when you are combining measurements from different devices measuring different things!

That is interesting because the individual measurements are clearly of different things due to the differing environmental conditions from day-to-day, but yet are still treated in the model as if they were for the single measurand T.”

Exactly! Possolo is assuming that the 22 measurements are of the same thing – Tmax. It makes things so much easier if you do that, just like assuming all measurement uncertainty is negligible. *NEITHER* are valid assumptions for calculating a global average temperature!

bdgwx
Reply to  Tim Gorman
April 9, 2023 11:26 am

Do you accept that you can average different temperatures or not?

Do you accept that the GUM can be used on measurements of different things or not?

Do you accept that the uncertainty of the average of different temperatures scales by 1/sqrt(N) or not?

Do you accept NIST TN1900 or not?

Reply to  bdgwx
April 9, 2023 5:07 pm

Temperature is an intrinsic property and cannot be averaged. Temperature measurements are measurements of different things. Temperature measurements have uncertainty equivalent to variance in a random variable, and therefore the variance must be propagated onto any average. So no, temperature measurements simply can’t be averaged. The average tells you nothing about the real world. Just as the average value of a 4″ board and a 6″ board is 5″, and that 5″ doesn’t describe anything that exists in the real world, neither does an averaged global temperature.

Even Possolo had to make assumptions that no uncertainty exists and that successive measurements of different things can be equated to successive measurements of the same thing in order to come up with an average and a standard deviation.

The uncertainty of the average does *NOT* scale by 1/sqrt(N). You keep confusing how close you can get to the population average as being the uncertainty of that population average.

You can’t even make sense of the fact that your SEM can be ZERO while the uncertainty of the population mean can be huge!

If the mean of the sample means is equal to the population mean then your SEM is zero. But that population mean can be far from the true value.

Why is that so hard for you to understand? You can only know the accuracy of the population mean by propagating the uncertainties of the distribution elements onto the mean. You want to ignore that simple fact of metrology so you can assume the SEM is the uncertainty of the mean. It simply isn’t.

Do you deny that the sample mean can equal the population mean? Do you deny that in that case there is no uncertainty associated with your calculated sample mean? Do you deny that the population mean can be inaccurate?

bdgwx
Reply to  Tim Gorman
April 9, 2023 6:29 pm

TG said: “Temperature is an intrinsic property and cannot be averaged.”

Possolo computed an average temperature in NIST TN1900 E2.

TG said: “Even Possolo had to make assumptions that no uncertainty exists”

Possolo calculated the standard uncertainty as u(r) = s/sqrt(m) = 0.872 C in NIST TN1900 E2.

TG said: “The uncertainty of the average does *NOT* scale by 1/sqrt(N).”

Possolo used a type A evaluation of uncertainty requiring a division by sqrt(N) in NIST TN1900 E2.

TG said: “You can’t even make sense of the fact that your SEM can be ZERO while the uncertainty of the population mean can be huge!”

It’s not my calculation. It was from Possolo in NIST TN1900 E2.

Now stop deflecting and diverting…

Do you accept that you can average different temperatures or not?

Do you accept that the GUM can be used on measurements of different things or not?

Do you accept that the uncertainty of the average of different temperatures scales by 1/sqrt(N) or not?

Do you accept NIST TN1900 or not?

Reply to  bdgwx
April 10, 2023 6:59 am

In other words you either don’t understand or don’t want to understand what Possolo did in 1900.

Nor do you understand the GUM. You can’t even get the definition of a “measurand” corect. A measurand is *NOT* a collection of different things. It is ONE thing.

bdgwx
Reply to  Tim Gorman
April 10, 2023 9:05 am

TG said: “In other words you either don’t understand or don’t want to understand what Possolo did in 1900.”

I understand that…

1) He computed an average temperature.

2) He assessed the uncertainty of that average.

3) He did so despite the measurements being of different things.

4) He did so by applying 1/sqrt(N).

TG said: “Nor do you understand the GUM.”

It doesn’t matter if I understand the GUM or not. In this context it only matters that Possolo understand it. He’s the one that formulated example E2.

I will say that I believe I understand it enough that I do not challenge NIST TN1900 E2. It looks like Possolo followed the procedure correctly.

TG said: “You can’t even get the definition of a “measurand” corect.”

I think a measurand is a “particular quantity subject to measurement” (JCGM 100:2008 B.2.9). I take it you disagree?

TG said: “A measurand is *NOT* a collection of different things. It is ONE thing.”

I’m glad to hear that you at least accept this. Are you willing to accept NIST TN1900 E2 and all that goes along with it as well?

Reply to  bdgwx
April 12, 2023 9:00 am

You miss the whole point with your bias. You appear afraid that something is going to upset the GAT applecart.

“Do you accept that the uncertainty of the average of different temperatures scales by 1/sqrt(N) or not?”

Only if the stipulations made in TN1900 are met.

– same shelter
– same month
– same Tmax

If you want to combine with other stations then you must generate your own assertions and justifications.

The TN has several criteria such as independence (not correlated) and others. Here are some examples.

“The assumption of independence may obviously be questioned, but with such scant data it is difficult to evaluate its adequacy (Example E20 describes a situation where dependence is obvious and is taken into account). The assumption of Gaussian shape may be evaluated using a statistical test. For example, in this case the test suggested by Anderson and Darling (1952) offers no reason to doubt the adequacy of this assumption. However, because the dataset is quite small, the test may have little power to detect a violation of the assumption.”

“The equation, ti = τ + Ei, that links the data to the measurand, together with the assumptions made about the quantities that figure in it, is the observation equation. The measurand τ is a parameter (the mean in this case) of the probability distribution being entertained for the observations.”

“Adoption of this model still does not imply that τ should be estimated by the average of the observations — some additional criterion is needed. In this case, several well-known and widely used criteria do lead to the average as “optimal” choice in one sense or another: these include maximum likelihood, some forms of Bayesian estimation, and minimum mean squared error.”

“The associated uncertainty depends on the sources of uncertainty that are recognized, and on how their individual contributions are evaluated.”

bdgwx
Reply to  Jim Gorman
April 12, 2023 9:57 am

JG: same shelter

Possolo doesn’t say that.

JG: same month

Possolo doesn’t say that.

JG: same Tmax

This has to be a joke right? E2 is literally averaging different Tmax measurements. Literally.

Reply to  bdgwx
April 12, 2023 1:32 pm

You are a cult member in the religion of Climate Alarmism.

Of course Possolo says all three!

“Exhibit 2 lists and depicts the values of the daily maximum temperature that were observed on twenty-two (non-consecutive) days of the month of May, 2012, using a traditional mercury-in-glass “maximum” thermometer located in the Stevenson shelter in the NIST campus that lies closest to interstate highway I-270” (bolding mine, tpg)

You’ve never actually even read TN1900, have you? You just keep spouting the same religious dogma and never look left or right.

bdgwx
Reply to  Tim Gorman
April 12, 2023 2:29 pm

No. He didn’t list all 3. He didn’t even list one of the 3. Your argument is one of affirming a disjunct. You are claiming that because NIST TN1900 E2 is for the scenario of a single shelter for a single month, it necessarily follows that it is impossible to apply the GUM technique to different scenarios, including ones in which there are different shelters and different months. That is a logical fallacy because Possolo did not say the GUM cannot be applied to other scenarios.

And your claim about the 22 Tmax values being of the same thing is absurd. Not even the most contrarian of contrarians is going to be convinced that Tmax values on different days are the same. They are different in every sense of the word…literally.

Reply to  bdgwx
April 13, 2023 4:23 pm

Go away troll. You have no idea of what you are talking about.

Reply to  bdgwx
April 13, 2023 9:55 am

“JG: same shelter

Possolo doesn’t say that.”

From TN1900:

“in this Stevenson shelter,”
“in that shelter.”

______________________________________________

“JG: same month”

“Possolo doesn’t say that”

From TN1900:

“The daily maximum temperature τ in the month of May, 2012”
“thirty-one true daily maxima of that month”

________________________________________________

“JG: same Tmax”

“This has to be a joke right? E2 is literally averaging different Tmax measurements. Literally.”

Reply to  Tim Gorman
April 9, 2023 6:47 pm

Temperature is an intrinsic property and cannot be averaged.

So why do you insist we should be using the average daily temperature?

Reply to  Bellman
April 10, 2023 7:01 am

So why do you insist we should be using the average daily temperature?”

As usual you have no understanding of the real world or even mathematics outside statistics.

Degree-days are *NOT* an average! Degree-days are what I advocate for, not “average daily temps”.

After two solid years you can’t even understand this distinction. You are hopeless.

bdgwx
Reply to  Tim Gorman
April 10, 2023 11:37 am

TG said: “After two solid years you can’t even understand this distinction.”

It is my understanding that you are discussing the method from http://www.degreedays.net.

There is no distinction. Using the integration method HDD = Σ[if(T < 65, 65 – T, 0) * interval, begin, end]. So if the data is hourly your interval is 1/24 days. Mathematically this is equivalent to averaging since Σ[if(T < 65, 65 – T, 0) * 1/24, 1, 24] is the same as Σ[if(T < 65, 65 – T, 0), 1, 24] / 24 which is an average of the 24 values. It would be the equivalent of saying Σ[x_i, 1, N] / N is an average but Σ[x_i/N, 1, N] is not which is obviously absurd.

I’ll also remind you that https://www.degreedays.net/calculation says to “calculate the average number of degrees…” as one of the steps.
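
A small sketch of the equivalence being argued here, with a made-up list of 24 hourly temperatures: summing the (65 − T) shortfalls over 1/24-day intervals gives the same heating degree-days as averaging the 24 hourly shortfalls.

```python
# Sketch: heating degree-days (base 65 F) from hourly data, computed two ways.
# The hourly temperatures below are made up for illustration.
BASE = 65.0
hourly_temps = [41, 40, 39, 39, 38, 38, 40, 43, 47, 51, 55, 58,
                60, 62, 63, 62, 60, 57, 53, 50, 47, 45, 43, 42]  # deg F

shortfalls = [max(BASE - t, 0.0) for t in hourly_temps]

hdd_integration = sum(s * (1.0 / 24.0) for s in shortfalls)  # sum over 1/24-day intervals
hdd_average = sum(shortfalls) / 24.0                         # average hourly shortfall

print(f"HDD (integration): {hdd_integration:.2f}")
print(f"HDD (averaging):   {hdd_average:.2f}")  # identical by construction
```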

Reply to  Tim Gorman
April 10, 2023 12:56 pm

You were talking about daily averages not any sort of Degree-days. E.g. when you said a 1.5 degree difference was HUGELY different.

And, without wanting to drag this out longer, the fact you can add degree-days is precisely why your claim about being unable to average temperatures is bogus.

Reply to  bdgwx
April 12, 2023 7:14 am

The GUM does not address how to propagate measurement uncertainty of different things. The term “same measurand” appears too many times to count.

The GUM does address methods for determining experimental uncertainty of the same (or similar) things when necessary. They specify that a coverage factor is appropriate in order to convey the range of possible experimental values.

Why “experimental”? Some “same” things can not be measured repeatedly. Results of a chemical reaction, coronal discharge across an insulator, length a spring stretches when a force is applied, many other things. Multiple “experiments” are required to determine a value to be expected.

TN1900 uses this fact to develop an expanded experimental standard deviation that encompasses the values that can be expected for Tmax at a given station over a month. Tmax averaged for the single month is considered experimental MEASUREMENTS of the same thing. Tmax and Tmin are two different things from two different distributions. Their average DOES NOT result in a measurement of either.

As TG has pointed out, using average temperature WITHOUT adequately propagating variance leads one to conclude that the climate in the Sahara desert is the same as in Dubuque, Iowa since they can have the same average yet their variance is drastically different. That’s not a good metric!

Like it or not, averages like Tavg or a monthly average of Tavg are NOT measurements. Tavg is a statistical parameter, but NOT a measurement. An average of Tavg over a month is not a measurement; it is an average of statistical parameters. All of these statistical calculations should have a number of the same parameters, most importantly a mean μ and a variance σ². Why do we never see a variance from you?

If you would spend less time cherry picking equations meant for measurements and really learn about and appreciate what a measurement actually is and how to treat it, you would realize that playing with statistical parameters is NOT a substitute for actually dealing with measurements.

Reply to  Jim Gorman
April 12, 2023 9:05 am

The GUM does not address how to propagate measurement uncertainty of different things.

Apart from all the sections explaining combined uncertainty.

You keep going on about the uncertainty in a volume of a cylinder combining the uncertainties of the height and of the radius. That’s propagating the measurements of two different things.

Tmax averaged for the single month is considered experimental MEASUREMENTS of the same thing.

So why can’t you accept the average of multiple stations at different locations as being experimental measurements of the same thing – i.e. the surface average? And if you can’t accept that, then why not just treat the average as an exercise in statistics rather than insisting its uncertainty be treated as a measurement uncertainty?

If it’s any help, here’s how TN1900 defines a measurement.

Measurement is an experimental or computational process that, by comparison with a standard, produces an estimate of the true value of a property of a material or virtual object or collection of objects, or of a process, event, or series of events, together with an evaluation of the uncertainty associated with that estimate, and intended for use in support of decision-making.

Tmax and Tmin are two different things from two different distributions. Their average DOES NOT result in a measurement of either.

Why do you think this is relevant? A daily average temperature is not meant to be a measure of either the max or min. Do you apply this logic to any combined measurement, such as area or velocity?

Height and width are two different things from two different distributions, and their product DOES NOT result in a measurement of either.

Distance and time are two different things from different distributions, and their quotient DOES NOT result in a measurement of either.

Reply to  Bellman
April 12, 2023 9:48 am

You guys have a totally off-base knowledge of measurements. I grew up with a master mechanic. I learned the need for accurate measurements when repairing high-horsepower motors and transmissions for tractors, combines, trucks, etc. Poorly done measurements result in return work with no income.

To address your comments. You are talking about measurements used to determine a single measurand. You are measuring THE SAME THING multiple times even the individual pieces of the measurand.

What you keep referring to are different things. An average of different things is NOT a measurement. As Tim tried telling you an average of 6′ and 7′ boards is not a measurement! The average does not exist.

Tmax and Tmin at a station are measurements. Tavg is not a measurement. It is a statistical parameter called μ (mean) and it has a variance.

(A = 1/2bh) is a measurement made up of two individual measurements OF THE SAME THING. You don’t take a “b” measurement from one triangle and an “h” measurement from another, different triangle and find the area of either triangle. You can use the definition of an average and claim it is a function, but it IS NOT a measurement function; it is a math function.

Reply to  Jim Gorman
April 12, 2023 2:22 pm

Poorly done measurements results in return work with no income.

Good for you. But you can’t put everything into the same box. To use one of the many cliches that keep cropping up, not everything can be hammered in just because all you’ve got is a hammer. What is possible in a mechanical workshop will not work in a different profession.

You are measuring THE SAME THING multiple times even the individual pieces of the measurand.

So you have been saying for the last two years. But you still won’t define what you mean by THE SAME THING.

What you keep referring to are different things.”

Which things? The argument keeps drifting.

An average of different things is NOT a measurement.

Then stop insisting it be treated as one. You keep wanting to have it both ways, insisting the global average has to be subject to all the rules used to measure engine parts, but then insisting it’s not a measurement at all.

As Tim tried telling you an average of 6′ and 7′ boards is not a measurement! The average does not exist.

Yes, he’s told me that ad nauseam, and I keep telling him I disagree. For some reason you think just endlessly repeating it will somehow prove your point.

And now you have the extra problem of persuading me that maximum temperatures taken on different days are all “the same thing” and just aspects of the same average maximum temperature, but that two boards are completely different things, and can never be seen as random variations about the average length of a board.

Does any day in TN1900 have the actual average monthly maximum value? Does that mean it doesn’t exist, and if it doesn’t exist does that mean there is no use in measuring it?

Tmax and Tmin at a station are measurements.

Measurements of what? In the TN1900 example, you are not treating them as measurements of the actual temperature on the day, but as measurements (with random error) of the “true” average May maximum daily temperature.

Tavg is not a measurement.

The question is, is it a measurand? It’s a function of two things you have measured. What makes you think you cannot do what GUM 4.1.1 says

In most cases, a measurand Y is not measured directly, but is determined from N other quantities X1, X2, …, XN through a functional relationship f

In this case, X1 = TMax, X2 = TMin, and f is the function (X1 + X2) / 2.

Why is that any less of a measurement than when f is the average of 22 TMax’s?

It is a statistical parameter called μ (mean) and it has a variance.

If it’s a parameter what is the population, and how does it have a variance? A population mean doesn’t have a variance, variance is a parameter.

(A = 1/2bh) is a measurement made up of two individual measurements OF THE SAME THING.

Which wasn’t the question I was asking. I was asking about your statement

Tmax and Tmin are two different things from two different distributions. Their average DOES NOT result in a measurement of either.

Do you think that TMax and TMin for a specific day are not individual measurements of the same thing? How is that different from the breadth and height of a specific triangle being measurements of the same thing?

You don’t take a “b” measurements from one triangle and an “h” measurement from another different triangle and find an area of either triangle.

But I’m not doing that. I’m taking max and min values from the same day, not from random different days. Or possibly I’m measuring the average maximum and minimum values for a specific month. But in either case they are being taken from the same thing – temperatures at a specific station over a specific unit of time.

You can use the definition of an average and claim it is a function, but it IS NOT a measurement function it is a math function.

Go on then. Explain to me how you can tell the difference between a measurement function and a non-measurement function.

I suspect the answer will be that anything you don’t like will be the evil non-measurement type and only things you do like will be the true measurement functions.

Reply to  Bellman
April 12, 2023 1:20 pm

That’s propagating the measurements of two different things.”

But you do NOT average the height and width together to come up with some hokey AVERAGE.

Both height and width are measured in units of length, let’s use feet.

Barrel 1 has a height of 60′ and a width of 40′. Their average is 100/2 = 50′.

Barrel 2 has a height of 70′ and a width of 30′. Their average is 100/2 = 50′.

Barrel 3 (actually a pipe) has a height of 99′ and a width of 1′. Their average is 100/2 = 50′

Does that average of 50′ tell you *anything* about the barrels?

Add ’em all up and you get 300′. Divide by 6 (the number of elements) and you get 50′ as an average.

Now, make those units degK, or lbs, or anything you want. Does the average tell you anything?

Why do you think it tells you something when you are using “degrees”?

When you have different things there is no true value. And if you don’t have a true value then the average tells you nothing useful. Just like the “global average temperature” tells you nothing useful.

Statistical descriptors are *not* the real world no matter how much you wish it were so.

Reply to  Tim Gorman
April 12, 2023 2:52 pm

But you do NOT average the height and width together to come up with some hokey AVERAGE.

There go those goalposts again. I said nothing about averaging height and width. I was asking why Jim thought it mattered that “Tmax and Tmin are two different things from two different distributions. Their average DOES NOT result in a measurement of either.

Why is that a problem with an average but not with a product?

Both height and width are measured in units of length, let’s use feet.

Why use feet? Because it used to be traditional?

Barrel 1 has a height of 60′ and a width of 40′. Their average is 100/2 = 50′.

Are these 2-dimensional barrels? And why on earth do you want to know that average?

Does that average of 50′ tell you *anything* about the barrels?

That’s the question I was asking. I mean yes, it’s going to tell you something, but it’s not very useful.

Add ’em all up and you get 300′. Divide by 6 (the number of elements) and you get 50′ as an average.

Gosh, the average of three things with the same value has the same average. What a coincidence.

Now, make those units degK, or lbs, or anything you want.

What’s a degK?

Why do you think it tells you something when you are using “degrees”?

What particular angles of the barrels are you measuring? It obviously wouldn’t make sense to measure the width and height in °C.

When you have different things there is no true value.

You ignored my second example of combining distance and time. Are they different things, and does that mean velocity is not a true value?

Again, a definition of “different things” would be a help. Are two different lengths of wooden boards “different things”? Are two different maximum temperatures measured on different days, but at the same place using the same instrument, different things?

And if you don’t have a true value then the average tells you nothing useful.

Which must be a shock to all the people who make use of these untrue values all the time.

Statistical descriptors are *not* the real world no matter how much you wish it were so.

Of course they are not the real world, but they are descriptors of the real world. Barrel 1 has an area of 2400 square feet. That’s a descriptor, not the real world. But knowing it might be useful.

Reply to  Bellman
April 13, 2023 4:25 pm

There go those goalposts again.”

Got caught with your pants down again, didn’t you! Now you are trying to wriggle your way out of it — TROLL!

Reply to  Tim Gorman
April 13, 2023 4:38 pm

Me: “Height and width are two different things from two different distributions, and their product DOES NOT result in a measurement of either.”

TG: “But you do NOT average the height and width together to come up with some hokey AVERAGE.

Me: I didn’t say you average them.

TG: Stop wriggling, you were wrong, TROLL.

Reply to  Bellman
April 13, 2023 4:16 am

It isn’t up to me to accept it. TN1900 lists the things necessary to make certain assumptions. It is up to you to do the same.

Let’s list some of them.

  • The daily maximum temperature τ in the month of May, 2012, in this Stevenson shelter, may be defined as the mean of the thirty-one true daily maxima of that month in that shelter.
  • This so-called measurement error model (Freedman et al., 2007) may be specialized further by assuming that E1, …, Em are modeled as independent random variables with the same Gaussian distribution with mean 0 and standard deviation σ.
  • The assumption of Gaussian shape may be evaluated using a statistical test. For example, in this case the test suggested by Anderson and Darling (1952)
  • Adoption of this model still does not imply that τ should be estimated by the average of the observations — some additional criterion is needed. In this case, several well-known and widely used criteria do lead to the average as “optimal” choice in one sense or another: these include maximum likelihood, some forms of Bayesian estimation, and minimum mean squared error.
  • One potential source of uncertainty is model selection: in fact, and as already mentioned, a model that allows for temporal correlations between the observations may very well afford a more faithful representation of the variability in the data than the model above. However, with as few observations as are available in this case, it would be difficult to justify adopting such a model.

As I have pointed out before, Tmax and Tmin are highly correlated and that contamination destroys any further assumptions that the Law of Large Numbers and Central Limit Theorem conclusions apply to further averages.

In order to “uncorrelate” those random variables they must be transformed. Here is a lesson about transforming correlated random variables. Not a simple task.

3.7: Transformations of Random Variables – Statistics LibreTexts

Have you done any of these to generate the proofs necessary to justify “averaging” separate temperature stations?

Perhaps a solution is to make one large random variable consisting of all Tmax and another for Tmin, then use the TN1900 methods for calculating the mean and expanded experimental uncertainty. Although the above criteria need to be addressed.
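For anyone wondering what the “uncorrelating” transformation mentioned above actually involves, here is a minimal Python sketch of one standard whitening transform applied to simulated, correlated Tmax/Tmin-like data. The numbers are invented; this only illustrates the mechanics, not any particular station:

import numpy as np

rng = np.random.default_rng(0)

# simulate correlated daily Tmax and Tmin (hypothetical values)
mean = np.array([25.0, 12.0])
cov = np.array([[16.0, 10.0],
                [10.0, 9.0]])          # strong positive correlation
x = rng.multivariate_normal(mean, cov, size=1000)

# whitening: y = L^-1 (x - mean), where the estimated covariance = L L^T
L = np.linalg.cholesky(np.cov(x, rowvar=False))
y = np.linalg.solve(L, (x - x.mean(axis=0)).T).T

print(np.corrcoef(x, rowvar=False)[0, 1])  # ~0.8, correlated
print(np.corrcoef(y, rowvar=False)[0, 1])  # ~0, decorrelated

The Cholesky factor of the covariance matrix maps the correlated pair onto an uncorrelated pair; with real station data the covariance would of course have to be estimated from the record itself.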

Reply to  Jim Gorman
April 13, 2023 10:05 am

It isn’t up to me to accept it. TN1900 lists the things necessary to make certain assumptions. It is up to you to do the same.

Sorry but I’d sooner try thinking for myself. I don’t think TN1900 is meant to be a prescriptive set of rules, it’s intended to help you understand how to make your own evaluations. It’s intended as a guide to evaluating and expressing measurements. And it’s intended for NIST engineers and scientists, not for statisticians. It suggests that if you are using observation models (e.g. example 2), you should work with a statistician.

If the measurement model is an observation equation: use an appropriate statistical method, ideally selected and applied in collaboration with a statistician

It’s a guide, not a list of commandments.

Reply to  Jim Gorman
April 13, 2023 10:31 am

Let’s list some of them.

This will take some time.

The daily maximum temperature … maybe defined as the mean of the thirty-one true daily maxima of that month in that shelter.

Which is a problem because that’s not what the uncertainty is telling you. If the mean is defined as the mean of the 31 days, where is the uncertainty if you have all 31 days? The assumption of the example is not that the mean is defined as the mean of 31 days, it’s that 31 days are a random sample from random temperatures fluctuating around the true mean.

This so-called measurement error model … by assuming that E1, …, E are modeled independent random variables with the same Gaussian distribution with mean 0 and standard deviation σ.

Which is a bad model in this case. The distributions are not Gaussian and certainly not independent. How much of an effect that has on the modeled uncertainty is another question.

The assumption of Gaussian shape may be evaluated using a statistical test. For example, in this case the test suggested by Anderson and Darling (1952)

It continues: “However, because the dataset is quite small, the test may have little power to detect a violation of the assumption.”

There is just not enough data to rule out the possibility that the distribution is Gaussian. But from my own observations, I’d say there was good evidence that in general maximum daily temperatures in May are not Gaussian.
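For what it’s worth, the mechanics of the Anderson–Darling check mentioned in TN1900 are only a few lines; a minimal Python sketch with simulated daily maxima (the values here are invented) also shows why ~22 values give the test little power:

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
daily_tmax = rng.normal(loc=25.6, scale=4.1, size=22)   # hypothetical month

result = stats.anderson(daily_tmax, dist='norm')
print(result.statistic)         # compare against the critical values below
print(result.critical_values)   # at the 15, 10, 5, 2.5 and 1 % levels

With so few points the statistic rarely exceeds the critical values even for moderately non-Gaussian data, which is exactly the point TN1900 makes about the test’s limited power.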

Adoption of this model still does not imply that τ should be estimated by the average of the observations … In this case, several well-known and widely used criteria do lead to the average as “optimal” choice in one sense or another

All very well, but I suspect you can just take it as read that maximum likelihood makes the sample mean the best estimate of the true mean. This might be more useful when applied to averaging multiple stations, where you don’t necessarily assume the observed average is the true average – hence the need for adjustments.

One potential source of uncertainty is model selection: in fact, and as already mentioned, a model that allows for temporal correlations between the observations may very well afford a more faithful representation of the variability in the data than the model above.

Yes. See my point about lack of independence above.

I’m surprised the example doesn’t suggest Monte-Carlo methods in this case. It seems to be the preferred method in other examples.

Reply to  Bellman
April 15, 2023 4:59 am

If the mean is defined as the mean of the 31 days, where is the uncertainty if you have all 31 days?”

There is none if you do as you usually do and just assume that all measurement uncertainty is random, Gaussian and cancels. That seems to fix everything every time as far as you are concerned.

“The assumption of the example is not that the mean is defined as the mean of 31 days, it’s that 31 days are a random sample from random temperatures fluctuating around the true mean.”

So what? This example is not meant to be a real world analysis. It is an EXAMPLE of how to treat experimental results. Why do you and bdgwx continue to ignore the assumptions Possolo set out?

“I’m surprised the example doesn’t suggest Monte-Carlo methods in this case. It seems to be the preferred method in other examples.”

I don’t think you truly understand what Monte Carlo techniques can tell you! Monte Carlo techniques are *NOT* a substitute for real world experiments with real world measured values. Get out of your statistical box for once.

Reply to  Tim Gorman
April 15, 2023 1:03 pm

There is none if you do as you usually do and just assume that all measurement uncertainty is random, Gaussian and cancels.

As I’ve said before, the only uncertainty would be measurement uncertainty. Apologies for not dotting every i.

But the point was to compare the idea of a mean as defined by the 31 days, and what the technique of exercise 2 does, which suggests there would be a lot more uncertainty than just measurement uncertainty. SD / √31.

So what? This example is not meant to be a real world analysis.

I wish you’d tell Jim that.

Why do you and bdgwx continue to ignore the assumptions Possolo set out?

I’m not ignoring any assumptions, but as you say it’s only meant to be an example.

Monte Carlo techniques are *NOT* a substitute for real world experiments with real world measured values.

They are a substitute for the approximations of the statistics you fail to understand. They are particularly useful when the approximations used in the standard equations cannot be assumed to hold. See the Simple Guide you keep talking about:

When the results produced by the NUM using Gauss’s formula and the Monte Carlo method disagree substantially, then the quality of the approximations underlying Equation (10) in the GUM or Equation (A-3) in TN1297 is questionable, and the Monte Carlo results should be preferred.
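As a rough illustration of that comparison (nothing to do with the NUM itself, and using an invented nonlinear measurand), the difference between Gauss’s formula and a Monte Carlo evaluation is easy to see in a few lines of Python:

import numpy as np

rng = np.random.default_rng(2)

x, u_x = 1.0, 0.5            # hypothetical input value and standard uncertainty

# Gauss's formula (GUM Eq. 10): u(y) = |dy/dx| * u(x) for y = exp(x)
u_gauss = np.exp(x) * u_x

# Monte Carlo: draw the input, evaluate the function, look at the spread
y = np.exp(rng.normal(x, u_x, 200_000))
u_mc = y.std()

print(round(u_gauss, 3), round(u_mc, 3))

Here the linear approximation gives about 1.36 while the Monte Carlo spread is about 1.64; per the Simple Guide passage quoted above, when the two disagree substantially the Monte Carlo result is the one to prefer.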

Reply to  Bellman
April 15, 2023 1:38 pm

I said: “Monte Carlo techniques are *NOT* a substitute for real world experiments with real world measured values.”

And you go off on a tangent about statistical approximations.

They are a substitute for the approximations of the statistics you fail to understand. They are particularly useful when the approximations used in the standard equations can not be assumed”

unfreakingbelievable.

Reply to  Tim Gorman
April 15, 2023 4:57 pm

Go on then. Explain how you use real world experiments with real world measured values to estimate the uncertainty in the daily maximum temperatures of a single station.

Reply to  Bellman
April 16, 2023 7:05 pm

From the GUM:

4.2.3
Note 1
“””””… The difference between s²(q̄) and σ²(q̄) must be considered when one constructs confidence intervals (see 6.2.2). In this case, if the probability distribution of q is a normal distribution (see 4.3.4), the difference is taken into account through the t-distribution (see G.3.2). “””””

You’ll see this referenced in TN 1900. It gives you the sections of the GUM from which its conclusions are made.

You need to reconcile in your mind the difference between:

1) measurement uncertainty – measuring the same thing, multiple times, with the same thing, vs,

2) experimental uncertainty – determined by multiple trials under repeatable conditions.

NIST, in TN1900, declared the measurand to be the average Tmax temperature during a MONTH.

That is not the same as declaring and propagating the measurement error of each reading made throughout the month.

NOAA recommends averaging Tmax and Tmin for the month separately. That is what TN1900 did. It is what everyone does when examining temperatures, even yourself. THAT MAKES THE MEASURAND THE MONTHLY AVERAGE.

Nobody that I see ever uses daily temperatures as the base being examined. Therefore, daily temps are not the measurand being sought. Both you and bdgwx have said that the “functional description” of the measurand is the monthly mean of daily temperatures. Don’t change the goalposts now.

NIST couldn’t have provided a better example than TN1900 for determining the mean AND the expanded experimental uncertainty of a monthly temperature.

The GUM even addresses this as TN 1900 illustrates. The total expanded experimental uncertainty is the appropriate value that shows the dispersion of values surrounding the measurand.

As I have pointed out many times, ALL AVERAGES MUST INCLUDE A VARIANCE (UNCERTAINTY) TO BE AN APPROPRIATE STATISTICAL DESCRIPTION OF THE UNDERLYING DISTRIBUTION.

Those variances don’t disappear after the first average except in climate science.

The fact that the expanded experimental uncertainty dwarfs the measurement uncertainty should be good news to you because it removes the arguments about the measurement uncertainty of each measurement. It doesn’t remove measurement uncertainty, but removes it from consideration by making it negligible.

Reply to  Jim Gorman
April 17, 2023 2:15 pm

The fact that the expanded experimental uncertainty dwarfs the measurement uncertainty should be good news to you because it removes the arguments about the measurement uncertainty of each measurement. It doesn’t remove measurement uncertainty, but removes it from consideration by making it negligible.

You mean that thing I’ve been trying to tell you and yours the past two years, and kept being called all sorts of names because I was using the stated values rather than propagating the measurement uncertainties? Yes it’s good news that you finally accept that sampling is usually a much bigger uncertainty than the measurements. It would be even better if I thought you would stick to it, rather than claiming next week you only agreed to it “in order to hoist me with my own petard”.

Reply to  Bellman
April 18, 2023 12:59 pm

The issue with this, and I disagree with Jim that you can ignore measurement uncertainty, is that your uncertainty *MUST* be totally random and cancel. If there is *any* systematic uncertainty then ALL the stated values will be offset from the true value and any population average you calculate will be offset by at least the same amount and probably even more. If that systematic uncertainty is calibration drift then it will add to any trend you find and you’ll be unable to subtract that out because it is unknowable.

You keep saying you don’t assume all measurement uncertainty is random, Gaussian, and cancels but you DO IT EVERY TIME. If you would actually read TN1900 for understanding you would see that Possolo ASSUMED no systematic uncertainty. In other words, all measurement uncertainty in the example would be random, Gaussian, and cancel! Over the short time period of a month that might, and I emphasize MIGHT, be justifiable. It is *not* justifiable over a longer period.

I already posted what Taylor has to say about this and, again as usual, you just ignore it and go on your merry way.

Taylor, Pg 110. “If there are appreciable systematic errors, then σ_x̄ gives the random component of the uncertainty in our best estimate for x:

ẟx_ran = σ_x̄

If you have some way to estimate the systematic component ẟx_sys, a reasonable (but not rigorously justified) expression for the total uncertainty is the quadratic sum of ẟx_ran and ẟx_sys.”
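As a minimal numeric sketch of that quadrature sum (the component values here are invented purely for illustration):

import math

dx_ran = 0.2   # random component, e.g. the standard deviation of the mean (hypothetical)
dx_sys = 0.5   # estimated systematic component (hypothetical)

dx_total = math.hypot(dx_ran, dx_sys)   # quadratic sum per Taylor
print(round(dx_total, 2))               # 0.54, dominated by the systematic part

The point of the arithmetic is that the systematic component does not shrink with averaging, so it dominates the total whenever it is comparable to or larger than the random part.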

You ALWAYS want to assume that ẟx_sys is zero. EVERY SINGLE TIME. That way you can use the variation of the stated values as the uncertainty of the mean instead of having to do the hard work of propagating ALL of the total uncertainty!

You can deny it till the cows come home but it is as obvious as the nose on your face that you always assume ẟx_sys = 0.

Reply to  Tim Gorman
April 18, 2023 4:20 pm

If there is *any* systematic uncertainty then ALL the stated values will be offset from the true value and any population average you calculate will be offset by at least the same amount and probably even more.

If you ever calmed down and read what I say, you’d know I agree with you on this. Systematic uncertainty will by definition remain after averaging.

If you would actually read TN1900 for understanding you would see that Possolo ASSUMED no systematic uncertainty.

Funny that. Just like I did in the other thread, and had all manner of abuse thrown at me.

Rest of rant ignored.

Reply to  Jim Gorman
April 17, 2023 2:17 pm

As I have pointed out many times, ALL AVERAGES MUST INCLUDE A VARIANCE (UNCERTAINTY) TO BE AN APPROPRIATE STATISTICAL DESCRIPTION OF THE UNDERLYING DISTRIBUTION.”

Take your finger off the shift key and say once and for all which variance you are talking about – the variance of the measurements or the variance of the mean? Then explain why you want the variance stated rather than the standard deviation.

Reply to  Bellman
April 18, 2023 1:01 pm

The variance of the sample means is *NOT* the variance of the population. They are two entirely different things. The variance of the sample means tells you how close you are to the population mean. The variance of the population tells you what to expect from the distribution. THEY ARE NOT THE SAME!
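A quick simulation makes the distinction plain (the population here is invented; only the relationship between the two variances matters):

import numpy as np

rng = np.random.default_rng(3)
population = rng.normal(15.0, 4.0, 1_000_000)   # population variance ~ 16
n = 30
sample_means = rng.choice(population, size=(20_000, n)).mean(axis=1)

print(population.var())     # ~ 16.0, the variance of the distribution itself
print(sample_means.var())   # ~ 0.53, i.e. ~ 16 / 30, the variance of the means

One number tells you the spread to expect in the data; the other tells you how tightly a sample mean pins down the population mean.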

Reply to  Tim Gorman
April 18, 2023 3:53 pm

Finally. Yes that’s the point. Now all we have to do is figure out what Jim means when he keeps going on about stating the variance.

Reply to  Jim Gorman
April 13, 2023 10:36 am

As I have pointed out before, Tmax and Tmin are highly correlated

You keep repeating it – never explain why you think it’s relevant.

What measurements and distributions are you talking about?

If you just want TAvg for a given day, the fact that they are correlated is irrelevant. In fact it’s good that they are correlated, as it implies TAvg is a useful value.

If you are talking about the average of TAvg over a month, or any other average, it’s irrelevant because you are averaging TAvg, not TMax and TMin. TAvg is just a single value which is derived from two correlated values.

Reply to  Jim Gorman
April 12, 2023 9:15 am

As TG has pointed out, using average temperature WITHOUT adequately propagating variance leads one to conclude that the climate in the Sahara desert is the same as in Dubuque, Iowa since they can have the same average yet their variance is drastically different.

But that’s nothing to do with the measurement uncertainty.

Quoting the average maximum temperature for a given month along with the expanded uncertainty interval won’t tell you the variation in the individual daily maximums. Nor will it tell you anything about the average daily temperature, or the diurnal range, or how much the month warmed up, or how humid it was.

Reply to  Bellman
April 12, 2023 1:22 pm

If the point is to determine CLIMATE then the average tells you nothing you can use to determine climate! It doesn’t matter what the measurement uncertainties are!

Reply to  Tim Gorman
April 12, 2023 3:03 pm

Of course the average temperature does not determine climate (or CLIMATE). You would need a whole book to determine all the complexity of a climate. It’s one indicator of one small part of the climate.

It doesn’t matter what the measurement uncertainties are!

Then why do you keep arguing about them?

But in this case the specific claim was about propagating variance. “using average temperature WITHOUT adequately propagating variance leads one to conclude that the climate”

So, what variances do you want to “propagate”, and how will that allow you to determine the climate?

Reply to  Bellman
April 13, 2023 4:27 pm

It’s one indicator of one small part of the climate.”

It’s not an indicator of ANYTHING having to do with the climate.

When two totally different climates can have the same mid-range temperature NO ONE will believe that mid-range value tells you anything!

Reply to  Tim Gorman
April 13, 2023 5:12 pm

You think average temperature has nothing to do with climate?

“When two totally different climates can have the same mid-range temperature NO ONE will believe that mid-range value tells you anything!”

You keep getting the logic in this backwards. It doesn’t matter that two different climates can have the same temperature, the question is if the same climate can have two different average temperatures.

It’s like arguing that if two very different computers can have the same CPU, then knowing what the CPU is will tell you NOTHING about the computer.

Reply to  Bellman
April 15, 2023 5:18 am

You think average temperature has nothing to do with climate?”

I know it doesn’t. The average temp in Las Vegas and Miami can be *exactly* the same and yet they have different climates. There is a LOT more that goes into climate than just temperature. It’s why enthalpy is the *correct* value to look at, not temperature.

“It doesn’t matter that two different climates can have the same temperature, the question is if the same climate can have two different average temperatures.”

I’m not getting *anything* backwards. Did you actually read this before you posted it? The issue is that the avg temp is not a good proxy for climate. The question is not whether two different climates can have different averages, the issue is that two different climates can have the same average. That means that you can’t use temp to identify climate consistently! If the proxy is not a consistent indicator then it is useless.

Reply to  Tim Gorman
April 15, 2023 12:48 pm

Sorry, I forget you can only see words when they are written in block capitals.

You think average temperature has NOTHING to do with climate?

The question is not whether two different climates can have different averages

My question was if two IDENTICAL climates can have different averages.

That means that you can’t use temp to identify climate consistently!

And if I’d claimed you could, that would be a fair argument.

If the proxy is not a consistent indicator then it is useless.

You really need to get out of your binary box. NOT PERFECT does not equal USELESS.

Reply to  Bellman
April 15, 2023 1:21 pm

My question was if two IDENTICAL climates can have different averages.”

Of course they can! The three main climates of boreal, temperate, and sub-tropical allow for huge ranges of average temperature while still being within the climate classifications.

It’s the same with plant hardiness zone classifications. Each zone has a wide average temperature range.

As usual you are doing nothing but demonstrating your complete lack of awareness of the real world. Do you *ever* go outside or just remain in your basement all the time?

Reply to  Tim Gorman
April 15, 2023 1:42 pm

Of course they can! The three main climates of boreal, temperate, and sub-tropical allow for huge ranges of average temperature while still being within the climate classifications.

You’re talking about classes of climate there. Two places can be in the same climate classification, but not have the same climate.

Do you *ever* go outside or just remain in your basement all the time?

Could you for once try to make an argument without these childish insults.

Reply to  Bellman
April 16, 2023 8:26 am

You’re talking about classes of climate there. Two places can be in the same climate classification, but not have the same climate.”

More of your cognitive dissonance. Two locations are subtropical but one doesn’t have subtropical climate and the other one does. Unfreakingbelievable.

You’ll literally say anything, won’t you?

Could you for once try to make an argument without these childish insults.”

You *deserve* a childish insult to a childish comment that two locations with subtropical climate classification can have different climates. Subtropical *is* the climate in that case.

Reply to  Tim Gorman
April 16, 2023 2:23 pm

…cognitive dissonance…Unfreakingbelievable…childish insult to a childish comment…

Do you really not see the difference between two regions belonging to the same classification of climate and two regions having identical climates?

The category of subtropical climates covers a range of different subcategories and even amongst subcategories the climate will vary.

[embedded image: world map of climate classifications, with the subtropical regions shown in yellow]

Do you think everywhere in yellow has an identical climate?

Most subtropical climates fall into two basic types: humid subtropical (Koppen climate Cfa), where rainfall is often concentrated in the warmest months, for example Southeast China and the Southeastern United States, and dry summer or Mediterranean climate (Koppen climate Csa/Csb), where seasonal rainfall is concentrated in the cooler months, such as the Mediterranean Basin or Southern California.

https://en.wikipedia.org/wiki/Subtropics

The subtropical climate in Spain is not the same as the subtropical climate in Florida. And there are huge differences in climate between different regions of Spain.

climate classification, the formalization of systems that recognize, clarify, and simplify climatic similarities and differences between geographic areas in order to enhance the scientific understanding of climates. Such classification schemes rely on efforts that sort and group vast amounts of environmental data to uncover patterns between interacting climatic processes. All such classifications are limited since no two areas are subject to the same physical or biological forces in exactly the same way. The creation of an individual climate scheme follows either a genetic or an empirical approach.

https://www.britannica.com/topic/classification-1703397

Reply to  Bellman
April 18, 2023 12:42 pm

Do you really not see the difference between two regions belonging to the same classification of climate and two regions having identical climates?”

I think *I* was the one that pointed this out to you. Miami and Las Vegas can have the same mid-range temperature but vastly different climates! And, as usual, you tried to argue against that assertion.

Now here you are arguing for it!

That’s called COGNITIVE DISSONANCE! Or, probably better, being a TROLL!

Reply to  Tim Gorman
April 18, 2023 3:49 pm

I think *I* was the one that pointed this out to you

Yet you are claiming that all sub tropical climates are the same – as when you said:

Two locations are subtropical but one doesn’t have subtropical climate and the other one does. Unfreakingbelievable.

Miami and Las Vegas can have the same mid-range temperature but vastly different climates!

And you’ve forgotten my point, which is that two locations can’t have identical climates.

BTW, Miami and Las Vegas do not have the same average temperatures. Miami is about 4-5°C warmer than Las Vegas.

And, as usual, you tried to argue against that assertion.

You’re either lying or suffering from severe memory loss. I’ve never argued against the idea the two locations can have the same temperature but different climates.

Reply to  Bellman
April 19, 2023 7:58 am

Yet you are claiming that all sub tropical climates are the same – as when you said:”

Both are sub-tropical. *YOU* are the one trying to claim that they aren’t.

If you don’t like this definition of climate then provide your own!

Reply to  Tim Gorman
April 19, 2023 2:37 pm

No, I’m saying they are all of the sub-tropical category. I’m saying that does not mean they have identical climates.

“If you don’t like this definition of climate then provide your own!”

Climate (from Ancient Greek κλίμα ‘inclination’) is commonly defined as the weather averaged over a long period. The standard averaging period is 30 years, but other periods may be used depending on the purpose. Climate also includes statistics other than the average, such as the magnitudes of day-to-day or year-to-year variations.

https://en.wikipedia.org/wiki/Climate

Reply to  Bellman
April 13, 2023 4:49 am

But that’s nothing to do with the measurement uncertainty. 

So you think TN1900 is in error? Why do you think the variance of the experimental results is used to determine the values of the mean that can be expected?

Reply to  Jim Gorman
April 13, 2023 5:34 am

So you think TN1900 is in error?

I said nothing about the holy text. I was responding to your claim that

using average temperature WITHOUT adequately propagating variance leads one to conclude that the climate in the Sahara desert is the same as in Dubuque, Iowa since they can have the same average yet their variance is drastically different.

I stand by my assertion. Whatever you think you are saying there has nothing to do with measurement uncertainty.

Why do you think the variance of the experimental results is used to determine the values of the mean that can be expected?

It’s standard statistics. You are taking a sample from a random distribution (in this case the assumed random distribution of all possible maximum temperatures in that month). Your sample average (in this case 22 assumed independent daily values) has the best likelihood of being the true mean value. But there is uncertainty because of the random nature of the samples. In another universe you could have a different mean purely because each day is assumed to give a random maximum value from the population.

In order to assess the extent of that uncertainty you use the standard deviation (not the variance) of the population distribution, and divide it by the square root of the sample size. However, you don’t know the standard deviation of the population, so you have to estimate it from the standard deviation of your sample – hence you need to know the standard deviation (not the variance) of your “experimental results”.

Now, what I think you are trying to say in the first part, is that if everything else is equal, you could look at a quoted uncertainty for a particular monthly measurement, and reverse engineer it to figure out what the variance was in the data. But it seems a pointless exercise when you could just look at the variance in the data.

Using the measurement uncertainty as a proxy for variation in the data is problematic because there may be many factors that lead to the uncertainty estimate. For one thing, it depends on the number of days you have data for.

Reply to  Bellman
April 13, 2023 7:54 am

“””””Quoting the average maximum temperature for a given month along with the expanded uncertainty interval won’t tell you the variation in the individual daily maximums”””””

It doesn’t tell you the variance directly. Remember you calculate the variance as part of the procedure. Whether it was quoted directly or not, the SD was stated. An SD of ±4.1 is a variance of 16.8, a pretty large number.

That is not what an expanded experimental uncertainty (EEU) is for. An EEU is a statement that you are confident, at a given percentage, that additional repeats of the experiment will fall within the range specified.

In TN1900 the range at 95% coverage is (23.8 °C, 27.4 °C), or 25.6 ±1.8 °C. If you read this closely, it doesn’t say that measurement uncertainty doesn’t exist, but that it is negligible compared to the variation in the repeated measurements.
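The arithmetic behind that interval is short enough to show. A sketch using the numbers quoted in this thread (22 daily values, a mean of 25.6 °C, s = 4.1 °C) and a TN1900-style Student-t coverage factor:

import math
from scipy import stats

n, mean, s = 22, 25.6, 4.1

u = s / math.sqrt(n)               # standard uncertainty of the monthly mean
k = stats.t.ppf(0.975, df=n - 1)   # coverage factor for ~95 %, about 2.08
U = k * u                          # expanded uncertainty, about 1.8 C

print(f"{mean} +/- {U:.1f} C  ->  ({mean - U:.1f} C, {mean + U:.1f} C)")

Treat this as a sketch of the recipe, not a substitute for the worked example itself.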

The lesson you should take from this is that averages have variance and it is important in assessing the range of values to be expected.

Reply to  Jim Gorman
April 13, 2023 9:45 am

It doesn’t tell you the variance directly.

Yes, that’s my point. And if you want to know the variance for some reason, you want to look at that, and not try to guess it from the uncertainty of the average.

Using the uncertainty to deduce the standard deviation of the data is very indirect. You need to know how the uncertainty was calculated, you need to know the sample size, and you need to know what other uncertainties have been included in the stated figure.

An SD of ±4.1 is a variance of 16.8, a pretty large number.

That’s 16.8 square degrees. I’ve no idea how you would determine that was a large number. How big do you expect the variance to be? How big is a square degree?

We went through this before, but I just don’t get why you are interested in variance rather than standard deviation. Variance is just a means to an end. It’s not a figure that gives you any meaningful information. There’s a reason why variance is written as σ².

if you read this closely, it doesn’t say that measurement uncertainty doesn’t exist, but that it is negligible compared to the variation in the repeated measurement.

Yes, it says what I’ve been trying to tell you and your kin for the last two years. When you are sampling from variable data, the measurement uncertainty is usually negligible. You usually assume that any measurements you make have a smaller uncertainty than the range of things you are measuring.

The lesson you should take from this is that averages have variance…

Averages don’t have variances is the lesson I’d prefer to take. A distribution has a variance. The sampling distribution of a mean has a variance. But the average doesn’t.

“…and it is important in assessing the range of values to be expected.”

That depends on the purpose of the measurement. As the book says, all measurements are intended to support decision-making. And you need to tailor what and how you measure to support the decision being made.

Looking at the average maximum temperature for one May at one station, as in example 2, what is the point of the measurement? What decision-making does it support?

If the point is to determine the most extreme temperatures you are likely to see, then you want to know the standard deviation or prediction interval of the sample. If the point is to tell you whether on average that location is warmer or cooler than another location, you want to know how certain the average is.

bdgwx
Reply to  Jim Gorman
April 12, 2023 9:53 am

JG: The GUM does not address how to propagate measurement uncertainty of different things.”

Absurd. Every single example in Annex H (Examples) concerns the assessment/propagation of measurement uncertainty of different things. There’s even an example where different temperature measurements are used as inputs to a measurand.

JG: Like it or not, averages like Tavg or a monthly average of Tavg is NOT a measurement.

That is an odd statement considering Possolo calls the monthly average temperature a measurand in NIST TN1900 E2. And JCGM 100:2008 B.2.9 defines a measurand as “particular quantity subject to measurement”

JG: If you would spend less time cherry picking equations meant for measurements and really learn about and appreciate what a measurement actually is and how to treat it, you would realize that playing with statistical parameters is NOT a substitute for actually dealing with measurements.

And yet the GUM and NIST TN1900 use statistical techniques when dealing with measurements prolifically.

Reply to  bdgwx
April 13, 2023 9:45 am

“””””Absurd. Every single example in Annex H on Examples is regarding the assessment/propagation of measurement uncertainty of different things. There’s even an example where different temperature measurements are used as inputs a measurand.”””””

You are so full of crap. You can’t even cherry pick properly!

H.1.2 Mathematical model -> “arithmetic mean of n = 5 independent repeated observations.” This example compares an end gauge block with a standard in order to determine: “The length of a nominally 50 mm end gauge is determined by comparing it with a known standard of the same nominal length. The direct output of the comparison of the two end gauges is the difference d in their lengths:”

What do you think machinists do everyday? You would use this same procedure in determining a correction factor when calibrating a device.

The whole purpose is to determine the difference between two things, i.e., that is the measurand.

H.2.2 Obtaining simultaneous resistance and reactance

“””””Consider that five independent sets of simultaneous observations of the three input quantities V, I, and φ are obtained under similar conditions”””””

“””””H.3.1 The measurement problem
A thermometer is calibrated by comparing n = 11 temperature readings tk of the thermometer, each having negligible uncertainty, with corresponding known reference temperatures tR, k in the temperature range 21 °C to 27 °C to obtain the corrections bk = tR, k − tk to the readings. The measured corrections bk and measured temperatures tk are the input quantities of the evaluation. A linear calibration curve is fitted to the measured corrections and temperatures by the method of least squares. “””””

These are not examples of measuring different things. I didn’t even bother to look at the remaining ones!

They are examples of experimental uncertainty!

Have you ever taken a high-level lab course and done repeatable experiments to find an expected range of results? That is what makes the world go around.

bdgwx
Reply to  Jim Gorman
April 13, 2023 6:54 pm

If you think V, I, and φ are the same thing then you are clearly working with a different definition of “same thing” as compared to everyone else.

Reply to  bdgwx
April 15, 2023 4:45 am

Do you have a cognitive impairment?

How did you get that conclusion from:

Consider that five independent sets of simultaneous observations of the three input quantities V, I, and φ are obtained under similar conditions””

This doesn’t say that V, I, and  φ are the “same thing”.

It speaks to V1 to V5, I1 to I5, and  φ1 to  φ5.

All different values and all different from each other.

Reply to  Jim Gorman
April 15, 2023 4:42 am

Statisticians are not physical scientists. You are trying to explain something physical to a blackboard mathematician. As amply demonstrated in the recent threads, it’s impossible for them to grasp the fundamental concepts of reality.

Reply to  Jim Gorman
April 12, 2023 1:10 pm

It’s the same old meme EVERY SINGLE TIME! All measurement uncertainty is random, Gaussian, and cancels thus leaving the stated values as 100% correct. Then you can statistically analyze the stated values with no complicating factors.

Reply to  Tim Gorman
April 12, 2023 3:18 pm

Keep repeating this lie enough times and maybe somebody will believe you.

Reply to  Tim Gorman
April 12, 2023 5:45 am

The issue is less whether TN1900 is ultimately “correct” and more about following a standard procedure for determining a better mean and expanded experimental uncertainty as specified in the GUM.

Several friends on Twitter have been convinced that GAT has big statistical problems and one is better served by looking at local and regional temperatures instead.

They have been analyzing different locations around the globe and finding many, many locations with little to no warming. Almost enough to question the whole global warming scenario. So far they have found nothing where non-UHI stations show the 2–3 degrees of warming that would be needed to produce an average of 1 to 1.5 degrees of warming!

The article here at WUWT on the Japanese location showing no warming was no surprise. That had already been found. As more and more stations are found with little or no warming it will become harder and harder to keep insisting that warming is occurring and accelerating.

bdgwx
Reply to  Jim Gorman
April 12, 2023 7:13 am

JG: The issue is less whether TN1900 is ultimately “correct”

And there it is. The fact that you have to put the word correct in quotes is also telling.

Reply to  bdgwx
April 12, 2023 8:00 pm

Way to cherry pick. You would make a great liberal journalist. You have made zero effort to understand what EXPERIMENTAL uncertainty truly is.

The argument ISN’T ABOUT whether it is correct or not.

It is accepted internationally as an appropriate method for determining expected values when repeated measurements from separate trials under repeatable conditions are made. The expanded uncertainty delivers an adequate interval of acceptable values.

If you want to argue that expanded experimental uncertainty is not correct, you need to take your argument to NIST and the international body, Working Group 1 of the Joint Committee for Guides in Metrology (JCGM/WG 1).

If you want to use it to combine means and variances from separate stations, feel free. Just be prepared to justify why you think all the necessary requirements are met for independence and distribution.

bdgwx
Reply to  Jim Gorman
April 13, 2023 5:29 am

And as I keep saying I fully accept NIST TN1900 E2 and consider it correct without any reservations whatsoever.

But the gist I get from your comments is that you not only question its correctness, but you question whether 1/sqrt(N) was even used at all in the example, and the fact that temperature measurements made at different times are different things.

wh
Reply to  Nick Stokes
April 6, 2023 10:27 am

They use USCRN to correct the shit ClimDiv. There’s no way they can reproduce the same results; it’s just common sense Nick.

bdgwx
Reply to  wh
April 6, 2023 6:02 pm

Not really. USCRN (the network itself) is not used to make adjustments to nClimDiv. The nClimDiv adjustments are all done by the pairwise homogenization algorithm. Of course there is no obvious reason why PHA would exclude USCRN stations from the neighbor list. They are stations too, not unlike the other GHCN stations. But the network itself isn’t used to make the corrections. The reason why nClimDiv matches USCRN so well is because PHA is effective.

[Menne & Williams 2009]
[Vose et al. 2014]
[Hausfather et al. 2016]

Richard Greene
Reply to  bdgwx
April 6, 2023 8:43 pm

 the pairwise homogenization algorithm

I can’t take numbers seriously that are not pasteurized too.

bdgwx
Reply to  Richard Greene
April 6, 2023 8:52 pm

I’m not sure what you mean here. Did you mean not pasteurized or just pasteurized? And what is this “pasteurize” method you speak of? I’ve never heard of it and it isn’t mentioned in the Menne & Williams 2009 publication.

Reply to  Nick Stokes
April 6, 2023 12:07 pm

Only because NOAA has used the short-term USCRN data that exists to “adjust” the crappy COOP based ClimDiv data.

It’s a useless exercise, because there’s no fix for any of the crappy COOP data pre-2005, and thus trends from COOP data (USHCN/GHCN) are biased upwards.

The point I’m making is that NOAA/NCEI never reports USCRN in any public report/press release.

Richard Greene
Reply to  Anthony Watts
April 6, 2023 12:42 pm

You pointed out the poor weather station siting in 2009
You pointed out no improvement about 10 years later
NOAA obviously did not care.
So can such an organization be trusted to provide an accurate national average temperature?

Now we have USCRN and NClimDiv almost the same. As if to prove siting does not matter. But that makes no sense.

Most people would assume NOAA is using USCRN to “fix” NClimDiv

I will put on my leftist thinking cap and analyze that belief.

(1) NOAA wants to show as much warming as possible to the general public.

(2) NOAA does not want two averages that look different, because people would ask too many questions.

(3) So NOAA probably decided to choose the US average with the most warming, and then “fixed” the other average to closely match the one with the most warming.

My guess is that NClimDiv showed a faster warming rate than the rural USCRN network. Therefore, NOAA probably “fixed” USCRN to more closely match NClimDiv, rather than “fixing” NClimDiv to more closely match USCRN.

That does not sound very honest.
But NOAA employees are government bureaucrats
Mainly leftists
So should we expect honesty from them?

Nick Stokes
Reply to  Richard Greene
April 6, 2023 1:27 pm

“But NOAA employees are government bureaucrats
Mainly leftists
So should we expect honesty from them?”

Yes, they are just employees. So why should you expect skulduggery from them? Why would they lie to …? There isn’t even anything to be gained by lying.

And hundreds of them are involved. They operate under the scrutiny of FOI, Inspector-general etc. Someone would have blabbed by now.

And it would be such a useless exercise. As a managerial matter, what sense would it make to set up such an elaborate exercise as USCRN, only to throw its results away and replace them with nClimDiv? Or vice versa?

If you put on your rational thinking cap, you will see that you have a relatively small area covered by two systems with far more stations than are needed. They are going to get a very accurate average, and so the two systems will agree with each other, because they are accurately measuring the same thing.


Reply to  Nick Stokes
April 6, 2023 3:55 pm

“But NOAA employees are government bureaucrats
Mainly leftists
So should we expect honesty from them?”

Because commencing in 1993, under the watchful eye of Neville Nicholls, using the excuse of data homogenisation, Australian scientists within CSIRO and the Bureau of Meteorology, most recently Blair Trewin, have been fiddling the data to find warming that does not exist. See my latest series of reports at https://www.bomwatch.com.au/data-homogenisation/.

It is now 2023, so these people have been cheating for 30 years.

Trust CSIRO, not likely!

All the best,

Bill Johnston

Richard Greene
Reply to  Bill Johnston
April 6, 2023 7:30 pm

Mr. Johnston studies Australian temperature data in detail, and does a very good job, so I appreciate reading his opinion on government bureaucrats.

Truth is not a leftist value.

bdgwx
Reply to  Bill Johnston
April 6, 2023 7:30 pm

Bill Johnston: “excuse of data homogenisation”

Stations moved. Time-of-observations changed. Instrument packages changed. I think it is a stretch to dismiss those as “excuses”.

Reply to  bdgwx
April 6, 2023 9:15 pm

Yes, I am well aware of all that, bdgwx; I should have used the word “guise” rather than “excuse”.

At Townsville Queensland for instance, the local Bureau of Meteorology Garbutt instrument file told the story of how they negotiated with the Royal Australian Air Force to move the weather station to a mound on the western side of the runway from about 1965 to 1969. Observations commenced at the new site on 1 January 1970.

Letters on the file show the site moved, aerial photographs and satellite images show it moved, and careful analysis of the data shows it moved, but they said “Observations have been made at Townsville Airport since 1942. There are no documented moves until one of 200 m northeast on 8 December 1994, at which time an automatic weather station was installed“.

(https://www.bomwatch.com.au/data-quality/climate-of-the-great-barrier-reef-queensland-climate-change-at-townsville-abstract-and-case-study/) They did a similar thing at Rockhampton, Cairns, Marble Bar, … all over the place.

A blatant black and white lie comes in only two colours and since we started exposing them on http://www.bomwatch.com.au, Blair Trewin et al have gone very quiet.

All the best,

Bill Johnston

bdgwx
Reply to  Bill Johnston
April 7, 2023 5:29 am

I think it is a stretch to call station moves, time-of-observation changes, instrument changes, etc. “guises” as well.

Richard Greene
Reply to  Nick Stokes
April 6, 2023 7:27 pm

Yes, they are just employees. So why should you expect skulduggery from them? Why would they lie to …? There isn’t even anything to be gained by lying.

In your fairy tale world, Mr. Stokes, all government employees are honest and no leftists ever have bias, or intentionally deceive. You must be living in an alternative universe. If governments and their employees were honest, why would this website exist?

The prediction of a coming climate emergency is not honest, because the claim of being able to predict the long term climate trend is not honest.


Reply to  Richard Greene
April 8, 2023 5:10 am

If anyone has ever studied government bureaucracy they should have learned that its goal is not the best product but the expansion of the bureaucracy. They exist to find problems that require more and more people in order to provide a solution. The solutions are best when more people are required.

Why more people? That is how people are promoted and gain higher and higher classifications which means what? MORE MONEY.

Lots of stuff gets glossed over, like scientific rigor, since that is not the fundamental incentive at work. Where do you think the adage “good enough for government work” came from?

Reply to  Nick Stokes
April 6, 2023 7:44 pm

How do you know a lefty bureaucrat is lying? His/her lips are moving. Remember, “whatever the cost”!

Reply to  Nick Stokes
April 8, 2023 4:52 am

Nick,

They are going to get a very accurate average,”

A true mathematician at work. Concentrate on the SEM that describes the interval within which the mean of a distribution may lie rather than the variance of the data used to calculate the mean. If you know the SEM, tell folks how you calculate the standard deviation of the population of temperature data.

Has anyone examined the sample mean distribution to see if it is Gaussian or normal? That is a must if the SEM is to have any value.

Has anyone calculated the variance in the data distributions used to calculate means used to calculate the anomalies?

Has anyone considered why Significant Digit rules are ignored consistently when reporting measured physical quantities? At least Richard Greene got close when he rounded the means from 2 decimal digits to only 1 decimal digit. He should be commended.

bdgwx
Reply to  Richard Greene
April 6, 2023 1:27 pm

RG said: “NOAA probably “fixed” USCRN to more closely match NClimDiv, rather than “fixing” NClimDiv to more closely match USCRN.”

That is an extraordinary claim. Can you provide extraordinary evidence to support it?

Richard Greene
Reply to  bdgwx
April 6, 2023 7:34 pm

I wrote “probably”

I commented that it seemed suspicious that NClimDiv and USCRN were so similar. I speculated logically that one or both averages could have been “fixed” to have that result. If both averages are really so similar, NOAA would appear to be claiming that weather station siting does not matter. I believe that implication is false.

bdgwx
Reply to  Richard Greene
April 6, 2023 8:45 pm

The code that makes the adjustments to nClimDiv (and USHCN and GHCN) is available here. I actually managed to get it to run on a Linux VM I had several years back. I never found any evidence that it was doing anything nefarious. And the USCRN station data is published in near real-time so I don’t know how the “fixing” would even work nevermind how it would go unnoticed.

Reply to  bdgwx
April 6, 2023 7:46 pm

Climate emergency is an extraordinary claim, do you have extraordinary evidence to support it? Thought not!

bdgwx
Reply to  Streetcred
April 6, 2023 8:14 pm

No. But then again I’m not the guy to ask since I don’t advocate for emergency/catastrophe/doom-style hypotheses.

Nick Stokes
Reply to  Anthony Watts
April 6, 2023 1:19 pm

Only because NOAA has used the short-term USCRN data that exists to “adjust” the crappy COOP based ClimDiv data.”
When and how do they do that? The system is very transparent. The results are published almost as soon as read. There is a NOAA site here which posts readings within the hour. It just wouldn’t be possible to adjust one lot to the other on that timescale, with results trickling in.

Richard Greene
Reply to  Nick Stokes
April 6, 2023 8:50 pm

Nick “Mr. Government” Stokes declares deceptions by government employees are not possible. Proof of this is that USCRN data are published quickly? How is that proof of anything?

Mom: “Son, did you break that window in your room?”

Son immediately replies “I didn’t do it”

Mom says: “That was a very quick response, son,
so I know you are telling me the truth”

Based on Nick Stokes “logic”

Nick Stokes
Reply to  Richard Greene
April 6, 2023 11:07 pm

Because the results come digitally from AWS to post. There isn’t time for the people you think sit with a thumb on the scale to intervene.

In fact, anyone who tried to fiddle with the posted data would get into serious trouble. A lot of the sites are airports. Plenty of people, from pilots down, would be up in arms if they thought the NOAA was not simply and accurately reporting what was observed. Which they do.

In fact, inference about climate is a very small part of the motivation for weather reporting.

bdgwx
Reply to  Anthony Watts
April 6, 2023 1:23 pm

AW said: “Its a useless exercise, because there’s no fix for any of the crappy COOP data pre-2005, and thus trends from COOP data (USHCN/GHCN) are biased upwards”

According to Hausfather et al. 2016 USHCN-adj matches USCRN pretty well, but if anything it is actually biased downwards. Now, obviously the comparison is only for the overlap period. But it’s not unreasonable to hypothesize that since USHCN-adj is biased downwards during the overlap period then it may be biased downwards prior to the overlap period as well since PHA is applied equally to both segments of data.

Richard Greene
Reply to  bdgwx
April 6, 2023 8:55 pm

Zeke H. is the same guy who ignored the most common ECS with RCP 8.5 climate model predictions. He then claimed that if the climate models used TCS and RCP 4.5, that would cut the ECS RCP 8.5 warming rate by about half, and prove that climate models are actually very accurate. That is a deceptive argument even if not actually false.

bdgwx
Reply to  Richard Greene
April 6, 2023 9:33 pm

Your comment does not seem to address anything I said. And if the authors of the publication are a problem for you then don’t read the paper and instead download the USHCN-adj and USCRN data yourself and do your own comparison. That’s what I did.

gezza1298
April 6, 2023 6:38 am

Mr Watts makes an excellent point in stating that the weather stations were set up for weather forecasting. Why do we see so many weather stations at airports and airfields? Because pilots need to know what the weather is like. It is an abuse to try to use these weather stations as indicators of any change in the climate since they have been subject to huge changes over the decades. London’s main airport Heathrow was a field until the 50s and the first permanent buildings were not constructed until the start of the 60s – now there are 5 terminals. You can also throw in the change from piston engines via turboprops to turbojets and turbofans.

Reply to  gezza1298
April 6, 2023 8:18 am

Weather stations at airfields are necessary for working out the right takeoff speed and takeoff distance, because these are temperature dependent.

strativarius
April 6, 2023 6:43 am

Independent? In today’s world?

That seems to be a pipe dream. Science for science’s sake? Money for god’s sake

Bryan A
April 6, 2023 6:55 am

So, what does our “Stable Climate” look like?
Where is it represented in the data?
If Climate is an average of a 30 year period of weather, when has it ever been stable between concurrent 30 year periods to establish the “Stable Climate”?

Reply to  Bryan A
April 6, 2023 8:20 am

30 years certainly isn’t enough to determine sea level rise acceleration.

DD More
Reply to  Bryan A
April 6, 2023 9:20 pm

Actually, not a lot.
If we are talking about ‘Climate Change’, where it is the average weather over 30 years, here is a thought. Why don’t we break down all the areas of the world into something like 5 main zones and further subdivide these into a total of 30 to 35 subsections? Then, on a map, interpolate onto a 0.5° longitude × 0.5° latitude grid.

Then after 25 or 30 years see where this ‘Climate Change’ is happening and how bad it is. We rate the changes as ‘GOOD’ for areas that have More Life and ‘BAD’ for areas that have Less Life. Let’s call it a Köppen Classification Map, because that is just what Wladimir Köppen did, starting in 1884.

Here is the map showing where the major Köppen type has changed at least once in 30 years during the period 1901–2010. http://hanschen.org/koppen/img/koppen_major_30yr_1901-2010.png

http://hanschen.org/koppen/img/area_major_1901-2010.png

The second link shows the changes in graph form. A & C have more life, i.e. Good; B (Dry, which includes cold dry) is still better than D (Snow) and E (Polar), i.e. Bad.

So Good areas changed by less than half a percent, and Very Bad Polar is down 3.5%, versus Just Bad Dry up 3% and Snow down 1.5%. I don't see much Catastrophic going on.

Richard Greene
April 6, 2023 7:08 am

I guess commenting rule number one is to never disagree with the owner of the website, or you may never be commenting again. I comment on the articles I read regardless of who wrote them, and I don’t censor myself — we have the social media and leftist biased mainstream media to do censorship.

Summary:
(1) Climate predictions are not based on data. There are no data for the future climate. And no human has ever demonstrated the ability to make accurate long term climate predictions. Climate predictions are merely speculation falsely presented as science.

(2) Actual data, such as a more accurate US average temperature statistic in the future, will not affect scary climate predictions

(3) The belief that the USCRN network is very accurate is speculation, not a proven fact.
NOAA claims it is accurate. That’s not good enough for me.

Details.
It would be nice to have well sited US weather stations and accurate measurements with very little infilling. But that would not necessarily change the US national average temperature by much, if at all. That average is a statistic that only NOAA could generate and verify. The NClimDiv and USCRN averages are whatever NOAA tells us they are. And they are very similar. You just have to trust NOAA government bureaucrats. Do you trust government bureaucrats? I don't.

Let’s say that US government bureaucrats did have a very accurate US average temperature, and so did every other government in the world. I doubt if that would change the climate change scaremongering. The same governments would make the same claims to scare people, as they have been doing for 40 years.

What if the surface average warming declined to become more like the UAH satellite numbers? Would that reduce climate scaremongering? I say no.

Consider the relatively flat global temperature trend in the past eight years. Did that slow down climate change scaremongering? No. Climate scaremongering is more hysterical than ever.

USCRN
It seems that everyone here claims USCRN is an accurate network, and some are disturbed that USCRN is not used for the global average — NClimDiv is. I say that fact is irrelevant. For reasons I can't explain logically, the NClimDiv and USCRN numbers are very similar. So using USCRN for a global average temperature would have virtually no effect.

Now comes the puzzle.
USCRN supposedly has great weather station siting, while NClimDiv mainly has haphazard weather station siting. So why are both US averages so similar? That does not make sense. Just a coincidence?

The important question is do you trust NOAA? Based on their non-response to the weather station siting problems reported here in 2009, I say NOAA can not be trusted.
And if NOAA can not be trusted, then it is not logical to assume the US average temperature from the USCRN network should be trusted. Good data from a bad organization? I don’t think so.

strativarius
Reply to  Richard Greene
April 6, 2023 7:24 am

“”I guess commenting rule number one is to never disagree with the owner of the website, or you may never be commenting again. “”

I think you do Mr Watts a disservice there. What’s the point of commenting if you cannot express your ideas?

Richard Greene
Reply to  strativarius
April 6, 2023 7:35 am

I have had many comments involving Ukraine disappear from this website. They were published, and later disappeared. I assumed a Moderator didn't like them. Charles told me a computer program made them disappear after a delay. They never returned. I would assume it would be polite to not criticize the owner of the website. I then criticized his idea in this article, although trying to be more polite than I usually am.

1saveenergy
Reply to  Richard Greene
April 6, 2023 8:00 am

As this is “The world’s most viewed site on global warming and climate change”

Were your many comments involving Ukraine about the climate of Ukraine, or the politics regarding the aggressive invasion of Ukraine ???

If the latter I’m glad they disappeared, plenty of other sites for political ranting; most of us come on here to learn & discuss all things climate related.

Richard Greene
Reply to  1saveenergy
April 6, 2023 11:03 am

One article was about the effect of the Ukraine war on energy and the other article spawned a comment discussion of the Ukraine war that I joined. The Ukraine War and sanctions on Russia are frequently referenced in articles on energy prices and energy supplies.

strativarius
Reply to  Richard Greene
April 6, 2023 8:07 am

Shouldn’t we be discussing matters scientific? That’s what this site is about

Reply to  Richard Greene
April 6, 2023 8:31 am

GFY.

You were using banned words and your comments were automatically trashed. We went over this via email. You slandered moderators, accusing them of censorship and abuse. I even removed those words from the banned list, because the world has changed since they were put there.

You were proven wrong and you still have a stick up your ass about it

Again GFY..

At this point your whining can only be attributed to a refusal to face reality or outright lying.

Richard Greene
Reply to  Charles Rotter
April 6, 2023 11:25 am

Why you were incredibly rude, Charles Rotter, escapes me.

I complained in the comment section that quite a few of my long comments on Ukraine were published and then later mysteriously got deleted. I NATURALLY assumed a Moderator did not like my comments on Ukraine, because a computer program would ban the comment immediately and it would never get published in the first place.

You investigated, thank you, and discovered the word “genocide” in several of my comments was a banned word. And in one comment I explained that the N a z i genocide led to the 1948 Genocide Convention — I should have written “German” instead.

For some reason there was a long delay until the computer program deleted my comments, except the one with the “N” word in it.

Mr Rotter reported to me via e-mail that this censorship was not done by a Moderator, and then he removed “genocide” from the banned word list. That was all I ever wanted to know.

What I did not want, and did not deserve, was being repeatedly insulted by Mr. Rotter in most of the e-mails sent to me.

And this Rotter comment is even worse.

Complaining about censorship, and assuming that a delayed removal of comments was done by a Moderator, is not an offense that deserves such insults.

Charles Rotter, I have taken repeated insults from you in e-mails, for no logical reason, and today is even worse. You are extremely rude, and I deserve an apology, but I doubt if you are capable of that.

And if this becomes the last comment I am ever allowed to make here, which would not surprise me, other people can read your extremely rude comment above, and see I am telling the truth.

Reply to  Richard Greene
April 6, 2023 12:05 pm

You insinuated intentional censorship today after having been demonstrated to be wrong in the past. You deliberately misrepresented what occurred.

You are disingenuous and deceitful and then get all verklempt when I call you out on your bullshit. You will not be banned. You will not be censored. You are simply full of shit about how moderation is done on this site.

Richard Greene
Reply to  Charles Rotter
April 6, 2023 7:48 pm

“You insinuated intentional censorship today after having been demonstrated to be wrong in the past.”

Now you are lying in addition to being extremely rude.

I had too many comments concerning Ukraine disappear after being published

That is censorship whether done by a computer or Moderator.

Other comments that were pro-Ukraine were not deleted.

My comment above SPECIFICALLY stated:
“I have had many comments involving Ukraine disappear from this website. They were published, and later disappeared. I assumed a Moderator didn’t like them. Charles told me a computer program made them disappear after a delay. They never returned”

I specifically stated today that I had originally thought a Moderator was deleting my comments, but found out later it was a delayed reaction of a computer program.

In fact, in your first email to me, Mr. Rotter, you stated that some moderators were over zealous and you would investigate.

A few emails later you determined that the word genocide was causing the computer to delete my Ukraine comments, although not immediately when I posted the comment, but after a significant delay.

Your emails repeatedly insulted me and claimed I was lying to initially assume some unknown moderator was involved. I asked you to lose my email address and stop writing me, but that did not stop you. And apparently you are still on the warpath.

Richard Greene
Reply to  Richard Greene
April 6, 2023 12:10 pm

My primary point is several long comments were censored

I complained about the censorship in other comments.

Censorship happened — that is a fact.

I assumed a Moderator was responsible because that was a logical assumption.

But it does not matter whether a moderator censored, or a computer censored — many of my Ukraine related comments showed up and then later disappeared.

They disappeared forever.

Assuming that a comment which was not immediately censored for a banned word, was published, and was then removed hours later, had been deleted by a moderator is not a lie or a slander, as you falsely claim.

It was a logical assumption of how my comments were disappearing.

I do not deserve to be called a liar for assuming a moderator was involved.

And you, Charles Rotter, remain an incredibly rude person.

I'd like to know if you have ever used the childish abbreviation GFY in any comment here before, or were you saving your uncontrolled anger for me? GFY hurled at me for the "capital offense" of complaining that many of my comments were disappearing after they had been published, which they were? That is not normal behavior for a civilized person.

Reply to  Richard Greene
April 6, 2023 12:23 pm

Suck it up you disingenuous whiny putz.

Your tone policing has no authority here.

Your performative outrage makes you look really small.

Reply to  Charles Rotter
April 6, 2023 12:14 pm

What charles said.

sherro01
Reply to  Anthony Watts
April 6, 2023 4:15 pm

What Anthony and Charles said.
Richard Greene, the level of comprehension of science displayed in your comments is poor. You seem to favour word plays on the data and conclusions of others, with little quoted research of your own.
Personally (and my view does not matter much) you remind me of the literary scene of French harridans grouped around the guillotine, crying (in their language) “Off with his head”. Over and over. That is not doing much for the advancement of science.
Geoff S

Richard Greene
Reply to  sherro01
April 6, 2023 8:11 pm

Too bad you do not have the courtesy to at least quote ONE SENTENCE from ANY of my comments here in the past five years, as an example of my so-called POOR comprehension of science.

Instead, you hurl a series of generic character attacks of the type that is most typical of leftists.

I would imagine that few commenters here do their own original climate science research, much less peer reviewed published research, so they don't have their own "research" to quote.

Do you character attack them too, for making comments you do not agree with?

Or do you only character attack commenters you do not agree with?

Richard Greene
Reply to  Anthony Watts
April 6, 2023 8:04 pm

If you consider it okay for Charles to verbally attack me repeatedly in emails and now here, for truthfully stating my anti-Ukraine comments were getting posted and later disappearing, that is also unacceptable behavior.

Most of my Ukraine related comments were in response to an article about how Russia and the Ukraine war affected energy.

A few were added to a string of EXISTING comments about Ukraine in an unrelated article. I never started a debate about the Ukraine war after any article that was not about the Ukraine war. But I did notice that pro-Ukraine comments did not mysteriously “disappear” like my anti-Ukraine comments did.

**********************************************************************
I will do you and Charles a great favor, because I am a nice person, and make this the last article on this website that I comment on.
**************************************************************************

Now you can celebrate.

You managed to get rid of me.

All it took were the three rudest letters
in the alphabet from “hostile” Charles:
GFY

And with this goodbye,
I am giving Charles another opportunity
to insult me, which he seems to love,
and get the last word in too.

Reply to  Richard Greene
April 6, 2023 9:15 pm

Richard, it’s not an airport. No need to announce your departure, particularly since no one is on the plane with you.

w.

Reply to  Richard Greene
April 6, 2023 12:13 pm

We are not a Ukraine discussion site, we are a CLIMATE AND WEATHER discussion site. If the fact that the moderation system removes useless off topic comments such as yours upsets you, well then tough noogies.

I almost always disagree with comments made by Nick Stokes, but his comments are always published. Nick knows better than to engage in wildly off-topic comments.

Note my comments admonishing the same sort of behavior from Mr. Schaffer at the top of this thread. You aren’t getting any special treatment.

Stay on topic, be courteous, and you’ll have no problems.

Review the policy page: https://wattsupwiththat.com/policy/

bdgwx
Reply to  Anthony Watts
April 6, 2023 1:49 pm

I have to give praise where it is due. You probably wouldn't agree with my posts either, but without fail all of my posts have been published AFAIK. The only thing I've noticed that sets off the moderation is when I post links to peer reviewed publications. I do that prolifically, and I understand it is not the nature of the content that is triggering it, but just the simple act of including too many hyperlinks. And with the new login mandate I'm finding that I'm actually getting moderated less often to begin with. So yeah, big thanks for allowing me to put my voice into the mix here.

Reply to  bdgwx
April 6, 2023 2:58 pm

4 links or more in a comment triggers moderation.

Richard Greene
Reply to  Anthony Watts
April 6, 2023 8:26 pm

My Ukraine comments were mainly in response to an article whose main point was the effect of the Ukraine war on energy prices. Many of the comments after that article were about Ukraine

A minority of my comments on Ukraine were ADDED in RESPONSE to EXISTING comments on Ukraine even though the article was not about Ukraine. My initial complaint was that I noticed my anti-Ukraine comments mysteriously disappeared after being published, but the pro-Ukraine comments did not disappear.

Only after an investigation was Charles able to determine that a computer moderator was deleting my comments, mainly for the word "genocide", not a human moderator. And for that initial assumption I have been repeatedly character attacked.

That makes no difference to me.
All I knew were that my comments
were disappearing, only on one subject.

*********************************************************
Charles attacking me today with GFY
is over the top, unjustified anger.

I have decided to stop posting comments here
to prevent vile comments like that from Charles again.
**************************************************************

NOTE: Charles, for the second time, I am asking you
to please lose my email address, and forever stop
sending me emails that include insults.

Reply to  Richard Greene
April 6, 2023 8:41 pm

You ain’t worth it Nancy.

Reply to  Richard Greene
April 6, 2023 9:17 pm

Richard said:

I have decided to stop posting comments here

to prevent vile comments like that from Charles again.

You’re repeating yourself. You said that above. Go already. Don’t go away mad. Just go away.

w.

Reply to  Richard Greene
April 6, 2023 8:29 am

You missed the biggest issue of all – a global "average" tells you nothing about what is happening with the climate. Two different climates can have the same average – which is actually a mid-range value, a median if you will. Two different Tmax's and Tmin's can give the same mid-range value. So how do you conclude anything about climate from a mid-range value?

Climate science should join agricultural science and HVAC engineering in using integrative heating/cooling degree-days. Then you can tell what is happening to the actual climate.
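For readers unfamiliar with the distinction, here is a minimal sketch of a traditional versus an integrative degree-day calculation for a single day. The 65°F base and the synthetic 5-minute temperature curve are illustrative assumptions, not anyone's actual data or method.

# Sketch of traditional vs integrative heating/cooling degree-days for one day.
# The 65 degF base and the synthetic 5-minute temperature curve are illustrative.
import numpy as np
BASE = 65.0  # degF
minutes = np.arange(0, 24 * 60, 5)
temps = 60 + 15 * np.sin(np.pi * (minutes / 60 - 6) / 12).clip(min=-0.4)
tmid = (temps.max() + temps.min()) / 2  # traditional (Tmax+Tmin)/2 mid-range
hdd_trad, cdd_trad = max(BASE - tmid, 0), max(tmid - BASE, 0)
frac = 5 / (24 * 60)  # each 5-minute sample is this fraction of a day
hdd_int = (np.maximum(BASE - temps, 0) * frac).sum()  # time below the base counts
cdd_int = (np.maximum(temps - BASE, 0) * frac).sum()  # time above the base counts
print(f"traditional: HDD={hdd_trad:.2f} CDD={cdd_trad:.2f}")
print(f"integrative: HDD={hdd_int:.2f} CDD={cdd_int:.2f}")

With this kind of day, the mid-range method sees only a tiny heating deficit and no cooling at all, while the integrative method registers both the cool night and the warm afternoon.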

Reply to  Tim Gorman
April 6, 2023 12:30 pm

‘Climate science should join agricultural science and HVAC engineering in using integrative heating/cooling degree-days.’

Interesting idea!

bdgwx
Reply to  Frank from NoVA
April 8, 2023 3:41 pm

nClimDiv already includes CDD and HDD.

Reply to  bdgwx
April 8, 2023 4:31 pm

Are you related to a man named Tevye? You keep hollering about TRADITION! The CDD and HDD used in nClimDiv are calculated the *OLD*, traditional way. They do not use the newest methodology, which integrates the entire temperature curve.

When is climate science going to join the 21st century?

bdgwx
Reply to  Tim Gorman
April 8, 2023 8:07 pm

I'm okay with using a different method. Can you explain how you would do it for the nearest station to me, St. Charles Elm Point (USC00237397), for the year 1899?

Reply to  bdgwx
April 9, 2023 7:34 am

You are *still* trying to justify using the same old traditional method!

There is *NO* reason why you can’t do the old way and the new way jointly for those stations that have the capability.

The *only* reason I can think of for not doing so is that it might point out something that the climate alarmists don’t want to come to light!

bdgwx
Reply to  Tim Gorman
April 9, 2023 11:23 am

Stop deflecting and diverting. What is the CDD/HDD for the year 1899 at St. Charles Elm Point (USC00237397) using your method?

Reply to  bdgwx
April 9, 2023 4:57 pm

What’s your point? There is no data in 1899 to be used in the integrative method of calculating hdd/cdd!

So what? Does that mean we should *never* collect data that *is* useful in doing so? That we should never stray from the traditional methods?

Tevye: "This isn't the way it's done, not here, not now.
Some things I will not, I cannot, allow.”

Someday you should watch the movie “Fiddler on the Roof”.

bdgwx
Reply to  Tim Gorman
April 9, 2023 6:21 pm

TG said: “There is no data in 1899 to be used in the integrative method of calculating hdd/cdd!”

Then how am I supposed to use the integrative method on historical data?

TG said: “Does that mean we should *never* collect data that *is* useful in doing so? That we should never stray from the traditional methods?”

This is your strawman. You and you alone own it. Don't expect me to defend your arguments, especially when they are unreasonable.

Reply to  bdgwx
April 10, 2023 6:57 am

OMG! I've told you this TWICE already. YOU DON'T. You run the two methods in parallel! Why is that so hard for you to understand?

"This is your strawman. You and you alone own it. Don't expect me to defend your arguments, especially when they are unreasonable."

The only thing unreasonable here is the continued use of the argumentative fallacy of Appeal to Tradition.

Let me repeat your argument at its base. Tevye: "This isn't the way it's done, not here, not now. Some things I will not, I cannot, allow."

You and Tevye both are hidebound and resemble the establishment against Galileo.

bdgwx
Reply to  Tim Gorman
April 10, 2023 8:45 am

TG said: "OMG! I've told you this TWICE already. YOU DON'T. You run the two methods in parallel! Why is that so hard for you to understand?"

How do you run the two methods in parallel on data from 1899?

TG said: “The only thing unreasonable here is the continued use of the argumentative fallacy of Appeal to Tradition.”

I’m not appealing to tradition here. Anybody who tracks my posts knows that I’m a proponent of improvement. If there is a better way then let’s do it. Let’s use better HDD/CDD methods in domains where it is possible.

TG said: "Let me repeat your argument at its base. Tevye: 'This isn't the way it's done, not here, not now. Some things I will not, I cannot, allow.'"

That's not my argument. That's your argument. I never said anything about sticking with tradition. It was first mentioned by you in this post. Like I said before, don't expect me to defend your arguments, especially when they are unreasonable and are crafted as strawmen.

Reply to  Tim Gorman
April 6, 2023 4:10 pm

Dear Tim,

Against what base? Why not simply deduct the dataset average and track residuals using a 31-day running mean?

Speaking as an agricultural scientist (which I am), how will degree-days in Alaska relate to degree-days in Hawaii? Or Marble Bar, allegedly the hottest place in Australia?

Yours sincerely,

Bill Johnston

Reply to  Bill Johnston
April 7, 2023 4:19 am

Use anomalies!! /sarc

Reply to  Steve Richards
April 8, 2023 5:34 am

In essence that is what degree days are. If a common base were used globally, when integrating the 5 minute temp data you would end up with an anomaly (heating degree days or cooling degree days) based upon a similar base.

The real issue is that it would remove the incentive to adjust temperatures quite so much. The difference between 72 and 72.145 would disappear. In general even plain old integer temps would suffice.

The key is that a consensus would be required to set the base temperature to be used. That is not much different than the question that is posed here about what is the best temperature for the globe.

Reply to  Bill Johnston
April 8, 2023 2:45 pm

If you are an ag scientist then you should be able to define the base. It’s what growing degree-days are based on. Why should this be so hard to do?

Integrate the entire temperature profile from sunrise to sunrise, sunset to sunset, sunrise to sunset and sunset to sunrise, or from 0000GMT to the next 0000GMT.

The value of the integration will allow you to differentiate between two locations with different climates where the use of a mid-range value will not.

How does UAH manage to handle varying time-of-day observations as it travels around the earth? Using local degree-day integrations would be child’s play compared to that.

Reply to  Tim Gorman
April 8, 2023 5:48 pm

Thanks Tim (and Jim), [are you guys related by the way?]

I asked “Why not simply deduct the dataset average and track residuals using a 31-day running mean”; and,

"Speaking as an agricultural scientist (which I am), how will degree-days in Alaska relate to degree-days in Hawaii? Or Marble Bar, allegedly the hottest place in Australia?"

– genuine questions.

A base of 0 degC say for wheat in Victoria (OZ), would accumulate well off-scale out in the desert at Marble Bar??

Which of course is why they use anomalies. But why not anomalies relative to the dataset mean?

Remember too that pre-AWS we don't have 1 or 5-minute data. High-frequency AWS data has to be bought for $$$. Rightly or wrongly, Tav is (Tmax + Tmin)/2, and as I indicated elsewhere, only two of the 1440 1-min samples are important – Tmax and Tmin. It would be handy to have RH & DP-T as well, but we have to buy that too.

Anyway, I have to move on.

Cheers,

Bill Johnston

Reply to  Bill Johnston
April 9, 2023 7:19 am

If we never start collecting and using 1-minute or 5-minute data then we will never move on from the old, inaccurate mid-range value calculation. The mid-range value is simply not a statistically or metrologically adequate protocol. It may have been the best we used to have, but we've been able to do better for 40 years.

Tevye from "Fiddler on the Roof": This isn't the way it's done, not here, not now.
Some things I will not, I cannot, allow.

bdgwx
Reply to  Richard Greene
April 6, 2023 9:44 am

RG said: “For reasons I can’t explain logically, the NClimDiv and USCRN numbers are very similar.”

USHCN-FLs.52j is similar to USCRN.

USHCN-raw is not as similar to USCRN.

The reason FLs-52j is similar is that it uses PHA to correct biases from time-of-observation changes, instrument package changes, station relocations, etc.

Interestingly USCRN shows more warming than USHCN-adj which is leading some to hypothesize that it is still biased low.

Hausfather et al. 2016

RG said: “USCRN supposedly has great weather station siting, while NClimDiv is mainly haphazard weather station siting. So why are both US averages so similar? That does not make sense. Just a coincidence?”

Same answer as above. Like USHCN-FLs.52j, the newer nClimDiv dataset has PHA applied to it to remove the biases.

Reply to  Richard Greene
April 6, 2023 9:45 am

That average is a statistic that only NOAA could generate and verify.

There’s nothing stopping you downloading all the USCRN data and producing your own average.
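A minimal sketch of doing just that is below. It assumes the monthly station files have already been downloaded into a local folder and that the column names are as shown; that is an assumption about how you saved the data locally, not NOAA's exact file format.

# Minimal sketch: a do-it-yourself CONUS monthly average from locally saved
# USCRN station files. The directory name and the "station", "year", "month",
# "tavg" column names are assumptions, not NOAA's exact layout.
import glob
import pandas as pd
frames = [pd.read_csv(path) for path in glob.glob("uscrn_monthly/*.csv")]
data = pd.concat(frames, ignore_index=True)
monthly = (data.groupby(["year", "month"])["tavg"]
               .mean()  # simple unweighted mean across stations
               .reset_index(name="conus_mean_tavg"))
print(monthly.tail(12))

Note that NOAA's published averages are gridded and area-weighted rather than a straight station mean, so a sketch like this is only a first-pass sanity check, not a replication.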

Richard Greene
Reply to  Bellman
April 6, 2023 9:03 pm

They are all NOAA numbers (data)
If I download NOAA numbers,
they are still NOAA numbers.

The government is the only source of the numbers
You are implying I must trust the government
I do not trust the government.

In addition, does NOAA provide the public with the exact computer program that they use to calculate a US average temperature? There could be many ways to do that and many possible adjustments to the data.

Can we be confident that NOAA’s raw data are REALLY unadjusted raw data, and assume the measurements were accurate in the first place?

And can we really trust NOAA's claimed (I think) +/- 0.1 degree C margin of error, even now, much less in the late 1800s?

April 6, 2023 7:44 am

“Unfortunately, the data from the USCRN network are buried by the U.S. government, and are not publicly reported in monthly or yearly global climate reports. It has also not been deployed worldwide.”

Given today’s technology, shouldn’t we confirm and validate the need for it with a network and data collection that is not relying on a 100-year-old system?
______________________________________________________________

If USCRN is willfully ignored (I assume that’s exactly what’s going on) then any “new” network & data collection system will also be willfully ignored.

April 6, 2023 7:57 am

Article says: "… at least 100 feet from any extensive concrete or paved surface …"

As I and others have pointed out, gases dissipate heat. Our world relies on this happening: it cools our cars, heats our homes, dries our hair, etc. The above requirement is an example of gases having that capability.

antigtiff
April 6, 2023 8:05 am

What if ?…….What if there was an automatic temp measurement device on Pitcairn Island for the last 5000 years? Would not this single temp measurement suffice for the entire planet? If the planet warms or cools….it will be reflected on Pitcairn sooner or later? ….and/or Tristan da Cunha Island?

Reply to  antigtiff
April 6, 2023 8:33 am

That is local climate, not regional or global climate. There are so many factors that actually go into determining actual heat content that it is impossible for one location to be an indicator for the whole planet.

antigtiff
Reply to  Tim Gorman
April 6, 2023 8:43 am

Once upon a time…2 times….it was Snowball Earth…..once dinosaurs roamed above the arctic circle….do we measure the temp of a human by sticking a thermometer in all orifices and “averaging” the temp? The Little Ice Age temps were worldwide…and the current warming is reflected world wide.

Reply to  antigtiff
April 6, 2023 8:50 am

What warming? The “annual average global temp”? Since two different climates can have the exact same mid-range temperature how do you know the *earth* is warming? And what warming is it that we are seeing? Min temps? Max temps? Both? How do you tell from a mid-range value?

antigtiff
Reply to  Tim Gorman
April 6, 2023 9:05 am

It is now warmer than a couple centuries ago during the Little Ice Age….and the Dark Ages Cooling Period…There have been many cycles over the last 10000 years and the cycles are roughly a few centuries warming followed by a few cooling…it is reasonable to assume a continuation of these rough cycles until Nature decides to change it…no one knows for sure that CO2 controls climate….no one thoroughly understands climate changes. Heat disperses – it does not accumulate….sooner or later…the planet cools …or warms.

Mr.
Reply to  Tim Gorman
April 6, 2023 9:41 am

Geophysics studies tell us that the upper & lower Mantles of the Earth’s interior are where most of the internal heat of the earth is located. 
Large convection cells in the mantle circulate heat and power the rhythms of the earth.

There are identified ‘hot spots’ in the thermo-nuclear reaction that goes on continuously in the fluid innards of this planet.

How can these not be reflected in the (relatively thin) ~70 km crust of Earth sitting immediately above the ~7,200°F molten iron cauldron, where we and all life exist?

So “average global temperature” conditions immediately above the Crust (incl oceans and polar ice caps) is a construct that has so many influences and irregularities of measurements as to be nonsensical.

MarkW
Reply to  Mr.
April 6, 2023 12:39 pm

While hotspots can and do move around, the total amount of energy coming from the interior stays pretty close to constant.

MarkW
Reply to  antigtiff
April 6, 2023 12:37 pm

If you have an infection in your finger, that finger will get hot. But the body as a whole won’t change temperature.
A single human body is small enough that most of the time, a single data point is good enough for estimating the temperature of the whole.
This is not true of the Earth as a whole.

Reply to  Tim Gorman
April 6, 2023 1:14 pm

It's local, sure, and therefore not so great, but maybe no worse than what we're getting now. It could be useful, if it were possible to have that real data.

Reply to  antigtiff
April 6, 2023 10:05 am

Mr. Anti,

Meanwhile, off the island, temps in the Sahara change at different rates as the desert expands or contracts. In any case, air temps are too random and variable. Wouldn’t ocean temps at 10,000′ indicate the “global” temp more accurately? Record and track the internal temp.

Reply to  Citizen Smith
April 8, 2023 5:39 am

UAH maybe?

MarkW
Reply to  antigtiff
April 6, 2023 12:34 pm

If there were changes in sea currents, Pitcairn Island could warm or cool, while the Earth as a whole could be doing the opposite.

April 6, 2023 8:39 am

Unfortunately, the data from the USCRN network are buried by the U.S. government

They didn't do a very good job of burying it. It can all be found here:

https://www.ncei.noaa.gov/access/crn/

Reply to  Bellman
April 6, 2023 12:26 pm

You didn’t do a very good job of providing the full context of what I wrote.

“Unfortunately, the data from the USCRN network are buried by the U.S. government, and are not publicly reported in monthly or yearly global climate reports.”

Please, do try to not be a pigheaded commenter.

bdgwx
Reply to  Anthony Watts
April 6, 2023 1:45 pm

I’m sure the “pigheaded” commenter (no offense Bellman) can speak for himself, but you did say USCRN was buried by the US government. All I did was enter “USCRN” into google. I went to the very first site. I clicked “Get Dataset” right on the main page and then selected “Data Access” under the Monthly section and Bob’s your Uncle I see all of the data right there. It doesn’t appear to be buried very well to me either. However, I do agree that it does not appear that USCRN is specifically mentioned in the monthly or yearly reports.

Reply to  Anthony Watts
April 6, 2023 2:34 pm

Yes, sorry. I should have copied the whole sentence. I read it as two separate clauses, they bury the data, and they don’t use them in the global reports. Rather than, they bury the data, by not reporting in them in the global reports.

bdgwx
Reply to  Bellman
April 6, 2023 6:38 pm

That's how I read it too. But even if the context were that it is buried, as evidenced by not being mentioned in the reports, I would counter that that isn't strong evidence of it being buried. After all, the reports are focused on changes relative to a base period and over longer periods of time. USCRN hasn't even lasted long enough to establish a baseline with the typical 30-year duration. Plus, not being mentioned in this-or-that report doesn't mean it is buried. There are a lot of datasets that NOAA maintains that are not mentioned in those reports.

Reply to  bdgwx
April 9, 2023 4:27 pm

Ask any reporter the last time they saw:

  1. USCRN data in a NOAA public press release.
  2. USCRN data in a story they did about climate for the public to read.

If you didn’t know about USCRN from either your own studies or writings here, you’d never even know how to search for it.

Think whatever you want, but it IS buried.

April 6, 2023 8:43 am

Given the government monopoly on use of corrupted temperature data, questionable accuracy, and a clear reticence to make highly accurate temperature data from the USCRN available to the public, it is time for a truly independent global temperature record to be produced.

I’m interested in who would finance such an operation, and how independence could be ensured.

April 6, 2023 9:00 am

There are good points in this article about the nature of the existing sources of surface temperature data.

Nevertheless, the problem of reliable attribution remains.

Even if the temperature acquisition system is perfected, so what?

If it shows warming, why did it warm, does it matter, and is it compelling to do anything differently?

If it shows cooling, why did it cool, does it matter, and is it compelling to do anything differently?

Those questions won’t go away.

But we already know enough not to expect emissions of the otherwise harmless non-condensing trace gases CO2, CH4, and N2O to force heat energy to accumulate on land and in the oceans to harmful extent by what happens in the atmosphere. Watch from space, get a grip about what the atmosphere and ocean circulations do, and move on.

https://wattsupwiththat.com/2022/05/16/wuwt-contest-runner-up-professional-nasa-knew-better-nasa_knew/

Don’t get me wrong – I would not oppose a better system to acquire data. But the bigger problem is harmful policies already taking hold based on unsound attribution.

Richard Greene
Reply to  David Dibbell
April 6, 2023 9:17 pm

Data versus predictions

Historical temperature data are not perfectly accurate

Future climate predictions are data-free speculation

Predictions are unrelated to any past temperature trends
Not even just an extrapolation of the 1975 to 2015 warming trend
And the 1940 to 1975 cooling trend is ignored
And the flat 2015 to 2023 trend is ignored

There seems to be little or no correlation between the average temperature trends since 1940 and the very long term (400 years) ECS predictions.

For example, if IPCC predicted that the temperature trend in the next 82 years would be similar to the prior 82 years, from 1940 to 2023, would that scare anyone?

Actually, climate models do say that for TCS with RCP 4.5
But the IPCC prefers to scare people with ECS and RCP 8.5, which approximately doubles the warming rate for TCS with RCP 4.5.

Reply to  Richard Greene
April 7, 2023 3:59 am

“And the 1940 to 1975 cooling trend is ignored
And the flat 2015 to 2023 trend is ignored”

And the evidence from space (hourly CERES + near-real-time GOES) is ignored/explained away in favor of the “forcing + feedback” framing of the climate system response to GHGs.

April 6, 2023 9:36 am

I got myself a new little toy.
It’s an Elitech RC4 datalogger and is such a little sweetie.

Its great beauty to me is how diddy/compact it is, that it's self-powered, that it has a solid-state sensor on the end of a wire, and that you can program it.
Especially 2 things come together in that:

  1. you can program it to record data at intervals of down to 10 seconds
  2. that it has an insane Slew Rate. This little fugger is quick off the mark

After some initial familiarisations, on Sunday afternoon I installed it in my plastic-mushroom Stevenson screen, along with another datalogger. (Big Hefty Brother running at 4 minute logs)
Then on Tuesday morning I ’emptied it’ to see what I’d caught – that is the picture you see.

It’s a beaut. Eat yer heart out Leonardo and leave Lisa alone.
We’re all gonna hang this in Notre Louvres from now on.
😀
Two things: (the second one will be in another post)
1/ I wanted to catch The Event that happens in my part of the world, through the night, whenever temperatures drop to near zero Celsius.
I caught it alright – highlighted in pink
Ignore that spike at the very start. That was me still fiddling with it while installing it in my Stevenson screen. It goes to show how sensitive it is, if nothing else.
But I wanted to be absolutely sure that El Sol would never 'see' it and that it was protected by all 3 separate screens that go to make up My Stevenson Screen.

Despite having run a few recces around the neighbourhood, I am still none the wiser what causes those wiggles.
Somebody patently is striking up a very big heat machine whenever Jack Frost comes close but the amazing thing is, just how big it must be. There are maybe 5 or 6 Wunderground stations around me within a 2 mile radius and they all see those temperature wiggles.

Just look at it Holy Kow, it’s moving night-time temps by 3, 4 and 5 Celsius!!
From a range measured in miles.
Compare to Heathrow Airport, from where all the very best temperature records are set – it’s only 3 miles from one end to the other (East/West) and less than 2 miles North/South

Takeaway message: If we want super accurate and pristine recordings of temperature, we need to be a damn sight further away than 100 metres from ANY possibilities

If interested, the txt file of the actual data is here at Dropbox

My Garden April 2023.PNG
JCM
Reply to  Peta of Newark
April 6, 2023 12:19 pm

Condensation of frost and dew during the still of night. A release of the latent heat of vaporization back to sensible heat under peak night.

During the day, the opposite, the variation of vapor pressure deficits of turbulent air and associated heat uptake of vaporization under peak day.

radiation enthusiasts ignore such matters.

Reply to  JCM
April 6, 2023 5:19 pm

I agree and it is a problem that rapid-sampling instruments having short time-constants detect signals that reflect processes that are going on – condensation, latent heat transfers etc that affect the sensor, or the screen, but which don’t reflect the bulk of the medium being monitored. The ‘whole air’ does not actually spike over periods of seconds.

In any event, a difference between instruments of +/- 0.3 degC is within the minimum uncertainty band of a single observation. It is not hard to see that data become pretty rough at each end of the precision spectrum.

Converting variance to signal using precise rapid-sampling instruments is as useful, for estimating trends in the climate, as using data from worn-out thermometers held in a poorly-maintained Stevenson screen at an unknown location within the boundary of Rutherglen Research Station in the 1920s and observed by farm-hands; or by the lighthouse keeper somewhere near the Cape Otway lighthouse; or the Aeradio met-officer at Mildura aerodrome.

All the best,

Bill Johnston

JCM
Reply to  Bill Johnston
April 6, 2023 5:41 pm

condensation, latent heat transfers etc

Such factors are EVERYTHING of interest. It is not a “problem”. It is THE “thing”.

Peta of Newark has captured EXACTLY what climate change is all about.

Reply to  JCM
April 6, 2023 6:27 pm

Dear JCM,

If you want to know the temperature of the air then as Peta has alluded to below, you need a process of attenuating the ‘noise’. Noise is not the same as the temperature you are interested in comparing over time.

Peta:

  1. did another run/plot and used 3 ways to get the day's average temp (sketched in code below); the first was to add all the 20-second samples together and divide by n
  2. I looked for the Max and the Min in the 20-second data and did a (Max+Min)/2
  3. I ran a simple average over the 20-second data to get 5-minute averages, then did a Max/Min on that (to replicate the Mercury thermometer)
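For concreteness, a small sketch of those three reductions applied to a day of 20-second samples; the series in the code is synthetic and merely stands in for the logger output, it is not Peta's data.

# Sketch of the three daily-average reductions listed above, applied to a
# synthetic day of 20-second samples standing in for the datalogger output.
import numpy as np
seconds = np.arange(0, 24 * 3600, 20)
rng = np.random.default_rng(1)
temps = 5 + 6 * np.sin(2 * np.pi * (seconds / 3600 - 9) / 24) + 0.3 * rng.standard_normal(seconds.size)
avg_all = temps.mean()  # 1. mean of every 20-second sample
avg_maxmin = (temps.max() + temps.min()) / 2  # 2. (Max+Min)/2 of the raw samples
blocks = temps.reshape(-1, 15).mean(axis=1)  # 15 x 20 s = 5-minute means
avg_blocked = (blocks.max() + blocks.min()) / 2  # 3. (Max+Min)/2 of the 5-minute means
print(f"1: {avg_all:.2f}  2: {avg_maxmin:.2f}  3: {avg_blocked:.2f}")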

However, if you wish to measure the temperature of the processes (condensation, latent and sensible heat exchanges etc.), you need an accurate, rapid-sampling instrument with a short response time.

A sensitive, rapid-sampling instrument will detect small changes in incoming radiation due to cloud, but only insofar as radiation converts into sensible heat, which is then advected by a surface to the air so it can be measured by thermometers or sensors. (Peta of Newark’s screen prevents solar radiation from directly affecting instruments.)

Variance is not interchangeable with ‘signal’ in the sense of the signal being consistent between instruments.

All the best,

Bill Johnston

JCM
Reply to  Bill Johnston
April 6, 2023 6:45 pm

I do not understand. I am under the impression Peta has plotted the temperature measured by the instrument over a day or two. I am under the impression the instrument is at about 2 m height. Temperature is temperature. It is not a daily average temperature. It is the raw time series logged by the instrument.

Peta has inquired about some external factor which influences the temperature at a magnitude of 2 or 3 C. The obvious conclusion is that it is the turbulent fluxes of H and LE. These factors are very real. They are not noise.

Reply to  JCM
April 6, 2023 8:49 pm

I agree, JCM, that Peta has plotted raw data, but I don't know what the sample rate is (a spot sample every 10 seconds, or an average of ten 1-second readings each 10 seconds).

Looking at the graph (but not the raw data), the wobbles around dawn seem to reflect a process, which as you pointed out is probably related to frost/dewfall (release of latent heat on the screen as moisture condenses; and also, T being around zero, radiative cooling and frost formation), but could also be related to passing clouds (which reduce outgoing LW radiation), breezes stirring the air etc. Air pressure is also highest just before dawn. I question whether the perturbations are representative of the air mass being monitored, or a more local process, my point being that the minimum-T of that process probably does not reflect the wider ‘climate’.
 
Peta's comments re. cloud are also interesting. Cloud restricts incoming SW radiation and also outgoing LW radiation. However, radiation does not 'heat the air' on the way through. Instead, the effect of radiation on air temperature is via advection – contact between the near-surface atmosphere and a surface. The surface can be the exterior of the screen.
 
There is, as you also point out, convectional transfers involving latent heat and the balance between that and sensible heat, which is the fraction of the local heat balance measured by sensors held in Stevenson screens. At high resolutions all this and more becomes data – a parcel of air rising off a footpath over there; a puff of exhaust from a car going past etc. There is also the problem of ‘spikes’ which may be electronic or result from over-ranging of the unit’s calibration.
 
 
Not everything measured by a sensitive instrument located in someone’s backyard is ‘climate’ and when the goal is to estimate the temperature of the airmass, in my view (and in the light of my experience working with electronic probes in the past) local microclimatic effects and non-climate noise should be filtered out.
 
 
All the best,
 
 
Bill Johnston
 

JCM
Reply to  Bill Johnston
April 7, 2023 7:28 am

Johnston

non-climate noise should be filtered out

I understand your point. However, what escapes the conceptualization of climates by radiation enthusiasts is that the turbulent fluxes of H and LE are critical to the energy budget and therefore the temperature measured by the instrument.

Peta’s 24 hr or so timeseries highlights this with great clarity.

H & LE are absolutely fundamental climate variables. With respect, filtering out their effects is an absurd notion to me.

as a PS: barring some special property of the “screens” which promote anomalous condensation and evaporation process, such energy budget processes are occurring all throughout the region surrounding the sensor.

Reply to  JCM
April 7, 2023 4:00 pm

I looked up some of my old 1960s texts for a graph I thought I had, but to no avail. Then I remembered I summarised some high frequency data for a friend for several automatic weather stations in Queensland. Attached are plots for 1 and 2 January 2017 for the AWS at Bundaberg airport. It is not winter, there is no frost; however, 0.8 mm of rain was recorded on 1 Jan, and 14.6 mm on the second.

You are looking at raw 1-minute data (N=1440) from midnight. From that data the AWS calculates maximum and minimum temperatures for the day. (I believe each sample is an end of minute spot, taken by a PRT-probe having a time constant of 40 to 80 seconds, so it is an attenuated value). Peta’s data seems to be for a probe that has a short time-constant, so it picks up more noise.

At the end of the day, only two of the 1440 readings matter – the highest and the lowest.

The issue of the Bureau’s PRT probes has been raised and discussed on WUWT quite a few times by Jennifer Marohasy who believes that Bureau data are unattenuated ‘spot’ end of minute readings, which is incorrect (see https://doi.org/10.1071/ES19010).

So here we have two datasets – a rapid spot dataset posted by Peta, and the data I’ve attached for the attenuated probe at Bundaberg. Forget about the climate difference, think about the noise.

Cloudiness at night reduces outgoing LW radiation, cloudiness in the day reduces incoming SW. Perhaps the rapid reduction around 500-minutes on 1 January was due to a brief morning shower while lower temperature generally on the 2nd was due to rain that was more prolonged.

There is certainly lots of noise in both datasets during the heat of the day, but which peak is Tmax, and at night which down-spike is Tmin, and more importantly, are those spikes representative of the airmass, or some microclimatic process. When rain falls out of the sky, there is also an instantaneous loss of heat from the surface, due to evaporation from the warm surface. All known knowns, and looking back through some of my now out-of-date texts, radiation balance etc. was an active area of research earlier than the 1950s.

To my mind, much of the noise has to be averaged out, flattened or filtered in order to make sense of the data.

All the best,

Bill Johnston

BundabergAWS.JPG
JCM
Reply to  Bill Johnston
April 7, 2023 4:18 pm

I understand what you are saying.

My responses initially were to propose a reason why Peta was seeing such short term fluctuations. I hold to the notion that such variations are not only a radiative phenomenon, but relate also to the turbulent fluxes. The high resolution data capture such processes in remarkable detail.

For the study of macro climates, and the human experiences / measurement of climate, one must fully appreciate boundary layer phenomena first and foremost. This is the angle with which I approach the subject.

However, from the perspective of long term historical data analysis, certainly your methods are highly relevant.

cheers.

JCM
Reply to  Bill Johnston
April 7, 2023 4:44 pm

For clarity, to study the mechanisms of “climate change”, I propose that it is in that “noise” that one can appreciate the processes involved. But certainly such modern instruments should not be directly compared to older instrumentation. I think we are on parallel but complementary paths.

Reply to  JCM
April 7, 2023 6:31 pm

Thanks JCM,

I really don't know. Temperature probes generate heaps of data, especially if you are interested in the raw 1-second data. When we get down to very short intervals, I'm uncertain how 'huge' datasets could be handled (I mainly use R, but the data behind the graphs here were generated using lookup tables in Excel), and what fine-resolution data actually mean – what is the signal and, more importantly, how does that help?

Remembering these are attenuated data, here is another pair of graphs for a single day – 28 February 2017, there had been no rain for 14 days. (Note the change in scale – nights are cooler and days are warmer.)

The top graph is the 1-minute data. For the lower graph I cut and diced the 1440 minutes into 6-minute segments (0.1 hours), and calculated, for each set of 6 values, the max (red) and min (green).

There was no rain, so presumably no clouds. The range (Tmax-Tmin) can be as high as 1.7 degC in a six-minute interval. What if it was 1-second data – it could be all over the shop (or not, I don't know). The range can also vary from 1.7 to 0.6 degC in 12 minutes, then shoot back up again. Is it the instrument, the 90-litre screen, a car or aircraft going past, or what? And where are Tmax and Tmin in all of this? (32.3 and 19.3 degC, at 13.2 hrs and from 05.4 to 06.1 hrs.)

From a climate perspective I think it is pretty hard to make sense out of what is going on at this resolution.

Cheers,

Bill Johnston

Bundaberg_28 Feb.JPG
Reply to  Bill Johnston
April 8, 2023 9:41 am

Your graphs here are illustrative of some of the incorrect math being done. There are clearly two different distributions of temperature occurring here. The warming is somewhat sinusoidal while the cooling is exponential/polynomial. Using an average of Tmax and Tmin just isn't correct. At the least we should be averaging the average of each distribution, although even that isn't quite right; we should be averaging the area under each curve.

Anyway, the average of a sine is 0.637 of the max, i.e., about 31, so 0.637 * 31 = 19.7. Estimating the minimum, I would say around 25 for the average.

Traditional => (31 + 19) / 2 = 25 σ = 6
Distributions => (19.7 + 25) / 2 = 22 σ = 3

That is a big difference and a smaller standard deviation (uncertainty) also.

Reply to  Jim Gorman
April 8, 2023 11:39 am

"Anyway, the average of a sine is 0.637 of the max, i.e., about 31, so 0.637 * 31 = 19.7. Estimating the minimum, I would say around 25 for the average."

How many times, that is not the calculation. The fact you’d managed to get an average daytime temp that is 5°C colder than the average night time temperature should be a clue to that.

If you want to treat daytime temps as a sine wave, you need to multiply the difference between the start and the maximum by 0.637 and then add the start value. Say the temp starts at 26°C the difference is 5 and so the average would be 0.637 * 5 + 26 = 29.2.

But it depends on deciding exactly when the sine starts and ends.
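For what it is worth, a quick numeric check of that half-sine rule, using the illustrative 26°C start and 31°C maximum from the exchange above:

# Quick numeric check of the half-sine daytime mean, start + (2/pi)*(max - start),
# using the illustrative 26 degC start and 31 degC maximum from the comment above.
import numpy as np
t_start, t_max = 26.0, 31.0
t = np.linspace(0, np.pi, 10001)  # one half cycle
curve = t_start + (t_max - t_start) * np.sin(t)
print(f"brute-force mean: {curve.mean():.2f} degC")  # ~29.2
print(f"2/pi formula:     {t_start + (2 / np.pi) * (t_max - t_start):.2f} degC")  # ~29.2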

Reply to  Bellman
April 8, 2023 2:43 pm

You are partly correct!

Height of sine = 31-19 = 12

12*0.637 = 7.6
7.6 + 19 = 26.6

(26.6 + 25) = 51.6
51.6 / 2 = 25.8 σ = 0.8

Traditional => (31 + 19) / 2 = 25 σ = 6
Distributions => (26.6 + 25) / 2 = 25.8 σ = 0.8

Now let’s remember that a sine average of 0.637 is based on a full half cycle from 0 to π. We are not actually seeing that. More like 2π/3. That would give a factor of somewhere around 0.75 and numbers of:

Traditional => (31 + 19) / 2 = 25 σ = 6
Distributions => (28 + 25) / 2 = 26.5 σ = 1.5

Still vastly different.

The biggest difference is that a decay function has its biggest change at the start of the decay.

That is one large reason why using one sample point of a distribution is probably the worst choice ever made. It is time to move forward with the science and utilize the data we have, using more rigorous mathematics.

Reply to  Jim Gorman
April 8, 2023 4:15 pm

Height of sine = 31-19 = 12

First you point out day and night have different gradients, now you say the mid section of the sine is the minimum night time temperature?

Traditional => (31 + 19) / 2 = 25 σ = 6
Distributions => (28 + 25) / 2 = 26.5 σ = 1.5
Still vastly different.

It would be a lot easier if you just calculated the average from all the measurements rather than play these guessing games. I wouldn't say a difference of 1.5°C was vastly different. And your sigma values are irrelevant. You can't just take the deviation between max and min and compare it to the daily distribution; they are two completely different beasts, and neither has any relevance to the average temperature.

That is one large reason using one sample point of a distribution is probably the worst choice ever made.

I'm sure James Six will be burned in effigy for his crimes. But in the meantime, you don't explain how you would have recorded average temperature.

It is time to move forward with the science and utilize the data we have use more rigorous mathematics.

You don't need more rigorous mathematics, just more frequent measurements. Then just invent a time machine so we can use the devices in the 19th century.

Reply to  Bellman
April 8, 2023 4:33 pm

“I wouldn’t say a difference of 1.5°C was vastly different.”

It is HUGELY different when you are trying to identify differences out in the hundredths digit!

“You don;t need more rigorous mathematics, just more frequent measurements. Then just invent a time machine so we can use the devices in the 19th century.”

You and bdgwx must be related to the same man – Tevye.

There is *NO* reason why we have to stick with traditional methods ONLY. We've had the ability to collect these "more frequent measurements" for 40 years! Why haven't we? TRADITION! As Tevye would say, "We've done it that way forever. If it was good enough for our grandparents then it's good enough for us!"

We *could* have had a 40 year length of data but just let it go because of “TRADITION”!

Reply to  Tim Gorman
April 8, 2023 5:32 pm

It is HUGELY different

Why do you even care about the exact average? You're the one who has been bleating on for the last few years that averages tell you NOTHING, that only maximum and minimum temperatures can tell you what's REALLY happening, that NOBODY experiences the average temperature.

Then you keep insisting that measurement uncertainty has to be multiplied by the square root of the sample size. Why do you think an average made up of hundreds of individual measurements won’t be HUGELY uncertain?

You and bdgwx must be related to the same man – Tevye.

You make that pathetic joke 100 times a day, yet complain about us being stuck in the past.

We *could* have had a 40 year length of data but just let it go because of “TRADITION”!

Because of the need to have a data set longer than 40 years. You'll be the first to complain that people are ignoring how much hotter it was in the 1930s if we only have 40 years of data.

Reply to  Bellman
April 9, 2023 7:14 am

"Why do you even care about the exact average?"

Because it is what the climate alarmists are looking at.

"that only maximum and minimum temperatures can tell you what's REALLY happening, that NOBODY experiences the average temperature."

Not even global min and max average temps can tell you anything because of the uncertainty that goes with them!

"Then you keep insisting that measurement uncertainty has to be multiplied by the square root of the sample size."

That’s not *me* insisting on that. Uncertainty never goes down, it only goes up when you are combining single measurements of different things using different devices.

“Why do you think an average made up of hundreds of individual measurements won’t be HUGELY uncertain?”

They *will* be hugely uncertain! It’s what I’ve been trying to tell you for two solid years!

"You make that pathetic joke 100 times a day, yet complain about us being stuck in the past."

I guess you've never seen "Fiddler on the Roof"?

“Because of the need to have a data set longer than 40 years.”

Every data set has to start being collected at some point. Again, there is no reason that both can’t be done jointly! If you never start using modern methods then you will NEVER have modern methods – you’ll be stuck with TRADITION forever!

Tevye: “This isn’t the way it’s done, not here, not now.
Some things I will not, I cannot, allow.”



Reply to  Tim Gorman
April 9, 2023 1:48 pm

They *will* be hugely uncertain! It’s what I’ve been trying to tell you for two solid years!

Ignoring the point. You want all measurements to be based on averages taken at a fine resolution. The more measurements you take, in your delusion, the higher the uncertainty in the average. So what would make an average based on 1440 readings useful. If the uncertainty of the instrument is ±0.3°C for a single reading, the uncertainty of the average according to your own logic would be ±11°C. How is that more useful than one made with just two measurements?
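Purely as arithmetic, and without taking a side on which propagation rule is appropriate here, this small sketch shows where the two figures come from, assuming the ±0.3°C per reading and 1440 readings mentioned above:

# Arithmetic behind the two figures being argued about, using the numbers from
# the comment above: u = 0.3 degC per reading and N = 1440 readings per day.
import math
u, n = 0.3, 1440
print(f"u * sqrt(N) (root-sum-square of the SUM): {u * math.sqrt(n):.1f} degC")   # ~11.4
print(f"u / sqrt(N) (standard rule for the MEAN): {u / math.sqrt(n):.4f} degC")   # ~0.008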

I guess you’ve never seen “Fiddler on the Roof”?

It’s not my favorite musical, but I get the reference. It’s still a poor joke, repeated ad nauseam.

If you never start using modern methods then you will NEVER have modern methods – you’ll be stuck with TRADITION forever!

What do you think the point of CRN is? Why do you think we have satellite data going back 40 years? It isn’t because people are insisting on only using traditional measurements. Why do you think we keep seeing new versions of data sets? Because there are better methods of computing data, and because more data is available than there was 40 years ago.

Reply to  Bellman
April 9, 2023 5:14 pm

“You want all measurements to be based on averages taken at a fine resolution. The more measurements you take, in your delusion, the higher the uncertainty in the average.”

There is simply no point in continuing this with you. You are simply unable to grasp the difference between

  1. multiple measurements of the same thing using the same device under repeatability conditions, where multiple measurements *do* lessen the uncertainty and
  2. multiple single measurements of multiple different things using multiple different devices under conditions that can’t be repeated which leads to more uncertainty as you take more measurements of more different things.

Until you can grasp the difference between these two situations you are going to remain stuck in your box where measurement uncertainty is always random, Gaussian, and cancels.
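
For readers trying to follow this exchange, here is a minimal Monte Carlo sketch (my own illustration, not code from either commenter) of the two situations being argued over. The values n_things = 100 and u_single = 0.5 are invented for demonstration: independent zero-mean random errors on single measurements of different things, versus those same measurements sharing an unknown systematic bias.

```python
# Sketch only: invented values, standard error-propagation assumptions.
import numpy as np

rng = np.random.default_rng(42)
n_things = 100          # 100 different things, each measured once
u_single = 0.5          # assumed standard uncertainty of a single reading
n_trials = 10_000       # Monte Carlo repetitions

true_values = rng.uniform(0.0, 30.0, size=n_things)     # arbitrary "different things"

# Case 1: every reading carries an independent zero-mean random error.
err = rng.normal(0.0, u_single, size=(n_trials, n_things))
avg_err_random = (true_values + err).mean(axis=1) - true_values.mean()

# Case 2: the readings also share one unknown systematic bias per trial.
bias = rng.normal(0.0, u_single, size=(n_trials, 1))
avg_err_biased = (true_values + err + bias).mean(axis=1) - true_values.mean()

print("spread of the average's error, independent errors:", avg_err_random.std())
print("u_single / sqrt(n_things):                        ", u_single / np.sqrt(n_things))
print("spread of the average's error, shared bias added: ", avg_err_biased.std())
```

Under these assumptions the independent random component of the average’s error shrinks roughly as 1/sqrt(N), while the shared bias does not shrink at all; that distinction is the crux of the disagreement above.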

Reply to  Tim Gorman
April 9, 2023 5:38 pm

“There is simply no point in continuing this with you.”

You keep promising that, but never keep your promise.

multiple measurements of the same thing using the same device under repeatability conditions, where multiple measurements *do* lessen the uncertainty and

But you are not measuring the same thing. You are measuring different temperatures throughout the day and night. You keep insisting that it’s impossible to ever measure the same temperature – the instant you measure it it’s changed.

multiple single measurements of multiple different things using multiple different devices under conditions that can’t be repeated which leads to more uncertainty as you take more measurements of more different things

Yet you still can’t explain why you believe that. It’s just repeated as a matter of religious dogma.

How does a measurement error made on one instrument know it can cancel with other errors made by the same instrument, but can’t cancel with errors made on a different instrument? Could you explain what quirk of statistics stops that from happening?

Reply to  Bellman
April 10, 2023 6:41 am

“Yet you still can’t explain why you believe that. It’s just repeated as a matter of religious dogma.
How does a measurement error made on one instrument know it can cancel with other errors made by the same instrument, but can’t cancel with errors made on a different instrument?”

You *still* haven’t read Taylor or Bevington, you’ve only cherry-picked things. Why is root-sum-square addition used?

Reply to  Tim Gorman
April 10, 2023 1:15 pm

Why is root-sum-square addition used?

Because random errors tend to cancel, regardless of whether they are from the same instrument or not.

Reply to  Bellman
April 10, 2023 6:18 pm

There are no random errors with single measurements. The only thing there is with temps is the Type B errors specified by NOAA and NWS.

Those do not cancel when averaged. You need to show how two distributions with a standard deviation at vastly different µ’s can have errors that cancel.

Say a Tmax at 80ºF ±1 and 55ºF ±1. Those errors will result in intervals of 79 – 81 and 54 – 56. How can those possibly cancel?

That is far different from multiple readings of the same thing, where you end up with a Gaussian distribution around a true value µ, such that you have values of µ ± ε, where ε consists of very small errors centered on the common value µ.

Reply to  Jim Gorman
April 10, 2023 7:40 pm

There are no random errors with single measurements.

Sometimes I suspect the problem is we are just speaking different languages.

If there is uncertainty when taking a measurement, that means there may be an error, i.e. the difference between the true value and your measured value.

From the GUM

B.2.21 random error

result of a measurement minus the mean that would result from an infinite number of measurements of the same measurand carried out under repeatability conditions

See “a measurement”.

You won’t know what an individual error is, that’s why it’s uncertain, but it will have one.

You need to show how two distributions with a standard deviation at vastly different µ’s can have errors that cancel.

You need to be clear about what distributions you are talking about. You keep on saying ambiguous things like this. Are you talking about the distributions of the measurement errors, or the distribution of the population you are sampling from?

Say a Tmax at 80ºF ±1 and 55ºF ±1. Those errors will result in intervals of 79 – 81 and 54 – 56. How can those possibly cancel?

If one is +1 and the other one −1, then they will cancel. You won’t know whether cancellation has occurred, but the uncertainty of the sum of the two random errors will be sqrt(2), and the uncertainty of the average of the two will be 1/sqrt(2).
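
A quick numerical check of that claim, under the stated assumption of independent random errors with standard deviation 1 on each of the two readings (a sketch, not anyone’s production code):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
e1 = rng.normal(0, 1, n)     # random error on the 80 F reading
e2 = rng.normal(0, 1, n)     # random error on the 55 F reading

print("mean of the individual errors:        ", e1.mean(), e2.mean())        # both near 0
print("std of the sum of the two errors:     ", (e1 + e2).std())             # ~ sqrt(2)
print("std of the average of the two errors: ", ((e1 + e2) / 2).std())       # ~ 1/sqrt(2)
```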

That is a far distant thing when compared to multiple readings of the same thing and you end up with a Gaussian distribution around a true value of µ.

Again, you need to explain what distribution you are talking about. When we say that the measurement uncertainty will reduce when averaging different things, the distributions are those of the random variables that describe the measurement uncertainties. This is quite separate to the distribution of the things being measured.

And it doesn’t matter if the measurement uncertainties have different distributions. It’s still just a case of applying the standard rules (e.g. GUM equation 10). It simply describes how to combine different uncertainties into a combined uncertainty depending on how you are combining the measurements.

Reply to  Bellman
April 11, 2023 7:34 am

“If there is uncertainty when taking a measurement, that means there may be an error, i.e. the difference between the true value and your measured value.”

Read what I said again!

“There are no random errors WITH SINGLE MEASUREMENTS”.

How do you get random errorS with one single measurement?

Read what the GUM says again: “INFINITE NUMBER OF MEASUREMENTS”.

What do you think (μ ± ε) means? μ is the mean of a Gaussian distribution and ε represents the random errors forming a Gaussian distribution surrounding the mean!

“If one is +1 and the other one −1”

That is not what you have with random errors surrounding a mean. Think about what you are saying. Multiple random errors around 80 are values like 79.2, 77.5, 80.3, 81.4. It is the ABSOLUTE values that cancel, not the distance from the mean. One can state the standard deviation as ±1 for example, but that means 68% of the values lie between 79 and 81, not between −1 and +1.

If it was as simple as you claim, why would the GUM make it so plain that everything revolves around measurements of the same thing? Even the definition you posted says, “…the same measurand”. Do you think the GUM wouldn’t have a section about how to cancel error between multiple measurands?

As far as EQ. 10 goes, what do you think the term (∂f/∂xi) is? Look at 5.13, 5.14, 5.15, especially the Examples. Does the GUM use absolute values when computing the partial derivatives?

Reply to  Jim Gorman
April 11, 2023 9:54 am

How do you get random errorS with one single measurement?

You need to explain what you are getting at rather than rely on these word puzzles. You said “There are no random errorS with single measurementS.”

Obviously any one single measurement will only have one single random error.

Read what the GUM says again: “INFINITE NUMBER OF MEASUREMENTS”

As in the average of random errors will tend to zero as measurements tend to infinity.

What do you think (μ ± ε) means? μ is the mean of a Gaussian distribution and ε represents the random errors forming a Gaussian distribution surrounding the mean!

Again, you need to be clear about your terms. Are you talking about measurement errors, or the error of a single value from the population mean?

One can state the standard deviation as ±1 for example, but that means 68% of the values lie between 79 and 81, not between −1 and +1.

The errors cancel. You can look at it in two ways. Just look at the errors, which will tend to zero, or at the measured value, which is the true value plus error, and whose average will tend towards the true value.

If it was as simple as you claim, why would the GUM make it so plain that everything revolves around measurements of the same thing?

Citation with context required. The propagation of uncertainty does not require measurements of the same thing.

Look at Example 1 from TN1900. A function based on subtracting four measurements of three different things and using a variety of methods to determine the uncertainty, all of which require some cancellation of random error.

Do you think the GUM wouldn’t have a section about how to cancel error between multiple measurands?

That’s what combined uncertainty is. The uncertainty of a single measurand determined by a function applied to multiple quantities, which may themselves be viewed as multiple measurands.

As far as EQ. 10 goes, what do you think the term (∂f/∂xi) is?

It’s the partial derivative of f with respect to x_i.

Does the GUM use absolute values when computing the partial derivatives?

What do you mean by the absolute values? This isn’t very complicated, though past experience tells me you complicate it by not understanding what a partial derivative is.

If you are adding or subtracting a load of terms the partial derivative for each is just 1.

If you are scaling a value by B, then the partial derivative is B.

If you are averaging a set of n values, the partial derivative for each is 1/n.

If you are multiplying two values x1 and x2, then the partial derivative with respect to x1 is x2, and for x2 it’s x1. (From this you may be able to figure out why multiplication requires adding relative uncertainties.)
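
For reference, the propagation law the commenters are invoking (my restatement of GUM equation (10) for uncorrelated inputs, not a quotation of the GUM), together with the special cases listed above:

```latex
% y = f(x_1, \dots, x_N), inputs assumed uncorrelated
\[
  u_c^{2}(y) \;=\; \sum_{i=1}^{N} \left(\frac{\partial f}{\partial x_i}\right)^{2} u^{2}(x_i)
\]
% Sum or difference:   \partial f/\partial x_i = 1    =>  u_c^2(y) = \sum_i u^2(x_i)
% Scaling  y = Bx:     \partial f/\partial x   = B    =>  u_c(y)   = |B|\, u(x)
% Mean of n values:    \partial f/\partial x_i = 1/n  =>  u_c^2(y) = \tfrac{1}{n^2}\sum_i u^2(x_i)
% Product y = x_1 x_2: relative uncertainties add in quadrature
```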

Reply to  Tim Gorman
April 12, 2023 5:26 am

A lot of trepidation about losing climate alarmism. Traditional methods serve a purpose very nicely, don’t you know; why change to something that may ruin it!

Reply to  Jim Gorman
April 8, 2023 5:05 pm

If you want to compare averages, you could always look at the USCRN data.

The daily data has both TMean, (TMax + TMin) / 2, and TAvg based on the fine resolution readings for each station. As this is based on the 24 hours from midnight local time it won’t be exactly the same as one based on readings taken at 9am, but it’s a useful test.

My own preliminary tests just compared the difference between TMean and TAvg, for all stations and all days.

The standard deviation is 0.92°C and the 95% interval runs from −1.9°C to +1.8°C. However, the average over all days and stations was only 0.027°C. Whilst individual stations often show systematic biases, on the whole they tend to cancel out, at least across the USA.

Looking at the average difference for each station (that is to a large extent the systematic bias in any individual station), the standard deviation was 0.32°C, with a 95% interval of -0.51 to 0.68°C.

Here’s a map showing the distribution of these differences. This only shows the Contiguous US stations. There are quite a few Alaskan ones included in the stats, but not shown here.
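
The comment does not include the code behind these numbers, but the comparison it describes can be sketched roughly as follows. The file path, file format and the column names T_DAILY_MEAN and T_DAILY_AVG are my assumptions about locally downloaded USCRN daily files; adjust them to whatever your own copies actually contain.

```python
import glob
import pandas as pd

frames = []
for path in glob.glob("uscrn_daily/*.csv"):          # hypothetical local copies of the daily files
    df = pd.read_csv(path)
    df = df[(df["T_DAILY_MEAN"] > -99) & (df["T_DAILY_AVG"] > -99)]   # drop missing-value flags
    frames.append(df[["T_DAILY_MEAN", "T_DAILY_AVG"]])

all_days = pd.concat(frames, ignore_index=True)
diff = all_days["T_DAILY_MEAN"] - all_days["T_DAILY_AVG"]    # (Tmax+Tmin)/2 minus the fine-resolution average

print("mean difference:   ", diff.mean())
print("standard deviation:", diff.std())
print("95% interval:      ", diff.quantile([0.025, 0.975]).tolist())
```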

20230408wuwt1.png
Reply to  Bellman
April 9, 2023 6:48 am

Let’s take one day for KS Manhattan 6 SSW.

Look at April 1. T_avg = 7.6, T_mean = 8.1. That’s a 6% difference!

Look at April 7. T_avg = 12.5 and T_mean = 11.2. That’s a 10% difference!

You got a standard deviation of 0.92C? With that kind of uncertainty how do you distinguish anomalies in the hundredths digit?

“Whilst individual stations often show systematic biases, on the whole they tend to cancel out, at least across the USA.”

How do you know this? With a standard deviation of 0.92C there is obviously not enough cancellation going on to justify ignoring systematic biases!



Reply to  Tim Gorman
April 9, 2023 7:14 am

“Look at April 1. T_avg = 7.6, T_mean = 8.1. That’s a 6% difference!”

That’s not how temperatures work. You should know that. You were the one insisting that only a fraud measures temperature using a relative scale. In K that’s

TAvg = 280.75, TMean = 281.25. That’s a difference of 0.2%.

You were the one claiming that nobody would worry about a drop of 2.5°C in global annual temperatures. Why would you care about a difference of 0.5°C on a single day? You keep insisting that average temperatures are useless, tell you nothing, and that it’s physically impossible to average temperatures in any event. Why now the worry about the exact daily average? Be careful how you handle your own petard.

Reply to  Bellman
April 9, 2023 7:43 am

0.2% is HUGE when you are trying to resolve differences of 4 × 10⁻⁵, i.e., 0.004%.

I never claimed a *drop* in global average wouldn’t be a worry! It would cause more deaths in winter, it would have a deleterious impact on global food harvests, and would increase energy demand (and therefore prices).

Average temperatures are *NOT* mid-range temperatures. They are called averages in climate science but they are not averages. The true average of a temperature range of 80 to 20 would likely be different from one with a temperature range of 60 to 40, yet both have the same mid-range value. This is because of the sinusoidal daytime profile versus the exponential/polynomial nighttime temps.
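
A toy illustration of that asymmetry point (entirely invented numbers: a sinusoidal warm-up to 3 pm followed by an exponential cool-down with a 240-minute time constant), showing how the (Tmax + Tmin)/2 mid-range can differ from the true time-average of the minute readings:

```python
import numpy as np

m = np.arange(1440)                        # one day of minutes, taken to start at 6 am
tmin, tmax, tau = 20.0, 80.0, 240.0        # invented profile parameters

temp = np.where(
    m <= 540,                                               # 6 am to 3 pm: sinusoidal warming
    tmin + (tmax - tmin) * np.sin(0.5 * np.pi * m / 540),
    tmin + (tmax - tmin) * np.exp(-(m - 540) / tau),        # after 3 pm: exponential cooling
)

midrange = (temp.max() + temp.min()) / 2    # the traditional (Tmax + Tmin) / 2
true_avg = temp.mean()                      # average of all 1440 minute values

print(f"mid-range: {midrange:.1f}   true average: {true_avg:.1f}")
```

With this particular invented profile the mid-range comes out several degrees above the minute-by-minute average; a differently shaped day could narrow or reverse the gap.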

Reply to  Tim Gorman
April 9, 2023 1:31 pm

I never claimed a *drop* in global average wouldn’t be a worry!

Please try to keep your trolling consistent. I asked you

I asked first. Would you worry if global temperatures were to cool by 2.5°C?

You replied

I wouldn’t worry one bit. What do you think the average temp in Alaska is for locations close to the Arctic circle compared to the average temp today in SD? Lots of people live in climates that are 2.5C colder than SD.

https://wattsupwiththat.com/2023/03/12/new-wuwt-global-temperature-feature-anomaly-vs-real-world-temperature/#comment-3693862

Reply to  Tim Gorman
April 9, 2023 7:30 am

“With that kind of uncertainty how do you distinguish anomalies in the hundredths digit?”

You still don’t get the concept of averaging, do you?

There are two sorts of errors, random and systematic, and both are going to be in play here. That’s why I was looking at the data to try to figure out how much of each there is. But when you average, random error tends to cancel, and when you take anomalies or look at trends systematic errors cancel absolutely.

The question you really want to look at is whether there is a systematic bias that changes over time. It’s quite possible, but I don’t think there’s a long enough time scale to tell using USCRN at this point.

This might mean, for example, that TMean shows temperatures have risen by 1.5°C, but using the TAvg measure it’s only a 1.3°C rise. How you interpret that in regard to any target is another question.

“How do you know this?”

I told you, I took the average of all stations.

“With a standard deviation of 0.92C there is obviously not enough cancellation going on to justify ignoring systematic biases!”

You still haven’t understood the difference between the standard deviation of a sample and the standard error of the mean.

Reply to  Bellman
April 9, 2023 4:52 pm

“You still don’t get the concept of averaging, do you?”

Averaging doesn’t lessen uncertainty. All you can do is measure how close you are to the population average. You can’t determine the uncertainty of the average. The average could be 100% wrong and you wouldn’t be able to tell from the SEM.

“That’s why I was looking at the data to try to figure out how much of each there is.”

You *still* don’t get the concept of uncertainty, do you?

Uncertainty means YOU DON’T KNOW. You can’t calculate the components of uncertainty statistically. How many times do Taylor, Possolo, and Bevington have to be quoted to you before that gets into your brain?

Bevington: “The accuracy of an experiment, as we have defined it, is generally dependent on how well we can control or compensate for systematic errors, errors that will make our results different from the “true” values with reproducible discrepancies. Errors of this type are not easy to detect and not easily studied by statistical analysis” (bolding mine, tpg)

I sincerely doubt you will ever understand metrology and uncertainty. You are stuck in a statistician’s bubble where measurement uncertainty either doesn’t exist or can be totally discounted through the use of unsupported assumptions.

Reply to  Tim Gorman
April 9, 2023 5:29 pm

You can’t determine the uncertainty of the average.

Then why have you been arguing you can for the last 2 years?

Uncertainty means YOU DON’T KNOW.

If all these measurement theorists end up saying “we just don’t know” then it’s all been a waste of time. What I understand is that whilst you don’t know the exact error of your measurements, you can estimate the likely range of such errors. That’s the whole point of doing these statistics, to provide better estimates of what you don’t know.

You can’t calculate the components of uncertainty statistically.

Then stop doing it. Stop telling me you know what the actual uncertainty of an average is and ignoring all my reasons why you are wrong. You are the one who claims that uncertainty of an average increases the larger the sample. How do you know that except through your (misunderstanding) of statistics? If you can’t tell what the uncertainty of any calculation is, why do you bother to measure anything in the first place?

Bevington: “The accuracy of an experiment, as we have defined it, is generally dependent on how well we can control or compensate for systematic errors, errors that will make our results different from the “true” values with reproducible discrepancies. Errors of this type are not easy to detect and not easily studied by statistical analysis

Yes, systematic errors are not easy to detect when all you have is individual measurements. That’s why you try to avoid them. Are you claiming USCRN has serious systematic errors?

Again though, I am not talking about systematic measurement errors. I’m talking about systematic errors in the difference between TMean and TAvg – something I assumed you would be interested in. That certainly can be analyzed, as I’ve just done. If you look at the difference for an individual station and see that the average difference measured over all days and over several years is +1°C, then there is probably a systematic difference.

I sincerely doubt you will ever understand metrology and uncertainty.

Thanks; from someone with your demonstrable lack of understanding I’ll take your doubt as a compliment.

You are stuck in a statisticians bubble where measurement uncertainty either doesn’t exist or can be totally discounted through the use of unsupported assumptions.

Once again, in the hope you might get out of your own rut, I am not talking about measurement uncertainty. I’m talking about the thing you keep going on about, the difference between (max + min) /2, and the average of sub-hourly measurements.

Reply to  Bellman
April 10, 2023 6:06 am

“Then why have you been arguing you can for the last 2 years?”

Why do you always insist on taking things out of context? That’s the mark of a troll.

me: “Averaging doesn’t lessen uncertainty. All you can do is measure how close you are to the population average. You can’t determine the uncertainty of the average.”

You can’t determine the uncertainty of the sample average by just calculating how close you are to the population average. The uncertainty of the population average is from propagating the uncertainty of the individual data elements onto the population average. If the sample average and the population averages are the same then the sample average will have the same uncertainty as the population average.

If the sample average has an uncertainty, i.e. a standard deviation of the sample means, then that actually *increases* the total uncertainty; it is additive with the uncertainty the sample mean inherits from the population mean.

“If all these measurement theorists end up saying ‘we just don’t know’ then it’s all been a waste of time.”

Oh, malarkey! Your lack of experience in the real world is showing again. NO MEASUREMENT IS EVER *EXACT*! Learn that and live with it. That does not mean the measurement is a waste of time. I may not be able to EXACTLY measure the length of a 2×4 board. But I can use the uncertainty of the measurement to judge whether it will span the needed distance.

This is a perfect example of the problem with how, for at least the past two decades, a lot of scientists have been taught that if you need a statistical analysis done you should go find a statistics major. The scientist can’t judge whether the statistical analysis is actually meaningful because they don’t know statistics. And the statistician can’t judge the usefulness of the analysis to the real world because they don’t know anything about the real world from which the data is generated.

You have just demonstrated how wrong that is with one, single sentence!

Reply to  Bellman
April 10, 2023 6:39 am

“What I understand is that whilst you don’t know the exact error of your measurements, you can estimate the likely range of such errors. That’s the whole point of doing these statistics, to provide better estimates of what you don’t know.”

But statistics can’t tell you what you don’t know! If u_total = u_random + u_systematic then statistics will *NEVER* be able to distinguish the two components just from the stated values of the measurements.

“Then stop doing it. Stop telling me you know what the actual uncertainty of an average is and ignoring all my reasons why you are wrong.”

Oh, give it a break! I *can* estimate the overall uncertainty without knowing the exact value of the random uncertainty and the systematic uncertainty. Why do you think temperature measurement devices have a listed uncertainty interval? If the uncertainty was all random then wouldn’t it cancel out? Those uncertainty intervals for the measurement devices are estimates based not just on random error but on calibration drift and probably other systematic biases such as air flow variances, etc.

“Yes, systematic errors are not easy to detect when all you have is individual measurements. That’s why you try to avoid them. Are you claiming USCRN has serious systematic errors?”

How do you avoid them in a field measurement device where you are unable to calibrate the device before each measurement? How do you know that USCRN does *NOT* have serious systematic biases (remember, a systematic effect is *NOT* an error, it is a bias)? Hubbard and Lin did a study on devices using the newest probes. They found that the components used in the electronics that take the sensor reading cause calibration drift, and that is different from the sensor probe calibration drift.

Those USCRN devices *are* field devices. How do *you* know what the systematic biases are at any one time?

“Again though, I am not talking about systematic measurement errors. I’m talking about systematic errors in the difference between TMean and TAvg”

OMG! Tmean and Tavg *BOTH* have in-built UNCERTAINTIES. When you compare them by subtracting them THOSE UNCERTAINTIES ADD! Anomalies don’t decrease uncertainties, they GROW THEM!

If Tmean = Tm +/- u1 and Tavg = Ta +/- u2 then when you subtract them you get (Tmean – Tavg) +/- (u1 + u2). (or do an rss addition if you wish – the total uncertainty still goes up)!

HAVE YOU LEARNED NOTHING OVER THE PAST TWO YEARS?

“I’m talking about the thing you keep going on about, the difference between (max + min) / 2, and the average of sub-hourly measurements.”

Tmax has uncertainty. Tmin has uncertainty. What do you get when you subtract them?

Tmax = Tday +/- u1
Tmin = Tnight +/- u2

Tdifference = (Tday – Tnight) +/- (u1 + u2)

We’ve been down the path of integrals before. A degree-day is just what it says, it is a TOTAL and not an average. An average would have the units of “degree” all by itself.

∫ T dt is temperature * time, not temperature/time.

You can compare degree-day values directly, you don’t need an average!

Of course that degree-day value will have some uncertainty based on the actual measurements. Go re-read Taylor on how to calculate it.

The issue is that degree-day gives you a direct indication of the climate whereas the mid-range value does not.
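
A small sketch of the units point being made here (invented minute readings, nothing from either commenter): the integral of temperature over time has units of degree-time, and dividing it by the elapsed time recovers a plain average in degrees.

```python
import numpy as np

temps = 20 + 10 * np.random.default_rng(1).random(1440)   # made-up one-minute readings, deg C
dt_days = 1.0 / 1440                                       # one minute expressed in days

degree_days = float(np.sum(temps * dt_days))   # integral of T dt over the day, units: degree-days
daily_average = degree_days / 1.0              # divide by the 1-day interval: plain degrees

# The three numbers agree here only because the interval is exactly one day.
print(degree_days, daily_average, temps.mean())
```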

Reply to  Tim Gorman
April 10, 2023 3:36 pm

How do you know that USCRN does *NOT* have serious systematic biases (remember, a systematic effect is *NOT* and error, it is a bias).

Bias is the same thing as a systematic error.

I don’t know that USCRN doesn’t have major problems. (It’s certainly not perfect – it’s easy to see some errors from time to time) But it is the system apparently so good it has to be buried by NOAA. You’re the one who keeps insisting that we should use the newest and best equipment, yet for some reason won’t even accept some simple analysis of the system, in case there might be some unknown error.

OMG! Tmean and Tavg *BOTH* have in-built UNCERTAINTIES. When you compare them by subtracting them THOSE UNCERTAINTIES ADD! Anomalies don’t decrease uncertainties, they GROW THEM!

Calm down. Try to avoid these hysterical-teenager impressions, and try to engage with what I’m saying. The purpose of the analysis was to see how much difference there is between TMean and TAvg – something you keep asking about. This started with you and Jim making claims about the difference based on some very simplistic, and incorrect, modelling of sine waves etc.

Jim says

Anyway, the average of a sine is 0.637 of the max, i.e., about 31, so 0.637 * 31 = 19.7. Estimating the minimum, I would say around 25 for the average.

Traditional => (31 + 19) / 2 = 25 σ = 6

Distributions => (19.7 + 25) / 2 = 22 σ = 3

That is a big difference and a smaller standard deviation (uncertainty) also.

No mention of uncertainty.

You said

Let’s take one day for KS Manhattan 6 SSW.

Look at April 1. T_avg = 7.6, T_mean = 8.1. That’s a 6% difference!

Look at April 7. T_avg = 12.5 and T_mean = 11.2. That’s a 10% difference!

No mention of uncertainty.

I look at USCRN data, based on high frequency readings, supposedly the gold standard of how to site and run weather stations, and look at the difference of tens of thousands of days worth of data – and suddenly, “but what about all the uncertainty?”.

If Tmean = Tm +/- u1 and Tavg = Ta +/- u2 then when you subtract them you get (Tmean – Tavg) +/- (u1 + u2).

You’re still missing the point of this exercise. The point is to look at the difference between mean and average in order to investigate how much of a problem it is, as you claim. Any uncertainty as you described will simply increase the overall difference.

But the issue is not about measurement uncertainty, it’s about the fact that days are not generally symmetric.

Tmax has uncertainty. Tmin has uncertainty. What do you get when you subtract them?
Tmax = Tday +/- u1
Tmin = Tnight +/- u2

and now you are arguing with figments of your own imagination. I am not looking here at diurnal range. I might do that later, but as with all these uncertainties they are mostly irrelevant. I am not looking at the value of any one day, I’m looking at the variation and average over thousands of days.

A degree-day is just what it says, it is a TOTAL and not an average.

And again, I’m not talking about degree-days.

You are wrong of course; calculating the degree-days in a single day by adding all the degree-minutes will have just the same uncertainty as averaging all the minute-by-minute temperatures. But I’ve explained all this before and you have no more incentive to understand it now than you did then.

∫ T dt is temperature * time, not temperature/time.

Indeed, but then what happens if you divide it by time?

You can compare degree-day values directly, you don’t need an average!

Provided they are measured over the same time period.

Go re-read Taylor on how to calculate it

I don’t need to. It will be the sum (in quadrature, if you assume random independent errors) of all the individual measurement uncertainties. If an individual measurement has an uncertainty of ±0.3°C, then for a day (assuming one measurement a minute) that will be 0.3 * sqrt(1440) ~= ±11.4. But then you divide the sum by 1440 to turn the degree-minutes into degree-days, so this becomes 11.4 / 1440 = 0.3 / sqrt(1440) = 0.008, which is irrelevant as the daily average is only given to the nearest 0.1°C.

If the ±0.3 figure includes the possibility of systematic error, then at worst the uncertainty of one day’s worth of degree-days is ±0.3 °C·days.

If you want the total degree days over a year, then you will have to apply the addition rules for all 365 days. For random this would be 0.008 * sqrt(365) = 0.15 degree days. For systematic it will be 0.3 * 365 = 110 degree days.

But if you divide the annual values by 365 to get the daily average you can divide the uncertainties by 365 as well.
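
A quick arithmetic check of the figures in that comment, under its own assumptions (±0.3 °C per reading, one reading per minute, 365 days, independence for the “random” case); a sketch only:

```python
import math

u_single = 0.3       # assumed uncertainty of one reading, deg C
n_per_day = 1440     # one reading per minute
n_days = 365

u_day_sum = u_single * math.sqrt(n_per_day)     # degree-minutes, random case      ~ 11.4
u_day_avg = u_day_sum / n_per_day               # per day, after dividing by 1440  ~ 0.008
u_year_random = u_day_avg * math.sqrt(n_days)   # annual total, random case        ~ 0.15
u_year_systematic = u_single * n_days           # annual total, systematic case    ~ 110

print(u_day_sum, u_day_avg, u_year_random, u_year_systematic)
```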

bdgwx
Reply to  Tim Gorman
April 9, 2023 6:10 pm

TG said: “You can’t determine the uncertainty of the average.”

Possolo did it in NIST TN1900 E2.

TG said: “You can’t calculate the components of uncertainty statistically.”

Possolo did it in NIST TN1900 E2.

Reply to  bdgwx
April 10, 2023 6:47 am

You just keep on ignoring the assumptions Possolo made! Why do you do that?

He assumed all measurement uncertainty was random and Gaussian and either cancelled or was negligible. He assumed systematic bias was negligible – that’s real statistics for you, isn’t it? Just assume it away if it is inconvenient. That’s why I can’t find any Stat 101 textbook that even mentions measurement uncertainty let alone how to analyze it.

This allowed him to assume that the daily temperatures resembled multiple measurements of the same thing under repeatability conditions.

Thus the variation in the stated values could be used as a proxy for the uncertainty of Tmax.

How do you make that assumption for 1000 locations or more? Ans: you do just like you do and the rest of the climate alarmists do. You assume all measurement uncertainty is random, Gaussian, and cancels. You don’t even assume partial cancellation and use root-sum-square addition of the measurement uncertainties. You just assume it ALL goes away!

bdgwx
Reply to  Tim Gorman
April 10, 2023 8:40 am

TG said: “You just keep on ignoring the assumptions Possolo made! Why do you do that?”

I’m not ignoring nor challenging anything in NIST TN1900 E2.

TG said: “He assumed”

And yet he STILL was able to compute the uncertainty of the average. And he did so even though the daily measurements were of different things. And the rule he used was 1/sqrt(N).

TG said: “How do you make that assumption for 1000 locations or more?”

We can’t expand NIST TN1900 E2 until you first accept it. This includes accepting that 1) computing an average temperature is possible, useful, and meaningful 2) it can be done when the individual observations are of different things and 3) that the uncertainty of the average scales as 1/sqrt(N).

If you can accept those 3 things then we can expand the example.

Reply to  bdgwx
April 10, 2023 5:55 pm

You didn’t read the TN very closely. Here is what it said.

“The daily maximum temperature τ in the month of May, 2012, in this Stevenson shelter, may be defined as the mean of the thirty-one true daily maxima of that month in that shelter.”

Dr. Possolo was very specific in what the measurand was. That month IN THAT SHELTER. He didn’t say that it was the same as other shelters in other locations, you did that. He didn’t say that you could combine with other months or other stations and obtain an adequate answer.

“it can be done when the individual observations are of different things”

You are making an assumption that is not possible from this document. TN 1900 only describes obtaining an average temperature with a corresponding EXPANDED uncertainty.

If you have read what I have posted, you will know I do accept that monthly means of separate Tmax and Tmin temperatures with an expanded uncertainty are a legitimate statistical treatment of a specified measurand.

The expanded uncertainty is necessary as Dr. Possolo points out here.

Assuming that the calibration uncertainty is negligible by comparison with the other uncertainty components, and that no other significant sources of uncertainty are in play, then the common end-point of several alternative analyses is a scaled and shifted Student’s t distribution as full characterization of the uncertainty associated with τ.

From the GUM.

“Consequently, if the measurand Y is simply a single normally distributed quantity X, Y = X; and if X is estimated by the arithmetic mean X̄ of n independent repeated observations X_k of X, with experimental standard deviation of the mean s(X̄),”

“Thus the expanded uncertainty defines an interval y − U_p to y + U_p, conveniently written as Y = y ± U_p, that may be expected to encompass a fraction p of the distribution of values that could reasonably be attributed to Y, and p is the coverage probability or level of confidence of the interval.”

I also find it hilarious that NOAA recommends rounding the most accurate temperatures, i.e. CRN, to one decimal digit and that NIST rounded temperatures that had two decimal digits after conversion to one decimal digit also. Doing so absolutely destroys finding anomalies to the hundredth or thousandth decimal digit. They will disappear in the rounding process.

bdgwx
Reply to  Jim Gorman
April 10, 2023 6:48 pm

JG said: “Dr. Possolo was very specific in what the measurand was.”

Yeah I know. Duh.

JG said: “He didn’t say that it was the same as other shelters in other locations, you did that.”

I did no such thing. Nobody other than you said anything about other shelters. Remember, we are talking about NIST TN1900 E2 and nothing else. Stop deflecting and diverting and stay focused.

JG said: “You are making an assumption that is not possible from this document.”

It’s not an assumption. It is a fact. NIST TN1900 E2 couldn’t be more explicit that the average is computed using Tmax measurements on different days.

JG said: “If you have read what I have posted I do accept that monthly means of separate Tmax and Tmin temperatures with an expanded uncertainty is a legitimate statistical treatment of specified measurand.”

Good. Then we are making progress. Do you also accept that the uncertainty of that average is computed using 1/sqrt(N) as documented in NIST TN1900 E2?

Reply to  bdgwx
April 11, 2023 1:32 pm

“…And he did so even though the daily measurements were of different things.”

If you read deeper, Dr. Possolo defined Tmax as the same thing. The temperature readings are experimental readings OF THE SAME THING. It is no different than running an experiment under repeatable conditions and finding the uncertainty based on the variation of the results. Dr. Taylor did the same in his book in an example.

He certainly did not expand it to include other stations.

That is one of the fundamental errors in trying to find a global average anomaly.

Tell us how you calculate the expanded uncertainty of the anomalies.

bdgwx
Reply to  Jim Gorman
April 11, 2023 6:12 pm

Why do I have to read deeper? I’m the one that revealed this to you. And we can argue semantics until we are both blue in the face, but the fact remains that Possolo assessed the uncertainty of the average of different temperatures. If you want to call them the “same thing” because they are inputs into a model of a measurand then awesome. That’s one of the points I’ve been trying to make. That is, a measurement model can take measurements of different things as inputs and produce a single output representing the value of a single measurand (ya know, the “same thing” concept). The other point I’m making is that Possolo used the 1/sqrt(N) rule to assess the uncertainty of the average.

Reply to  bdgwx
April 12, 2023 8:46 am

Go back and read the TN and the GUM without your confirmation bias.

Do you have an understanding of what “experimental” means in terms of the GUM? Your responses indicate that you don’t have a clue.

A repeated EXPERIMENT using repeated procedures, devices, temperatures, humidities, etc. will result in different measurements for each experiment. That is experimental deviation of results.

You can find in the GUM what it expects this deviation to be and how to report it. In essence, the deviation should be expanded to cover a large percentage of the values so that people can have an expectation of what to expect when performing the same procedure.

And, yes each experiment results in different measurements. But that is not the point. Think of all the differences between two measurements stations. Differences in the shelter, the local microenvironment, the systematic errors, drifts, calibration, etc. You are NOT measuring the same thing when using different devices, even if just Tmax.

TN1900 makes the following assertion:

“The daily maximum temperature τ in the month of May, 2012, in this Stevenson shelter, MAY BE DEFINED AS THE MEAN OF THE THIRTY-ONE DAILY MAXIMA of that month IN THAT SHELTER.”

– Same shelter
– Same month
– Daily maxima

You may wish to make different stipulations, but don’t confuse the issue by saying that TN1900 supports your assertions. You will need to provide your own justification for the assertions that differ from these.

Also don’t be surprised at the expanded experimental uncertainty when you do find the variance from combining two stations. You will still be using a t-factor of around 2 in order to get the expanded experimental uncertainty. Student’s t distributions are funny that way.

bdgwx
Reply to  Jim Gorman
April 12, 2023 9:35 am

JG: And, yes each experiment results in different measurements. But that is not the point.

It most certainly is one of the points. The other point is that Possolo assessed the uncertainty of the average using the 1/sqrt(N) rule. Up until this time you and Tim have vehemently rejected these two points.

JG: You may wish to make different stipulations, but don’t confuse the issue by saying that TN1900 supports your assertions.

I’m not making any stipulations whatsoever. And the assertions aren’t mine. They are Possolo’s. Again…those assertions are:

1) Temperatures can be averaged.

2) The uncertainty of the average of different temperatures can be assessed.

3) It is assessed using the 1/sqrt(N) rule.

Up until this time you have rejected all 3 assertions. Which of the 3 if any do you still reject?

Reply to  bdgwx
April 12, 2023 1:26 pm

You can’t even figure out when the 1/sqrt(N) rule applies!

It applies when you are determining the interval in which the population average might lie. IT HAS ABSOLUTELY NOTHING TO DO WITH THE ACCURACY OF THE AVERAGE.

Possolo assumes that all measurement uncertainty is insignificant! Does that mean nothing to you?

bdgwx
Reply to  Tim Gorman
April 12, 2023 2:19 pm

TG: You can’t even figure out when the 1/sqrt(N) rule applies!

I’m not the one doing the figuring here. That was Possolo, and he figured 1/sqrt(N) should be used to assess the uncertainty of the average of different temperature measurements.

TG: Possolo assumes that all measurement uncertainty is insignificant! Does that mean nothing to you?

I don’t think a standard uncertainty of u = 0.872 C is insignificant.

Reply to  bdgwx
April 13, 2023 9:26 am

“I’m not the one doing the figuring here. That was Possolo, and he figured 1/sqrt(N) should be used to assess the uncertainty of the average of different temperature measurements.”

“I don’t think a standard uncertainty of u = 0.872 C is insignificant.”

You are showing a complete lack of knowledge about how to conduct research experiments.

Possolo is figuring the variation of experiment results, and while he may call this uncertainty, it actually isn’t. It’s a representation of the variation in the experimental results.

It’s not uncertainty because he is assuming all uncertainty is insignificant. Absolute true values for each experimental measurement.

You simply have no basic understanding of physical science or engineering. If I measure the shear strength of 22 different steel rods from the same batch using an infinitely precise and absolutely accurate measuring device I will *still* get different values in at least some (and probably all) of the measured values.

That variation is *not* measurement uncertainty. It is simply a variation in the composition of the steel rods (plus probably other external factors). What you get from a statistical analysis is a confidence level that the shear strength will lie within some interval. That’s not uncertainty for I can measure each individual one with infinite precision and absolute accuracy!

bdgwx
Reply to  Tim Gorman
April 13, 2023 9:39 am

TG: while he may call this uncertainty it actually isn’t.

Just to make sure I’m understanding you here…are you saying you think Possolo was wrong to state the standard uncertainty u = s/sqrt(m) = 0.872 C because that isn’t actually the uncertainty?

Reply to  bdgwx
April 15, 2023 4:37 am

I’ve explained this at least twice. As usual you refuse to understand the simple things.

Explaining the variation in experimental results is *NOT* the same thing as measurement uncertainty.

Possolo has basically assumed that all measurements are 100% accurate and infinitely precise. No measurement uncertainty at all.

You will *still* get a variation in measurements because of changing conditions between measurements. So while there is no measurement uncertainty there will still be a variation in results.

If I take 22 different experiments trying to measure the speed of sound using infinitely precise and absolutely accurate devices I will get (potentially) 22 different measured values.

I can explain that by specifying the range of possible values that can be expected, perhaps so future experiments can be judged as not being outliers.

As I explained before, I can measure the shear strength of 10 different beams using an infinitely precise and absolutely accurate device and still get different values for all ten measurements. I would expect the 11th beam to have a result in that range.

I suppose you can consider that as being uncertain about the absolute characteristics of the different measurands but it is *still* not measurement uncertainty.

I simply cannot understand why these real-world concepts are so hard for some people to understand. It’s as if some people have lived in a dark, environmentally controlled, solitary confinement in a basement somewhere from the time they were born.

Reply to  bdgwx
April 13, 2023 8:18 am

“It most certainly is one of the points. The other point is that Possolo assessed the uncertainty of the average using the 1/sqrt(N) rule. Up until this time you and Tim have vehemently rejected these two points.”

I have rejected the procedure being done currently when dividing by √n. I can go back and find the posts if I need to.

How many times have I asked you whether the averages being averaged are considered samples or the population? When using the CLT to justify how accurate the estimated mean is, you must be using the Standard Deviation of the Sample Means, i.e., the SEM. That means you have samples and that the distribution of sample means is normal. The width of the SEM informs you of an interval within which the true value of the population mean lies.

The pertinent equations relating the population mean, population SD, sample mean, and SEM are:

Population mean μ = Sample mean ± SEM
Population SD σ = SEM • √n (where n = sample size and not the number of samples)

How many times have you referenced “n” to be the number of samples?

Lastly, if you claim the data you have is the population, then there is no need for a Sample Mean or SEM at all.

Pick your poison, and if it is sampling, I suggest you read up on sampling!

Here is an online simulator to help.

https://onlinestatbook.com/stat_sim/sampling_dist/index.html
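
In the spirit of the linked simulator, here is a minimal sampling simulation (my own sketch with an invented population; the choices n = 30 and σ ≈ 5 are arbitrary) showing the relationship σ = SEM · √n quoted above:

```python
import numpy as np

rng = np.random.default_rng(7)
population = rng.normal(15.0, 5.0, size=100_000)   # invented population, sigma ~ 5
n = 30                                             # sample size

# Draw many samples of size n and collect their means.
sample_means = np.array([rng.choice(population, size=n, replace=False).mean()
                         for _ in range(5_000)])

print("std of the sample means (SEM):", sample_means.std())
print("sigma / sqrt(n):              ", population.std() / np.sqrt(n))
```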

bdgwx
Reply to  Jim Gorman
April 13, 2023 9:48 am

JG: I have rejected the procedure being done currently when dividing by √n. I can go back and find the posts if I need to.

No need. Google finds them easily. You’ve stated many times that you think the uncertainty of the average scales as multiplication by sqrt(N). If Possolo had used that approach he would have gotten for the standard uncertainty u = s*sqrt(m) = 4.1 * sqrt(22) = 19.2 C. And when expanding the uncertainty that would result in ±40.0 C, giving a range of −14.4 to 65.6 C. Does that sound realistic to you?
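
For anyone wanting to check the arithmetic in this exchange: the sketch below assumes, as the thread does, s = 4.1 °C from the m = 22 daily Tmax values and a mean near 25.6 °C (the midpoint of the 23.8–27.4 °C interval quoted earlier), and contrasts the TN1900-style s/√m with the s·√m alternative being attributed to the Gormans.

```python
import math
from scipy import stats

s, m, mean = 4.1, 22, 25.6
t = stats.t.ppf(0.975, m - 1)          # coverage factor for 21 degrees of freedom, ~2.08

u_tn1900 = s / math.sqrt(m)            # ~0.87 C
u_scaled = s * math.sqrt(m)            # ~19.2 C

print("s/sqrt(m):", u_tn1900, " expanded:", t * u_tn1900)                           # ~0.87, ~1.8
print("s*sqrt(m):", u_scaled, " expanded:", t * u_scaled)                           # ~19.2, ~40
print("interval under s*sqrt(m):", mean - t * u_scaled, "to", mean + t * u_scaled)  # ~ -14.4 to 65.6
```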

Reply to  bdgwx
April 12, 2023 6:40 am


“I’m not ignoring nor challenging anything in NIST TN1900 E2.”

Of course you are. Each and every assumption Possolo makes is to take a situation where you have multiple single measurements of different things to one where you have multiple measurements of the same thing!

Multiple single measurements of different things IS different from multiple measurements of the same thing! Why you can’t simply admit that is beyond me.

You live in a world where measurement uncertainty doesn’t exist. That’s not the real world.

“And yet he STILL was able to compute the uncertainty of the average.”

By converting the example from one situation to another in order to simplify what he was showing.

What is so hard to understand about that?

” computing an average temperature is possible, useful, and meaningful 2) it can be done when the individual observations are of different things and 3) that the uncertainty of the average scales as 1/sqrt(N).”

  1. Computing an average value from measurements of different things *can* be done. Whether it is useful or not is a different question.
  2. An average value calculated from measurements of different things does *NOT* provide any semblance of a “true value”.
  3. 1/sqrt(N) is how narrow a window you have within which the population average lies. It tells you ZERO about the accuracy (i.e. the measurement uncertainty) of that population average.

Why you find this so hard to understand is just beyond me. Even a sixth grader proficient in math can figure it out. Trust me, I’ve seen ’em do it!

bdgwx
Reply to  Tim Gorman
April 12, 2023 7:08 am

TG said: Multiple single measurements of different things IS different from multiple measurements of the same thing! Why you can’t simply admit that is beyond me.

Completely irrelevant. We are talking about NIST TN1900 E2. Stop deflecting and diverting.

TG said: Computing an average value from measurements of different things *can* be done.

Good. We’re making progress.

TG said: An average value calculated from measurements of different things does *NOT* provide any semblance of a “true value”.

And yet Possolo said the “true value” lies between 23.8 and 27.4 C.

TG said: 1/sqrt is how narrow a window you have within which the population average lies. It tells you ZERO about the accuracy (i.e. the measurement uncertainty) of that population average.

And yet Possolo said that was how you assess the measurement uncertainty for the average temperature in the example.

TG said: Even a sixth grader proficient in math can figure it out.”

Funny because it was you that made 24 algebra mistakes in various comments in an effort to calculate uncertainty. You even said in defense of your mistakes that the computer algebra systems I used must be wrong.

bdgwx said: “I’m not ignoring nor challenging anything in NIST TN1900 E2.”

TG said: Of course you are.

I stand by what I said. I am NOT ignoring or challenging anything in NIST TN1900 E2. Yet here you are still resisting the fact that Possolo used 1/sqrt(N) to assess the uncertainty of the average which you don’t even think provides any semblance of a “true value”.

Reply to  bdgwx
April 12, 2023 1:07 pm

“And yet Possolo said the “true value” lies between 23.8 and 27.4 C.”

Why do you keep ignoring the assumptions Possolo made in order to do this? Did he take temperatures from different measuring stations and combine them into an average? If he didn’t then why do you want to?

“And yet Possolo said that was how you assess the measurement uncertainty for the average temperature in the example.”

Under the assumption that he was measuring the same thing multiple times. He wasn’t. This is an example to explain what you do when you *do* have multiple measurements of the same thing!

Multiple measurements of the same thing *can* give you a true value if the proper conditions are met.

Multiple measurements of different things can *NOT* give you a true value because there isn’t one.

Why do you continue to believe that multiple measurements of different things can give you a “true value”? And if you don’t believe that then why do you assume that they do?

bdgwx
Reply to  Tim Gorman
April 12, 2023 2:14 pm

TG: Why do you keep ignoring the assumptions Possolo made in order to do this?

I’m not ignoring anything in NIST TN1900 E2. I accept all of it.

TG: Did he take temperatures from different measuring stations and combine them into an average? If he didn’t then why do you want to?

No he didn’t. What he did was take measurements from different days and combine them into an average. Talk of different stations never occurs in NIST TN1900 E2. It is completely irrelevant.

TG: Under the assumption that he was measuring the same thing multiple times.

The same thing wasn’t measured multiple times. The Tmax on the 1st is different than the Tmax on the 2nd and so on. What is happening is that the measurement model treats the different temperature measurements as if they were measurements of the single measurand (the average monthly temperature). Just because a measurement model can be defined for a single measurand does not mean that the measurements used within the model are of the same thing. And it should be mind numbingly obvious that temperatures on different days are not the same thing.

TG: This is an example to explain what you do when you *do* have multiple measurements of the same thing!

Different temperatures from different days are not the same thing.

TG: Multiple measurements of different things can *NOT* give you a true value because there isn’t one.

Possolo thinks there is a true value. He defines it as τ = t_i – ε_i.

TG: Why do you continue to believe that multiple measurements of different things can give you a “true value”?

I accept NIST TN1900 E2.

Reply to  bdgwx
April 13, 2023 9:16 am

“I’m not ignoring anything in NIST TN1900 E2. I accept all of it.”

No, you don’t. You won’t accept that Possolo made an assumption to ignore measurement uncertainty. This allowed him to 1) use the stated values in determining the variation of the data, and 2) assume he was measuring the same thing multiple times.

You keep trying to say he was measuring different things and then averaging them to get a true value. You are simply ignoring all his simplifying assumptions!

“No he didn’t. What he did was take measurements from different days and combine them into an average. Talk of different stations never occurs in NIST TN1900 E2. It is completely irrelevant.”

Get out of your basement and join the real world of research science. His assumption was that he was taking measurements of 22 different experiments on the same thing – Tmax at the same location using the same device. It would be no different than doing 22 different experiments on the speed of light or on the growth rate of a plant, or on the gain of a one-transistor amplifier.

What you get from the research is varying values. You would get the same if you had an infinitely precise device with absolute certainty in the measurement. You would *still* get a variation in the measured values due to random fluctuations in the environment you have no control over.

“The same thing wasn’t measured multiple times”

Of course it was – because of Possolo’s assumptions. Tmax is the measurand. It’s no different than measuring the speed of light between two points on different days. You get a variation of Tmax based on the environmental conditions.

You keep wanting to ignore Possolo’s assumptions while stating that you don’t. You want to have your cake and to eat it too!

“the measurement model treats the different temperature measurements as if they were measurements of the single measurand (the average monthly temperature).”

No shit, Sherlock! What do you think I’ve been trying to tell you?

You *can* average the values of multiple measurements of the same measurand – assuming they meet the criteria of independence and a Gaussian distribution. Possolo’s assumptions are *necessary* in order to treat the measurements in this fashion. You keep saying you aren’t ignoring his assumptions and that you accept them and then turn around and ignore their impact on the example!

“And it should be mind numbingly obvious that temperatures on different days are not the same thing.”

Under Possolo’s assumptions they *are* the same thing. And they give experimental variation in the values.

“Possolo thinks there is a true value. He defines it as τ = t_i – ε_i.”

But he doesn’t *KNOW* ε_i!!!

Possolo: “where Ei denotes a random variable with mean 0, for i = 1, …, m, where m = 22 denotes the number of days in which the thermometer was read. This so-called measurement error model (Freedman et al., 2007) may be specialized further by assuming that E1, …, Em are modeled as independent random variables with the same Gaussian distribution with mean 0 and standard deviation σ. In these circumstances, the {ti} will be like a sample from a Gaussian distribution with mean τ and standard deviation σ (both unknown).” (bolding mine, tpg)

It is quite obvious that you are doing nothing but trying to cherry-pick out-of-context pieces from TN1900 to support your assertions without having actually read and understood what he is doing.

“I accept NIST TN1900 E2.”

No you don’t. You don’t even understand it. How can you accept it?

bdgwx
Reply to  Tim Gorman
April 13, 2023 10:06 am

TG: You keep trying to say he was measuring different things

That’s what I’m saying because that’s what he did. Tmax measurements on different days are not the same thing. There is no expectation that they have the same value.

TG: His assumption was that he was taking measurements of 22 different experiments on the same thing

The only way I see to reconcile this semantic debate is if I assume your definition of “same thing” is something that can be arbitrarily declared or assumed. Is that your working definition?

TG: But he doesn’t *KNOW* e_i!!!

You don’t have to know ε_i to know that τ = t_i – ε_i. That’s actually one of the useful things about algebra. That is, you don’t have to know the value of a variable to know how it relates to other variables.

TG: How can you accept it?

Because I don’t think anything in it is incorrect.

Reply to  bdgwx
April 13, 2023 11:49 am

“That’s what I’m saying because that’s what he did. Tmax measurements on different days are not the same thing. There is no expectation that they have the same value.”

They are measurements of the RESULTS of the same thing. Do you think you can run 6 chemical reaction experiments and get the same result, EXACTLY the same result each time? You can weigh a product of a single trial 10,000 times and reduce the measurement uncertainty of that one trial to a very low number. Do you think the next trial measured 10,000 times will give EXACTLY the same value with the same measurement uncertainty? If you do, you have never, ever studied or worked in a physical type field.

I’ll give you a personal example. When overhauling an engine one of the important jobs is measuring the cylinder bores for wear. I can drop the micrometer down 1 inch parallel with the block and do that 100 times at that same point and get a very good idea what the true value is. That is measurement uncertainty. Now I can rotate the micrometer 90° and take 100 measurements and know that true value pretty accurately. Yet the important measurement is the DIFFERENCE because it begins to tell you the eccentricity of the bore. You do that several more times in different places in the bore and can get a range of values that describe the wear pattern of the bore. That is how you determine to replace the block or at least have it rebored. I don’t go to the next 7 cylinders, measure them, average them, find a Standard Deviation, and say the uncertainty has been reduced by √8! That would be a good way to get free rework back in your lap.

That is experimental uncertainty and is what is covered in GUM Annex H. It is measuring “different” things, the result of a separate trial each time. There is no difference between that and determining the range of differences surrounding the mean Tmax in a given month at a given station.
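To make the distinction concrete, here is a small sketch with invented numbers: repeating a measurement of the same bore lets the uncertainty of that mean shrink roughly as 1/√n, but the spread across different bores (or across different days’ Tmax) is real variation in the measurand and is not divided away by √8.

```python
import numpy as np

rng = np.random.default_rng(42)

# 100 repeated micrometer readings of the SAME point on ONE bore
# (invented: true diameter 101.60 mm, instrument scatter 0.01 mm)
one_bore = rng.normal(101.60, 0.01, size=100)
u_mean = one_bore.std(ddof=1) / np.sqrt(len(one_bore))
print(f"one bore:    mean = {one_bore.mean():.4f} mm, "
      f"uncertainty of mean = {u_mean:.4f} mm")

# Single readings of EIGHT DIFFERENT bores (invented wear pattern)
eight = np.array([101.58, 101.61, 101.64, 101.59, 101.66, 101.60, 101.63, 101.57])
print(f"eight bores: mean = {eight.mean():.4f} mm, "
      f"spread (std dev) = {eight.std(ddof=1):.4f} mm  <- real variation, "
      f"not reducible by 1/sqrt(8)")
```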

bdgwx
Reply to  Jim Gorman
April 13, 2023 12:17 pm

Tmax values on different days are more like measuring cylinder bores for wear across many different cylinder bores exposed to different environments. That’s what is happening with Tmax on different days anyway.

Reply to  Jim Gorman
April 15, 2023 5:10 am

I’m getting tired of all of this. bdgwx and bellman see metrology as the language of Venusians.

Reply to  bdgwx
April 15, 2023 4:51 am

“That’s what I’m saying because that’s what he did.”

No, his assumptions are such that he is assuming different measurements of the same thing. Why is that such a hard concept for you to grasp? Why do you *insist* on ignoring the assumptions he used?

“The only way I see to reconcile this semantic debate is if I assume your definition of “same thing” is something that can be arbitrarily declared or assumed. Is that your working definition?”

It *is* what Possolo did. Whether it is a correct, real world approach is questionable. But he was setting up an example of how to approach experimental results, not trying to actually describe reality.

Why is this so hard for you to understand? My guess is that you are just a troll.

“You don’t have to know e_i to know that τ = t_i – ε_i. That’s actually one of the useful things about algebra. That is you don’t have to know that value of a variable to know that it relates to other variables.”

It does little good to know that A is proportional to B if you don’t know the constant of proportionality. It’s like trying to solve A = C * B when you don’t know what C is! Once again you show that you simply don’t live in the real world most of the time.

Reply to  Jim Gorman
April 8, 2023 2:22 pm

Thanks Jim,

I have not averaged anything. I’m showing how the data evolves through one day on a specific date at Bundaberg Qld. On any other day it may evolve differently. Of course there are lags involved, both in the screen and in the wider landscape, which give rise to hysteresis – meaning the warming trajectory is not the same as the cooling trajectory. Also the BoM identifies Tmin and Tmax from that single probe trace shown in the top graph, not from two different instruments.

The top graph consists of 1,440 data points per 24 hours, i.e. 1-minute attenuated values, which is what the instrument produces. To create the bottom graph I aggregated the 1-minute data into bins of 6 observations (one-tenth of an hour) and calculated the max and min for each 6-minute period, in other words the spread or range of data values within each sample of 6 values.
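For anyone wanting to reproduce the bottom graph from their own logger file, a minimal sketch of that 6-minute aggregation (the 1-minute series here is synthetic, just a stand-in for the Bundaberg trace):

```python
import numpy as np

# Synthetic stand-in for one day of 1-minute values (1,440 readings)
rng = np.random.default_rng(0)
minutes = np.arange(1440)
one_min = 20 + 8 * np.sin((minutes - 360) / 1440 * 2 * np.pi) + rng.normal(0, 0.2, 1440)

# Aggregate into 6-minute bins (240 per day) and take min and max of each,
# i.e. the spread of values within every sample of 6 readings.
bins = one_min.reshape(-1, 6)      # shape (240, 6)
six_min_min = bins.min(axis=1)
six_min_max = bins.max(axis=1)
six_min_range = six_min_max - six_min_min
print(six_min_range[:10])          # spread within the first ten 6-minute periods
```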

Min diverges from max around 7am, and the range increases through the day up to about 2pm. The max for the day is that spike just under the “7” in the caption – the raw data showed two consecutive values were the same, before the trace fell away. Tmin, which happened around dawn, 5.30am or so, was the same for a run of some 10 individual data points (i.e., Tmin held for 10 minutes). The highest temperature for the day also does not necessarily coincide with the point of inflection in the Max-curve.

There is some dynamic process happening through the heat of the day that is quite rapid. Air is moving through the Stevenson screen, there is circulation within the screen, there was no rain so there was probably no cloud (perhaps a bit of cirrus, which is high cloud made of ice crystals); there would be humidity changes and evaporation going on, all of which would contribute to the observed data. None of this would be detectable using thermometers – they just could not be read quickly enough.

As the heating rate is faster than the cooling rate, the trace is not sinusoidal. Heating is by SW(in) radiation on the screen and the landscape -> advection to the air which is measured by the probe. The rate of cooling is probably the reverse, with the landscape re-radiating LW to space (LW goes on all through the day of course), but the balance tips from SW to LW after around 4pm. The sun is also probably directly overhead at Bundaberg in late February.

I think the process is interesting, particularly from the point of view of the sensitivity of the probe and screen.

I think I’ll leave this now and get on with something else.

All the best,

Bill Johnston

Reply to  Bill Johnston
April 8, 2023 2:55 pm

Here is my 5min data for the past week. It is pretty obvious that part of the profile, daytime, is sinusoidal while the nighttime part of the profile is not.

This isn’t due to the vagaries of “hysteresis”, nor is it due to temporary spikes in the temperature (e.g. rain, clouds, etc).

If you do an integration of the entire profile any temporary spikes wind up having a very small impact on total value. I am recording 288 data entries per day. Even 10 temp spikes only represent about 3% of total data. And the variation of those spikes will be even less compared to the absolute value of the temperatures.
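A quick back-of-envelope check of that claim, with invented numbers: ten +2 °C spikes among 288 five-minute samples shift the integrated daily mean by roughly 2 × 10/288 ≈ 0.07 °C.

```python
import numpy as np

rng = np.random.default_rng(1)
base = 15 + 6 * np.sin(np.linspace(0, 2 * np.pi, 288))   # smooth invented daily profile, deg C
spiked = base.copy()
spiked[rng.choice(288, size=10, replace=False)] += 2.0    # ten +2 deg C spikes

print(f"mean without spikes: {base.mean():.3f}")
print(f"mean with spikes:    {spiked.mean():.3f}")
print(f"difference:          {spiked.mean() - base.mean():.3f}")   # about 0.07 deg C
```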

Reply to  Tim Gorman
April 8, 2023 5:05 pm

Thanks Tim,

Except that your 5-min data wasn’t there, so why is the daytime part “sinusoidal while the nighttime part of the profile is not”? I would be interested to see your graph.

Hysteresis is real, it happens and shows up in data. It is not just a soil water phenomenon, it represents ‘memory’ in the system – heat does not flow at the same rate into and out of the environment being monitored.

I am also interested in the spikes. Therefore I’m not looking to smooth the data using some function; I could easily enough, using a running mean or fitting a LOWESS or splines, but why would that help when my interest was in highlighting what is going on? And why assume a function anyway?

All the best,

Bill Johnston

Reply to  Bill Johnston
April 9, 2023 2:53 am

To close off the conversation: the comments have been illuminating and useful.

I installed five solar-powered panels in my house to illuminate certain areas using LED technology. While I’m not measuring anything, the panels illuminate when the sun comes up and throughout the day. But light output flickers when the koala tree next door blows around. They also momentarily go blank if an aircraft or a bird flies past between the sun and my house.

Response to incoming SW radiation is very fast. A cloud, whatever, has a virtually instantaneous effect. But that is noise, and on average noise is not consequential.

For rapid sampling instruments, sorting out spikes from what is happening with the climate is a paramount issue.

Sincerely,

Dr Bill Johnston

Reply to  Bill Johnston
April 9, 2023 6:59 am

“Except that your 5-min data wasn’t there, so why is the daytime part ‘sinusoidal while the nighttime part of the profile is not’? I would be interested to see your graph.”

Sorry! short term memory loss.

I have no doubt that hysteresis exists. But I suspect what you are really talking about is thermal inertia. Thermal inertia is why peak temperatures occur *after* the sun has passed overhead, not while it is directly overhead.

Heat loss at night is very much related to T^4; the derivative of T^4 is a polynomial of the third power (4T^3), so the loss rate falls away steeply as the air cools. The temperature trace therefore looks much like an exponential decay. That is not hysteresis.
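A rough numerical sketch of that idea, under the simplifying (and assumed) model that the air relaxes radiatively toward an effective sink temperature, dT/dt = -k(T^4 - Tsky^4); the constants are invented, but the hourly drops show the decelerating, exponential-looking decay:

```python
import numpy as np

# Assumed toy model (not anything measured): air relaxing radiatively toward an
# effective sink temperature, dT/dt = -k * (T**4 - Tsky**4), Kelvin throughout.
T0, Tsky = 290.0, 275.0     # invented dusk temperature and effective sink
k = 2.0e-13                 # invented constant, K^-3 s^-1
dt, steps = 60.0, 600       # 1-minute steps over a 10-hour night

T = [T0]
for _ in range(steps):
    T.append(T[-1] - k * (T[-1] ** 4 - Tsky ** 4) * dt)
T = np.array(T)

# Drop in each successive hour: the rate decays as T approaches Tsky, so the
# trace resembles an exponential relaxation even though the law is T^4.
print(np.round(-np.diff(T[::60]), 2))
```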

weektempdew (1).png
Richard Greene
Reply to  Bill Johnston
April 6, 2023 9:34 pm

Too complicated

Here’s my suggestion

Do a poll of 1000 old timers

Ask them if it’s colder or warmer today
compared with when they were young

The answer is either global warming or global cooling.

That’s all we really need to know.

The global warming CAGW scaremongering will continue anyway.

Reply to  Richard Greene
April 7, 2023 4:11 am

Not too complicated Richard.

Before accepting that data are valid, we need some understanding of the process involved in acquiring the data.

I spend a lot of time researching weather station sites and comparing those findings with what we are told by the Bureau of Meteorology, and what the data says. Detailed studies can take months to complete and I’ve worked out methods of teasing-out information that otherwise would not be apparent.

A good example is my study of the Charleville aerodrome site in central Queensland that became a United States Army Air Force transit base during WWII. (https://www.bomwatch.com.au/bureau-of-meteorology/charleville-Queensland/).

As was the case for Townsville, Rockhampton and other places such as Rutherglen, in order to warm the climate, the Bureau lied about the location of the original Charleville Aeradio site. (My report included an overlay plan and a picture of the site).

Incredibly, in order to cover for that and to merge with prior post office data, they made the data up from 1943 to 1955. Maybe Nick Stokes can explain why they created imaginary data for the 13 years that encompassed the 1950 global warming turning-point.

All this and more can be detected by the full suite of BomWatch protocols.

While I find such studies interesting in themselves, they also highlight the extent to which elite scientists within the Bureau of Meteorology and their masters at CSIRO and Melbourne University (and Neville Nicholls, who is emeritus at Monash Uni) have misled Australians into believing in warming that does not exist.

Some of those scientists spruik for the WWF-affiliated Climate Council. Some have climbed the greasy pole to become Fellows at the Australian Academy of Science. Some are also directly connected to WWF, which for decades has ruled Australia’s climate change agenda.

The overarching issue is that many scientists in institutions we should be able to trust are so embedded in the murk of the narrative that they can’t or won’t acknowledge the damage they have done to science; and collaterally, to future generations. No nation on earth would deliberately flush their future prosperity down the toilet like Australia has.

Ruled by elitist bed-wetters and spineless, brain-dead politicians that we have to vote for, like the USA, Australia is also becoming a failing nation!

All the best,

Dr Bill Johnston

DWM
Reply to  JCM
April 7, 2023 8:33 am

radiation enthusiasts ignore such matters (latent heat).

We worry about macro-heat flow and the consequences for humanity. When we get that right then maybe we’ll study latent heat.

JCM
Reply to  Peta of Newark
April 6, 2023 12:28 pm

here

LE.png
Richard Greene
Reply to  Peta of Newark
April 6, 2023 9:20 pm
  1. you can program it to record data at intervals of down to 10 seconds

We need temperature peaks at one second increments,
Maybe one tenth of a second
10 seconds is old school

Richard Greene
Reply to  Peta of Newark
April 6, 2023 9:30 pm

 If we want super accurate and pristine recordings of temperature, we need to be a damn sight further away than 100 metres from ANY possibilities

If we want accurate data, we must have people collecting the data who also want accurate data, and clearly explain all the possible errors in whatever data they present to the public. They should not make false claims of a +/- 0.1 degree margin of error. They must reveal exactly how much infilling was done and exactly what adjustments were made to the raw data initially, and to the data reported to the general public.

I am obviously daydreaming about a level of scientific integrity from government bureaucrats that cannot be expected on this planet.

Bob
April 6, 2023 9:37 am

I have said this before: there is damn little that the government should be in charge of. They should have a role as regulator and that’s about it. No organization should be self-regulated, especially the government. Whoever is measuring and recording our temperature should be scored by independent checkers. That way we have checks and balances from three different organizations.

Richard Greene
Reply to  Bob
April 6, 2023 9:36 pm

Leftists ruin everything they touch
Why should temperature statistics be an exception?

April 6, 2023 10:07 am

Here’s Part 2 of my experimentation with a solid state temperature data logger….

This is the same screenshot as before, with ‘daytime’ highlighted.

It’s got 2 very interesting parts

First we see the Hedgehog = all those spikes that occur on either side of the peak of the curve.
I’ve seen those before in previous tests, and that is why I wanted to be 1000% sure that El Sol did not shine directly upon the actual sensor probe.
As I said, my homemade screen has 3 separate solar shields

Basically, contrary to what Climate Science tells us, El Sol is directly heating the air.

Those spikes are caused by passing clouds – such are the combined response times of both the atmosphere and the sensor.
We all do recall how ‘The atmosphere is perfectly transparent to solar and infra red radiations’
Yeah. Right. That picture says ‘Bollox’ to that

Second – have any alert enquiring minds sussed it?

This is for Jennifer: Those spikes are telling us why solid state sensors (platinum-resistance or silicon-diode) record higher temps than Mercury thermometers
Clouds cause Global Warming and they do it by ‘cooling’

Picture if you will the day that that day was – it started off blue-sky clear, then a whole load of white fluffy clouds blew up and over.
There were quite a lot of them, but with short bursts of brilliant sunshine in between.
You can see, just by eyeballing the day’s temperature trajectory, how a big, heavy, slow-moving Mercury Thermometer would smooth out that curve.

But the solid state sensor would see and record those spikes and when the time came to record the day’s average temperature using (Tmax+Tmin)/2 – the result will be higher than what the Mercury would see.

Play with the data (at Dropbox in my other post)

I did another run/plot and used 3 ways to get the day’s average temp

  1. I added all the 20 second samples together and divided by n
  2. I looked for the Max and the Min in the 20-second data and did a (Max+Min)/2
  3. I ran a simple average over the 20 second data, to get = 5 minute averages then did a MaxMin on that (to replicate the Mercury thermometer)

Ignoring #1, the average for that day’s temperature came out 0.2 Celsius warmer when using the raw 20-second data than when using the same 20-second data run through a ‘5 minute low pass filter’.
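For anyone who wants to replicate this with the Dropbox file, here is a minimal sketch of the three methods; the synthetic 20-second series below is only a stand-in for the real logger data:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 24 * 60 * 3                                    # 20-second samples per day = 4320
t = np.linspace(0, 24, n, endpoint=False)
temps = 12 + 8 * np.clip(np.sin((t - 6) / 24 * 2 * np.pi), -0.4, None)  # invented daily curve
temps += rng.normal(0, 0.15, n)                    # sensor noise
midday = (t > 11) & (t < 15)
temps[midday] += rng.choice([0.0, 1.5], size=midday.sum(), p=[0.9, 0.1])  # cloud-edge spikes

m1 = temps.mean()                                  # 1. integrated mean of all samples
m2 = (temps.max() + temps.min()) / 2               # 2. (Max+Min)/2 on raw 20-second data

# 3. average into 5-minute blocks (15 samples each) first, then (Max+Min)/2 --
#    a rough stand-in for a slow-responding mercury thermometer
five_min = temps.reshape(-1, 15).mean(axis=1)
m3 = (five_min.max() + five_min.min()) / 2

print(f"all samples: {m1:.2f}   raw (Tmax+Tmin)/2: {m2:.2f}   filtered (Tmax+Tmin)/2: {m3:.2f}")
```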

And that is why solid state thermometers read high – as many have noticed.
Clouds cause warming – and they do it because they are cold.

Ain’t that just sooooo very beautiful.

My Garden April Daytime 2023.PNG
wh
April 6, 2023 10:26 am

I could not agree more. We literally have no good data. They don’t want the most accurate network because they know it’s going to show a very insignificant trend. They’d rather correct and homogenize than start a new dataset, despite the fact that it’s just easier to have a perfect station.

bdgwx
Reply to  wh
April 6, 2023 1:37 pm

How do you start a new dataset and have it retroactively record temperatures back to the 1800s or even 1700s?

wh
Reply to  bdgwx
April 6, 2023 3:51 pm

Bdgwx, you can’t. Unfortunately we can only start, supposedly, at 2005. It’s safe to say that it’s warmer than those centuries, but by how much will never be known because of these problems (land use change, UHI, siting problems).

If we start another new dataset, we can use it forever until we can’t. It won’t be useful until 2223 at the earliest, but better now than never. We just really need a new, good, transparent, and independent dataset.

bdgwx
Reply to  wh
April 6, 2023 5:46 pm

Walter said: “It’s safe to say that it’s warmer than those centuries but by how much will never be known”

How do you know it is warmer then?

How do you counter claims that it has warmed more than what existing datasets show?

Reply to  bdgwx
April 8, 2023 10:17 am

You can’t! That is the whole point. Fiddling with past data to create new information (I simply can’t call it new data; it is processed info to meet a need) is not scientific. If you want accuracy to 3 decimal places, use measuring devices capable of providing that accuracy. You can’t find a physical laboratory that allows one to claim more precision than what was measured, not even with statistics.

You want long records to guard against spurious trends. Here is a note from the Cambridge Dictionary.

Looking at different data and subdividing variables will inevitably surface spurious correlations — trends that correlate by chance and have no causal connection.

Find long records, don’t modify data to create them!

Here is a statement from the CRN NOAA site about rounding.

“Temperatures used in calculating departures from normal, degree days, monthly means, etc., should carry one decimal place in final form, and should not be rounded at all prior to the last step in a calculation.

Daily Mean = (MaxT + MinT)/2 = XX.0 or XX.5 for any stations observing in integers
Daily Mean = (MaxT.x + MinT.x)/2 = XX.xxx … (no rounding of intermediate value)
For final form, round Daily Mean to one place: XX.x”
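Read literally, that guidance amounts to something like this (values invented, integer-observing station assumed):

```python
# Carrying full precision through intermediate steps and rounding only the
# final value to one decimal, per the quoted guidance (values invented).
max_t = [71, 68, 73, 75, 70]      # whole-degree MaxT observations
min_t = [48, 51, 47, 52, 50]      # whole-degree MinT observations

daily_means = [(hi + lo) / 2 for hi, lo in zip(max_t, min_t)]   # each ends in .0 or .5
monthly_mean = round(sum(daily_means) / len(daily_means), 1)    # round only at the end

print(daily_means)     # [59.5, 59.5, 60.0, 63.5, 60.0]
print(monthly_mean)    # 60.5
```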

This pretty much rules out using CRN to obtain hundredths or thousandths digits in anomalies, doesn’t it?

Older temperatures should have a 0 or 5 in the tenths digit and nothing else.

Even Dr. Possolo did this in TN 1900.

Just amazing how you all can squeeze more accuracy out of this data than what it was recorded with! I know of no other scientific endeavor that allows this!

Ancient Wrench
April 6, 2023 10:37 am

What happened to the NOAA(?) efforts to characterize the effects of urbanization on weather stations by comparing existing stations to new adjacent properly-sited stations?

bdgwx
Reply to  Ancient Wrench
April 6, 2023 1:36 pm

Ancient Wrench
Reply to  bdgwx
April 6, 2023 9:55 pm

No, that’s just statistical self-stimulation. I seem to recall a study taking an existing station whose data was clearly contaminated by development, then setting up other less-compromised stations only a few hundred feet away. Anthony?

April 6, 2023 10:41 am

The more I study this subject, the worse the quality of the data looks. Not just the temperature data but, as pointed out, the entire energy budget, calculation, methodology and the physics. There is no way any of the models can be relied on given this. Not even close.

mleskovarsocalrrcom
April 6, 2023 10:44 am

There is no way to get all parties to agree to a one-stop climate database unless those that manipulate it today are given their choice of data sets, parameters, and media representation.

morfu03
April 6, 2023 11:08 am

This is an important article, and I keep thinking how this fact, that we do not know the global surface temperature trend (or its anomaly) at the level required, could be pointed out even better.

A lot of those temperature (anomaly) over time plots neglect uncertainty, which I guess only a rigorous mathematical treatment could reveal.
It seems very obvious for the “satellite temperature products” where several different methods using the same data are plotted as different lines, whereas the correct way should be to treat differences in methodology as systematic uncertainty.
Showing a plot with UAH, GISS and RSS as separate lines is wrong as they are basically based on the same data!

bdgwx
Reply to  morfu03
April 6, 2023 7:13 pm

What you are talking about is sometimes called structural uncertainty in the literature as well. BTW…UAH and RSS are largely (not quite) based on the same data. GISS is a completely different kind of dataset with no overlap with UAH/RSS at all.

MarkW
April 6, 2023 11:55 am

I remember an experiment Anthony performed prior to starting the project to review the ground based sensor network.
He had noticed that originally all of the sensor stands had been painted using whitewash; however, gradually the paint of choice had moved to latex-based paints.

He put sensors into two boards, one painted with whitewash and one painted with white latex. It turns out whitewash reflects infrared better than latex does. As a result, the board painted in whitewash was about 0.5 to 1 °C cooler than the board painted in latex.

Of course, this revelation is still not being used to “adjust” the record.

Reply to  MarkW
April 6, 2023 1:00 pm

That’s why historical ground-based measurements are no better than +/- 1 degree. They are not fit for purpose at any resolution finer than that. That is also the reason that satellites were launched: to record more accurately in the troposphere, where ALL the CO2 radiative effect that can be measured takes place.

bdgwx
Reply to  doonman
April 6, 2023 8:01 pm

Spot satellite measurements aren’t great. Christy et al. 2003 found that monthly average spot measurements have an uncertainty of ±2.4 C (2σ). They don’t state the instantaneous uncertainty, but it would obviously have to be higher.

April 6, 2023 12:30 pm

No. What makes sense is that there should always be a crisis that cannot be solved without fighting it with your tax dollars.

Rud Istvan
April 6, 2023 12:46 pm

I agree that the current government run weather stations are not fit for climate purpose. I disagree that an independent network is desirable. There are three reasons.

  1. By the time it has accumulated 30+ years of data, the whole global warming thing will be busted due to renewables failures.
  2. If the globe was warming due to CO2 plus feedbacks, sea level rise should be accelerating. It isn’t.
  3. We have the UAH alternative.

bdgwx
Reply to  Rud Istvan
April 6, 2023 1:34 pm

UAH gets their data from NASA and NOAA.

Reply to  bdgwx
April 6, 2023 9:02 pm

SATELLITE data that is…….satellite.

bdgwx
Reply to  Sunsettommy
April 6, 2023 9:29 pm

Yes. UAH is a satellite dataset. Is that what you are saying/asking?

Richard Greene
Reply to  Rud Istvan
April 6, 2023 9:41 pm
  1. If the globe was warming due to CO2 plus feedbacks, sea level rise should be accelerating. It isn’t.

PROBABLY WRONG
CO2 does not cause melting of most of Antarctica
Antarctica has about 90% of the world’s ice
Therefore, sea level rise acceleration
should not be expected from CO2 melting Antarctica

Rud Istvan
Reply to  Richard Greene
April 8, 2023 9:52 am

See my previous article here years ago on ‘sea level rise, acceleration, and closure’. Specifically the closure part. The two main closure terms are thermosteric rise and Greenland ice sheet. Antarctica ice sheet is last and least. So my statement stands.

April 6, 2023 12:51 pm

Why doesn’t Berkeley Earth count as an independent nonprofit organization?
https://berkeleyearth.org/about/

Reply to  scottjsimmons
April 6, 2023 1:22 pm

Because they use the same corrupted weather station data as NOAA and NASA (with a few exceptions). Mostly all from the GHCN.

Same base soup ingredients, different flavor.

There really is no truly independent surface temperature network.

bdgwx
Reply to  Anthony Watts
April 6, 2023 1:31 pm

Do you know of a global average temperature dataset that spans the same period of record as BEST that you would consider uncorrupted?

Reply to  Anthony Watts
April 6, 2023 1:43 pm

Except Berkeley investigated station quality, urban heat islands and other factors and found that they did not significantly impact global temperature trends. Berkeley is an independent nonprofit that couldn’t find the corruption you claim exists.
https://berkeleyearth.org/papers-climate-science/

Reply to  scottjsimmons
April 6, 2023 2:59 pm

Last time I looked, Berkeley remodelled/corrupted Australian temperature series. I’ve done some looking at how Berkeley have changed the temperature data from both Amberley and Bourke. It’s unrecognisable. More here: https://jennifermarohasy.com/2016/08/speaking-truth-to-power/

bdgwx
Reply to  Jennifer Marohasy
April 6, 2023 5:41 pm

You say…

The inclusion of Berkeley scientists was perhaps to make the point that all the key institutions working on temperature series (the Australian Bureau, NASA, and also scientists at Berkeley) appreciated the need to adjust-up the temperatures at Amberley.

The problem is that Berkeley Earth does not make any adjustments to the station data when forming the global average temperature. That’s what makes their method unique. See [Rohde et al. 2013] for details regarding their method.

Reply to  Jennifer Marohasy
April 7, 2023 3:41 pm

Last time I checked you have no education or expertise in this field, and you have nothing to offer but a conspiracy theory.

Editor
Reply to  Anthony Watts
April 7, 2023 12:43 am

As you undoubtedly know, Berkeley Earth is worse than that. It was set up by Richard Muller, who lied about being a converted sceptic, specifically to support the establishment (and no doubt to gain a bit of fame and/or fortune for Richard Muller).
“CALL me a converted skeptic” – Richard Muller, NYT Jul 2012
“I was never a skeptic” – Richard Muller, 2011
“Global warming. There is a consensus that global warming is real. There has not been much so far, but it’s going to get much, much worse.” – Richard Muller Nov 2008