By Andy May

Steven Mosher complained about my previous post on the difference between the final and raw temperatures in the conterminous 48 states (CONUS) as measured by NOAA’s USHCN. That post can be found here. Mosher’s comment is here. Mosher said the USHCN is no longer the official record of CONUS temperatures. This is correct as far as NOAA/NCEI is concerned: they switched to a dataset they call *n*ClimDiv in March 2014. Where USHCN had a maximum of 1,218 stations, the new *n*ClimDiv network has over 10,000 stations and is gridded onto a much finer grid, called *n*ClimGrid. The *n*ClimGrid gridding algorithm is new; it is called “climatologically aided interpolation” (Willmott & Robeson, 1995). The new grid has 5 km resolution, much finer than the USHCN grid.
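As a rough illustration of the idea behind climatologically aided interpolation (as I understand Willmott & Robeson, 1995): station *anomalies* relative to a long-term climatology are interpolated, and a high-resolution climatology is added back. This is a hedged one-dimensional sketch; all station positions, temperatures, and climatology values are invented.

```python
import numpy as np

# Conceptual 1-D sketch of climatologically aided interpolation (CAI):
# interpolate station *anomalies* (observation minus station climatology),
# then add a high-resolution climatology back. All values are invented.

station_x = np.array([0.0, 10.0, 20.0])        # station positions (km)
station_temp = np.array([14.2, 12.9, 11.1])    # observed monthly means (C)
station_clim = np.array([13.8, 12.5, 11.0])    # long-term means at stations (C)

grid_x = np.linspace(0.0, 20.0, 5)                     # 5 km target grid
grid_clim = np.array([13.8, 13.1, 12.5, 11.8, 11.0])   # fine-scale climatology

# 1. Anomalies vary more smoothly in space than absolute temperatures.
anom = station_temp - station_clim

# 2. Interpolate the anomalies onto the grid (linear, for illustration).
grid_anom = np.interp(grid_x, station_x, anom)

# 3. Add the fine-scale climatology back to recover absolute temperatures,
#    so elevation-driven structure comes from the climatology rather than
#    the sparse station network.
grid_temp = grid_clim + grid_anom
```

The point of interpolating anomalies rather than absolute temperatures is that anomalies are spatially smooth, while the sharp topographic structure is restored by the fine-scale climatology.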

While the gridding method is different, the corrections to the raw measurements recorded by the *n*ClimDiv weather stations are the same as those used for the USHCN station measurements. This is discussed here and here. As a result, the *n*ClimDiv and USHCN CONUS yearly averages are nearly the same, as seen in Figure 1. The data used to build the *n*ClimDiv dataset are drawn from the GHCN (Global Historical Climatology Network) dataset (Vose, et al., 2014).

*Figure 1. The USCRN record, shown in blue, only goes back to 2005. *n*ClimDiv and USHCN go back to the 19th century and lie on top of one another, with very minor differences. In this plot both datasets are gridded with the new *n*ClimGrid gridding algorithm.*

In Figure 2, USCRN, *n*ClimDiv and USHCN, gridded with *n*ClimGrid, are shown overlain with the average USHCN station data used in my previous post. The station average is plotted with a yellow dashed line.

*Figure 2. Same as Figure 1, but the non-gridded final USHCN temperature anomalies have been moved to a common reference (the 1981-2010 average) and plotted on top of the grid averages. The difference between the gridded and non-gridded averages is most noticeable in the peaks and valleys.*

In Figure 2 we can see that the *n*ClimDiv yearly gridded average anomalies are similar to the older, non-gridded USHCN yearly averages. The difference is not in the final data, but in the gridding process. The USCRN reference network station data is also similar to *n*ClimDiv and USHCN, but it only goes from 2005 to the present.

As explained here by NOAA:

“The switch [from USHCN] to nClimDiv has little effect on the average national temperature trend or on relative rankings for individual years, because the new dataset uses the same set of algorithms and corrections applied in the production of the USHCN v2.5 dataset. However, although both the USHCN v2.5 and nClimDiv yield comparable trends, the finer resolution dataset more explicitly accounts for variations in topography (e.g., mountainous areas). Therefore, the baseline temperature, to which the national temperature anomaly is applied, is cooler for nClimDiv than for USHCN v2.5. This new baseline affects anomalies for all years equally, and thus does not alter our understanding of trends.”

Prior to making Figure 2, we adjusted the USHCN station average from our previous post to the new baseline. The difference is approximately -0.33°C. This shift is legitimate; it results from the new gridding algorithm, which explicitly accounts for elevation changes, especially in mountainous areas.
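A minimal sketch of the re-baselining step (the anomaly series and years below are invented; only the operation of subtracting a reference-period mean is the point):

```python
import numpy as np

# Move an anomaly series to the 1981-2010 reference period by subtracting
# that period's mean. The series itself is invented for illustration.

years = np.arange(1975, 2021)
rng = np.random.default_rng(0)
anom_old = 0.01 * (years - 1975) + rng.normal(0.0, 0.1, years.size)

# Mean over the new reference period.
ref = (years >= 1981) & (years <= 2010)
baseline = anom_old[ref].mean()

# Subtracting a constant moves the whole curve up or down;
# trends and year-to-year changes are unchanged.
anom_new = anom_old - baseline
```

Because only a constant is subtracted, the trends and year-to-year differences are untouched, which is why a shift of roughly -0.33°C is harmless.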

**Conclusions**

While NOAA/NCEI has dropped USHCN in favor of a combination of USCRN and *n*ClimDiv, the anomaly record from 1900 to today hasn’t changed in any significant way. The baseline (or reference) changed slightly, but since we are plotting anomalies the baseline is not important; it just moves the graph up and down, while the trends and year-to-year changes stay the same. More importantly, the adjustments made to the raw data, including the important time-of-observation bias corrections and the pairwise homogenization (PHA) changes that looked so suspicious in my previous post, have not changed at all and are still used.

The *n*ClimDiv dataset uses many more stations than USHCN, and if the stations are well sited and well maintained this is a good change. The USCRN dataset comes from a smaller set of weather stations, but these are highly accurate and carefully located. I do not think the USCRN stations are part of the *n*ClimDiv set; rather, they are used as an independent check on it. The two systems of stations are operated independently.

My previous post dealt with the corrections, that is, final minus raw temperatures, used in the USHCN dataset. They looked very anomalous from 2015 through 2019. The same set of corrections is used in the GHCN dataset, which is the source of the data fed into *n*ClimDiv. So the problem may still exist in the GHCN dataset. I’ll try to check that out and report on it in a future post.

You can purchase my latest book, *Politics and Climate Change: A History*, here. The content in this post is not from the book.

# Works Cited

Vose, R., Applequist, S., Squires, M., Durre, I., Menne, M., Williams, C., . . . Arndt, D. (2014, May 9). Improved Historical Temperature and Precipitation Time Series for U.S. Climate Divisions. *Journal of Applied Meteorology and Climatology, 53*(5), 1232-1251. Retrieved from https://journals.ametsoc.org/jamc/article/53/5/1232/13722

Willmott, C., & Robeson, S. (1995). Climatologically Aided Interpolation (CAI) of Terrestrial Air Temperature. *International Journal of Climatology, 15*, 221-229. Retrieved from https://rmets.onlinelibrary.wiley.com/doi/abs/10.1002/joc.3370150207

NOAA dropped USHCN because I and the volunteers that photographed hundreds of noncompliant stations turned it into a PR train wreck for them.

They quietly closed a number of embarrassing stations with no public notice, such as Marysville CA, Tucson AZ, and Ardmore OK.

Bottom line – siting makes a difference.

https://wattsupwiththat.com/2015/12/17/press-release-agu15-the-quality-of-temperature-station-siting-matters-for-temperature-trends/

Thanks Anthony

Siting quality doesn’t remove the systematic error due to non-aspirated shields.

USHCN measurement uncertainty is at least ±0.5 C over the entire 1880-2010 range because the shields were and are not aspirated. That ±0.5 C uncertainty does not average away with large data sets.

The 1981-2010 normal uses USHCN measurements, which has the same ±0.5 C uncertainty.

Systematic measurement error is small in the USCRN network because the shields are aspirated. The USCRN overall uncertainty is reduced to about ±0.05 C – the manufacturer’s calibration uncertainty, which does not average away either.

However, when the USHCN 1981-2010 normal is subtracted from USCRN temperatures, the ±0.5 C USHCN normal uncertainty propagates into the USCRN anomalies.
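As a sketch of the propagation argument in this comment: independent uncertainties in a difference such as anomaly = temperature - normal combine in quadrature (root-sum-square). The ±0.05 C and ±0.5 C figures are the ones asserted in the comment, not values I have verified.

```python
import math

# Quadrature (root-sum-square) combination of independent uncertainties
# in a difference: anomaly = temperature - normal.
# The two values below are the ones asserted in the comment.

u_temp = 0.05    # claimed USCRN measurement uncertainty (C)
u_normal = 0.5   # claimed uncertainty of the USHCN-based normal (C)

u_anomaly = math.sqrt(u_temp**2 + u_normal**2)  # ~0.502 C
```

The larger term dominates the quadrature sum, which is the commenter's point: on this argument the anomaly inherits essentially the full ±0.5 C of the normal.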

The USCRN anomalies then are subject to a ±0.5 C uncertainty that their temperature measurements did not have.

When the normal period eventually becomes 2011-2040 and is fully USCRN, the normal uncertainty will become (should be) about ±0.05 C.

However, the lower limit of uncertainty in the 1880-2005 temperature record will always be at least ±0.5 C because the individual measurements themselves are no more accurate than that.

If the USCRN anomaly record is grafted onto the 1880-2005 USHCN record, the USHCN uncertainty enters with the USHCN record. The extended record will then have an implicit uncertainty of ±0.5 C with respect to the 1880-2005 time-range, courtesy of the grafted USHCN record.

The whole air temperature record is a study in incompetence.

The people at GISS, NOAA, UKMet, and BEST pay no price for their incompetence. They keep their jobs, the science societies remain silent, the money keeps coming in, the media continue to pull their collective forelocks, and the climate nutters keep up their vociferous support.

So, an air temperature record thoroughly corrupted with false precision gets blandly reiterated as the various groups of compilers determinedly reject the hard judgment of measurement science.

“The whole air temperature record is a study in incompetence.”

Do you include the thousands of dedicated and serious people who kept the record for a century prior to 1989? Were they deeply incompetent? They made 50 million recordings on TMAX alone. All garbage?

I don’t think so.

If your goal is “ISO-9000 level accuracy at every station is required to determine the average temperature of the earth” then fine. However, that does not answer the question “Do the individual plots of each station one at a time show any abnormal warming at the location using the ‘less than perfect’ instruments and protocols?”

The plot of a station that was consistently low or high over a century is more valuable than a hyper-modern station that only goes back a decade or two, let alone a model temp reconstruction.

The records being kept were weather records; the required accuracy and precision was what was achievable while reading values leaning into a Stevenson screen at night with sleet being blown down your neck. In the Southern Hemisphere there were very few observers, fewer than 100 until 1900, and most of those in Australia. The observations are being misused by today’s climate ‘scientists’ as if they were AWOS, with a claimed precision and accuracy that was unachievable, and most observations are invented. An analogy would be comparing a pace stick to a micrometer; the errors from a century ago are a lot larger than the accuracy claimed by today’s climate ‘scientists’, who are in any case measuring the wrong variable (as I have said several times). The temperature of a volume of atmosphere can be varied as defined in the gas laws and by the enthalpy of the volume being tested, due to the latent heat of the varied water content.

It really doesn’t matter how many observation sites you have – if you are measuring the incorrect variables then the output is meaningless. This is not the fault of the Met Observers a century ago; it is the fault of today’s climate ‘scientists’ misusing the century old observations. It is probable that for many sites the humidity is available and the energy content of the air in kilojoules per kilogram could be calculated. However, the required coverage of observation stations particularly in the southern hemisphere was not available until probably 1945.

““Do the individual plots of each station one at a time show any abnormal warming at the location using the ‘less than perfect’ instruments and protocols?””

How do you know? If they have a ±0.5 C uncertainty, then until the trend exceeds that interval you can’t tell; e.g., if a reading is 20 C ±0.5 C, then until the temperature readings exceed 21 C you can’t tell if there is any warming at all!

@Ian W

“really doesn’t matter how many observation sites you have – if you are measuring the incorrect variables then the output is meaningless.”

That is exactly 180 degrees wrong.

Here is the correction: It really doesn’t matter how accurate-in-the-perfection-of-the-abstract the cumulative ‘average’ is measured to for any one given station; it matters that you have many observation sites over a long time, each consistent, mostly, in itself. You then can look at 1200 sine curves one by one, and the effect of seeing no abnormal warming in this one, then the next one, then the next one…

And none show abnormal warming.

That is the opposite of meaninglessness.

wl-s: “Do you include the thousands of dedicated and serious people …, etc., etc.”

I clearly referred to those producing the modern air temperature record, didn’t I.

Your question is a non-sequitur.

If anything, the carelessness of the modern producers is an insult to all the dedicated and serious people who came before.

And regardless of their dedication and seriousness, the temperatures they recorded from non-aspirated shelters were no more accurate than ±0.5 C.

That’s systematic measurement error, not random error. It does not average to zero in large data sets.

wl-s, your reply is meaningless.

“It does not average to zero in large data sets.”

did it skew all recordings higher or lower?

wl-s,

The issue is *not* individual stations being looked at separately. If that were what the climate scientists were doing, then I would agree with you somewhat about following trends. There are two main issues.

1. What is the NOAA “mobile” lab calibrated against when it gets to the remote site? Do they also transport a mobile standards lab with them everywhere? If not, then the NOAA measurements will have their own uncertainty interval, so how will Zumwhere be able to tell how far off their measurement device is?

2. Climate scientists today don’t look at 10,000 individual stations to determine what is going on. They “average” all 10,000 together. Even with stations having ±0.05 C uncertainty, that uncertainty adds by root sum square, so the overall uncertainty will grow by sqrt(10^4) * 0.05 = ±5 C. With an overall uncertainty of 5 C, how is anyone going to tell if the “average” temperature went up or down by even a degree, let alone by tenths or hundredths of a degree? It’s somewhat ironic that the 100 or so stations originally being used, which had ±0.5 C uncertainty, also have an uncertainty interval of ±5 C when averaged together [sqrt(100) * 0.5 = 5]. From an uncertainty viewpoint, the new dataset of 10,000 stations doesn’t give you any more of a useful average than the older 100 (or so) stations.

If the climate scientists looked at each individual station and applied a plus or minus mark to each and then counted the pluses and minuses they would have a far better “feel” for what is happening with the climate. But how would they then generate millions (billions?) of dollars to create climate models based on an average temperature with a +/- 5C uncertainty interval?
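The root-sum-square arithmetic quoted in the comment above can be written out. One caveat I am confident of: sqrt(N) * u is the RSS uncertainty of the *sum* of N independent readings, while the mean of N readings with uncorrelated errors carries u / sqrt(N); fully correlated, purely systematic error, by contrast, does not shrink with N. The station counts and per-station uncertainties are the ones used in the comment.

```python
import math

# Root-sum-square (RSS) combination of N independent, equal uncertainties.
# sqrt(N) * u is the RSS uncertainty of the SUM of the readings;
# the MEAN of the readings carries u / sqrt(N) when errors are uncorrelated.

def rss_sum(n, u):
    """RSS uncertainty of a sum of n independent readings, each +/- u."""
    return math.sqrt(n) * u

def rss_mean(n, u):
    """Uncertainty of the mean of n readings with uncorrelated errors."""
    return u / math.sqrt(n)

# The two cases quoted in the comment:
sum_new = rss_sum(10_000, 0.05)    # 5.0 C, for the sum of 10,000 readings
sum_old = rss_sum(100, 0.5)        # 5.0 C, for the sum of 100 readings
mean_new = rss_mean(10_000, 0.05)  # 0.0005 C, for the mean (uncorrelated case)
```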

“The issue is *not* individual stations being looked at separately. If that was what the climate scientists were doing then I would agree with you somewhat about following trends.”

Wait … even though you agree observing the sine curves one by one is important, you Yield to Orthodoxy? That is a form of Appeal to Authority.

Wouldn’t the correct path be to help falsify “what climate scientists are doing” on the basis it is fallacious?

wl-s: “did it skew all recordings higher or lower?”

Here we go again.

Uncertainty is not error, wl-s. Uncertainty is the rms of the errors revealed by a series of calibration measurements.

In the case of meteorological temperature stations, systematic measurement errors arise from wind speed or irradiance. The errors can change magnitude and even sign day-by-day or even hour-by-hour.

The only way to account for such errors is to carry out calibration experiments using well-maintained and well-sited instruments.

The rms calibration uncertainties are then applied to the field station measurements, made using the same types of instruments.

Those (+/-) uncertainties are applied to every single field measurement that enters into the air temperature record. They do not average away. Ever.

wl-s: “Wouldn’t the correct path be to help falsify “what climate scientists are doing” on the basis it is fallacious?”

Done.

Patrick Frank (2019) “Propagation of Error and the Reliability of Global Air Temperature Projections” Frontiers in Earth Science – atmospheres 7(223).

The second paper, ever, to demonstrate that climate models can’t say anything about CO2 emissions.

Patrick Frank (2016) “Systematic Error in Climate Measurements: the global air temperature record” in The Role of Science in the Third Millennium, R. Ragaini ed, 2016, World Scientific: Singapore, pp. 337-351.

The third paper, ever, demonstrating that the global air temperature record is climatologically useless.

Patrick Frank (2015) “Negligence, Non-Science, and Consensus Climatology” Energy & Environment 26(3), 391-416.

The first paper, ever, showing that the entire corpus of AGW science contains no science at all.

Patrick Frank (2011) “Imposed and Neglected Uncertainty in the Global Average Surface Air Temperature Index” Energy & Environment 22(4), 407-424.

The second paper, ever, demonstrating that the global air temperature record is climatologically useless.

Patrick Frank (2010) “Uncertainty in the Global Average Surface Air Temperature Index: A Representative Lower Limit” Energy & Environment 21(8), 969-989.

The first paper, ever, demonstrating that the global air temperature record is climatologically useless.

Patrick Frank (2008) “A Climate of Belief” Skeptic 14(1), 22-30.

The first paper, ever (peer-reviewed and all), to demonstrate that climate models can’t say anything about CO2 emissions.

@Patrick Frank

“The second paper, ever, demonstrating that the global air temperature record is climatologically useless — Patrick Frank (2010) “Uncertainty in the Global Average Surface Air Temperature Index: A Representative Lower Limit” Energy & Environment 21(8), 969-989. — The first paper, ever, demonstrating that the global air temperature record is climatologically useless…”

Does not address my claim.

Conflation of raw measurement with reconstructed temp models.

Red herring of “uncertainty.”

Note: I did not click over to your papers. You would be justified to request that I do, since I am questioning the root of your project outright. Nevertheless, I’ll write the short version of my position:

The RAW recordings of USHCN are relatively pure. Almost naïve in their innocence. Just 50 million TMAX recordings from 1200 stations over 120 years made by an army of people who care about science. This is not a model. It is raw data. There is no gridding, homogenization, imagination, etc.

Are these recordings “uncertain?” Who cares? I don’t.

Because:

=========================

A tailor for a gentleman makes three suits a year for his customer from scratch for 60 years, and he notes the measurements. He plots them in Excel. He likes to observe the sine curve in waistline that reveals the “natural” flow generated by the slow fight to maintain the man’s figure, up and down. The tailor and the man exchange ironic insults over this, the gentleman sure the tailor must have been drunk, or the wind was blowing too hard, the decade of the highest WAIST-MAX.

Laughingly, the tailor takes his assistant’s tape in hand and lays it out on the table next to the honorable tape he used for the last twenty years. To his horror, he sees that they do not agree! His tape is shorter per foot by 1/8 inch! Quickly he checks the data and graph at the point he began using the new tape. Sure enough, there is a tiny jerky uptick, hardly noticeable, until you know where to look. He fearfully compares his tape with four other tapes, including one from a friend in another shop. “I am out of calibration,” he wails.

“Still,” says the haughty tailor to his amused customer, “this does not change my claim. You have been getting fatter and thinner on the normal curve that nature intended.”

That night, the tailor had a nightmare. He is a member of a trade organization of his craft. In the dream he sees 1200 of his fellow tailors making the same mistake. Some might even be using a tape that is too long! He wakes up in a cold sweat. “All our measurements are uncertain,” he cries out to the bedroom wall. “And I’m terrified I did not record the waistline down to the 1/32 of an inch. I am going to purchase a laser-driven measuring tape tomorrow!”

His guardian angel soothes him back to sleep and invokes a sweet dream. “Your long slow dedication to record consistently, even with one or more errors in your instrument of measurement, has served you well. There was no abnormal waistline explosion to report to your customers. All your suits fit them. Sleep peacefully.”

===========================

Patrick and anyone who cares to read my posts:

There is one dataset extant that reveals in the purest way the reality of surface temperature over 120+ years. That is the RAW version of USHCN. If you look — one by one or in accumulation — at the sine curves of 1200 stations [800 recently, until the redacted 400 are restored] you see normal natural flow. There is no evidence, to your very eyes, of abnormal warming.

http://theearthintime.com

“Uncertainty” is trumped by “consistency.”

The RAW USHCN is far from “useless.” It is the only 14-carat gold we have.

wl-s, you’ve got no clue.

“you’ve got no clue”

Ah, yes, the Appeal to Nothority.

@windlord-sun

This is not a ‘mathematical issue’ or an averaging-errors issue.

The metric for the energy content of a volume of air is kilojoules/kilogram. This has a nonlinear relationship to air temperature, so you cannot tell the energy content of air from temperature alone; you also need the humidity of the air so you can calculate its enthalpy (heat content).

It really doesn’t matter how many measurements of the wrong metric you make; you cannot ever calculate the change in energy content of the air, because of the nonlinear relationship between air temperature and heat content. Just small changes in humidity would account for all the changes in measured temperatures, with NO energy change.
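As an illustration of this point, a standard psychrometric approximation for the specific enthalpy of moist air shows how two parcels at the same temperature can hold quite different energy. The formula is a textbook approximation, and the parcel values below are invented.

```python
# Approximate specific enthalpy of moist air (a standard psychrometric
# formula): h ~= 1.006*T + w*(2501 + 1.86*T) in kJ per kg of dry air,
# with T in deg C and w the humidity ratio in kg water vapour / kg dry air.

def moist_air_enthalpy(temp_c, humidity_ratio):
    """Approximate specific enthalpy of moist air, kJ/kg dry air."""
    return 1.006 * temp_c + humidity_ratio * (2501.0 + 1.86 * temp_c)

# Two invented parcels at the same 25 C temperature:
dry = moist_air_enthalpy(25.0, 0.005)    # ~37.9 kJ/kg
humid = moist_air_enthalpy(25.0, 0.015)  # ~63.4 kJ/kg
# Same thermometer reading, very different energy content.
```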

Ian,

“Energy content” vs “Surface air temperature” …

What are you saying? That climate science is foolish to examine surface air temperature in the quest to answer “Is there any abnormal warming?”

Please clarify.

wl-s, let me put it another way. Your analogy merely demonstrates that you’ve got no clue.

The USHCN raw temperatures have the identical systematic measurement calibration uncertainty as do the USHCN anomalies: ±0.5 C.

That’s because the calibration uncertainty is a property of the instrument in the field. Nothing removes it.

That’s what I discuss in my lower limit of uncertainty paper.

I don’t discuss adjusted temperatures. I don’t even discuss anomalies, except as derived.

I discuss the measurements themselves. You’ll find your first clue when you figure out what that means.

Pat Frank, “You’ve got no clue.”

a) I am completely clued-in to the “ignore the plots of raw, pound the inaccuracy of the instruments, make a model and spout anomaly graphs” game;

b) What you persist in not becoming clued-in on is this: I utterly reject your POV.

Repeating: I do not care a smidge about the uncertainty or inaccuracy. I claim no one should care about it, if the baseline question is: Is there any abnormal warming?

I am proffering a totally different POV. Apparently, you don’t like it. But instead of refuting my position, you simply state my position is wrong simply because it is not yours!

A good debater will at least honor the opposition’s full platform and process, and refute it from that place. When you don’t do that, either:

1) you realize my argument destroys your project and you just want to squash me without honoring/refuting it on its terms; or

2) you really really really don’t understand my premise, and so pity me for being out on Mars or something, and are praying I’ll get a clue and sign on to your position.

We can only answer “Is there any abnormal warming” by examining unaltered, un-tweaked, un-massaged, raw, long term trends of weather station direct measurement, one by one, to detect abnormal sine waves in one or more of their datasets.

wl-s, how are you going to detect “abnormal warming” when the systematic measurement uncertainty in your data is so large that you can’t detect any warming at all?

That’s the case with the air temperature record. There is no doubt about that. The field calibration experiments have been done. The lower limit of uncertainty due to systematic measurement error is ±0.5 C.

You can have your own lovely POV and reject all disagreeably contrary POVs, but you’re still wrong.

While the new stations may have a 0.05 C uncertainty today, what will it be in a year? Are the louvers in the shield going to be cleaned regularly? Are the fan blades going to be cleaned and balanced? Will dirt and insect grime be removed from around the temperature sensor itself on a regular basis? For all 10,000 stations?

Who is charged with doing this? And how many people are assigned this task?

I personally have no faith at all in the systematic uncertainty of these 10,000 stations remaining at ±0.05 C. My guess is that within five years the routine maintenance of these stations will be slipshod at best, if it is even done at all. Who knows what the uncertainty will be then.

Tim, you are focused on “uncertainty.” Some of the measuring is primitive and sloppy, so we are slapping a ±0.05 C bound on them.

Here is an exaggerated example, for the purpose of illustrating a vital point …

The weather station at Zumwhere University has been logging TMAX, TMIN, SNOW, and PERC for 130 years. They have been doing it the same way forever, meticulously training each new helper on the protocols, including calibration of instruments. Their sine curve of TMAX shows a natural up and down traverse, with the top of each 35-year wave just slightly lower than the previous. This continues right into the measurements of 2020.

Now, a visitor from NOAA insists on spending five days calibrating Zumwhere’s recordings against his mobile instruments, which are new and spectacularly sensitive, accurate, and sure. To the horror of Zumwhere U, they discover that they have been reporting TMAX and TMIN both 0.765 C too high all this time. What a nightmare.

However …. they have been consistent. It does not matter that they might have “skewed” the global average gridded temp of NOAA by .00001. Nor that they have been in variation from a station 77 miles to the north of Zumwhere.

What matters is that no abnormal deformation of their sine curve ever showed. There was no abnormal warming at Zumwhere.

Now multiply that by 1200.

wl-s, none of your calibrations or meticulousness will remove the wind-speed and irradiance errors that impose themselves on non-aspirated shields.

The 10,000 stations are not USCRN. I don’t know where Pat Frank gets ±0.05 C as the USCRN uncertainty, but the others have ±0.5 C uncertainty.

Andy, the USCRN includes aspirated shelters. Aspiration makes all the difference. The air inside the shield accurately reflects the outside air.

Measurement accuracy is no longer impacted by insufficient wind-speed or irradiance. Measurement accuracy then approaches the lab-calibration of the sensor.

The manufacturer’s uncertainty for the aspirated shielded sensors, such as the Yankee MET2010, is ±0.05 C.

The USHCN sensors are not aspirated. They are subject to errors from insufficient wind-speed and irradiance.

The lower limit of USHCN measurement accuracy is ±0.5 C — 10 times higher than USCRN.

“However, when the USHCN 1981-2010 normal is subtracted from USCRN temperatures, the ±0.5 C USHCN normal uncertainty propagates into the USCRN anomalies.”

That’s not how it’s done.

You’ll have to swallow the poison pill to see what is in it. Thanks Nancy.

Doesn’t matter how the anomaly is calculated, Steve. The uncertainty in the USHCN normal will propagate into it.

One apparently reliable temperature record is the Nino 34 tropical Pacific SST. The linked chart has that data for the satellite era:

https://1drv.ms/b/s!Aq1iAj8Yo7jNg3j-MHBpf4wRGuhf

This is an important region for temperature measurement because it is a good indicator of weather across the Pacific and its rim.

Any temperature trend for the last 40 years that varies from the trend in this data set has to be suspect.

Bottom line – siting makes a difference.

=====

Population change also has an effect and is likely related to the siting problem as rural stations over time end up in major population centers.

Nope

Yep, it does,

…. no matter how good your mob is at intentionally ignoring the problem.

comparing

https://www.ncdc.noaa.gov/temp-and-precip/national-temperature-index/time-series?datasets%5B%5D=uscrn&datasets%5B%5D=climdiv&datasets%5B%5D=cmbushcn&parameter=anom-tavg&time_scale=ann&begyear=2005&endyear=2020&month=10

Note

1. USCRN is UNADJUSTED GOLD STANDARD

2. USHCN is ADJUSTED and has sites that are poor quality (high CRN rating)

3. nCLIMDIV is also ADJUSTED and has sites that are poor quality (high CRN rating)

Result?

ADJUSTMENTS work. The unadjusted gold standard USCRN SHOWS SLIGHTLY WARMER temps than adjusted data.

I’ll stipulate this: USCRN certainly is unsurpassed in measuring surface air temp. It takes the breath away, the lengths to which NOAA goes to get this accomplished. I stipulate that the stations damn well produce hyper-accurate measurement of surface temp.

for the last 15 years or so.

114 stations contiguous United States

Fine.

This does not address my claim. My claim is that no purported abnormal warming is visible in RAW TMAX USHCN. If USCRN claims abnormal warming over the past 15 years, it ought to have dinged (to say the least) the sine curve of many stations in the United States. We ought to see the tracers of the abnormal warming. Instead, RAW USHCN shows cooling over the period 2005-2019.

Is the absolute unaltered recorded TMAX for the 114 stations available for download?

But there is a problem: Is there an old fashioned weather station close by each of the 114 stations? It would interest me to acquire the raw recordings from them and graph them with their twin in USCRN.

please confirm …

#2, by saying “USHCN is ADJUSTED” do you subsume within that claim that USHCN RAW is adjusted?

Suppose one looked at those 1200 graphs which you believe show sine waves of periodic changes. What could one possibly see that could be pointed at as “abnormal warming”?

Yep USCRN has brought all the data manipulation to an end.

The temperatures leveled off.

NOTHING before USCRN has any relevance to REALITY at all.

It is still an agenda driven mal-fabricated mess.

Just like BEST is. !

Adjustments that hit a target. Typical.

And no uncertainty bars.

Let’s see, a lower limit uncertainty would be ±0.5 C for the USHCN anomalies, and the same ±0.5 C will propagate into the USCRN anomalies from the USHCN normal.

But all displayed as though with perfect accuracy and infinite precision in all measurements.

Utter incompetence. Whether it’s studied or not is the only question.

Anthony you still have not shared the data on sites.

or published your study

no cookie

YAWN,

Everyone had access to the surface station study

You have gotten SO PATHETIC now you have to try to be a yabbering mouthpiece for the WORST data series around.

You’re in no position to gloat, Steve.

If people are interested, I have written several articles about NOAA’s ClimDiv temperature series

Warning – they feature my “infamous” Global Warming Contour Maps (which Anthony doesn’t like)

USA Warming since 1900

https://agree-to-disagree.com/usa-warming

Tavg-Tmin-Tmax Warming

https://agree-to-disagree.com/tavg-tmin-tmax-warming

If you want to learn how Global Warming Contour Maps work, Read this article

Robot-Train Contour Maps

https://agree-to-disagree.com/robot-train-contour-maps

I don’t accept any responsibility for headaches caused by looking at Global Warming Contour Maps

Good presentation of data, Andy. It looks like, at least for CONUS, in 120 years the temperature has changed from -0.7 deg C to about +0.5 deg C, or about 1.2 deg C, or about 1.0 deg C per hundred years. I’m underwhelmed, and losing my current excuse for “survival liquids”, aka cold drinks.

I think that you will find that those are the “adjusted” datasets, so that most of the change is due to the adjustments and not the actual measured temperatures.

As predicted in Menne et al.

I’d like to see a histogram of daily high temperature records for US stations vs time. Fully half of our states still publish all-time high records set prior to 1941. In my city, the local TV weather broadcast always shows the date of that day’s high temperature record. It’s amazing how many daily records were set in the 1930s, and even in the late 1800s. Let’s see a histogram which will display the trend over time.

Here’s one from Milwaukee:

http://www.aos.wisc.edu/~sco/clim-history/stations/mke/MKE-HIGH-T-ANN.gif

Wow! Where’s the warming? These maximums certainly didn’t contribute to the “global average temperature” going up! Must be happening somewhere else.
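The decade-by-decade tally asked for above is easy to compute once daily highs are in hand. A minimal sketch with synthetic data (the input layout and the tie-handling rule, ties going to the earlier year as record books usually keep the first occurrence, are my own assumptions, not any NOAA convention):

```python
from collections import Counter

def record_years(daily_highs):
    """daily_highs: dict mapping (year, month, day) -> high temperature.
    Returns the year currently holding the record for each calendar day."""
    best = {}  # (month, day) -> (temp, year)
    for (y, m, d), t in sorted(daily_highs.items()):
        key = (m, d)
        # strictly-greater test: ties stay with the earlier year
        if key not in best or t > best[key][0]:
            best[key] = (t, y)
    return {k: yr for k, (t, yr) in best.items()}

def records_per_decade(daily_highs):
    """Count how many standing daily records were set in each decade."""
    years = record_years(daily_highs).values()
    return Counter(10 * (y // 10) for y in years)
```

Fed with a real station's daily highs, `records_per_decade` gives exactly the histogram input requested; a 1930s-heavy distribution would show up immediately.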

Just as long as we can get to a hundredth of a degree accuracy, that’s all that counts.

Yep. Just as in marketing-speak, a number like $5.99 is somehow more credible than $6

“They switched to a dataset they call nClimDiv in March 2014. Where USHCN had a maximum of 1218 stations, the new nClimDiv network has over 10,000 stations and is gridded to a much finer grid, called nClimGrid. The nClimGrid gridding algorithm is new, it is called “climatological aided interpolation” (Willmott & Robeson, 1995). The new grid has 5 km resolution, much better than the USCHN grid.”

Doesn’t matter. You still can’t average intensive properties and end up with anything meaningful.

+1

Especially if your ‘average’ is really the arithmetic mean of the highest and lowest temperature recorded. The supposed ‘increase in average temperatures’ could easily be due to higher night minimums; the peak high temperatures may even have fallen but the arithmetic mean is higher.

Then the difference in arithmetic mean is added to the reported maximum temperatures and it is claimed that ‘maximum temperatures will go up by….’. That allows scary headlines a lot more than nights will not be as cold in the next decades.

It’s much like Mann’s Bristlecones, and Briffa’s “One Tree” in Yamal. The outliers can overwhelm the rest, especially if you put your thumb on the scales.

Ian

What you called the “arithmetic mean of the highest and lowest temperature” is more properly called the mid-range value. While the arithmetical operations for arriving at the two are the same when only two values are involved (add them and divide by two), the classical mean and the mid-range have different statistical properties.
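The difference is easy to demonstrate (a sketch with made-up hourly readings; station daily “averages” built from Tmax and Tmin are really mid-range values):

```python
def midrange(temps):
    # (Tmax + Tmin) / 2 -- what a daily station "average" really is
    return (max(temps) + min(temps)) / 2

def mean(temps):
    # the classical arithmetic mean of all readings
    return sum(temps) / len(temps)

# A day that is cool almost all the time with one brief afternoon spike:
hourly = [10.0] * 21 + [12.0, 20.0, 12.0]
# mean(hourly) ~ 10.58, midrange(hourly) = 15.0 -- a single outlier hour
# moves the mid-range far more than it moves the true mean
```

This is the mechanism behind the outlier sensitivity discussed above: one hot (or warm-night) hour can shift the recorded daily value substantially.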

The average global temperature IS being influenced more by the daily lows than by the highs. I showed that in Fig. 2 at:

https://wattsupwiththat.com/2015/08/11/an-analysis-of-best-data-for-the-question-is-earth-warming-or-cooling/

The average is often meaningless. For instance, I enlisted in the Navy as a linguist. I selected a language that was so foreign to English it makes Spanish seem like a near twin of English. There were eleven of us in the class on Day 1. We faced our first test at the end of the first week. Four of us scored in the high 90’s, the other seven failed utterly. No one “kind of got it”. No one scored anywhere near the mean.

The mean of that first test offered no useful information about that class. Nor did it represent anyone in that room.

As for that class, the seven were removed over that first weekend. We remaining four graduated together 47 weeks later.

Sample size isn’t meaningless 😀

Krishna, averages would’ve been of more value after the class was whittled down to 4 than they were when the class was 11. But don’t let me interrupt you as you thrash about…

Neither is it meaningful.

A larger sample size with bad data is far less useful than a small sample size with good data.

Sample size is clearly meaningless for a great many averages. 350 million people in the US. The average person has 3.98 limbs. When no sample is average, the average can be nonsense, and the sample size is meaningless.

+1

When U.S. air force discovered the flaw of averages

https://www.thestar.com/news/insight/2016/01/16/when-us-air-force-discovered-the-flaw-of-averages.html

+1

Referencing the almost-two complete min-max anomaly cycles that have occurred from 2010 to 2020, as shown in Figure 2 of the above article: their amplitudes appear to be statistically unusual . . . there is only a single prior instance of a full cycle amplitude of equal magnitude shown, that over the period of 1920-1924. These three unusual-amplitude cycles have amplitudes of 1.3-1.6 °C. Maybe this is within the range of natural variability, maybe not; I don’t know if there is anything to conclude from this observation.

All “cycle” periods (based on counting successive local minimums or successive local maximums) of the data plotted in Figure 2 appear to be within a rather limited range of 5 to 7 cycles per 20 years, or an average of about 1 cycle every 3.3 years, which is generally consistent with the periods between successive El Ninos or successive La Ninas, so the plotted data has not been adjusted to take out this natural variability.

Nonetheless, Figure 2 clearly shows a cooling trend from about 1938 to about 1977, and a “pause” (or “hiatus”) in warming from 1998 to 2019.

So, it does appear that the data comprising Figure 2 has not (yet) been adjusted by AGW/CAGW “scientists”.

Gordon, Every time I look at the data between 2010 and 2019 (inclusive) it looks fishy. This decade is the focus of my work. Who knows what will turn up? I’ve really just started. I’ll write up stuff as soon as possible.

Thanks Andy! I am very glad to know that you are looking into it.

How does this graph comport with the well-known fact that the 1930’s were hotter than the 2000’s in the lower 48 states?

It seems that when all stations were manual Stevenson screen type, the temp anomaly was -0.5. With new automated aspirated stations, the anomaly is +0.5. As stations converted, the anomaly rose by a degree from about 1985 to 2005. You really have to ask whether this change is real or an artifact of changing the sampling methods.

The automated stations are sensitive to short term temp spikes due to ground convection “bubbles” that the older bird houses could not react to. This results in a downward adjustment, but is it enough? Plus stations went from recording high/lows on metal floats in the mercury thermometers, to averaging continuous thermistor readings. Many stations moved locations from “handy-for-volunteers” to local airports, which experienced increasing levels of air traffic, jet engines, and runway paving over recent decades. The possibility of interpretive bias in the corrections, adjustment, and homogenization of these effects is large, much higher than the actual instrumentation accuracy. The presently assumed rise of +1 just happens to be where different methodologies seem to agree with each other, given whatever interpretive bias has inadvertently been built into the data adjustment methods. As long as the raw data remains available, future analysis will reveal something different than today’s analysis, all probably within a half degree of the raw data and “homogenizations” by over a degree will be ridiculed….

DMac,

In Australia, the electronic temperature is taken over a one-second interval. There is no mathematical averaging of continuous thermistor readings. Geoff S

Bob Tisdale’s book “Extremes and Averages in Contiguous U.S. Climate” has graphs of 100 years of NOAA Continuous U.S. Climate Data (2018 Edition).

A book that NOAA should have published and you cheap bastards should have bought to support his INDEPENDENT RESEARCH.

Carbon Bigfoot, your phrase “. . . and you cheap bastards should have bought to support . . .” demands some clarification.

As in: are you really meaning to offend all WUWT readers/commenters by such an assertion? And/or do you really think it is the responsibility of all WUWT readers/commenters to fund INDEPENDENT RESEARCH.

“Fools rush in where angels fear to tread.”

Andy

“So, the problem may still exist in the GHCN dataset. I’ll try and check that out.”

There is no corresponding issue in GHCN. It only arises because USHCN replaces missing month data by a local average so that every station in the “final” set has a reading. That is not done for GHCN. You need to be aware too that GHCN is now onto V4.

The reason that USHCN did that interpolation to ensure that each of 2128 stations had a reading (final) was that they were trying to average without taking anomalies. This endeavour was unwise, but the remedy more or less worked.

Thanks Nick. The dataset I have is labeled ghcnm.v4.0.1.20201010. I take this to mean GHCN, monthly, version 4.0.1, October 10, 2020. I’ve actually done quite a lot of work with it, but I have a way to go before I can post anything. I’ll be very interested in your comments.

Good comparison, Andy May! If Stokes and Mosher had a point to make in their prior comments, they failed to elucidate it with clarity. D’OH!

The simple point is, as acknowledged here:

” Mosher said the USHCN is no longer the official record of the CONUS temperatures. This is correct as far as NOAA/NCEI is concerned. They switched to a dataset they call nClimDiv in March 2014.”

From the article: “While NOAA/NCEI has dropped USCHN in favor of a combination of USCRN and nClimDiv, the anomaly record from 1900 to today hasn’t changed in any significant way.”

My problem with the current US temperature record is it does not look like the Hansen 1999 US surface temperature chart, where 1934 was 0.5C warmer than 1998, which makes 1934 0.4C warmer than 2016, the so-called “hottest year evah!”. Hansen 1999 demonstrates that the US has been in a temperature downtrend since the 1930’s.

The charts mentioned in this post all show the US to be in a temperature uptrend and to be much warmer than the 1930’s. So who is wrong? Hansen, or the current group of Data Manipulators? One of them says the 1930’s was the hottest decade, the other group says the current decade is the hottest decade. They both can’t be correct.

I will go with Hansen 1999. That’s the chart that shows we don’t have anything to worry about from CO2. That’s the reason the alarmist Data Manipulators decided to manipulate the data to make things appear to be hotter today than at any time in the past. The historic temperature record, like Hansen 1999, puts the lie to these claims if anyone cared to pay attention to it.

Hansen 1999:

https://climateaudit.files.wordpress.com/2007/02/uhcnh2.gif

We are not experiencing unprecedented warming today. It was just as warm in the 1930’s as it is today, which means that there is no CO2-caused warming worth speaking about, since there is much more CO2 in the atmosphere now, than in the 1930’s, yet it is no warmer today than it was in the 1930’s.

CO2 is a benign gas that plays a very small role in the Earth’s atmospheric temperatures, and unmodified regional temperature charts from around the world show just that: It was just as warm in the recent past as it is today. CO2 has not caused the temperatures to climb abnormally. The proof is in the written surface temperature records from around the world.

Tom, I think you have a valid point. I plan on looking at the corrections applied to GHCN raw data. The changes are mostly due to the corrections applied, as far as I can tell.

Andy,

With GHCN-M you have to be careful because ONLY the US stations receive the TOB+PHA treatment, and only when there is metadata for the time of observation. The rest of the world gets ONLY PHA.

Finally, it is instructive to compare the same station in USHCN with its counterpart in GHCN, before or after all the “mal-adjustments” to both sets?

The sea surface temperature records, ENSO and AMO, show the same thing. No warming.

Thanks for bringing that up Tom. I looked at the graph in Andy’s article and my first thought was what happened to the 1930s temps, have they been permanently removed too? Along with the ‘medieval warming’ and the ‘little ice age’, going back a bit further.

You queried it much better than I could.

Hey! What do you know, my chart actually showed up in the post! I guess Anthony must be improving the comment software. Good!

J Mac, Nick is correct. Mosher was correct to a point. He also said that the process used to go from the raw data to the final anomalies had changed. It turns out it didn’t. The nClimDiv dataset uses exactly the same corrections. What changed was the number of stations used and the gridding algorithm.

I understand that, Andy. Did any of that offer any distinction to or warrant any change to your original hypothesis? If no, it is a distinction without a difference.

No, It didn’t matter, because the graphs overlaid. I have all the GHCN data, but it is a huge file and takes hours to process on my computer. Fortunately, once I get all of it loaded into R and output to properly formatted RData files, I can work with it more efficiently. I’m nearly there. Even with GHCN/nClimDiv, the last 5-10 years look very strange. So I see your point.

What program is “R?”

Last time I parsed out full daily records from GHCN it came to 500 million records. Is that your count?

Actually, 450,396,126 recordings between 1900 and 2019.

… and I agree, there is something bizarre in the recent data, as per this sine wave of it….

https://theearthintime.com/ghcn2019.png

that can’t be right.

R is a free statistical programming language. You can download it here:

https://www.r-project.org/

Andy,

I could see the fidelity in the overlay graphs you provided. Stokes and Mosher’s comments served only to pointlessly distract from your valid and demonstrated hypothesis. They added nothing.

Bingo

Andy May: “R is a free statistical programming language. You can download it here:

https://www.r-project.org/”

Thank you, I’ll look into it.

Did you ever come up with a total number of recordings in GHCN?

One issue with GHCN is that many stations did not, or still do not, report long term. Huge numbers of stations blinked on/off after a decade or two … or less! Many ‘just got started’ in the last few decades.

In my opinion, those short reports are useless for either POV: 1) looking at each station one by one to see if they reveal any abnormal warming; or 2) attempting to construct a model of precision through gridding, interpolation, meshing with proxies, etc.

Raw count of stations in GHCN: 40,145, with an average lifespan of 38 years.

Once I screened out those short ones, I came up with a long-term station count of 1863, of which 1593 were in the USA. That means that GHCN contains only 270 long-term non-US stations.

The filter was “more than 99 years of records, and still active now.”
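The screening described above can be sketched in a few lines (my own sketch; the `(station_id, first_year, last_year)` tuples are an assumed pre-parsed form, not the actual GHCN inventory file format):

```python
def long_term_stations(stations, min_years=99, active_year=2020):
    """stations: iterable of (station_id, first_year, last_year).
    Keep stations with more than `min_years` years of record that were
    still reporting in `active_year` -- the filter described above."""
    return [
        sid for sid, first, last in stations
        if (last - first + 1) > min_years and last >= active_year
    ]
```

Applied to a parsed GHCN inventory, this reproduces the “more than 99 years of records, and still active now” screen; short-lived or discontinued stations drop out.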

My point is this

Global climate records DONT USE USHCN, they DONT USE nCLIMDIV

they use GHCN-M which doesn’t use TOB

They use some sort of other manipulations to bring the temperature fabrication in line with their “expectations”.

Spreading urban data all over the place doesn’t help relate to reality.

But you aren’t concerned about what is REAL, are you mosh, only what the scammers want to be shown as real.

Stipulating — for a brief moment — that the suppression of 400 of the 1200 stations of USHCN starting in 1989 was justified …

Examining the RAW datasets for the remaining stations: if you look at the sine wave of each of them one by one, you find no perturbance that would signal abnormal warming. No matter what the anomaly charts and temperature reconstructions (by gridding, homogenizing, estimating, and extrapolating) may show, the fact that the purported warming claimed by those models is not visible in the raw records amounts to a hard rejection of the models’ warming alarms.

None of this matters:

A large percentage of numbers are wild guesses (aka infilled)

Almost no Southern Hemisphere temperature data before 1920.

Too little Southern Hemisphere data from 1920 to 1950.

The numbers are not fit for real science before UAH in 1979.

UAH has the potential for accuracy since it has little infilling, is measured in a consistent environment, and is measured where the greenhouse effect occurs.

The surface numbers are garbage (a scientific term) even before all the repeated “adjustments” and the use of a global average is not useful — no one lives in a global average temperature.

The one number global average temperature is a statistic, not a real measurement.

It also hides where the most warming has happened since 1979 — upper half of the Northern Hemisphere, and when the most warming has happened (mainly in the coldest six months of the year and mainly at night). The use of a global temperature anomaly on a chart, with a range of 1 to 1.5 degrees C., is climate alarmist propaganda. The statement that winter nights in Alaska are warmer than they used to be tells a completely different story using the same “data”.

But there is no “greenhouse effect”. The more water vapour in the atmosphere, the more energy is rejected:

https://1drv.ms/u/s!Aq1iAj8Yo7jNg2_DukRksyuhIkZ8

Think about it for 10 seconds – if water vapour caused the surface to warm, it would just all boil off and there would be no oceans.

The atmosphere goes into cyclic cloudburst mode when TPW reaches 38mm. That results in highly reflective cloud that shades ocean surface. It is the cause of monsoon and cyclones. Cyclones result in massive heat rejection. Cloud in cyclones can result in surface cooling when the sun is directly overhead.

“Greenhouse effect” is a fairy tale dreamt up by incompetents who have no understanding of atmospheric physics.

The tropical ocean surface temperature does not have a long term trend and cannot. There is a powerful thermostat that prevents the surface temperature ever exceeding 32C:

https://1drv.ms/b/s!Aq1iAj8Yo7jNg3j-MHBpf4wRGuhf

Has anyone else noticed that Fig.2 shows the 2020 temperature dip as the same as the 1900 peak. That doesn’t seem very catastrophic to me.

“The data used to build the nClimDiv dataset is drawn from the GHCN (Global Historical Climate Network) dataset”

WRONG AGAIN

nClimDIV is built from a whole collection of sources

GHCN-D (DAILY)

ASOS

SNOTEL

RAWS

and a few others

And then matched to USCRN…..

Hadn’t you realised that yet, mosh

Not being a mathematician, you probably wouldn’t.

Steve, NOAA/NCEI say differently, to quote:

“The new divisional data set (nCLIMDIV) is based on the Global Historical Climatological Network-Daily (GHCN-D) and makes use of several improvements to the previous data set. ”

All the data is corrected for time-of-day. I’m sure adjustments have been made to the corrections, that is what I’m looking into. Why is the recent reconstruction so odd? Some changes have been made. ASOS, SNOTEL, and RAWS are in GHCN-D already. For a complete list of data sources for GHCN, check the source flag list here:

https://docs.opendata.aws/noaa-ghcn-pds/readme.html

T=SNOTEL, U=RAWS, etc.

“The nClimDiv dataset uses a lot more stations than USCHN and if the stations are well sited and well taken care of this is a good change. The USCRN dataset is from a smaller set of weather stations, but these are highly accurate and carefully located. I do not think the USCRN stations are part of the nClimDiv set but are used as an independent check on them. The two systems of stations are operated independently.”

WRONG AGAIN

Jesus andy.

USCRN stations are also compiled into GHCN-D

GHCN D is one of the sources for nCLIMDIV

Also NOTE: the comparison of USCRN and nCLIMDIV PROVES that adjustments work

and proves that siting doesn’t matter

get that

Yep, we GET that they adjust climdiv to match USCRN.

That is patently obvious.

USCRN has brought the warming manipulations under control

Before USCRN, they adjusted to an agenda.

USCRN stopped the warming in the US.

Steve, USCRN is not listed as a data source for GHCN-D. Check the list I linked in my previous comment. Since USCRN is supposed to be a reference, it doesn’t make sense to include it. But, if you can document it is a source of data for USHCN or nClimDiv, I would be interested.

Andy,

Stations for GHCN (and nClimDiv, USHCN) are chosen for their long record period, among other criteria. So USCRN stations haven’t qualified so far.

You may find some Moyhu facilities helpful here. This page lets you search for GHCN V4 stations (click radio button “GHCN V4 new!”). And this post has graphical presentation of comparisons of GHCN and USCRN, with this follow-up showing some time series results.

Thanks Nick! They look like great links, I will spend a lot of time on them.

“They looked very anomalous from 2015 through 2019. The same set of corrections are used in the GHCN dataset, which is the source of the data fed into nClimDiv. So, the problem may still exist in the GHCN dataset. I’ll try and check that out and report on it in a future post”

NO WRONG WRONG WRONG

USHCN data flow goes like this

INGEST FROM SOURCES

RUN TOB

RUN PHA

nCLIMDIV data goes like this

INGEST FROM SOURCES

RUN TOB

RUN PHA

GHCN – D goes like this

INGEST FROM SOURCES

RUN QA

GHCN M goes like this

INGEST from GHCN D

INGEST from ISTI

INGEST from other sources

RUN Pha

Note GHCN-M DOES NOT USE TOB.

For USHCN sources these are the files

Also if you are having trouble with GHCN-M on your system, you are probably doing something wrong again

Steve, I don’t know why you are saying I’m wrong. I agree with your list and never wrote otherwise. Monthly averages cannot be time-of-day bias corrected, that has to happen daily, obviously. The daily data used to create GHCN-M has to be bias adjusted, that’s common sense. Presumably, GHCN-D is used to make GHCN-M. From the readme file:

“GHCN-M version 4 currently contains monthly mean temperature for over 25,000 stations across the globe. In large part GHCN-M version 4 uses the same quality control and bias correction algorithms as version 3. The greatest difference from previous version is a greatly expanded set of stations based on the large data holdings in GHCN-Daily as well as data collected as part of the International Surface Temperature Initiative databank (ISTI; Rennie et al. 2014).

There are currently three versions of GHCN-M version 4:

QCU: Quality Control, Unadjusted

QCF: Quality Control, Adjusted, using the Pairwise Homogeneity Algorithm (PHA, Menne and Williams, 2009).

QFE: Quality Control, Adjusted, Estimated using the Pairwise Homogeneity Algorithm. Only the years 1961-2010 are provided. This is to help maximize station coverage when calculating normals. For more information, see Williams et al, 2012.”

My processing is going fine, just slowly due to the huge amount of data.

comparing

https://www.ncdc.noaa.gov/temp-and-precip/national-temperature-index/time-series?datasets%5B%5D=uscrn&datasets%5B%5D=climdiv&datasets%5B%5D=cmbushcn&parameter=anom-tavg&time_scale=ann&begyear=2005&endyear=2020&month=10

Note

1 USCRN is UNADJUSTED GOLD STANDARD

2. USHCN is ADJUSTED and has Sites that are poor quality ( high CRN rating)

3. nCLIMDIV is also ADJUSTED and has sites that are poor quality (high CRN rating)

Result?

ADJUSTMENTS work. The Unadjusted gold standard USCRN SHOWS SLIGHTLY WARMER temps than adjusted data.

“ADJUSTMENTS work.”

Yep, now they have USCRN to set a baseline for those “adjustments” to “adjust” to

And warming in the US has leveled off. Having good data was always going to do that.

Before 2005, they can “adjust” as they want, to fit their agenda.

Gavin Schmidt wrote an article on his RealClimate site about why temperature anomalies are presented instead of actual temperatures. Unfortunately I never have any success with searches to find older articles but I think it was in 2012 or 2014. Anyway, if I understood correctly he admitted that temperature measurements were at best accurate to +/- 0.5 C and that a legitimate calculation of global average was thus only accurate to +/- 0.5 C.

The problem with that, he said, was that, over the last 30 years, each year’s average comes out to the same number – it isn’t possible to see any change. But, by calculating anomalies to multiple decimal places, a trend shows up. The anomaly grows by hundredths of a degree year by year (many or most years). This presents convincing evidence that warming is occurring.

I got the impression that he wasn’t even trying to argue that the measurement error does not propagate through the anomaly calculations, only that he and his friends believe that the fact a trend emerges from the calculations’ intermediate values means, with a high probability, that the trend is real.

I think this conclusion is a consideration of philosophy rather than science. In hard terms, to the actual accuracy available, no trend exists in the measurements. However, this perhaps only means that the technology, or the technique, is not up to the real task. The world does seem to be giving signs of warming over the past 200 years or so, perhaps in more locations than not, and as has been pointed out various times, if there is a warming trend, there should be higher and higher temperatures as long as the trend continues.

If this was the calculation Gavin actually used then he was way off. The calculation of uncertainty in the global average would be:

sq rt(number of stations) * 0.5C.

No amount of extra digits in the calculation of the anomalies will change this. It won’t change it for an individual station or for an average of multiple stations. An individual anomaly of .001C will have the same uncertainty interval as the actual temperature, +/- 0.5C. Thus it doesn’t matter how many digits you calculate out to, it won’t help you distinguish anything meaningful.

As J.R. Taylor states in his Introduction to Error Analysis:

—————————————

Rule for Stating Answers

The last significant digit in any stated answer should usually be of the same order of magnitude (in the same decimal position) as the uncertainty.

—————————————-

Thus the anomaly calculated from a measurement uncertainty with a +/- 0.5C interval should not be stated out past the tenths digit. Calculating out further is only fooling yourself.
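Taylor’s stating-answers rule can be applied mechanically. A minimal sketch (the function name is my own, and it handles only the common case where the uncertainty’s leading digit fixes the decimal position):

```python
import math

def round_to_uncertainty(value, uncertainty):
    """Round `value` so its last digit sits in the same decimal position
    as the leading digit of `uncertainty` (Taylor's rule for stating
    answers). Returns the rounded (value, uncertainty) pair."""
    # decimal position of the uncertainty's leading significant digit
    pos = -int(math.floor(math.log10(abs(uncertainty))))
    return round(value, pos), round(uncertainty, pos)

# An anomaly of 0.6342 with a +/- 0.5 interval should be stated as
# 0.6 +/- 0.5 -- the extra digits carry no information
```

With a 0.5 degree uncertainty, any anomaly is stated to tenths, exactly as argued above.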

Why does anyone care about a reconstructive model of the global average temperature?

I suppose I did not express the point clearly enough. The argument was not about a result with statistical certainty it was a consideration of the (claimed) fact that the calculation to two decimal places, continues, year by year (for more years than not, anyway) to trend upward. This ignores the inherent uncertainty. It answers the questions

Why is there a strong (though small) trend in this calculation?

If the results were random, how likely is it that they would show a trend over a long period?

with the opinion that the consistency of the trend most likely means it is real.

Or at least that was my understanding of the argument.

This is different from a conclusion that says the trend matters (unless perhaps it goes on for ten thousand years), that it relates in any meaningful way to anything else that goes on in the natural world.

“The world does seem to be giving signs of warming over the past 200 years or so”

And THANK GOODNESS for that!!

But I prefer REAL warming , over FAKE warming.

Some commenters, notably Pat Frank and Tim Gorman above, seem to have suggested that an error in instrument precision must of necessity flow through to final estimates of uncertainty in temperature anomaly estimation. I will try to demonstrate here how and why it is possible to measure a variation in temperature anomaly which is less than the precision error in individual measurements.

First, if you are trying to take a single temperature measurement in a laboratory with an instrument which is accurate, but which records only to the nearest degree, say – a precision uncertainty of plus or minus 0.5 deg C – then repeat measurements cannot improve the precision of the result. Similarly, using 1000 accurate thermometers, all with the same precision, cannot improve the precision of the single temperature measurement.

Equally, in the same laboratory, if you wish to estimate the temperature change with time over a temperature range of less than 1 deg C, then repeat measurements over time with a single thermometer with a precision to the nearest degree is unlikely to yield any meaningful information.

Suppose however that instead of having one accurate thermometer, you have 11 thermometers, say, each with the same plus or minus 0.5 degree precision, and the same relative accuracy, but which are all calibrated to a slightly different base. Can you measure a temperature change of less than 1 deg C? Counter-intuitively, the answer is that yes you can, if the between-thermometer spread in calibration error is large enough.

And what about the isomorphic situation where you have 11 well calibrated thermometers sited in slightly different places with different temperatures, all within a grid-area or locale, and where you wish to estimate the uniform change in temperature of the locale? Again, the answer is that yes, you can measure a temperature change which is below the level of precision of the thermometers – and, moreover, the final estimate of change has an uncertainty which is significantly below the variance arising from a difference in two measurements due to precision uncertainty.

To illustrate why this is so, let us first consider the simple situation where the initial true temperature differences between the thermometers are regular. For this example, we assume that the first thermometer is well calibrated (accurate) and reads 0 deg C initially. When the first thermometer reads 0 degrees, the temperature of the second is 0.1 deg C, the temperature of the third is 0.2 deg C, and so on up to the 11th thermometer which is sited with initial temperature at 1 deg C. At their recording precision levels, when the first thermometer is at zero deg C, the first five of these thermometers will record (or will be read at) 0 deg C, while the next 6 thermometers will record (or be read at) 1.0 deg C. In order, therefore, the temperature recordings look like this:-

0 0 0 0 0 1 1 1 1 1 1 for the 11 thermometers, yielding an estimate of mean initial temperature of 0.55 deg C (= 6/11)

Now, starting at 0 deg C, (arbitrarily) we consider 30 increments of true-but-unknown temperature change each of 0.02 degree steps – making a total true change of 0.6 deg C uniformly warming the locale.

For the first four temperature increments, the thermometers all continue to record the same temperatures. However, on the fifth increment, when a cumulative warming of 0.1 deg C is added, this “trips” thermometer number 5 to change its record from 0 to 1 deg C. The temperature recordings then become:-

0 0 0 0 1 1 1 1 1 1 1 yielding a mean of 0.64 deg C.

The next trip occurs on the 10th incremental temperature step when 0.2 has been added. This causes thermometer number 4 to go from 0 deg to 1 deg, which raises the overall average recorded temperature to 0.73 deg C.

A plot of mean recorded temperatures against the true temperature change therefore yields a stair-step plot, as various thermometers are “tripped” into recording a higher integer value of temperature.

The starting and final values over this simulated “true” change of 0.6 deg C look like this:-

True Temp | Thermometer readings | Mean recorded temp

0.0 | 0 0 0 0 0 1 1 1 1 1 1 | 0.55

0.6 | 1 1 1 1 1 1 1 1 1 2 2 | 1.18

Apparent temperature change from recorded temperatures = 1.18 – 0.55 = 0.63 deg C. Alternatively, a regression of recorded temperature against step-count yields a gradient of 0.0207 deg C/step. Over the 30 steps this gives an estimate of total temperature change of 30×0.0207 = 0.62 deg C . This is very close to the true-but unknown value despite the recording precision being in whole integers of temperature.

Please note that this illustration works for either of two cases:-

a) Estimating a change in temperature on a single body using multiple thermometers of the same precision but with (sufficient) spread in calibration.

b) Estimating a change in temperature over a locale using multiple thermometers located at points with a spread of different initial temperatures (as in the worked example above) with or without a spread in calibration accuracy.

The obvious weakness in the above simple example is that I have pre-selected thermometers with regularly spaced differences in their initial temperatures (or regularly sampled calibration errors if applied to Case (a) as defined just above). It raises the obvious question:- what happens if you have thermometers which are sited such that the initial temperature in a locale is sampled in a random fashion? To test this, I retained 11 thermometers, but ran a MC, sampling the initial thermometer temperatures from a one degree spread (U[0,1] distribution). The resulting mean estimate of temperature gain corresponded to the true solution (0.6) as expected, but the sample variance in the estimated temperature gain rose to 0.026. Increasing the number of temperature points within the locale improves the stair-step relationship (recorded temp vs true incremental temp) by adding more steps – a sort of smoothing – and decreases the variance of the final estimate of temperature gain in inverse proportion to the number of samples, so the variance associated with this 11 thermometer case can be reduced. However, even with just 11 temperature stations, the variance of 0.026 may be compared with the variance of a difference in two values each carrying a precision error of plus or minus 0.5; assuming two uniform distributions for precision error, this works out to be 0.17 – a far larger value.

In summary, it cannot be assumed that measurement error arising from instrument precision propagates through the calculation as an irreducible uncertainty.

“the first thermometer is well calibrated (accurate) and reads 0 deg C initially.”

You are assuming a 100% accurate reading with no uncertainty. You’ve already violated reality.

“To test this, I retained 11 thermometers, but ran a MC, sampling the initial thermometer temperatures from a one degree spread (U[0,1] distribution)”

Error is *NOT* uncertainty. There is no probability distribution for uncertainty.

“assuming two uniform distributions for precision error,”

There is no probability distribution for uncertainty. You are trying to equate the probability distribution of error with uncertainty. It just doesn’t work that way. Your first thermometer reading of 0C should have an uncertainty interval associated with it. E.g. 0 deg C +/- 0.5 deg C. It is that +/- 0.5 deg C that propagates.

If your statement “And what about the isomorphic situation where you have 11 well calibrated thermometers sited in slightly different places” is true, then you are trying to combine independent values with uncertainty intervals. Uncertainty for independent values adds by root sum square. You can’t get around that. Each and every one of the sites will have an uncertainty interval associated with it. They may or may not be equal uncertainty intervals but they still add root sum square.

It’s not about precision, kribaez.

The uncertainty derives from systematic measurement error produced by uncontrolled environmental variables.

The error is revealed only by field calibration studies using well-sited and maintained standard instruments. The uncertainty in meteorological field station measurements is expressed as the rms of the systematic errors found during calibration.

The field station thermometers are exposed to environmental variables of unknown intensity that divert the measured temperature from the physically correct temperature. The error in each measurement is therefore unknown.

The only recourse is to apply the measured rms field calibration uncertainty to all field measurements.

Averaging field measurements subject to an applied field calibration uncertainty does not, ever, reduce the uncertainty.

Pat,

“It’s not about precision, kribaez.

The uncertainty derives from systematic measurement error produced by uncontrolled environmental variables. ”

I agree that there are multiple sources of uncertainty that go into the annual temperature index. My comment however IS about precision and only precision. One of the “beliefs” which seems to still be floating around is that it is impossible to estimate the change in annual temperature index to a higher precision than your best thermometer of the day. I am offering a clear worked example of the arithmetic to demonstrate why this is fallacious reasoning when a large number of thermometers are used. That does not mean that the temperature indices are ok, just that, in the general scheme of things, thermometer precision is way down the list in the ranked order of contributors to uncertainty in the annual temperature index.

No one disputes that random precision errors average away, kribaez.

All of my temperature record work concerns systematic measurement error arising from uncontrolled environmental variables.

The calibration uncertainty stemming from those errors does not average away.

For thermometers that measure to the nearest degree C, isn’t the inherent uncertainty +/- 0.5C, having nothing to do with environmental variables?

If not, what is the correct label for the fact that 20C on the thermometer just means somewhere between 19.5 and 20.5?

Does this have some effect on a sum or average of measurements that is different from uncertainty?

Hi Andy, by ‘inherent uncertainty’ do you mean resolution?

Typically, the reading resolution of a LiG thermometer graduated in 1 C is given as ±0.25 C.

That is, even if the LiG thermometer had been calibrated and is known to be accurate to ±0.1 C over its whole range, the ability of a human reader to judge a temperature reading when the liquid meniscus is between two graduation marks is taken to be ±0.25 C.

If that LiG is in a field meteorological Stevenson screen, then the calibration uncertainty that recognizes the impact of environmental variables on the screen, and thus the thermometer, is ±0.5 C.

The total uncertainty in a given read temperature is then the rms of ±0.25 C and ±0.5 C, which is ±0.56 C.

That ±0.56 C does not average away.

Uncertainty bars at least that large should be on every single global air temperature data point since 1880 (GISS) or 1850 (UKMet).

You cannot achieve higher precision than your instrument allows. This would require having multiple measurements of the same thing using the same measurement device. Then the central limit theorem would apply. Even then, if you use significant digits properly you won’t actually get any more precision than the instruments provide. If your calculation is 93.81 +/- 0.3 then it should be rounded to 93.8 +/- 0.3 since you can’t distinguish past the tenths digit. The same thing applies to the situation you are speaking to.

The uncertainty of one independent instrument measuring one thing adds to the uncertainty of a second independent instrument measuring a second thing. They add by root sum square. The growth of the uncertainty will mask any supposed increase in precision.

Large numbers of thermometers only add to the uncertainty associated with the group when they are combined. If you have 100 thermometers and each has an uncertainty of +/- 0.5 deg C then the total uncertainty of the 100 thermometers would be sqrt(100) * 0.5 = +/- 5 deg C. You can calculate anything you want out to however many digits you want, but it won’t change the uncertainty. Going from 1 deg C +/- 0.5 deg C to 1.00001 deg C +/- 5 deg C is a meaningless exercise. You are no more sure of your answer than when you started. Using significant digits, your result should actually go no further than the units digit if the uncertainty is +/- 5. Anything past that is just false precision.

Pat, Tim

Just as a thought experiment, the process I described above seems interesting. To make it simpler, consider that there are

● 10,000 widely distributed weather station thermometers that all meet the best site conditions,

● all are electronically monitored to eliminate human reading error

● all have been calibrated to the same standard

● their precision is 1 degree C

● accuracy is +/- 0.5 C

● they retain that precision and accuracy over 100 years without intervention

● they all restart the day at local midnight

● on site computation provides the NOAA standard of 1 reading per second averaged over 5 minutes to produce a recorded reading

● they all report the minimum temperature and the maximum temperature between one midnight and the next so that day’s average can be calculated

● NO adjustments are ever applied

● The actual temperature is slowly increasing such that it warms, averaged over all 10,000 sites, by 0.01C per year.

Their uncertainty is +/- 0.5 C for all measurements, so a simple average of all 10,000 stations’ daily averages, and a yearly average of 365 days of daily averages, still has an uncertainty of +/- 0.5 C, no?

For them to actually detect that warming would require more than 50 years of measurements, no?

If the averaging of all 10,000 daily readings, and all forward calculations of those individual readings, is carried out to three decimal places, then rounded to two at year’s end, the places to the right of the decimal are still individually meaningless but are they meaningless in combination? The uncertainty is much larger than the calculation precision but might the actual 0.01C trend be revealed?

Alternatively

Warming might or might not be occurring, the truth is completely unknown.

If the calculation precision of multiple decimal places is truly meaningless, should the distribution of +/- difference relative to the previous year not be random from year to year?

If a trend emerges, at the second decimal place in the year’s end calculation, isn’t there a probability distribution as to how often that could happen by chance?

If a calculation trend is consistent over many years, how many years would be necessary to say the trend is so improbable that it very probably is real?

AndyHce,

You need 120 years minimum. The dataset we have, USHCN RAW, reveals apogee points 60 years apart. Tracking for two cycles will pin things down, yet three/four/five will be better.

I think NOAA should do just as you suggest. Expand USCRN from 145 stations in the US to 1200, then track the raw data for 120 years. Then forever.

However, NOAA should 1) not destroy the currently-ongoing collection of USHCN RAW; and 2) restore the data from the redacted 400 sites absented since 1989; and 3) assure all the USHCN stations they ought not make radical changes to their instruments and protocols. Just be consistent.

The same ought be done for satellites that can make a reasonable reading of surface air temperature. Track for 120 years in a consistent manner.

Now you have three plots of direct measurement. The parallel systems will act as running controls on each other. It will be a tremendous accomplishment, and inexpensive to implement.

Note: many in the orthodox climate profession have faced the fact that Alarm was over-heated [sarc] and things are not dire. Humans might think 120 years is an eternity. Nature says it is a blink of an eye. Do we wish we had direct measurement back through the entire Holocene and into the Wisconsin? Well, we don’t have it. But now we can have it forever in cross-confirmation.

Stay calm and track the raw many ways.

ADD: the 1200 USCRN sites ought to be located right next to the 1200 USHCN sites.

The location of USCRN sites is a major part of what makes them what they are. If they were next to the USHCN sites they could not qualify for USCRN rating.

Ok, I’ll change my wording.

In order to illuminate the in-place 1200 USHCN stations, 1200 new installations with the precision of those used by USCRN should be located right next to each. That way, we would learn with what factor to nudge the historic curve up or down. [This step is not necessary to answer the question “is there any abnormal warming?” The non-high-precision historical plots already can signal that, yes or no.]

Another approach — but watered down — would be to make the new installations mobile … then rotate them from one of the 1200 stations to the next every few years or so.

This is what happens in aerospace and precision manufacturing. Your Quality Assurance department welcomes outside calibration of all gages.

It would be better to make all new confirming stations.

Meanwhile, you can continue whatever it is you meant by this: “The location of USCRN sites is a major part of what makes them what they are. If they were next to the USHCN sites they could not qualify for USCRN rating.”

“That way, we would learn with what factor to nudge the historic curve up or down.”

Nudging historical USHCN station trends up or down relative to co-located USCRN measurements would not remove the uncertainty in the historical USHCN temperatures.

Many of the USHCN sites would not allow the USCRN equipment to function at USCRN accuracy. The site conditions are not suitable.

Andy,

“Their uncertainty is +/- 0.5 C for all measurements so an simple average of all 10,000’s daily average, and an yearly average of 365 days of daily averages, still has an uncertainty of +/-0.5 C, no?”

You are correct, the answer is NO.

“For them to actually detect that warming would require more than 50 years of measurements, no?”

You are correct again, the answer is NO.

I think you are perhaps forgetting that during the course of a year, the temperature at any single point station undergoes a major change, covering a range from less than 10 degrees in the tropics to over 60 degrees in the higher latitudes. This makes a huge difference to the effect of measurement precision.

Let us consider just ONE thermometer in your idealised example. If the accuracy estimate comes from manufacturer’s instrument calibration, and relative accuracy is maintained over the measured temperature range, then it has no effect on the temperature change estimate. If instead, it represents an independent sampling error, which randomly applies to any measurement, then it is easy to add it in by quadrature.

However, let us look first at just the precision question. With just ONE of your thermometers, after 5 years (a true cumulative warming of 0.05 deg C at your 0.01 deg C per year), the estimate of the change in means after accounting for thermometer precision is 0.05 with a standard deviation (sd) of 0.021. In other words, after a uniform temperature gain of 0.05 deg C, the estimate of mean gain will be significantly different from zero – despite the fact that temperatures are recorded only to the nearest whole number.

With 10,000 thermometers, each with a different annual variation, the effect of precision becomes negligible and even after only one year of change the estimate of the change in the mean from daily recordings is 0.01 with a sd of 0.00021. If we add in the effect of a random accuracy problem (+/- 0.5 deg C) assumed to be independently applied to each measurement taken, then this increases the sd after the first year to about 0.0003. So the change in mean temperature after the first year with 10,000 thermometers should be both visible and significant in your idealised example.
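The claim above about many thermometers can be checked with a simulation sketch like the one below (scaled to 2,000 stations for speed; the amplitudes, baselines, and phases are illustrative assumptions, not values from the thread). Each station's year-two temperatures equal its year-one temperatures plus a uniform 0.01 deg C, and every reading is rounded to the nearest whole degree before averaging:

```python
import math
import random

random.seed(42)

N_STATIONS = 2000     # scaled down from 10,000 for runtime; same idea
DAYS = 365
TRUE_CHANGE = 0.01    # uniform warming between year 1 and year 2, in C

def round_half_up(x):
    return math.floor(x + 0.5)

diffs = []
for _ in range(N_STATIONS):
    amp = random.uniform(5.0, 30.0)        # half-amplitude of annual cycle
    phase = random.uniform(0.0, 2 * math.pi)
    base = random.uniform(-5.0, 25.0)      # station's mean temperature
    y1 = y2 = 0.0
    for d in range(DAYS):
        t = base + amp * math.sin(2 * math.pi * d / DAYS + phase)
        y1 += round_half_up(t)             # year-1 reading, whole degrees
        y2 += round_half_up(t + TRUE_CHANGE)  # year-2 reading, whole degrees
    diffs.append((y2 - y1) / DAYS)

estimate = sum(diffs) / N_STATIONS
print(f"estimated change: {estimate:.4f} C (true change {TRUE_CHANGE} C)")
```

Even though every recorded value is an integer, the estimated year-on-year change lands very close to 0.01 C, because each station's large annual swing scatters the rounding boundaries across the whole network.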

I know that this is all counterintuitive. The best way to understand why it works this way is to spend 10 minutes on a spreadsheet. Try the following test for a single thermometer. Set up a sine function with a half-amplitude of 10 degrees (equivalent to a mid-latitude variation of 20 degrees) and a periodicity of 365 days. Calculate its value at daily intervals. Then define a second series by adding 0.05 degrees to the first series. Now round both series to the nearest integer (to account for precision) and take the difference of the means of the two series. The temperature difference for this case works out to be 0.0519. If you change your parameters and repeat a few thousand times, you should find that roughly 95% of the outcomes will fall in the range 0.01 to 0.09 with a mean of 0.05.
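The spreadsheet test described above translates directly into a short script. For the "change your parameters and repeat" step, a random phase and a random baseline offset are used as the varied parameters (an assumption, since the instruction is not more specific):

```python
import math
import random

random.seed(7)

def round_half_up(x):
    return math.floor(x + 0.5)

def recovered_diff(amp=10.0, phase=0.0, base=0.0, shift=0.05, days=365):
    """Difference of means of two integer-rounded sine series, the second
    series being the first shifted up by `shift` degrees."""
    s1 = [base + amp * math.sin(2 * math.pi * d / days + phase)
          for d in range(days)]
    m1 = sum(round_half_up(x) for x in s1) / days
    m2 = sum(round_half_up(x + shift) for x in s1) / days
    return m2 - m1

# The single case described above: half-amplitude 10 C, 365 days, +0.05 C
print(f"single case: {recovered_diff():.4f}")

# Repeat with random phase and baseline offset as the varied parameters
trials = [recovered_diff(phase=random.uniform(0, 2 * math.pi),
                         base=random.random()) for _ in range(3000)]
mean_trial = sum(trials) / len(trials)
in_range = sum(0.01 <= t <= 0.09 for t in trials) / len(trials)
print(f"mean over trials: {mean_trial:.3f}")    # ~0.05
print(f"fraction in [0.01, 0.09]: {in_range:.2f}")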

“With 10,000 thermometers, each with a different annual variation, the effect of precision becomes negligible”

Indeed so. The uncertainty of global averages is due mainly to spatial sampling error, not precision. That is, would you have got a different answer by putting the thermometers in different places?

The uncertainty of global averages is due to measuring different things with different measuring devices, each with an uncertainty interval. Uncertainties in independent measurements don’t cancel; they only keep adding up. Uncertainty is not error. It doesn’t really matter where you site the thermometers as far as uncertainty is concerned. Siting may affect bias in measurement, i.e. *error*, but it won’t affect uncertainty.

From: https://www.nbi.dk/~petersen/Teaching/Stat2016/Week2/AS2016_1202_SystematicErrors.pdf

=======================================================

Measurements are taken with a steel ruler; the ruler was calibrated at 15 C, the measurements done at 22 C.

This is a systematic bias and not a systematic uncertainty! To neglect this effect is a systematic mistake.

Effects can be corrected for! If the temperature coefficient and lab temperature is known (exactly), then there is no systematic uncertainty.

If we correct for the effect, but corrections are not known exactly, then we have to introduce a systematic uncertainty (error propagation!).

In practice (unfortunately): often not corrected for such effects, but then just “included in sys. uncertainties”.

===============================================

Errors can be corrected, many times using statistical processes. Uncertainty can’t be corrected.

I hearken back to Donald Rumsfeld: there are known knowns, there are known unknowns, and there are unknown unknowns.

You can handle known knowns statistically, e.g. random measurement errors, bias errors, etc. You can’t do so with known unknowns, they just are uncertainties and aren’t susceptible to statistical reduction. Unknown unknowns can’t even be determined to exist let alone corrected.

Nick

In regard to spatial sampling error, the validity of the calculated average rather depends on what one calls that average. If one insists that one is measuring the average of the planet, that sampling error might be huge. If one merely admits that one is measuring the average of the places sampled, the sampling error is zero (assuming all instruments report data at all the times they are supposed to report). Is that not correct?

Yes, there isn’t much point in doing it unless you are estimating the true global average. As I said, the sampling error is the main component, but it isn’t “huge”. There are a lot of samples there. You can estimate it very well by taking random sub-samples.

Nick,

“The uncertainty of global averages is due mainly to spatial sampling error, not precision.”

The uncertainty of global averages is due mainly to systematic measurement error arising from uncontrolled environmental variables. Not spatial sampling error, not precision.

Andy,

Uncertainty adds as root sum square. If you have 10000 independent stations each with a +/- 0.5 uncertainty the final uncertainty of the combined group will be sqrt(10000) * 0.5 = +/-(100 * 0.5) = +/- 50 deg.

Think about averaging a daily max with an uncertainty of +/- 0.5 and a daily minimum with an uncertainty of +/- 0.5. The uncertainty of the two independent measurements combined would be:

sqrt[(0.5)^2 + (0.5)^2] = sqrt[0.25 + 0.25] = sqrt[2 * 0.25] = +/- (1.4 * 0.5) = +/- 0.7

So now, when you are creating a combined average daily value you are starting with an uncertainty of each data point at +/- 0.7 instead of +/- 0.5. So combining the first two stations will give an uncertainty of sqrt(2) * .7 = 1.0

You’ve already started off with an uncertainty larger than the difference you are trying to see. Is the trend downward or upward?

Good description, Tim. If each of the 10000 ±0.5 C uncertainty temperatures is used in some step-wise calculation, then the final uncertainty is ±50 C, where the ±0.5 C uncertainty is a calibration measure of systematic errors stemming from uncontrolled variables.

If the 10000 ±0.5 C uncertainty temperatures are directly averaged, then the uncertainty in the average is the root-mean-square (rms). That’s sqrt[(10000*(0.5)^2)/9999] = ±0.5 C.

In either case, the uncertainty never averages away.

Pat,

By directly averaged do you mean add the 10,000 measurements, then divide by 10,000?

Andy: yes.

Sorry for the delay in replying. This post didn’t load into my browser for 2 or 3 days. Until today, in fact.

“the final uncertainty of the combined group will be…”

That is the uncertainty of the sum. But to get the average, you divide the sum by 10000 (which has no uncertainty). So the uncertainty of the average is 0.005°C.

Nick,

“So the uncertainty of the average is 0.005°C.”

Really funny, Nick.

The final uncertainty in an empirical average is the rms.

If the calibration systematic uncertainty of each measurement is ±0.5 C, then the uncertainty in the 10000 average of measurements is sqrt{[(10000)*(0.5)^2]/9999} = ±0.5 C.

The uncertainty never gets smaller, and never averages away.
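For reference, the three formulas in play in this exchange give sharply different answers from the same inputs, which is the crux of the disagreement. The sketch below takes no side; it simply evaluates each formula as it is written in the thread:

```python
import math

sigma = 0.5   # per-measurement uncertainty, deg C
n = 10_000    # number of measurements

quadrature_sum = math.sqrt(n * sigma**2)   # sqrt(N)*sigma, the sum: 50.0 C
rms = math.sqrt(n * sigma**2 / (n - 1))    # the rms formula: ~0.50 C
sem = sigma / math.sqrt(n)                 # sigma/sqrt(N): 0.005 C

print(f"quadrature sum: ±{quadrature_sum} C")
print(f"rms:            ±{rms:.3f} C")
print(f"sigma/sqrt(N):  ±{sem} C")
```

The same σ = 0.5 C and N = 10,000 yield ±50 C, ±0.5 C, and ±0.005 C respectively; which formula applies depends on whether one is propagating into a sum, reporting a per-measurement spread, or treating the errors as independent random draws, and that is precisely what the commenters dispute.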

Nick,

“That is the uncertainty of the sum. But to get the average, you divide the sum by 10000 (which has no uncertainty). So the uncertainty of the average is 0.005°C.”

No. You’ve never bothered to study up on Dr. Taylor’s textbook, have you? Perhaps this one will work better for you:

https://www.nbi.dk/~petersen/Teaching/Stat2016/Week2/AS2016_1202_SystematicErrors.pdf

=========================================================

“Even with infinite statistics, the error on a result will never be zero!

Such errors are called “systematic uncertainties”, and typical origins are:

• Imperfect modeling/simulation

• Lacking understanding of experiment

• Uncertainty in parameters involved

• Uncertainty associated with corrections

• Theoretical uncertainties/limitations

While the statistical uncertainty is Gaussian and scales like 1/sqrt N, the systematic uncertainties do not necessarily follow this rule.

When statistical uncertainty is largest, more data will improve precision.

When systematic uncertainty is largest, more understanding will improve precision.

================================================================

Uncertainty doesn’t average. When averaging the temperatures you divide by the number of temperatures. You do *NOT* divide by N or sqrt of N when totaling uncertainty. The total uncertainty is the root sum square of the uncertainties. You do not average uncertainty; you calculate the quadrature sum of the uncertainties.

Error is not uncertainty. Why is this so hard to get across to so many on this blog?
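The distinction the Petersen notes draw can be illustrated with a sketch: a random error component shrinks under averaging like σ/√N, while a fixed systematic offset shared by every measurement does not shrink at all. The 0.3 C offset and 0.5 C spread below are illustrative assumptions, not values from the thread:

```python
import random

random.seed(0)

TRUE = 20.0        # physically correct temperature, in C
SYSTEMATIC = 0.3   # hypothetical fixed bias shared by every measurement
SIGMA = 0.5        # spread of the random (statistical) error component

for n in (10, 100, 10_000):
    readings = [TRUE + SYSTEMATIC + random.gauss(0, SIGMA) for _ in range(n)]
    mean = sum(readings) / n
    print(f"n={n:>6}: mean error = {mean - TRUE:+.3f} C")
# The random part shrinks like SIGMA/sqrt(n); the +0.3 C bias never does.
```

More data drives the random scatter of the mean toward zero, but the mean converges on the biased value, 20.3 C, not on the true 20.0 C; more understanding of the bias, not more data, is what Petersen's notes say is needed in that case.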

“Perhaps this one will work better for you:”

OK, I looked at Petersen’s notes here (p 12-13). And what does it say? With big red letters saying to commit to memory?

“What is the uncertainty on the mean?”

Uncertainty of mean = σ/√N

where σ is your 0.5 and N is your 10000. As you calculated, the uncertainty of the sum is σ*√N. And to get the uncertainty of the mean, divide by N.

As of course you must. Dividing by N is just rescaling. If you rescale a number, you must equally rescale its uncertainty.

I know you’ve been schooled on this by Jim Gorman. Apparently it went in one ear and out the other.

Uncertainty has no mean, therefore the term “uncertainty on the mean” is meaningless for uncertainty. In order to have a mean you must have a probability distribution. Uncertainty has no probability distribution. You can’t pick a point in the uncertainty interval and say *this point has the highest probability of being the true value*.

Uncertainty on the mean has to do with the central limit theorem. If you have a random variable with a Gaussian distribution then you can reduce the uncertainty on the mean statistically.

Everything you write is based on having a random variable with a probability distribution – i.e. random error when taking multiple measurements of the same thing using the same measurement device. In that case you can reduce the uncertainty of the mean using the central limit theorem.

σ is a statistical description of a probability distribution which has a mean.

If I give you a temperature of 20 deg C +/- 0.5 deg C can you tell me what point in the interval of 19.5 deg C and 20.5 deg C has the highest probability of being the true value?

If you can’t then that means there is no probability distribution associated with that interval – which leads to there being no mean and no standard deviation (i.e. σ). With no mean how do you then reduce the uncertainty of the mean?

“Uncertainty has no mean therefore the term ‘uncertainty on the mean’ is meaningless for uncertainty.”

You quoted Petersen’s notes as the authority. And I quoted them verbatim (p 12):

“What is the uncertainty on the mean? And how quickly does it improve with more data?”

Answer, σ/√N

Correctly, he makes no requirement that anything be normally distributed.

This is basic stats 101.

Nick,

Please read the entirety of Petersen:

“While the statistical uncertainty is Gaussian and scales like 1/sqrt N, the systematic uncertainties do not necessarily follow this rule.”

Systematic uncertainties do not necessarily follow this rule.

Once again you are confusing ERROR with UNCERTAINTY. Error is a statistical uncertainty with a probability distribution. Systematic uncertainty is *NOT* error and has no probability distribution.

You also ignored the fact that he specifically mentioned a GAUSSIAN distribution, i.e. a NORMAL distribution.

Do you have a reading disability we need to know about?

Tim, Nick is just making a diversion. He knows you’re right.

Look, he wrote, “If you rescale a number, you must equally rescale its uncertainty.”

Nick’s rescaling a number is not the same as taking an empirical average. But his choice of phrasing makes it seem as though it is the same. He’s trying to shift the ground of the argument, so as to lead you into a skewed contention from which he can eventually emerge triumphant.

Make the assumption that few of the temperature measurements of the past 120 to 150 years are fraudulent, i.e. that people were measuring and recording to the best of their ability and the quality of the circumstances, not trying to mislead. Then each measurement, for the sake of this discussion, can be taken to be accurate to +/- 0.5C. Of course in reality some instruments must have been incorrect enough that their readings had much larger errors, but assume that those were few enough to have little effect upon the total.

The straightforward calculations of yearly global averages vary only a few degrees from year to year over the entire period. The importance of such a temperature variation for humans, frogs, or life in general, is not part of the current question.

What is the meaning of an uncertainty of two or more digits relative to calculated averages made to the nearest +/- 0.5C? Can it be claimed that the “real” global average might differ by +/- 20C, 30C, or more, from what is calculated? I get from Pat Frank’s articles the claim that the uncertainty isn’t equivalent to degrees of temperature variance, but to say simply that the uncertainty is some value does not inform what that means. If it does not have a real-world meaning then it is surely less relevant than calculating the number of angels on a pin head.

Uncertainty *has* a real world meaning. Think about a civil engineer designing a bridge. He calculates the forces on the various trusses based on all kinds of different loads (e.g. vehicles, wind, ice, etc). Then he can specify the trusses needed to match the compression, tension, and torsional forces that are calculated. Someone then has to order the trusses, which are specified in the “catalogs” by ability to withstand a certain force level +/- an uncertainty interval. Trusses whose rating at the maximum negative end of the uncertainty interval is still *higher* than the forces applied to them had better be the ones ordered, in order to provide a safety margin for the bridge.

In this case the uncertainty interval of the trusses very much has a real world meaning.

The same thing applies when combining independent temperature readings. The uncertainty rules apply here just as they do anywhere else and they give the result a real world meaning. You may not like what that real world meaning says, but it is still the real world meaning. The fact that combining the uncertainty intervals of independent measurements to generate daily, monthly, or annual averages gives you a total uncertainty interval that is larger than the differences you are trying to find is why so many climate scientists just ignore the rules about uncertainty. If they didn’t ignore them then they would have to admit that their results are meaningless.

The term “variance” has meaning in a probability distribution. Uncertainty has no probability distribution. Probability distributions require one to assign a probability to each possible population member. If I tell you the temperature is 20 deg C +/- 0.5 deg C can you tell me which point in the interval of 19.5 deg C to 20.5 deg C has the highest probability of being the true value?

If you can’t do so then isn’t the reason you can’t do so is because there is no probability distribution that describes the interval? If there is no probability distribution then there can be no standard deviation, no variance, and no mean.

Since each station is independent of all others, each station has its own temperature curve over a year. If you consider the curve to represent a statistical population with a mean and a variance, then how do you combine the populations from Station 1 and Station 2? The climate scientists just average the mean temperature from the curve of Station 1 with the mean temperature of the curve for Station 2 and assume that tells the whole story. But this is wrong. The variances of the two curves can be significantly different, and that must be handled somehow as well if you truly want to look at the “climate”.

So how should you combine these multiple independent data sets? Darned if I know. Do you?

Hi Tim,

I had rather decided not to respond to the ongoing discussion about averaging, the complexity of and earnestness for which are running strong in this thread despite my pointed posts to the contrary, which answer your question, have already been proffered several times, and have been met with silence.

However, I have changed my mind. Your pointed question cannot be ignored; it is right at the crux of the matter.

Consider flipping the worldview: instead of the effort to arrive at global or regional “Average Temperature,” instead face the fact we can’t get there with models, gridding, estimating, etc, and instead trade up to “examine 1200 individual cases.”

In addition to “we can’t get there,” add “… and the illusion that we can construct a model — of one statistic: global average — of the prior 150 years, while tempting, is in fact misleading.” We can’t go on illusions.

Here’s my graph, once more, of USHCN RAW:

http://theearthintime.com

That sine curve is NOT intended to arrive at “an average.” Instead, it is an amalgam of a spaghetti graph. Imagine an animation that would lay down each of 1200 curves on top of each other. Not only does “average” not serve to describe the nature of the result, even “trend” falls short, because we are used to seeing a straight trendline, which in the case of this “Method” hides the crucial reality: temp rises and falls in a sine curve with oscillations inside oscillations.

An individual weather station might have inaccuracy [uncertainty] within and of itself. However, 120 years of recording 43,800 times in a sufficiently accurate yet consistent manner, reveals one giant truth. “Does this station’s plot display any abnormal warming or cooling?” Now look at each station’s curve, and ask the same question 1200 times.

I’ll say no more for now.

::::: windlord :::::

windlord:

“Consider flipping the worldview: instead of the effort to arrive at global or regional “Average Temperature,” instead face the fact we can’t get there with models, gridding, estimating, etc, and instead trade up to “examine 1200 individual cases.””

I agree with this. There is no such thing as a “global climate”. There is barely even such a thing as “regional climate”. Climate is mostly local. Take Nebraska, Iowa, and Kansas – all part of the same region but clearly having different climates. What is typically considered the “Central Plains” have widely varying climates based on longitude and latitude. Locations as close as Lincoln, NE and Topeka, KS have different climates.

“In addition to “we can’t get there,” add “… and the illusion that we can construct a model — of one statistic: global average — of the prior 150 years, while tempting, is in fact misleading. We can’t go on illusions.”

I agree totally.

“which in the case of this “Method” hides the crucial reality: temp rises and falls in a sine curve with oscillations inside oscillations.”

Again, correct.

“reveals one giant truth. “Does this station’s plot display any abnormal warming or cooling?” Now look at each station’s curve, and ask the same question 1200 times.”

In which case you will probably find some that show some warming, some that show some cooling, and some that show nothing. You still can’t combine these into one whole.

wl-s’s graph of USHCN raw has no uncertainty bars on the temperature points, from the known systematic measurement error.

Put ±0.5 C uncertainty bars on the points, and the sine curve goes through a mean that has no known physical meaning.

He doesn’t know whether the sine curve is real or artifactual. His entire analysis is based upon a blind assumption of accuracy. An assumption that we know, for a fact, is wrong.

That is absolutely stunning. Seriously, my head is swirling and my jaw is on the floor.

I’ve known for quite a while that the obsession with “Global Average Temperature” was virulent and blinding, but this exchange with Pat Frank has me nearly immobile. I can barely type.

Pat Frank, are you willing to state what your bottom line, basement, foundational quest is with regard to climate? What question are you determined to answer?

To anyone else, all I can say is, 1218 stations reporting 50 million recordings of TMAX alone over 120+ years…

It does not matter if my station has had protocol errors;

It does not matter if my station has a gauge out of calibration;

We have been consistent:

I’ve given NOAA my 40,000+ dailies;

I’ve graphed them;

They show the sine curve that all of nature loves and follows;

Viewing 1218 graphs, one by one, even if some or all of them are high or low within themselves, reveals the ultimate (and only) measurement flow of temperature. “Accuracy” is irrelevant.

[recreation begins…]

Is there any abnormal warming?

Not at my station.

I asked the same question at a conference of 436 of my fellow weather station directors. A few said yes. A few said they saw drastic cooling. The rest saw their data forming an oscillation with a period of about 60 years, a sine wave.

We all agreed to not put our data in a cement mixer in quest for an “Average,” since, being rational scientists, we know that is both impossible vis a vis credibility, and not needed.

We agreed to immediately start up an email thread if any of us detects any abnormal warming.

[recreation ends…]

wls,

“To anyone else, all I can say is, 1218 stations reporting 50 million recordings of TMAX alone over 120+ years…”

All I can say is that Pat is correct. If you put 0.5C error bars on that graph then you can’t even see most of the sine wave. Try it. Assume the mean is 65.5F. Since ±0.5C is ±0.9F, the error bars would run from 64.6F to 66.4F. Print the graph and then cut a piece of paper of the width 64.6F to 66.4F. Now use that strip to cover up that area on the graph. Most of what you see on the graph disappears. You won’t be able to see enough to generate a sine wave.
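The strip-of-paper exercise can be checked numerically. A minimal sketch (the 0.7 °F sine amplitude and 60-year period are invented for illustration; ±0.5 °C converts to ±0.9 °F):

```python
import math

MEAN_F = 65.5            # assumed mean of the station record, degrees F
AMP_F = 0.7              # invented sine amplitude, degrees F
BAND_F = 0.5 * 9 / 5     # +/-0.5 C expressed in Fahrenheit: 0.9 F

# 120 yearly values of a sine wave with a 60-year period about the mean
series = [MEAN_F + AMP_F * math.sin(2 * math.pi * y / 60) for y in range(120)]

# Count how many points the +/-0.9 F "paper strip" would cover
covered = sum(abs(t - MEAN_F) <= BAND_F for t in series)
print(covered, "of", len(series), "points lie inside the +/-0.9 F strip")
```

With any sine amplitude below 0.9 °F, the strip covers every point, which is the commenter's claim in numeric form.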

Now, if you have actually combined 120 stations to obtain points on the graph then your actual uncertainty interval will approach +/- 5degC. So the uncertainty bars will run from about 75degF to 56degF.

That means that *any* point within that range can be the true value, you just don’t know. You could generate anything you want out of that kind of range of true values.

I know this is hard to grasp. But it is fundamental to understanding why combining a bunch of single independent measurements of totally different independent things using totally different measuring devices, all without regard to systematic uncertainty, is just destined to give a result that is meaningless.

Is there any doubt in your mind that uncertainty of independent measurements of independent things using independent measurement devices adds as root sum square?
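The root-sum-square rule invoked here can be written out in code. A minimal sketch with illustrative numbers (whether the rule applies to systematic error, as opposed to independent random uncertainty, is exactly what is disputed in this thread):

```python
import math

def rss(uncertainties):
    """Combine independent uncertainties in quadrature (root sum square)."""
    return math.sqrt(sum(u * u for u in uncertainties))

# Two readings, each +/-0.5 C: the uncertainty of their SUM adds in quadrature
u_sum = rss([0.5, 0.5])  # sqrt(0.5**2 + 0.5**2), about 0.71 C

# The uncertainty of their MEAN divides the sum's uncertainty by the count
u_mean = u_sum / 2       # about 0.35 C

print(u_sum, u_mean)
```

Note the two results: quadrature addition grows the uncertainty of a sum, while forming a mean divides by the number of values; which operation is being performed matters to the arguments on both sides.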

“It does not matter if my station has had protocol errors;

It does not matter if my station has a gauge out of calibration;

We have been consistent:

I’ve given NOAA my 40,000+ dailies;

I’ve graphed them;

They show the sine curve that all of nature loves and follows;”

Protocol errors are ERRORS, not uncertainty. Calibration is ERROR, not uncertainty. Error and uncertainty are not the same thing. Pat has tried to explain this over and over but it doesn’t seem to be getting through to people. Error is measuring a 2″x4″ as 7.92′ when it is actually 8.08′ long.

Uncertainty is going to a huge pile of 2″x4″ studs and selecting ten of them to build a wall for your house. They won’t all be exactly 8′ long. Some will be shorter and some longer. That’s uncertainty. When you build the wall and attach your sheet rock you’ll have gaps where the longer studs are (that’s what trim boards are for – to cover the gaps) or you won’t be able to fit the sheet rock where the shorter studs are. That isn’t an issue of measurement error, calibration error, or any other kind of error, it’s uncertainty.

At least at your single station you have been using the same measurement device, but you don’t measure the same thing every day. Environmental conditions (wind, humidity, clouds, etc.) all add a degree of uncertainty to each daily measurement in addition to the inherent uncertainty added by your measurement device itself. If your overall uncertainty for each measurement is 0.5C then you still must add that uncertainty bar to your measurements and see if you can actually distinguish anything. And when you are combining multiple measurements into an annual average remember the root sum square rule for uncertainty.

If NOAA, NASA, and all the so-called climate scientists were honest they would have to admit that they simply can’t tell what year has been the hottest when they are trying to distinguish 0.1C or 0.01C differences, not when their uncertainty interval is more than 1C!

Tim Gorman,

The curving wave at my website, which is an amalgam of 50-million recordings, and most of the waves of the 1200 individual sites in USHCN, take the shape of Nature’s favorite form: an oscillating sine wave, accompanied by harmonics of longer and shorter periods.

Per your paragraph cited below, that is a coincidence?

“At least at your single station you have been using the same measurement device but you don’t measure the same thing everyday. Environmental conditions (wind, humidity, clouds, etc) all add a degree of uncertainty to each daily measurement…”

Or, if not a coincidence, the fluctuation of ‘conditions’ — which you would think would be a random distribution — instead wonderfully reveals they occur in a regular oscillation?

“And when you are combining multiple measurements into an annual average ….”

Again, you avoid my point. I am not obsessed with “an average.” A spaghetti graph, or amalgam, does not seek to claim an average. It illuminates the curving trend visually.

What is your explanation for the formation of the affecting conditions into an organic shape?

wl-s, “Pat Frank, are you willing to state what your bottom line, basement, foundational quest is with regard to climate? What question are you determined to answer?”

My original intention was to discover for myself whether the IPCC claim is true, that human CO2 emissions are warming the climate.

That was in 2001, when the TAR came out. I decided to find out for myself, and began to read the primary literature.

After two years of study, it was clear the IPCC could not possibly know what they claimed to know.

In 2006 I read Brohan, et al., “Uncertainty estimates in regional and global observed temperature changes: A new data set from 1850,” J. Geophys. Res. 111, D12106, doi:10.1029/2005JD006548. That paper revealed that they assumed all temperature measurement error is random, and averages away to insignificance.

That was a shock. It led me to look for and examine published field calibration experiments. Published calibrations showed very significant systematic sensor errors coming from uncompensated environmental variables.

On discovering this, it became clear that the IPCC (and Brohan, et al.) could not possibly know what they claimed to know about air temperatures.

I started out just wanting to know, wl-s. My critical view came honestly from what I discovered.

I’ve learned from direct experience (much documented) that the entire field of consensus climatology ignores data quality. They don’t understand it, they’re not interested to know about it, they reject with hostility any effort to draw their attention to it.

So, you can go ahead and fit your sine curves to low-resolution data, wl-s. No matter how pretty your curves are, they’re physically worthless.

You wrote, “‘Accuracy’ is irrelevant.” Incredible. That position makes everything you do irrelevant.

wl-s, “That is absolutely stunning. Seriously, my head is swirling and my jaw is on the floor. I’ve known for quite a while that the obsession with “Global Average Temperature” was virulent and blinding, but this exchange with Pat Frank has me nearly immobile. I can barely type.”

Bite the bullet of low-resolution measurements, wl-s. Nothing anyone does will improve past measurements that are inherently poor.

Contrary to your writing, I am not obsessed with global average temperature. I’m focused on individual measurements of air temperature, such as are taken using a LiG thermometer in a Stevenson screen or an MMTS sensor, at a meteorological station.

Individual air temperature measurements, as they are recorded off individual instruments. Raw. Unadjusted. Unchanged. As-is. Each one, each time.

The lower limit of calibration uncertainty in air temperature measurements from each and every one of the USHCN stations is ±0.5 C. There’s no getting around it. No sine wave fit will change that fact. A fit through any air temperature trend is not the most probable true mean.

Systematic error is not random. It is not normally distributed. It does not have a zero mean. Fits through temperature trends that include systematic error are not physically reliable.

Period. End of story.

Going forward, an air temperature record compiled using low-resolution measurements will necessarily and ineluctably be unreliable.

The entire field is pursuing a chimera. And they *will not* see it.

Tim,

You wrote

“If I tell you the temperature is 20 deg c +/- 0.5 deg C can you tell me which point in the interval of 19.5 deg C to 20.5 deg C has the highest probability of being the true value?

If you can’t do so then isn’t the reason you can’t do so is because there is no probability distribution that describes the interval? If there is no probability distribution then there can be no standard deviation, no variance, and no mean. “

The question, in my thought experiment, is about something that does have a distribution and a trend. The distribution is over years, the trend is in the second decimal place of the yearly average temperature calculation. The question is not about the probability of each value that makes up the trend. It is not about the likely meaninglessness of the yearly global average calculation vis a vis the fate of the world, no matter its precision or accuracy. It is about the fact that there is a consistent trend (in this thought experiment) in the result, as there is in the data published by the “climate agencies” even though that trend is below the precision and the accuracy of the data.

If the individual measurements are unbiased, as specified, it seems there should be a random distribution of those values to the right of the decimal. An apparent trend could be random, just as many consecutive tosses of a fair coin producing the same value IS random, but a trend that continues long enough suggests that the coin is not, in fact, fair. The probability that it is not fair grows larger and larger as the same result is produced again and again (yes, the improbability grows, but the fact of the coin’s balance doesn’t change, regardless of what the numbers suggest).

Can the trend in the calculated average indicate that a trend beyond the ability to measure directly does actually exist and is being detected? If not, is there some other rigorous explanation for the trend? I suspect that for the climate data it is just a random result that casually appears to have a trend, but that doesn’t seem to be the way the believers see it. Can there be any answer, one way or the other?

Andy,

A lot of Tim’s comments on this subject will damage your mental health. When taking multiple measurements over a range of values using an unbiased thermometer with a precision error of plus or minus 0.5, the precision error for each individual measurement is sampled from a uniform distribution U[-0.5, +0.5]. This is NOT an approximation. It is an exact distribution. The only condition is that the range of values sampled is larger than the range of the uniform distribution. Since in the real world the temperature variation is generally several orders of magnitude larger than this uncertainty range over the annual cycle, it is a condition which is generally easily met. However, it would make little difference whether the distribution was approximated (as it often needs to be) or exact, as it is in this case. The variance of a sample dataset is an arithmetic property of the dataset. It does not need to be attached to a distribution for us to be able to make use of it in theoretical calcs.

The uncertainty associated with this Uniform distribution is easily calculated from theory. The variance of the individual precision error = 1/12 = 0.0833. The standard deviation is the sqrt of this. The variance of the MEAN temperature over a one year period with daily measurements is then (.0833) / 365 = 0.000228 . The standard deviation is then sqrt(.000228) = 0.01511 . This can equally be obtained using the formula correctly stated by Nick Stokes above:

Standard Deviation of the mean = σ/√N.

The uncertainty attached to this mean value arising from precision error is now closely approximated by a Normal distribution.

However, in your 10,000 thermometer example, you were interested in detecting a CHANGE in mean values from the starting point.

The variance of the difference between any two (annual) means = 2 x Var(mean) = 2×0.000228 = 0.000457.

The standard deviation of the difference between the two (annual) means is the sqrt(0.000457) = 0.021 , which I hope is the same sd value that I quoted previously when working through your example. For just a single well, therefore, you would expect to be able to detect a significant change in temperature after just 5 years ( = 0.05 temperature change as per your assumption). With 10,000 thermometers all varying differently, the standard deviation on the difference between two means is reduced by a factor of 1/√N, so the standard deviation becomes 0.021/100 = 0.00021.

So your instincts about “values to the right of the decimal” are correct, and in your idealised example, you could detect a uniformly applied shift in temperature of 0.01 degrees (i.e. after the first year).

Please note that while Tim is making assertions on stuff that he evidently does not understand, the above calculation is readily testable using a Monte Carlo.
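Such a Monte Carlo is short to write. A sketch (the daily temperature model below is invented; only the rounding to whole degrees matters, which produces precision errors distributed approximately U[-0.5, +0.5]):

```python
import math
import random
import statistics

random.seed(1)

def annual_mean_error():
    """Error in an annual mean when each daily reading is rounded to 1 degree."""
    errors = []
    for day in range(365):
        # Synthetic "true" daily temperature: seasonal sine plus weather noise
        true = 15.0 * math.sin(2 * math.pi * day / 365) + random.uniform(-3, 3)
        errors.append(round(true) - true)  # precision error, ~U[-0.5, +0.5]
    return statistics.mean(errors)

# Standard deviation of the annual-mean error over many simulated years
sd = statistics.stdev(annual_mean_error() for _ in range(2000))
print(sd)  # theory: sqrt((1/12) / 365), about 0.0151
```

The simulated standard deviation lands close to the theoretical 0.0151 quoted above, which is the σ/√N result for the precision component alone.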

I should re-emphasise that this is ONLY trying to deal with the precision question. There are still far more important sources of uncertainty in the adjusted temperature records.

Andy,

“The question, in my thought experiment, is about something that does have a distribution and a trend. The distribution is over years, the trend is in the second decimal place of the yearly average temperature calculation.”

If your instruments only have a precision of 1 decimal place then there is no way to determine the second decimal place. So no matter how long of a time period you use, you are only fooling yourself that you can see a second decimal place.

“even though that trend is below the precision and the accuracy of the data.”

You can only see a trend below the precision and accuracy of the data by ignoring the rules for significant digits in the data. If there is only one significant digit then no amount of calculation can extend that. You will be able to see a trend in the first decimal point but not beyond.

“If the individual measurements are unbiased, as specified, it seems there should be a random distribution of those values to the right of the decimal.”

Nope, you are confusing error with uncertainty. Error has a random distribution, uncertainty does not. If you can’t assign a probability to each point in the uncertainty interval then how do you know you have a random distribution?

You *can* see trends even in measurements that have uncertainty. But the difference MUST be outside the range of uncertainty. Take the 20C measurement again. If it has a +/- 0.5C uncertainty then until you get outside that uncertainty range, 19.5 to 20.5, you don’t know if there is a difference or not. If you begin to measure 21C with +/- 0.5C then you are on the cusp of beginning to see an actual increase you can identify for sure. But that increase of 1C in your reading is far beyond trying to identify a 0.1C or a 0.01C difference trend.

Add in the fact that when combining 100 different, independent populations with uncertainty intervals, the total uncertainty interval grows by sqrt(100) (i.e. 10), an entire order of magnitude. Now you not only have to see a difference of 1C in your measurements, you have to see a difference of 2.5C before you are sure you’ve actually seen an increase.

“Can the trend in the calculated average indicate that a trend beyond the ability to measure directly does actually exist and it is being detected?”

Follow the rules of significant digits and you can answer this yourself. If you have measurements to the tenth of a degree then how can the average give you more than a tenth of a degree? If your average turns out to be a repeating decimal does that mean that the average now has an infinite precision?

kribaez,

“A lot of Tim’s comments on this subject will damage your mental health. When taking multiple measurements over a range of values using an unbiased thermometer with a precision error of plus or minus 0.5, the precision ERROR for each individual measurement is sampled from a uniform distribution U[-0.5, +0.5]. This is NOT an approximation.” (caps are mine, tg)

You have already started off describing error, not uncertainty.

“It is an exact distribution. The only condition is that the range of values sampled is larger than the range of the uniform distribution.”

Uncertainty is not a probability distribution, not even a uniform one.

“Since in the real world the temperature variation is generally several orders of magnitude larger than this uncertainty range over the annual cycle”

You have already made the mistake of claiming that the uncertainty interval for a combined set of independent populations never grows with the combination. As I pointed out, if you come up with a daily average using a maximum and minimum temperature using an instrument with a +/- 0.5C uncertainty range, your uncertainty associated with the average grows from +/- 0.5C to +/- 0.7C. If you combine 365 daily averages from *one* station to get a yearly average then your uncertainty for the yearly average grows by sqrt(365) = 19. Thus your uncertainty interval becomes +/- (19 * 0.7) = +/- 13C. If your calculated average is 20C then the true value could be between 7C and 33C.

“The uncertainty associated with this Uniform distribution is easily calculated from theory.”

When you assigned a probability distribution to an uncertainty interval you made a wrong step. If you can’t tell me which point in an interval from 19.5C to 20.5C is most likely the true value then how do you even know you have a uniform distribution? It could be a skewed Gaussian distribution, a Poisson distribution (discrete), or an exponential distribution (continuous) of some kind. The fact is that an uncertainty interval has no probability distribution – period. When you artificially assign one to the interval it suddenly becomes something other than an uncertainty interval.

“However, in your 10,000 thermometer example, you were interested in detecting a CHANGE in mean values from the starting point.”

You already admitted that you can’t tell me the mean (the most likely value in an interval) of an uncertainty interval so how can you have a mean?

You just keep on pretending that error is uncertainty. That tells me you are a mathematician or a computer programmer and not an experimental scientist.

“With 10,000 thermometers all varying differently, the standard deviation on the difference between two means is reduced by a factor of 1/√N, so the standard deviation becomes 0.021/100 = 0.00021.”

From Peterson again:

=====================================

While the statistical uncertainty is Gaussian and scales like 1/sqrt N, the systematic uncertainties do not necessarily follow this rule.

When statistical uncertainty is largest, more data will improve precision.

When systematic uncertainty is largest, more understanding will improve precision.

====================================

Statistical uncertainty is ERROR. Systematic uncertainty is not error.

“Please note that while Tim is making assertions on stuff that he evidently does not understand, the above calculation is readily testable using a Monte Carlo.”

Please, when you have to convert uncertainty to error in order to come up with some kind of probability distribution it is clear that it is you that doesn’t understand. You already admitted you can’t tell me which point in an uncertainty interval is most likely the true value so how can you then turn around and assume there is a probability distribution associated with the uncertainty interval? Even assuming a uniform distribution is assuming something you don’t know just to make the uncertainty interval fit into your world view of everything being a random variable with a probability distribution.

Tim,

“If you can’t assign a probability to each point in the uncertainty interval then how do you know you have a random distribution?”

In the case of the +/-0.5C uncertainty of individual measurements, which is due to the basic fact of only being able to measure to the nearest integer, each possible value in the interval −0.5 to +0.5 surely has an equal probability of being the “true” temperature. How many possible values are there? Is there a quantum limitation to the smallest change of temperature possible? Probably, but the thermometer cannot measure quantum quantities.

Anyway, kribaez seems correct in asserting the distribution is uniform; what else could it be? It has been many years (of non-use) since my statistics classes so, without significant study, I can’t verify the calculations presented. They may indeed be totally correct for some circumstances but I don’t see how they could apply to independent measurements made with many different instruments, in many different environments, with many unknown factors between them. For kribaez, I acknowledge that my not understanding something doesn’t affect its validity, plus or minus.

The very small variance kribaez calculates seems too strange to be real for the temperature record. The actual variance of interest seems more likely to be the difference between the largest and smallest measurement. Given that, a change in one year surely can’t be given significance as indicating any actual change, and five years still seems much too little data for a conclusion. However, a continued trend over some longer period might be very difficult to explain away – even if the calculated values aren’t the real amount of trend. Single-bit analogue-to-digital or digital-to-analogue converters don’t measure anything per sample, or per small number of samples, but they steer the data in the correct direction, ultimately producing very good results.
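The converter analogy can be made concrete. A sketch (the true value 20.37 and the ±3-degree natural variation are invented), showing that rounding to whole degrees does not prevent the average of many varying readings from homing in on a sub-degree value:

```python
import random
import statistics

random.seed(0)

TRUE_TEMP = 20.37  # hypothetical true value, finer than the 1-degree resolution

# Each reading: the true value plus natural variation, then rounded to a whole
# degree -- a low-resolution "converter" fed a naturally dithered signal.
readings = [round(TRUE_TEMP + random.uniform(-3, 3)) for _ in range(100_000)]

recovered = statistics.mean(readings)
print(recovered)  # converges toward 20.37 as the sample grows
```

This works here because the natural variation spans several whole degrees, so the rounding residuals average out; whether real station data satisfies the assumptions (independent, unbiased variation) is the open question in this exchange.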

You wrote

“In this case the uncertainty interval of the trusses very much has a real world meaning.”

That uncertainty interval is expressed in terms of real mechanical forces. Choosing parts that equal or exceed the uncertainty of the stresses that are expected must be done by using actual measurements of those same mechanical forces. The uncertainty is not a unit-less number; the trusses must be selected in terms of some units, such as foot-pounds of whatever type of stress. Therefore the bridge example cannot be comparable to something which you say has no units of measure that relate to something that can be found in the real world, outside of the calculations. If it can be said what it isn’t, there must be something that it is.

Regardless, I apparently am still not getting my point across. The answer has to be something other than “it exceeds the uncertainty level so it has no meaning” if the result is not random.

Take it as stipulated that there is a significant uncertainty associated with each measurement and that the combination of measurements in calculations does not reduce the uncertainty or even that it greatly increases the uncertainty.

In making any series of calculations it is common practice to use intermediate values with more precision than is significant. There is a very good reason for this. If each step is only to the precision of the data, rounding errors in multiple steps might make the final product larger or smaller than reality. By calculating to more digits, and not rounding until all calculations are done, one obtains the more correct result.
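The point about intermediate precision can be shown with a toy calculation (the operand values are invented):

```python
# Rounding at every intermediate step vs. rounding only at the end.
values = [0.14] * 10  # invented operands, "measured" to two decimals

step_rounded = sum(round(v, 1) for v in values)  # each 0.14 -> 0.1 first
end_rounded = round(sum(values), 1)              # keep full precision, round once

print(round(step_rounded, 1), end_rounded)  # 1.0 vs 1.4
```

Rounding each operand first loses the accumulated 0.04-per-value residue, so the two answers differ by 0.4 even though every individual rounding was "correct."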

In general, unless I am confused and somehow only work with special cases when I calculate things, in a series of calculations the digits beyond the significant digits of the operands might each be any value from 0 to 9. If not, there must exist an arithmetic proof of the restrictions.

If the calculation for “global temperature” to two decimal digits comes to 15.01 one year, then 15.02 the next, then 15.03 the next, increasing by .01 each year – without any manipulation to produce such a series – then even though the places to the right of the decimal are not meaningful or valid as a measure of temperature, such a result is highly unlikely to be random. If not random it comes from something in particular.

The trend need not be so even, the values might be 2, 3, etc., varying from year to year. Some years might even produce lower value digits, a real trend isn’t necessarily uniform. However, if they go in one direction most of the time, either upward or downward, they produce a trend (even if a trend of nothing but the numbers).

The expected result of generating a “random” set of temperatures, of the same total number of values, and all within the same interval of the real temperatures, then doing the same calculations of average with them as are done with the temperature measurements, will almost surely produce a random set of values for each “year’s” worth of fake temperature’s intermediate digits. Thus for this random set there would not be a trend as discussed in the previous paragraph. Again I acknowledge there might be some arithmetic proof that explains there can be a “false” trend, just as there are certain mathematical tricks that can take a person through a series of steps, using many different starting values, yet always end up with the same ultimate result but it seems unlikely.
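The experiment described above can be sketched as a simulation (all numbers are invented; the trend of 0.01 per year is deliberately below the whole-degree resolution of the simulated instrument):

```python
import random
import statistics

random.seed(2)

def yearly_means(trend_per_year, years=30, stations=10_000):
    """Annual averages of whole-degree-rounded readings from many stations."""
    means = []
    for year in range(years):
        base = 15.0 + trend_per_year * year  # the "true" climate that year
        readings = [round(base + random.uniform(-10, 10))
                    for _ in range(stations)]
        means.append(statistics.mean(readings))
    return means

def ols_slope(ys):
    """Ordinary least-squares slope of ys against year index 0, 1, 2, ..."""
    n = len(ys)
    xbar, ybar = (n - 1) / 2, statistics.mean(ys)
    num = sum((x - xbar) * (y - ybar) for x, y in zip(range(n), ys))
    den = sum((x - xbar) ** 2 for x in range(n))
    return num / den

s_trend = ols_slope(yearly_means(0.01))  # a real sub-resolution trend
s_null = ols_slope(yearly_means(0.0))    # the "fake temperatures" control case

print(s_trend, s_null)  # the first lands near 0.01, the second near zero
```

The control run with no built-in trend produces a slope near zero, matching the expectation in the paragraph above; whether real station errors are as well-behaved as this idealized noise is the contested point.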

When Nic Lewis presented his initial analysis of why a paper was wrong, one claiming a new method that concluded ocean heat uptake was 60% greater than previously believed, there was initially considerable noise from the activist community. Some showed calculations that matched the paper’s result (the paper itself provided the data but not the means by which the authors came to their conclusion). Lewis replied with chapter and verse from a handbook of standard statistical methods, showing his results were correct and his opponents’ approach was not valid FOR THE TYPE OF DATA. The arguments stopped and the paper was withdrawn, although its authors did not, in their withdrawal statement, admit the major error that Lewis revealed.

For years there has been this argument on various climate sites as to whether statistical calculations of a certain type that result in higher precision for the calculated data than the data the instruments can produce are applicable to temperature data (different instruments at different sites measuring different conditions). For this article Tim has referenced a certain statistical textbook for several rules verifying some particulars of his calculations but still no one seems to have a standard source that addresses the central question of applicability in any unambiguous way.

If a standard weather thermometer (+/-0.5C) is paired with a reference thermometer (+/-0.001C) then it can be said that the integer-reading thermometer is off the correct temperature by so many thousandths (or hundredths, or tenths, if you choose a less extreme reference) of a degree. Does this make the difference, in this circumstance, for any particular measurement, an error of the standard thermometer or a measure of its particular uncertainty?

“Anyway, kribaez seems correct in asserting the distribution is uniform; what else could it be?”

It can be uncertainty which has no probability. In a uniform distribution you can calculate the probability of each point in the distribution – they are all equal. But the *true value* in an uncertainty interval has a higher probability than all other points in the interval, the problem is that you don’t know what the actual true value is. It is uncertain. If you assign equal probabilities to all points in the interval then you have, in essence, claimed that they are *all* the true value. That makes no sense at all.

“The actual variance of interest seems more likely to be the difference between the largest and smallest measurement.”

Even the largest and smallest measurements each have an uncertainty interval associated with them. So is it the difference between the top of the uncertainty interval for the largest measurement and the bottom of the uncertainty interval for the smallest measurement? Or is it the difference between the bottom of the uncertainty interval for the largest measurement and the top of the uncertainty interval for the smallest measurement? Or is it somewhere in between?

Remember, the difference between one measurement, i.e. year 0, and the next measurement, year 1, has to be greater than the uncertainty interval in order to know for certain whether you saw any difference between them. If the difference you calculate is 0.01 but the uncertainty is +/-0.5 then how do you know for sure that you actually saw that difference of 0.01?

For the global climate the differences seen over a long period of time, even 30 years, remains within the uncertainty interval. So how do you know if you’ve seen a difference at all?

“That uncertainty interval is expressed in terms of real mechanical forces. Choosing parts that equal or exceed the uncertainty of the stresses that are expected must be done by using actual measurements of those same mechanical forces. ”

You can calculate the forces, at least within the uncertainty of your various estimates of loading. All you need to do then is order trusses whose lower bounds of uncertainty exceed the forces you have calculated. The only measurement that is required is by the manufacturer, to ensure that his characterization of the strengths of his product is accurate.

“Therefore the bridge example cannot be comparable to something which you says has no units of measure”

I never claimed that uncertainty has no units of measure. If you’ll look back I always said something like +/- 0.5degC. The indicator C means temperature in Centigrade. That *is* a unit of measure.

“In making any series of calculations it is common practice to use intermediate values with more precision than is significant.”

Calculations are not measurements. And the intermediate values only go one digit further than your precision.

As Dr. Taylor notes in his textbook: “To reduce inaccuracies caused by rounding, any numbers used in subsequent calculations should normally retain at least one significant figure more than is finally justified. At the end of the calculations, the final answer should be rounded to remove these extra, insignificant figures.”

Remember, measurements are not calculated. If you are calculating an average of measurements then in the final answer you still round to the significant figure determined by the precision of the instrument doing the rounding and the rule “The last significant figure in any stated answer should usually be of the same order of magnitude (in the same decimal position) as the uncertainty.” (again from Dr. Taylor’s textbook) If the uncertainty is in tenths then round to the tenths digit.

“If the calculation for “global temperature” to two decimal digits comes to 15.01 one year, then 15.02 the next, then 15.03 the next, increasing by .01 each year – without any manipulation to produce such a series – then even though the places to the right of the decimal are not meaningful or valid as a measure of temperature, such a result is highly unlikely to be random. If not random it comes from something in particular.”

You are missing the point. How does the global temperature get calculated to the hundredths digit when the uncertainty is at least in the units digit? All you *really* know is that year one is 15 and year two is 15. Calculating year one and then year two are *not* intermediate calculations. They are separate calculations, and each needs to be rounded to the same magnitude as the uncertainty interval.

“However, if they go in one direction most of the time, either upward or downward, they produce a trend (even if a trend of nothing but the numbers).”

How do you know which way they go most of the time? You are violating the rules for significant figures in order to try and gain more precision than you actually have.

“The expected result of generating a “random” set of temperatures, of the same total number of values, and all within the same interval of the real temperatures, then doing the same calculations of average with them as are done with the temperature measurements, will almost surely produce a random set of values for each “year’s” worth of fake temperature’s intermediate digits. ”

The issue is not the calculation, the issue is not knowing the true value. Of what use is it to generate a whole bunch of different “random” sets of temperatures when you don’t know whether they are the true values or not? Does it help you in any way to determine the true values? Does it help you distinguish between two points that are within the uncertainty interval?
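The thought experiment being quoted is easy enough to run. A sketch of my own construction, assuming a “true” value of 15 and uniform errors drawn from the +/-0.5 interval: the yearly averages of pure noise cluster tightly around 15, and their hundredths digits wander with no persistent direction.

```python
import random
import statistics

random.seed(0)

def fake_yearly_average(n_readings=10000, true_value=15.0, u=0.5):
    # n readings of an unchanging true value, each with a uniform
    # error drawn from the +/-u uncertainty interval
    return statistics.mean(true_value + random.uniform(-u, u)
                           for _ in range(n_readings))

# 30 "years" of fake data, reported to hundredths as the agencies do:
averages = [round(fake_yearly_average(), 2) for _ in range(30)]
print(averages)
```

Whether such a simulation settles anything about real data is of course the very point in dispute; it only shows what pure measurement noise around a constant looks like.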

“Again I acknowledge there might be some arithmetic proof that explains there can be a “false” trend”

No one is saying anything about a “false” trend. The issue is that you simply can’t determine if there is a trend, either a true trend or a false trend, unless the temperatures being compared are outside of the uncertainty interval of both.

“For years there has been this argument on various climate sites as to whether statistical calculations of a certain type that result in higher precision for the calculated data than the data the instruments can produce are applicable to temperature data (different instruments at different sites measuring different conditions).”

And for years those who think they can gain precision through calculation while ignoring uncertainty, whether for temperatures or for the mass of a proton, have only been fooling themselves. It’s like arguing over how many angels fit on the head of a pin: who can measure it precisely enough to know for sure?

“still no one seems to have a standard source that addresses the central question of applicability in any unambiguous way.”

There are all kinds of sources. Dr. Taylor’s textbook is just one of many. Go here for another discussion of the issue: https://www.sjsu.edu/people/ananda.mysore/courses/c1/s0/ME120-11_Uncertainty_Analysis.pdf

Note carefully the statement: “Taking the square root of the sum-of-squares is an effective way to combine uncertainties into one value, and squaring each contributing term before taking the sum has some important advantages:” Root-sum-square again. You will find this *everywhere* on the internet.
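Root-sum-square combination is a one-liner. A minimal sketch (my own illustration of the rule, not code from the linked PDF):

```python
import math

def rss(*uncertainties):
    """Combine independent uncertainty contributions by root-sum-square."""
    return math.sqrt(sum(u * u for u in uncertainties))

# Two independent +/-0.5 C contributions (e.g. a Tmax and a Tmin feeding
# one daily mean) combine to about +/-0.71 C, not +/-1.0 C:
print(rss(0.5, 0.5))
```

Note this is the rule for *independent* contributions; correlated errors do not combine this way.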

Go to the NIST site or look up the GUM document. You are really interested in NIST Type B uncertainty. Don’t confuse “standard uncertainty of the mean” with “random uncertainty” or with “systematic uncertainty”. They are all different, and the first two are involved with probability functions of random error, not with systematic uncertainty.

“If a standard weather thermometer (+-/0.5C) is pared with a reference thermometer (+/-0.001C) then it can be said that the integer reading thermometer is off the correct temperature by so may thousands (or hundreds, or tenths if you choose a less extreme reference) of a degree.”

What do you mean by “paired with”? If they are both at the same site measuring the same thing, then why would you use two different devices? And remember, as with the Argo floats, the uncertainty of the measurement is not determined by the precision of the thermistor used as a detector but by the entire system. If the 0.001C detector is housed in a system which causes a 0.5C uncertainty in the measurement, then the precision of the detector is meaningless.

Tim wrote

“Remember, the difference between on measurement, i.e. year 0, and the next measurement, year 1, has to be greater than the uncertainty interval in order to know whether you way any difference between them for certain. If the difference you calculate is 0.01 but the uncertainty is +/-0.5 then how do you know for sure that you actually saw that difference of 0.01?”

“No one is saying anything about a “false” trend. The issue is that you simply can’t determine if there is a trend, either a true trend or a false trend, unless the temperatures being compared are outside of the uncertainty interval of both.”

You keep objecting to my thought experiments by saying that this or that would not mean anything, or that there is no reason to do it (e.g. pair two thermometers of different precision). In that experiment we are of course saying it is possible to measure to the higher precision in the site so both thermometers work to the limit of their inherent capability. For the case of the two thermometers, the objection about possible site limitations is quite irrelevant. It merely avoids the question, which is still open.

In regard to the trend calculation, if you do the calculations to two decimal places, and record the result for each year, the trend in the numbers is right there in front of your eyes. The trend exists; the question is ‘why?’, or ‘what does it possibly mean?’ Saying one can’t calculate a trend avoids the question of why the trend exists.

There isn’t, in my trend questions, any suggestion that any one value to the right of the decimal has a meaning in itself. Any one-year-to-the-next-year comparison won’t provide any information. The issue only has to do with that series of calculated yearly results which does have a trend … for some reason.

A trend of numbers in the answers from year to year exists. That it continues to exist year by year is the mystery, or rather it apparently is a justification for saying that the world is heating. I am not saying it is a trend of temperature, just pointing out that a trend in the calculated results is quite evident looking at the string of years in order. If a trend exists instead of just random digits, one can’t change that fact by saying you didn’t really see anything because ghosts don’t exist. You can look at it as many times as you want, or recalculate it as many times as you want, and it is still there.

It seems very unlikely that it is the uncertainty in the measurements itself that produces the trend, but if it is, there should be some mathematical way to demonstrate that a trend, and not a random selection among 0 to 9, is what should show up. Perhaps that proof, if it exists, will show the trend is BECAUSE OF the inherent uncertainty in the measurements. This seems very unlikely, but that seeming so doesn’t cause the trend to vanish.

Maybe the trend does not reasonably suggest temperature increase or decrease, maybe it is purely by chance, maybe it means something quite different. However, the more it persists as years increase, the less likely it is to be chance. If it isn’t by chance then it is either

a fundamental characteristic of the type of calculations being done (unlikely); if so, it should be possible to show this by a mathematical proof,

or it has some kind of physical meaning that is captured by the conglomerate of thermometers, independent of their uncertainty, regardless of how unlikely that seems.

I’ve never seen anything to suggest that any critic of excess precision believes funny calculations are producing the result, only that the calculations are not applicable to the particular data – without being able to quite explain why, and certainly without any success in convincing anyone who believes the results of those calculations.

Tim wrote

“Calculations are not measurements. And the intermediate values only go one digit further than your precision.”

Certainly the calculations are not measurements but it is not the case that intermediate values with only one digit more than the value’s precision are used in calculations. There are many instances where floating point calculations are done on integer values because of the type of calculations, when there are a series of calculations using the output of the previous calculation as input to the next calculation. These can quickly lead to overrunning the bounds of the data if done otherwise, leading to more error than necessary. The final result should always be rounded back to the integer.

One example is the output of analogue to digital sampling. Most converters produce integer output, some particular finite value that, like the general thermometer, is a single integer value for an interval of continuous signal. Any particular bit depth used sets the maximum range of sample values that can be encoded. Floating point improves on that limitation. The converter output is often subject to considerable enhancement calculations to produce a more useable result. Quantization errors are much larger when limited size intermediate values are used. Data can be lost in a multiplication that raises the result value above the capacity of the integer sample.
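That kind of loss is easy to demonstrate. A toy sketch of my own, assuming 8-bit samples and a made-up gain of 1.5: a multiplication that pushes an integer intermediate past the 8-bit ceiling clips it, and the clipped value cannot be recovered, whereas a floating-point intermediate survives the round trip.

```python
def amplify_8bit(sample, gain):
    """Apply a gain while keeping the intermediate as an 8-bit integer:
    anything above 255 clips to 255."""
    return min(255, max(0, round(sample * gain)))

sample = 200                                        # an 8-bit sample
clipped = round(amplify_8bit(sample, 1.5) / 1.5)    # integer intermediate
floated = round((sample * 1.5) / 1.5)               # float intermediate

print(clipped, floated)   # 170 200: the clipped path lost data
```

This illustrates the overflow point only; it says nothing about whether such intermediates justify extra final precision, which is the dispute here.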

Another example is interest calculations on money. The money is only defined to two decimal places (0 to 99 cents in the US system). Two decimals is the starting and the ending point. Once upon a time, when interest on bank deposits really existed, and the interest on my deposit was supposed to be calculated daily, I asked why I could not get the same answer as the bank. I was using what was supposed to be a legitimate financial formula. The answer was that the calculation formula was correct but it was necessary to calculate all steps to 12 decimal places to get the right answer. That turned out to work.
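The banker’s point is easy to reproduce. A sketch with assumed figures of my own choosing ($1000 at a 5% nominal rate, compounded daily for 365 days): rounding the running balance to whole cents at every step gives a different year-end balance than carrying full precision and rounding once at the end.

```python
principal = 1000.00
daily_rate = 0.05 / 365          # 5% nominal annual rate, daily compounding

full = principal                 # full floating-point precision throughout
ledger = principal               # re-rounded to whole cents at every step
for _ in range(365):
    full += full * daily_rate
    ledger = round(ledger + ledger * daily_rate, 2)

# After hundreds of rounded steps the two ledgers disagree:
print(round(full, 2), ledger)
```

Note this is a dollars-and-cents counting problem, not a measurement problem, which is precisely the distinction Tim draws below.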

Re statistics textbooks:

I have tried to get insight into the basic problem, but nothing I’ve read so far actually answers the question, at least in a straightforward way I can follow. There are plenty of discussions about ways to calculate this or that but the basic consideration of applicability is elusive. Perhaps it is necessary to put in four years of study, or eight years, or whatever, in order to have any hope of understanding the issue fully, but that is why I wrote up the example of the ocean heat content controversy.

In that, proponents showed how to get the same answer as the published paper, using methods and calculations exactly as presented in source textbooks. It was only when some rules were provided, also from standard sources, making it clear that the proponents’ methods were not applicable to that kind of data, that the controversy stopped and the paper was withdrawn from the journal in which it had been published.

The question seems easy enough but there are strident proponents on both sides. The calculation formulas used are certainly published and correctly used, at least in a mechanical way. The question of whether they are truly valid for the purpose never seems to get settled. The issue arises frequently in regard to ‘climate’ related data. Do you know of any place that makes a clear statement that differentiates the valid application of ways that can legitimately improve precision or accuracy from invalid applications, not by drawing conclusions about related aspects but directly and explicitly?

While I can understand the idea that the uncertainty of the general weather thermometer, +/-0.5C, cannot be calculated away, it also seems perhaps reasonable that many separate measurements all around the country, while not providing any greater precision or accuracy for any one location can tell something more exacting about the overall territory.

Andy,

“it also seems perhaps reasonable that many separate measurements all around the country, while not providing any greater precision or accuracy for any one location can tell something more exacting about the overall territory.”

It’s not reasonable to think so, Andy. It’s naive, at best.

The measurement uncertainty in the data is as large, or larger than, the trend. That fact makes the data physically useless for that purpose.

If the workers in the air temperature field paid attention to data quality, e.g., the size of measurement error, they’d have nothing to say about trends in the climate.

That, perhaps, explains why they are so adamant about not paying attention to proper scientific rigor. The same neglect of proper rigor is evident throughout all of consensus climatology.

No matter your desire to know, Andy, if you work with bad data you’ll only get chimerical results.

Bad data contaminated with systematic error can behave just like good data. I know that from personal experience. Serious care is required when working with measurements.

“In that experiment we are of course saying it is possible to measure to the higher precision in the site so both thermometers work to the limit of their inherent capability. ”

Again, precision of the sensor is not the same as precision of the measurement system. Argo floats use sensors capable of 0.001C precision, but the uncertainty of the float’s temperature measurement is still about +/-0.5C.

“For the case of the two thermometers, the objection about possible site limitations is quite irrelevant. It merely avoids the question, which is still open.”

The argument is not open. You keep wanting to say that precision of measurement can be increased by calculation. It can’t.

“In regard to the trend calculation, if you do the calculations to two decimal places, and record the result for each year, the trend in the numbers is right there in front of your eyes.”

This is the argument of a mathematician or computer programmer, not a physical scientist or engineer. You can’t calculate past the precision and uncertainty of the instrument. By your logic a repeating decimal would be infinitely precise and accurate. It’s an impossibility.

“The issue only has to do with that series of calculated yearly result which does have a trend … for some reason.”

You want to add significant digits to your result via calculation. It’s not possible.

“A trend of numbers in the answers from year to year exists. ”

No, it doesn’t – except in the minds of those who think significant digits are meaningless.

“You can look at it as many times as you want, or recalculate it as many times as you want, and it is still there.”

I’m sorry if it offends your sensibilities but you can’t extend precision through calculation. Trying to identify a trend in the hundredths digit when the base precision is in the tenths digit is impossible.

“It seems very unlikely that it is the uncertainty in the measurements itself that produces the trend,”

The uncertainty doesn’t produce the trend. It only identifies whether or not you can actually see the trend.

“Certainly the calculations are not measurements but it is not the case that intermediate values with only one digit more than the value’s precision are used in calculations.”

More digits are used by mathematicians and computer programmers, not by physical scientists and engineers. Just because a calculator can give you an answer out to 8 decimal places in a calculation doesn’t make it useful if the base precision is only 1 digit.

“There are many instances where floating point calculations are done on integer values because of the type of calculations, when there are a series of calculations using the output of the previous calculation as input to the next calculation. These can quickly lead to overrunning the bounds of the data if done otherwise, leading to more error than necessary. The final result should always be rounded back to the integer.”

Again, this is the viewpoint of a mathematician or computer programmer. It simply does no good to go more than one digit past the base precision when you are doing intermediate calculations. The uncertainty of the first number can’t be changed. As Pat Frank pointed out, every time you use an uncertain answer as the input to another calculation the uncertainty grows. Iteration is no different than averaging independent measurements as far as uncertainty goes.

“One example is the output of analogue to digital sampling. ”

First, converting from analogue to digital is *NOT* averaging independent measurements. It’s a false analogy. Second, no bit depth can ever recreate the *exact* analogue signal. You can get closer and closer with more bits but you will never get there.

“The answer was that the calculation formula was correct but it was necessary to calculate all steps to 12 decimal places to get the right answer. That turned out to work.”

If that is true then they had to be using fractional interest rates, e.g. 6 7/8 percent, that resulted in something like what I mentioned, repeating decimals. That simply doesn’t apply to precision of physical measurements and the uncertainty of such.

“I have tried to get an insight to the basic problem but nothing I’ve read so far actually answers the question,”

Then you haven’t read Dr. Taylor’s textbook or looked at the various links I have provided, e.g. to the NIST web site and the GUM.

“The question seems easy enough but there are strident proponents on both sides. ”

Both sides? If they are against using systematic uncertainty in the combining of independent measurements then they are as anti-science as it is possible to get. I was taught this in my physics and electrical engineering labs back in the ’60s. It’s not like it hasn’t been around forever.

“Do you know of any place that makes a clear statement that differentiates the valid application of ways that can legitimate improve precision or accuracy from invalid applications”

The answer is that you CANNOT improve the precision of a measurement device except by using a better device. Perhaps by using a micrometer that clicks when enough force is applied to the object being measured, instead of an older type of device where the force applied has to be manually estimated. But you still can’t calculate more precision than the instrument provides. Neither can you calculate away uncertainty. Uncertainty only grows when adding independent populations, e.g. temperature measurements.

“while not providing any greater precision or accuracy for any one location can tell something more exacting about the overall territory.”

Sorry. It can’t. It truly is that simple. You can’t calculate away uncertainty and you can’t calculate in more precision. Even averaging a daily temp for the same site using a max measurement and a min measurement increases uncertainty about the average, because you aren’t measuring the same thing. Sqrt(2) * 0.5 = +/- 0.707. You can’t calculate that away and you can’t increase the precision no matter what calculations you do. And it doesn’t get better by using more independent measurements from more sites.

You’ve explained with clarity until you’re blue in the face, Tim. So has Jim.

So has everyone who understands the difference between precision and accuracy, and who knows the importance of calibration and uncertainty.

College freshmen in Physics, Engineering, and Chemistry come to understand the concepts without much difficulty.

And yet, it remains opaque to all these intelligent folks, many with Ph.D.s, who thrash about, keep a furious grip on their mistaken views, and will not accept the verdict of rigor no matter the clarity or cogency of description, or the authoritative sources cited.

They’ll never accept the verdict of good science, Tim. Because doing so means their entire story evaporates away, leaving them with nothing. Nothing to say. No life filled with meaning.

No noticeable CO2 effect on climate. No whoop-de-doo 20th century unprecedented warming trend. No paleo-temperature narrative. Nothing.

They’d have to repudiate 30 years of pseudo-science, admit they’ve been foolishly hoodwinked, that their righteous passions have been expended on nothing, and then go and find new jobs.

No way they’ll embrace that fate. Their choice is between principled integrity plus career crash, or willful ignorance plus retained employment.

Commenter angech is the only person I’ve encountered here who showed the integrity of a good scientist/engineer by a change of mind.

The reason to post the correction remains: for the undecided readers, and for people looking for an argument from integrity.

But consensus climate scientists, and those committed to Critical Global Warming Theory, will never, ever, move. Their very lives depend on remaining ignorant.

I don’t believe using thermometers accurate to only whole numbers of degrees can give true information beyond the whole number. A change of at least 1C is necessary to detect any change. The uncertainty cannot be overcome by computing to a greater precision result, whether 1, 2, or 50 decimal places. I’m not trying to argue otherwise.

The fact of that uncertainty does not prevent someone, anyone, from making multi-decimal-place calculations with the data, however. The extra-precision part of the answer doesn’t mean anything; it should be rounded away. I don’t disagree.

All the temperature-keeping agencies do calculations that produce results to two-decimal-place precision. They keep tables and graphs to display their results. Their claim is, I believe, that the average ‘global temperature’ has risen by somewhere around 0.70C since 1980, based on data that has certainty only to 1C.

Even if, in fact, the global average, whether or not a global average has any real meaning, has risen by 0.7C, their data and calculations cannot detect it. Therefore, why don’t their calculated anomalies go up and down from year to year without going anywhere? That is the question I’ve been trying to ask.

It’s OK to say ‘I have no idea’ if that is the case, but to say the calculations don’t come out the way they do is another thing entirely. The data is available, the calculations can be done in the same way by anyone who can access the data and understand the process, and then the results can be compared against the agencies’ output.

Since non-agency people are not reporting different results from that process, the most probable truth is that calculations to multiple decimal points do show a (meaningless) rise of 0.70C (or whatever amount they are claiming). Why does their anomaly rise most of the time rather than fall or, more reasonably, stay level over the years?

That is the whole of the matter. I am not trying to claim validity for such calculations, I am asking why the consistency?

******

“For the case of the two thermometers, the objection about possible site limitations is quite irrelevant. It merely avoids the question, which is still open.”

“The argument is not open. You keep wanting to say that precision of measurement can be increased by calculation. It can’t. “

The question I ask here was

“Does this make the difference, in this circumstance, for any particular measurement, an error of the standard thermometer or a measure of its particular uncertainty?”

By difference I mean the difference in readings between those two thermometers, and only those two thermometers. It is not about averaging with other sites or averaging multiple readings. It is only about these two thermometers at one site, for each reading. While a bit off the main topic, it still seems an interesting question.

*******

In regard to the thermometers themselves, based on one of Pat Frank’s responses, I think my ignorance may be greater than I guessed. I know that with the older analogue thermometers, it is possible to interpolate between the markings and attempt to make a more precise reading (I did not write “accurate”).

However, I thought that the more modern electronic thermometers with +/-0.5 uncertainty were like some thermometers I have. The house thermostat thermometer, for instance, displays to the nearest whole degree. That means no matter how accurately the sensor is calibrated, the uncertainty of any reading is +/-0.5.

What does the recorded output from these electronically recorded weather station thermometers look like? Is it a whole number, or are one or more decimal places recorded (regardless of their accuracy)?

“Therefore, why don’t their calculated anomalies go up and down from year to year without going anywhere? That is the question I’ve been trying to ask.”

If the anomaly is within the uncertainty range how do you know what the anomaly is doing?

“Why does their anomaly rise most of the time rather than fall or, more reasonably, stay level over the years?”

Lots of people ask that question. Many who have looked at the raw data say it doesn’t just go up and up. In fact, depending on the interval looked at, it probably goes down.

“By difference I mean the difference in readings between those two thermometers, and only those two thermometers.”

You used an uncertainty interval for one thermometer and the sensor resolution for the other. Two different things. What is the uncertainty range for the thermometer with the sensor having a 0.001C resolution?

“What does the recorded output from these electronically recorded weather station thermometers, look like?”

I just dumped the Nov 2020 data for the GHCN station CONCORDIA ASOS, KS US, WBAN: 72458013984 (KCNK) using the PDF option.

The temperature data is recorded in that NOAA database to the units digit. No decimal places at all.

Here is a record from a GHCN file:

USC00029359201907TMAX 300 H 294 H 283 H 278 H 278 H 283 H 283 H 272 H 289 H 311 H 311 H 300 H-9999 -9999 317 H 317 H 317 H 311 H 311 H 306 H 317 H 311 H 289 H 267 H 267 H 289 H 300 H 311 H 311 H 300 H-9999

It is showing TMAX from July, 2019. Temps are in tenths of C. 300 = 30.0 degC
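For anyone who wants to decode such lines themselves, here is a minimal parser sketch. It assumes the fixed-width layout of the official GHCN-Daily `.dly` files (station ID in columns 1–11, year, month, a 4-character element code, then 31 slots of a 5-character value plus 3 flag characters per day); the spacing in the record pasted above has been mangled by the blog, so this works on the original file, not on the comment text.

```python
def parse_ghcn_daily(line):
    """Parse one GHCN-Daily (.dly) record into (station, year, month,
    element, values). TMAX/TMIN values are tenths of a degree C;
    -9999 marks a missing day and becomes None."""
    station = line[0:11]
    year = int(line[11:15])
    month = int(line[15:17])
    element = line[17:21]
    values = []
    for day in range(31):
        start = 21 + day * 8          # 5-char value + 3 flag chars per day
        raw = int(line[start:start + 5])
        values.append(None if raw == -9999 else raw / 10.0)
    return station, year, month, element, values
```

Fed the July 2019 TMAX record above, day 1 comes back as 30.0 degC and the -9999 slots come back as None.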

Andy,

here is some info I found on ASOS automated stations:

“Once each minute the ACU calculates the 5-minute average ambient temperature and dew point temperature from the 1-minute average observations (provided at least 4 valid 1-minute averages are available). These 5-minute averages are rounded to the nearest degree Fahrenheit, converted to the nearest 0.1 degree Celsius, and reported once each minute as the 5-minute average ambient and dew point temperatures. All mid-point temperature values are rounded up (e.g., +3.5°F rounds up to +4.0°F; -3.5°F rounds up to -3.0°F; while -3.6°F rounds to -4.0°F).”

FMH-1 (the Federal Meteorological Handbook) specifies a +/- 0.6degC accuracy for -50degF to 122degF, with a resolution to the tenth of a degree C, for temperature sensors (read: the temperature measuring system, not the actual sensor itself).

It’s difficult to find actual uncertainty intervals for any specific measuring device but it *will* be higher than the sensor resolution itself.

I forgot to mention that if you are rounding the F temp to the nearest degree and then calculating the C temperature to the tenth, you are violating significant-digit rules at the very beginning of the process.

50F = 10C

51F = 10.6C

52F = 11.1C

Think about it. If the uncertainty of the temperature is +/- 0.6degC then 10C could actually be between 9.4degC and 10.6degC. Similarly 10.6degC could be between 10degC and 11.2degC. You’ve already started off calculating more precision than the uncertainty allows!
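The round trip described above can be sketched directly. This is my own illustration of the ASOS chain as I read the quoted passage; note Python’s round() does banker’s rounding at exact .5 ties rather than the ASOS round-half-up rule, which doesn’t matter for these inputs:

```python
def reported_c(temp_f):
    """Round a Fahrenheit reading to a whole degree, then convert and
    report to the nearest tenth of a degree C, as described above."""
    whole_f = round(temp_f)
    return round((whole_f - 32) * 5.0 / 9.0, 1)

# Whole-degree F steps produce 0.5-0.6 C jumps, yet the report carries
# a 0.1 C resolution the measurement never had:
print(reported_c(50.0), reported_c(51.0), reported_c(52.0))   # 10.0 10.6 11.1
```

The three outputs reproduce the 50F/51F/52F table above: adjacent reports can never differ by less than about half a degree C, whatever the printed decimal suggests.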

It’s how uncertainty simply never gets considered in so much of the stuff we take for granted!

Tim,

Thank you for finding the information on recorded values from different kinds of sites. I’m going to have to figure out what some of those abbreviations and numbers mean, but you have provided useful information from which to start to get a better understanding. The rules for rounding up at the ASOS automated stations leave too much to the imagination. They may imply that .1 through .4 round down, but don’t actually say so. Also, the GHCN values clearly are to a tenth of a degree, even though the decimal point isn’t printed. Of course what you wrote about the digits is important to know in order to understand the values.

Tim wrote:

“Therefore, why don’t their calculated anomalies go up and down from year to year without going anywhere? That is the question I’ve been trying to ask.”

“If the anomaly is within the uncertainty range how do you know what the anomaly is doing?”

Now you make it obvious that you are just being obstinate, perhaps to mock me, perhaps because you don’t want to admit that you don’t have any idea WHY there is a trend. I can’t believe that I’ve been so unclear that someone’s view filter prevents understanding that I’m talking about something, however controversial, that is widely done, recorded, and available, regardless of whether or not it fits certain rules he believes should apply. Those calculated results are what the agencies use to proclaim “hottest’ whatever period they want people to believe is going to soon cook them. I don’t think they just make the values up on the instant. They calculate them.

I thought we were talking about the global yearly average, and aware that, as stipulated earlier, it sometimes goes up, it sometimes goes down, but on the whole has been going up from year to year (in the second decimal place). Yes, I mentioned 10,000 thermometers, as a hypothetical, and that number is published as now being used in the US, but I tried to be clear that the domain was not restricted. Any selected location or limited region might indeed be going down overall, or not varying, without changing the direction of the upward trend of the global average.

I offer the tentative hypothesis that the controversy over degree of precision for a global average exist due to a difference between measuring and counting. One side quotes rules for combining multiple measures, the other side quotes rules for calculations about sampling. Both sides have standard textbooks from which they take their rules but, apparently, the sources don’t make a clear enough assertion about how rules apply in enough circumstances.

While it isn’t beyond the bounds of possibility that the major agencies all know they are doing something wrong but intend to deceive, it is more likely that most just use the rules they believe are correct for the data. Almost everyone else just uses the results as presented, without questioning. On the other hand, politicians are simply parasites masquerading as human beings and one can’t expect proper human behavior from them.

For instance, if we can agree there is such a thing as an average temperature of the planet’s land surface, regardless of whether or not it means anything, that planet average becomes more fully known by sampling more widely over the entire globe. If multiple values can exist over multiple locations, as they do for temperatures, especially if the variations cannot be ascribed to a simple rule, as they cannot for temperatures, then more samples are more likely to provide more meaningful information. For instance, if it were possible to simultaneously measure the temperature within each square foot of the entire globe’s land area, it should be possible to derive a much better numerical average than from one or two measures from one or two randomly selected spots.

My tentative hypothesis says one side of the controversy uses the rules of sampling statistics to calculate a very precise value, the other side says that no matter how many conditions apply, the calculation precision must be limited to that of the least accurate individual measure.

Tim wrote:

“You used an uncertainty interval for one thermometer and the sensor resolution for the other. Two different things. What is the uncertainty range for the thermometer with the sensor having a 0.001C resolution?”

More refusing to address the question with made up excuses. I did not specify two different qualities of thermometers, I specified two thermometers of differing accuracy and precision.

“Standard” weather thermometers are widely used. What they are and what they do is known. USCRN thermometers are used in their special sites and what they do is known. Properly place a standard thermometer in the same site with a USCRN thermometer and set it to record at the same time as the high precision, high accuracy USCRN thermometer. There is no reason to be befuddled about what the two thermometers are doing.

“Now you make it obvious that you are just being obstinate, perhaps to mock me, perhaps because you don’t want to admit that you don’t have any idea WHY there is a trend.”

No mocking. In order to discern a trend you must know the true value or see an increase/decrease outside the uncertainty interval. It is unknown if the stated measurement is the true value. Since the uncertainty interval is as wide on the negative side as it is on the positive side, how do you know whether the true value is less or more than the stated value? If one reading is 50deg +/- 0.5deg and the next is 50.1deg +/- 0.5deg then how do you know if there is a true trend upward or not? If the first reading is 50deg +/- 0.5deg and the next is 49.9deg +/- 0.5deg then how do you know if there is a negative trend or not? The only valid statement you can make is to say you simply don’t know if there is a trend, either up or down.

If you get a series of measurements, 50-50.1-50.2-50.3-50.4 each with +/- 0.5 uncertainty then the climate scientists of today will say there is a trend upward when in actuality you simply don’t know.
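The overlap rule described above can be sketched in a few lines of Python; the series and the +/- 0.5 interval are taken from the example in this comment, and the helper name is just illustrative.

```python
# Illustrative sketch: on the rule argued here, two readings are
# distinguishable only if their uncertainty intervals [a-u, a+u] and
# [b-u, b+u] do not overlap.
def distinguishable(a, b, u):
    return abs(a - b) > 2 * u

readings = [50.0, 50.1, 50.2, 50.3, 50.4]  # series from the example, each +/- 0.5
u = 0.5

# First versus last reading: 0.4 apart, but the intervals together span
# 1.0 of possible true values, so no trend can be asserted on this rule.
print(distinguishable(readings[0], readings[-1], u))  # False
```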

I know that is hard to accept but it *is* how metrology actually should work for physical scientists and engineers.

“Those calculated results are what the agencies use to proclaim “hottest’ whatever period they want people to believe is going to soon cook them. I don’t think they just make the values up on the instant. They calculate them.”

You simply can’t calculate greater precision or calculate away uncertainty. There is no “making it up” involved – other than ignoring the precepts of metrology.

“that planet average becomes more fully known by sampling more widely over the entire globe.”

You keep ignoring the fact that if the planetary averages are always within the interval of uncertainty then how do you discern if there is a change in the planetary average? If you can’t discern whether there is a change or not then you can do all the sampling you want and it won’t help. Sample size only applies if you are trying to minimize ERROR when measuring the same thing multiple times using the same measurement device. Your sample size in the case of global temperature is ONE. Each measurement is independent with a population size of ONE. You are combining multiple populations, each with a sample size of one. You do that using root sum square for uncertainty as indicated in *all* the literature.

“One side quotes rules for combining multiple measures, the other side quotes rules for calculations about sampling.”

Those who say statistics can calculate away uncertainty are *NOT* physical scientists or engineers and are ignoring *all* the literature on uncertainty. They are primarily mathematicians and computer programmers who believe that a repeating decimal is infinitely precise!

” then more samples are more likely to provide more meaningful information. For instance, if it were possible to simultaneously measure the temperature within each square foot of the entire globe’s land area, it should be possible to derive a much better numerical average than from one or two measures from one or two randomly selected spots.”

Nope. Each measurement is an independent population of size one with its own uncertainty. When you combine independent populations the uncertainty adds as root sum square. Combining more independent populations only grows the uncertainty. Forget sample size. The only sample size you get for each population is ONE.

“My tentative hypothesis says one side of the controversy uses the rules of sampling statistics to calculate a very precise value, the other side says that no matter how many conditions apply, the calculation precision must be limited to that of the least accurate individual measure.”

When the population size is one there are no sampling statistics that apply. Remember, you can have precision on a device out to the thousandths digit, but if the uncertainty in that measurement is in the tenths, e.g. +/- 0.1, then it doesn’t matter how precise your measurement is. Precision and accuracy are not the same thing. There *is* a reason why physical scientists and engineers go by the rule that you can’t have more significant digits than the uncertainty. It’s because you simply don’t know if the precision gives you any more of an accurate answer than a less precise device.

“More refusing to address the question with made up excuses. I did not specify two different qualities of thermometers, I specified two thermometers of differing accuracy and precision.”

No, you gave the uncertainty of one device and the precision of the other. What is the uncertainty associated with the thermometer that reads out to the thousandths digit?

“Properly place a standard thermometer in the same site with a USCRN thermometer and set it to record at the same time as the high precision, high accuracy USCRN thermometer. There is no reason to be befuddled about what the two thermometers are doing.”

The federal government says the thermometers only need to be accurate to +/- 0.6degC. They can read out in thousandths but it doesn’t make them more accurate. You keep confusing precision and accuracy.

from the NOAA site:

“Each USCRN station has three thermometers which report independent temperature measurements each hour. These three observed temperature value are used to derive a single official USCRN temperature value for the hour. This single value is sometimes a median and sometimes an average of various combinations of the three observed values, depending on information about which instruments agree in a pairwise comparison within 0.3°C”

So it would seem that at least a +/- 0.3C uncertainty interval would apply here. That’s better than +/- 0.5degC – but when combining the independent readings of multiple stations that +/- 0.3C grows by root sum square. Combine 100 of these stations and you get sqrt(100) * 0.3 = +/- 3degC as a combined uncertainty interval. So how do you identify a 0.1 degree trend in a 3deg uncertainty?
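As a sketch of the arithmetic in this comment (the commenter’s root-sum-square rule applied to the combined per-station intervals, not a standard error of the mean):

```python
import math

# Root-sum-square combination as described above: N independent intervals,
# each +/- u, combine to sqrt(N) * u.
u_station = 0.3   # degC, per-station interval from the NOAA pairwise check
n_stations = 100

u_combined = math.sqrt(n_stations * u_station**2)
print(round(u_combined, 1))  # sqrt(100) * 0.3 = 3.0 degC
```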

“No mocking. In order to discern a trend you must know the true value or see an increase/decrease outside the uncertainty interval. It is unknown if the stated measurement is the true value.”

I never said the trend was meaningful. You can’t mistake that. They calculate, they get a value that increases most years over the previous years. You can’t not understand that. Therefore you are “mocking me” by selectively taking pieces of what I write and pretending I meant something else.

This is the same tactic as alarmists who refuse to acknowledge data that does not agree with their proclamations. Or those clergy that refused to look through Galileo’s telescopes because they knew the truth, which was something different than what Galileo, and others, said was there to be seen. – This comparison is not to conflate the trend calculated from temperature measurements of too low certainty with the new universal truths Galileo discovered, only with the methods used to deny.

“Your sample size in the case of global temperature is ONE. Each measurement is independent with a population size of ONE.”

No, there are many thousands of daily surface samples. From the satellites there are millions of daily samples. They exist. So, pretend that “samples” is not an appropriate label and say “measurements”. Doesn’t make any difference. No average can be calculated from only one sample. Averages, and variances, etc. can easily be calculated from multiple samples. That fact is completely independent of either the accuracy of the calculated average (low) or the meaning of the calculated average (probably none in terms of anything else). If the average is only valid to the whole number, it is still the average of many measurements made over a wide temperature range. You keep trying to turn it into such a silly argument by pretending I can’t understand anything.

“No, you gave the uncertainty of one device and the precision of the other. What is the uncertainty associated with the thermometer that reads out to the thousandths digit?”

You make up specious arguments about what some thermometers can and cannot provide and pretend not to understand that the question is about two thermometers with different accuracy, one of which provides significantly more accurate and precise measurements than the other. NO, that is not a misunderstanding on my part of the difference between accuracy and precision; it is a stated condition of the question I posed.

You throw in spurious statements about some particular other thermometers, pretending that it isn’t possible to get better information from some thermometers than from others. If you believe the USCRN stations are simply a fraud and there is always only one degree of accuracy and certainty possible, no matter what instruments people make, or how they use them, why not just say so and be done with it?

You are insisting on some very narrow definitions, perhaps because of concentrating on uncertainty, and pretending the general definitions don’t exist, when they do, regardless of whether uncertainty is high or low. You have to understand by now, and perhaps from the very beginning, that I am not arguing against the rules governing uncertainty. Perhaps that seems fun but the fun is running dry. Thank you for engaging but it seems impossible to get any further under these conditions. We are running around the same circle again and again.

“They calculate, they get a value that increases most years over the previous years.”

Which only matters if they can actually see the increase. If the increase is within the uncertainty interval then all their calculations are moot. It is a mathematical figment of the imagination, it is a phantom, a chimera. If they would actually pay attention to the uncertainty of their base data then this would be obvious. But they have to ignore uncertainty in order to actually have something on which to base their claims.

Suppose you go out and trap a bunch of muskrats so you can measure their tails and get an average value for all muskrats around the world. All you have is a ruler marked in inches. But you estimate down to the tenth of an inch and come up with an average value in the hundredths. Then you do the same thing next year and get an answer 1 hundredth longer than last year. Then the third year you get an answer .01 inch longer! Should we believe you have actually identified a trend in the length of muskrat tails? Or is the uncertainty of your answer out to the hundredths digit too large to be sure?

“No, there are many thousands of daily surface samples.”

Each one is an independent population with a population size of one. You are measuring different things using different devices in each case. The central limit theorem only applies with multiple samples of the same thing using the same measuring device. In that case the measurements are all correlated and make up a probability distribution that can be subjected to statistical analysis. That simply isn’t true of a thousand different independent measurements.

” No average can be calculated from only one sample. Averages, and variances, etc. can easily be calculated from multiple samples.”

Now you are getting to the crux of the issue. You are correct: you can’t calculate an average from one sample, nor can you define a probability distribution for a population of one. Averages and variances *ONLY* apply to a population having more than one member, which defines a probability distribution.

1. There is no probability distribution for uncertainty. 2. Go look up how you combine independent populations, each with an uncertainty interval associated with it.

“If the average is only valid to the whole number, it is still the average of many measurements made over a wide temperature range. You keep trying to turn it into such a silly argument by pretending I can’t understand anything.”

You are the same as so many others. You keep wanting to define uncertainty as ERROR. That lets you try to turn independent measurements of different things into an ERROR issue with a defined probability distribution. And then the central limit theorem can be used to more accurately calculate the mean, which is then to be read as the TRUE VALUE.

“…pretend not to understand that the question is about two thermometers with different accuracy, one of which provides significantly more accurate and precise measurements than the other.”

Can you show me a measuring station today that has an accuracy of +/- .001deg? I can show you sensors that can read out to that precision but that is not a statement of the uncertainty associated with the measurement device itself. As I pointed out, even the ARGO floats, with a sensor that can read out to .001deg, are widely accepted as having a +/- 0.5deg uncertainty! If you can’t show me a station with a +/- .001 uncertainty then your hypothetical is meaningless in the real world.

“If you believe the USCRN stations are simply a fraud”

I don’t believe they are a fraud. But even the federal government expects no more than +/- 0.6degC accuracy from them. Do you deny that? That uncertainty has to be taken into account into *anything* that uses that data – and it is *NOT* taken into account by most climate scientists.

“pretending the general definitions don’t exist,”

The only one pretending here is you. You are pretending that you can treat independent measurements of different things using different devices in the same manner as multiple measurements of the same thing using the same measurement device. Not only is a global average meaningless in and of itself, trying to identify one down to the hundredths digit when the uncertainty interval that goes along with that average is so much wider than that makes it a meaningless exercise.

The fallacy of doing that was taught to me in my physics and EE labs 50 years ago. Somewhere along the line that apparently stopped being taught.

Tim,

Posting to agree.

I’ve stopped responding to the “uncertainty” champions here for the moment, but I’m still reading, and am still stronger than ever in support of “inspection of raw direct measurement’s organic trend over a long period” trumps “fuss and obsess over accuracy to get averages.”

wl-s, “[I] am still stronger than ever in support of ‘inspection of raw direct measurement’s organic trend over a long period’ trumps ‘fuss and obsess over accuracy to get averages.’”

Then you’re fated to talk nonsense, wl-s.

But, after all, that’s what all of consensus climate so-called science has been doing for 32 years: talking nonsense.

You’re in common company.

Analyzing raw data for a single station is far better than trying to do it for multiple stations. But you must still consider the uncertainty associated with doing so. You can’t average today’s max temp with tomorrow’s max temp without considering the uncertainty of both. You can’t say tomorrow is warmer than today unless tomorrow’s temperature has an uncertainty interval that does not overlap today’s uncertainty interval. If the uncertainty intervals overlap at all then you simply don’t know if tomorrow is warmer than today.

If you are talking about a federal measurement station with a +/- 0.6C uncertainty you are talking about close to +/- 1 degF.

A reading of 90degF +/- 1 degF would span the interval of 89degF to 91degF. So if August 15, 2019 had a temp of 90degF +/- 1degF then the August 15, 2020 temp would have to be more than 92degF +/- 1degF to know if the 2020 date was hotter than the 2019 date.

When you graph the max temp per year, per month, or per day you must include the uncertainty bar with each data point. If the uncertainty bars overlap at all then how do you know which year was warmer? How do you discern a trend?

If 2015 max temp was 100degF +/- 1degF (99F to 101F) and for 2020 was 98degF +/- 1degF (97F to 99F) then you could be sure you were seeing a difference (but just barely). Reading temperatures to the tenth of a degree really doesn’t make any difference if the uncertainty interval is +/- 1degF. You still have to be outside the +/- 1degF interval to discern a difference.
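A minimal check of the worked comparison above; the function name is just for illustration.

```python
# A 2019 reading of 90 +/- 1 degF spans 89-91 degF, so a 2020 reading is
# only clearly warmer once its lower bound clears the 2019 upper bound.
def clearly_warmer(new, old, u):
    return (new - u) > (old + u)

print(clearly_warmer(92.5, 90.0, 1.0))  # 91.5 > 91.0 -> True
print(clearly_warmer(91.5, 90.0, 1.0))  # 90.5 > 91.0 -> False
```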

@Tim Gorman

I withdraw my agreement. You are afflicted with the “AverageItWithModel” fallacy, same as Andy and Pat.

Then, you show yourselves completely blind to the one attack that can wreck the consensus. What side are you on?

@Pat Frank

My questions to all three of you. Without stipulating one ounce of your failed logic, Please answer this:

How did you arrive at the margin of error? What is your certainty on the uncertainty?

The three of you play into the hands of TheConsensus by adopting their method, while attempting to defrag its results. This leaves their premise intact. Nice.

wls,

My calculations are done using the data in the Federal Meteorological Handbook No. 1, Appendix C, showing the acceptable uncertainty interval for meteorological measuring stations. Even the supposed high-quality stations are only required to meet that uncertainty interval, +/- 0.6degC, to be certified.

Section 10.4.2 states: “Temperature shall be determined to the nearest tenth of a degree Celsius at all stations.”

10.5.1 Resolution for Temperature and Dew Point states:

“The reporting resolution for the temperature and the dew point in the body of the report shall be whole degrees Celsius. The reporting resolution for the temperature and dew point in the remarks section of the report shall be to the nearest tenth of a degree Celsius. Dew point shall be calculated with respect to water at all temperatures.”

Appendix C, section 2.5 is where the standards for temperature can be found.

As Pat has pointed out there are all kinds of uncertainty associated with any measuring stations. This would include obstructions in the aspiration fan filter, dirt and grime buildup on the sensor itself preventing the aspirated air from properly encountering the sensor itself, even a mud dauber building a nest in the louvers of the station can affect the uncertainty interval of the station. Ice and snow buildup can do the same. FMH-1 only requires annual inspection and re-calibration. Lots can happen in a year.

Error is not uncertainty. Uncertainty is not error. Resolution is not accuracy. Accuracy is not resolution. Significant digits always apply with any measurement.

Learn those few rules and you are on your way to understanding why even averaging something as simple as maximum and minimum temperatures has a larger uncertainty interval than either temperature by itself.

wl-s, I arrived at my per-site uncertainty by use of the comprehensive field calibrations of Hubbard & Lin (2002), “Realtime data filtering models for air temperature measurements,” Geophys. Res. Lett., 29(10): 1425; doi: 10.1029/2001GL013191.

My first paper is published here, second one here. You can find there the whole analytical ball of wax.

All your air temperature trend lines have ±0.5 C uncertainty bars around every single air temperature point, wl-s. Your dismissals notwithstanding.

None of my work involves averaging with models, wl-s. It’s all empirical.

You have given no evidence of understanding the systematic error that corrodes the accuracy of USHCN temperature sensors.

How is it that you, who claim a working knowledge of meteorological stations, apparently know nothing of the impacts of insufficient wind speed and shield heating from solar irradiance on the accuracy of air temperature readings?

It’s basic to the job, and you dismiss it in ignorance.

Patrick,

There are 1200 stations in RAW USHCN. You contend all 50 million recordings of TMAX they have made over 120+ years contain “systematic error.”

You cite an AVERAGE of uncertainty. Some of the stations were way off kilter and some spot on within a tiny tolerance.

Please provide me with a list of 100 stations that were calibrated by some mechanism that you champion and found to have an error 10x less than that pulled from Hubbard. I will graph the recordings from those 100, and make a video showing the sine curve of each, one by one.

Since the above is a joke — you will never do that — how about sending me a list of four stations that have a spectacularly low incidence of uncertainty. Or even one station.

Sorry, I meant to reply to Pat Frank, not “Patrick.”

Provide a list of stations, wl-s, that did not employ Stevenson screens (1850-1980) or MMTS shields (1980 and thereafter) and that were not exposed to sun and wind.

The sensors enclosed in all screens exposed to wind and sun are subject to the measurement errors Hubbard and Lin found to be caused by wind speed and irradiance.

The calibration experiments exposed the problem, wl-s. The problem exists and corrupts measurements wherever a field station is sited.

The cold waters of science, wl-s. Denial of the reality of measurement error is to sit in a warm little pond of fantasy.

In that event, you get to play with numbers but without any larger meaning. Another description of the mud slough that is consensus climatology.

wl-s, “You contend all 50 million recordings of TMAX they have made over 120+ years contain ‘systematic error.’”

Yes, I do. It’s a conclusion forced by calibration experiments conducted to test the effects of wind speed and irradiance on the accuracy of temperature sensor measurements.

Wait til I publish my next paper on sensor measurement error, wl-s. The benthos will gape below you.

The work is done. I hope to get to write it up next year, after other present projects are concluded.

The only thing you are proving is that the 50-million recordings don’t agree with your MegaStations. You are basically riding the coattails of the scandal revealed by Anthony Watts. Nice that you confirm the issues with some of the stations. This approach, however, does not negate the import of a consistent 120-year record for a station, which, even though possibly “inaccurate,” still reveals the curving organic trend and would reveal abnormal warming if there were any.

And by obsessing like a religion on the uncertainty, and finding the average, you play into the hands of Alarmists. They are on board with voiding USHCN and GHCN RAW and instead valorizing their models of temp.

The USCRN-type MegaStations are useless in looking at the question “is there any abnormal warming” since there are only 140 of them, recording for a decade or so on average. They will be wonderful in 120 years, after two cycles of the sine curve have been observed by them in direct measurement. Please post back then.

Meanwhile, you evade and denounce the fact that 1200 long observations ought to show, one by one, abnormal warming, if there is any. They do not, on individual examination and in amalgam. That is the actual refutation of Alarm. Not your pointless obsession with wind screens and shields, and “averaging.”

wl-s, “You are basically riding the coattails of the scandal revealed by Anthony Watts.”

Wrong again, wl-s. Anthony’s work involved siting issues. My work is completely independent.

The instruments Hubbard and Lin used for their calibration experiments were perfectly sited and maintained. Wind speed and irradiance were the only major sources of measurement error.

Their work and mine are experimentally orthogonal to Anthony’s survey. Nothing in my results depends in any way on anything Anthony has done.

You clearly haven’t read their work. You haven’t read mine. You don’t understand either one. And you don’t know what you’re talking about.

The two are related: “Data from USHCN is botched because of XXX reasons.” You both say that. You’d know that both are subsumed under my claim, if you thought abstractly, in essentials. You’ve shown monumental failure to think that way. You can take comfort that your blindness to that modality has shot down my coattails claim … in your rigid gaze.

I don’t care about “your” work or the claim of a rock-solid, certain, immutable, all-encompassing uncertainty number Hubbard came up with.

The basis of my claim you have sidestepped, only attacking it from your fallacious worldview. That is weak. A good intellect in contention who cannot reflect back the logic of the opponent – from within the opponent’s worldview – prior to refuting it, signals an empty ammunition box.

For any others still reading:

USHCN RAW contains data from 1200 stations. Each recorded with sufficient consistency per tolerance, even if “off” by a constant error. Even if there is a tiny “ding” when they bought new gages in 1962. Even if a new parking lot was built next to the station in 1988. Even if there was never a shield around the gages (as long as there was consistently no shield), consistency was not damaged. The power of so many direct recordings, relentlessly amassed, makes the ‘problems’ at the stations irrelevant. The organic flow of TMAX graphed over 40,000 recordings at the station through 120+ years is a rock-solid signal. We have no other rock, anywhere.

When observing the 1200 curves, one by one, a clear picture of nature’s impulse builds. There is a 60-year cycle to the sine wave. We are on the downslope of a wave now, nearly at the bottom. That is not abnormal. Neither was the upslope from the 1970s. However, MannHansen rode the wave up, and shouted that it would shoot to the sky and burn up the earth. Not only did that not happen, TMAX at nearly all the stations began descending, and has continued to descend. Alarmists have stonewalled the downtrend.

That is what the hard data says.

wl-s, “Each recorded with sufficient consistency per tolerance, even if ‘off’ by a constant error.”

Not a constant error, wl-s. A deterministic and variable error. Produced by uncontrolled environmental conditions. Non-random error. Error unknown in magnitude and sign.

Error infesting the entire air temperature measurement record. Error you can’t bring yourself to investigate.

Your sine waves travel through the mean of physically ambiguous points.

You have hard numbers. To you they look like hard data. They’re not.

As to so-called climate change, my paper on the uncertainty in air temperature projections, recently updated to include the CMIP6 climate models, demonstrates without doubt that climate models have no predictive or diagnostic value and that none of those people — Mann, Hansen, the entire IPCC — know what they are talking about.

The entire CO2 emissions charade: disproved.

” The power of so many direct recordings, relentlessly ammased, makes the ‘problems’ at the stations irrelevant. The organic flow of TMAX graphed over 40,000 recordings at the station through 120+ years is a rock-solid signal. We have no other rock, anywhere.”

If the signal is within the uncertainty interval then no trend can be established because you never know what the “true value” of any specific data point is.

Pat Frank,

Okay, truce. You are defragging the Alarmists your way, through their methods.

I am approaching it by rejecting their methodology out of hand.

I still dislike that you need to make me wrong out of hand, instead of saying “oh, I see what you did there,” and then acknowledging it or refuting it on its own merits.

There can be an “and,” you know.

wl-s, agreed.

However, I do not feel a need to make you wrong. I do see what you did. If the measurement data were more accurate, your approach would be fine.

However, the measurement data are not accurate enough to support the method you’ve chosen.

It’s not that the data are wrong. It’s that we don’t know they are right. The difference is between error (which we do not know) and uncertainty (which we do know).

I’m a long time physical methods experimental chemist, wl-s. I struggle with systematic measurement error and data accuracy regularly. I work from that perspective. The air temperature record is not fit for climatology.

Pat Frank,

I accept your agreement of truce, even though you accompany it with a rejecting attack, and a denial that you need to make me wrong while doing it.

Here’s my comeback: you are completely wrong that your points (from your POV) invalidate my claim.

You also threw your credentials. “…long time physical methods experimental chemist …”

Here’s mine: for 35 years I have written software for, serviced, and validated a Quality Assurance department in a subcontractor in the Aerospace industry. You know, where tolerances start at ten-thousandths?

I respect precision, quite possibly on a scale that would leave you breathless. I contend with the prospect of AOG over my shoulder every day. That stands for “Aircraft On Ground,” a red-flag designation for remedial action the seriousness of which would blow your hair back.

That does not change my assessment from looking at 1200 sine curves of TMAX: the curving trends are not invalidated by the tolerance needed to answer “Does any abnormal warming appear on any of the waves?”

Do what you like, wl-s. The issue is not precision. The issue is accuracy.

I didn’t throw my credentials, as you say. I referenced my experience working with measurement data; making measurements and evaluating them. Accuracy is key. The air temperature record does not have it.

You write of “abnormal warming.” How do you, or anyone else, know what is abnormal, with respect to warming?

Are you familiar with Heinrich and Dansgaard-Oeschger events?

They occur periodically. Each of them is a rapid warming followed by slow cooling. There were about 25 D-O events during the last glacial period. The climate warmed several degrees C per century, i.e., much faster than now.

The Holocene has had about 8 warming events that were more rapid than now. The Medieval Warm Period was warmer than now — something we know from the position of the northern tree line then, versus the present.

So, what is abnormal? A warming rate several times what we presently observe has been normal in the recent geological past.

Given the known extreme fluctuations of the climate over the last 20,000 years, quite apart from the major glaciations, there’s no way normal or abnormal can be detected in a 150 year temperature record.

Given the polar tropical climate during the Tertiary and the snowball Earth prior to the Cambrian, does abnormal have any meaning at all with respect to air temperature?

Systematic error and systematic uncertainty are not the same thing. Error is not uncertainty and uncertainty is not error.

How do you know a station is spot on when the specified uncertainty of all stations is +/- 0.6degC?

Tim

a) as I have shown ten times here, I don’t care if the station is ‘spot on’;

b) requiring a station to be ‘spot on’ is unnecessary to answer whether the raw data shows abnormal warming;

c) you don’t know if a specific station is ‘spot on’ or not, since you are imposing a generalized construction of error and uncertainty, which may or may not apply. Many stations are “spot on”, out of 1200 — all are “on” sufficiently for the test;

d) you apparently are as unable or unwilling as Pat and Andy to reflect my point of view in order to refute it. That leaves it standing;

a) We know. You’ve made yourself very clear, wl-s.

b1) what is necessary is sufficient accuracy to detect the change of interest

b2) abnormal is undefined (so is normal)

c) “don’t know” = analytical uncertainty; Tim’s exact point.

d) we have examined your POV and found it wanting, with reasons given referenced to professional texts.

Pat Frank, why are you replying to a post I made to Tim? Moreover, you agreed to a truce. Moreover …. well right at this point, if I were to dishonorably spit on a truce like you are doing, I’d trash your abcd into the mud. I won’t do that, easy as it would be.

“I’d trash your abcd into the mud.”

You’d just disparage and reject it, wl-s. You’ve proved or disproved nothing. You’ve only asserted.

You broke the truce, wl-s, with your baseless accusation of willful denial made against me: “you apparently are as unable or unwilling as Pat and Andy to reflect my point of view in order to refute it.”

Rules of comment around here include that anyone can respond to any comment, no matter the addressee.

“TMAX at nearly all the stations began descending, and has continued to descend. Alarmists have stonewalled the downtrend.

That is what the hard data says.”

I just graphed my own personal Tmax data for the month of July in 2018, 2019, and 2020. It definitely shows a downtrend, and the differences are larger than the ±0.5 C uncertainty interval of my weather station. The highest July Tmax was 100 F in 2018, 96.8 F in 2019, and 94.2 F in 2020. These are different enough that their uncertainty intervals do not overlap. Therefore a trend is visible.
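The overlap test described here can be sketched in a few lines of Python. The ±0.5 C station uncertainty converts to about ±0.9 F; the function name and the dictionary of readings are mine, added for illustration.

```python
# Two readings are distinguishable if their uncertainty intervals
# [a-u, a+u] and [b-u, b+u] do not overlap.
U = 0.5 * 9 / 5  # station uncertainty of +/-0.5 C, converted to +/-0.9 F

def intervals_overlap(a, b, u=U):
    """True if the uncertainty intervals around a and b intersect."""
    return abs(a - b) <= 2 * u

tmax = {2018: 100.0, 2019: 96.8, 2020: 94.2}  # highest July Tmax, deg F
print(intervals_overlap(tmax[2018], tmax[2019]))  # False -> distinguishable
print(intervals_overlap(tmax[2019], tmax[2020]))  # False -> distinguishable
```

Since |100.0 − 96.8| = 3.2 F and |96.8 − 94.2| = 2.6 F both exceed the 1.8 F combined interval width, the intervals are disjoint and the downtrend is resolvable.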

The problem comes in when you start averaging stations together. If you average my data with a similar station’s for two years, the uncertainty interval becomes ±0.7 C, and if you average three stations it becomes about ±0.9 C, which is about ±1.6 F. All of a sudden the uncertainty intervals overlap. The same thing applies if I just average my own three Tmax readings.

If climate scientists would say x number of stations show an uptrend and y number show a downtrend, and leave the averages out of it, they would be making a true statement. But to say that the average of 140 stations (with a combined uncertainty of ±7 C) shows a 0.01 C trend is just plain wrong. With a ±7 C uncertainty interval they would barely be able to discern a 10 C change.
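The arithmetic behind these figures is root-sum-square growth of the combined uncertainty, which is the commenter’s rule here: n equal uncertainties u combine as u·√n. (Note this is the uncertainty of the *sum*; the standard treatment of purely random, independent errors would instead shrink the uncertainty of the *mean* as u/√n, and that disagreement is exactly what this thread is arguing about.) A minimal sketch of the commenter’s arithmetic:

```python
import math

def rss(u, n):
    """Root-sum-square combination of n equal uncertainties u."""
    return u * math.sqrt(n)

print(round(rss(0.5, 2), 2))    # 0.71 -> the +/-0.7 C figure for two stations
print(round(rss(0.5, 3), 2))    # 0.87 -> roughly the +/-0.9 C figure for three
print(round(rss(0.6, 140), 1))  # 7.1  -> the +/-7 C figure for 140 stations
```

So 0.6 C per station across 140 stations gives 0.6·√140 ≈ 7.1 C, matching the ±7 C figure quoted above.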

If you do as you say and look at each station individually and give each one a plus or a minus trend and then count up the pluses and the minuses then we could trust what they say. When they try to average them then what they say simply can’t be trusted.

Yes. Stop averaging. Look at the station sine waves one by one.

Here’s a start. Hilariously, there is AN ANOMALY!

It’s Berkeley.

Wouldn’t you know!

https://theearthintime.com/berk.jpg

Except for Berkeley[!] and one other station, the sine wave for the early-middle of the 20th century is at the top, and the direction of TMAX is descending over the past 30 years.

And … don’t average anything!

No uncertainty bars on your temperature plots, wl-s. LiG thermometers are graduated in 1 C or 1 F. Their field resolution is ±0.25 C or ±0.25 F.

The systematic error in each record is unknown, but is known to be present and likely of order ±0.5 C.

Those are Fahrenheit thermometers, so converting the estimated uncertainty due to systematic error to ±0.9 F, the total uncertainty in those temperatures is sqrt[(0.25)^2+(0.9)^2] = ±0.93 F.
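The total-uncertainty figure above combines the field resolution and the estimated systematic error in quadrature. A quick check of the arithmetic (variable names are mine):

```python
import math

resolution = 0.25  # deg F, LiG thermometer field resolution
systematic = 0.9   # deg F, estimated systematic error (+/-0.5 C converted)

# Combine the two independent uncertainty terms in quadrature.
total = math.sqrt(resolution**2 + systematic**2)
print(round(total, 2))  # 0.93 -> the +/-0.93 F total uncertainty
```

The resolution term barely matters here: the systematic ±0.9 F dominates, and the quadrature sum only lifts the total to ±0.93 F.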

Some of the series show excursions larger than that width. But interpretive caution is necessary.

I’m not trying to give you unnecessary trouble, wl-s. It’s just that doing science demands attention to exact detail. It can’t be escaped.