By Andy May
Steven Mosher complained about my previous post on the difference between the final and raw temperatures in the conterminous 48 states (CONUS) as measured by NOAA’s USHCN. That post can be found here. Mosher’s comment is here. Mosher said the USHCN is no longer the official record of CONUS temperatures. This is correct as far as NOAA/NCEI is concerned; they switched to a dataset they call nClimDiv in March 2014. Where USHCN had a maximum of 1,218 stations, the new nClimDiv network has over 10,000 stations and is gridded to a much finer grid, called nClimGrid. The nClimGrid gridding algorithm is new; it is called “climatologically aided interpolation” (Willmott & Robeson, 1995). The new grid has 5 km resolution, much finer than the USHCN grid.
While the gridding method is different, the corrections to the raw measurements recorded by the nClimDiv weather stations are the same as those used for the USHCN station measurements. This is discussed here and here. As a result, the nClimDiv and USHCN CONUS yearly averages are nearly the same, as seen in Figure 1. The data used to build the nClimDiv dataset are drawn from the GHCN (Global Historical Climatology Network) dataset (Vose, et al., 2014).

Figure 1. The USCRN record, shown in blue, only goes back to 2005. nClimDiv and USHCN go back to the 19th century and lie on top of one another, with very minor differences. In this plot both datasets are gridded with the new nClimGrid gridding algorithm.
In Figure 2, USCRN, nClimDiv and USHCN, gridded with nClimGrid, are shown overlain with the average USHCN station data used in my previous post. The station average is plotted with a yellow dashed line.

Figure 2. Same as Figure 1, but the final, non-gridded USHCN temperature anomalies have been moved to a common reference (the 1981-2010 average) and plotted on top of the gridded averages. The difference between the gridded and non-gridded averages is most noticeable in the peaks and valleys.
In Figure 2 we can see that the nClimDiv yearly gridded average anomalies are similar to the older, non-gridded USHCN yearly averages. The difference is not in the final data, but in the gridding process. The USCRN reference network station data is also similar to nClimDiv and USHCN, but it only goes from 2005 to the present.
As explained here by NOAA:
“The switch [from USHCN] to nClimDiv has little effect on the average national temperature trend or on relative rankings for individual years, because the new dataset uses the same set of algorithms and corrections applied in the production of the USHCN v2.5 dataset. However, although both the USHCN v2.5 and nClimDiv yield comparable trends, the finer resolution dataset more explicitly accounts for variations in topography (e.g., mountainous areas). Therefore, the baseline temperature, to which the national temperature anomaly is applied, is cooler for nClimDiv than for USHCN v2.5. This new baseline affects anomalies for all years equally, and thus does not alter our understanding of trends.”
Prior to making Figure 2, we adjusted the USHCN station average from our previous post to the new baseline. The difference is approximately -0.33°C. This shift is legitimate and results from the new gridding algorithm, which explicitly accounts for elevation changes, especially in mountainous areas.
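As an aside, a minimal sketch (in Python, with a synthetic series rather than the actual USHCN data) of what moving to a common 1981-2010 reference involves is shown below. The variable names are mine; the point is only that a baseline change adds a constant offset, so trends and year-to-year changes are untouched.

import numpy as np

# Synthetic yearly series in degC; the values are made up purely for illustration
rng = np.random.default_rng(0)
years = np.arange(1895, 2021)
temps = 11.0 + 0.008 * (years - years[0]) + rng.normal(0.0, 0.4, years.size)

# Anomalies relative to the 1981-2010 mean (the common reference used above)
in_base = (years >= 1981) & (years <= 2010)
anomalies = temps - temps[in_base].mean()

# A cooler baseline only shifts the whole curve by a constant (here 0.33 degC),
# so the fitted linear trend is identical either way
shifted = anomalies + 0.33
assert abs(np.polyfit(years, anomalies, 1)[0] - np.polyfit(years, shifted, 1)[0]) < 1e-9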
Conclusions
While NOAA/NCEI has dropped USHCN in favor of a combination of USCRN and nClimDiv, the anomaly record from 1900 to today hasn’t changed in any significant way. The baseline (or reference) changed slightly, but since we are plotting anomalies the baseline is not important; it just moves the graph up and down, while the trends and year-to-year changes stay the same. More importantly, the adjustments made to the raw data, including the important time-of-day bias corrections and the pairwise homogenization (PHA) changes that looked so suspicious in my previous post, have not changed at all and are still used.
The nClimDiv dataset uses many more stations than USHCN, and if the stations are well sited and well cared for, this is a good change. The USCRN dataset comes from a smaller set of weather stations, but these are highly accurate and carefully located. I do not think the USCRN stations are part of the nClimDiv set; rather, they are used as an independent check on it. The two systems of stations are operated independently.
My previous post dealt with the corrections, that is, the final minus raw temperatures, used in the USHCN dataset. They looked very anomalous from 2015 through 2019. The same set of corrections is used in the GHCN dataset, which is the source of the data fed into nClimDiv. So, the problem may still exist in the GHCN dataset. I’ll try to check that out and report on it in a future post.
You can purchase my latest book, Politics and Climate Change: A History, here. The content in this post is not from the book.
Works Cited
Vose, R., Applequist, S., Squires, M., Durre, I., Menne, M., Williams, C., . . . Arndt, D. (2014, May 9). Improved Historical Temperature and Precipitation Time Series for U.S. Climate Divisions. Journal of Applied Meteorology and Climatology, 53(5), 1232-1251. Retrieved from https://journals.ametsoc.org/jamc/article/53/5/1232/13722
Willmott, C., & Robeson, S. (1995). Climatologically Aided Interpolation (CAI) of Terrestrial Air Temperature. International Journal of Climatology, 15, 221-229. Retrieved from https://rmets.onlinelibrary.wiley.com/doi/abs/10.1002/joc.3370150207
Tim wrote
“Remember, the difference between one measurement, i.e. year 0, and the next measurement, year 1, has to be greater than the uncertainty interval in order to know whether you saw any difference between them for certain. If the difference you calculate is 0.01 but the uncertainty is +/-0.5 then how do you know for sure that you actually saw that difference of 0.01?”
“No one is saying anything about a “false” trend. The issue is that you simply can’t determine if there is a trend, either a true trend or a false trend, unless the temperatures being compared are outside of the uncertainty interval of both.”
You keep objecting to my thought experiments by saying the result would not mean anything, or that there is no reason to do such a thing (e.g., pair two thermometers of different precision). In that experiment we are of course saying it is possible to measure to the higher precision at the site so both thermometers work to the limit of their inherent capability. For the case of the two thermometers, the objection about possible site limitations is quite irrelevant. It merely avoids the question, which is still open.
In regard to the trend calculation, if you do the calculations to two decimal places and record the result for each year, the trend in the numbers is right there in front of your eyes. The trend exists; the question is ‘why?’, or ‘what does it possibly mean?’ Saying one can’t calculate a trend avoids the question of why the trend exists.
There isn’t, in my trend questions, any suggestion that any one value to the right of the decimal has a meaning in itself. Any one-year-to-the-next comparison won’t provide any information. The issue only has to do with that series of calculated yearly results, which does have a trend … for some reason.
A trend of numbers in the answers from year to year exists. That it continues to exist year by year is the mystery, or rather it apparently is a justification for saying that the world is heating. I am not saying it is a trend of temperature, just pointing out that a trend in the calculated results is quite evident looking at the string of years in order. If a trend exists instead of just random digits, one can’t change that fact by saying you didn’t really see anything because ghosts don’t exist. You can look at it as many times as you want, or recalculate it as many times as you want, and it is still there.
It seems very unlikely that it is the uncertainty in the measurements itself that produces the trend, but if it is, there should be some mathematical way to demonstrate that a trend, and not a random selection among 0 to 9, is what should show up. Perhaps that proof, if it exists, will show the trend is BECAUSE OF the inherent uncertainty in the measurements. This seems very unlikely, but that seeming so doesn’t cause the trend to vanish.
Maybe the trend does not reasonably suggest temperature increase or decrease, maybe it is purely by chance, maybe it means something quite different. However, the more it persists as years increase, the less likely it is to be chance. If it isn’t by chance then it is either
a fundamental characteristic of the type of calculations being done (unlikely); if so, it should be possible to show this with a mathematical proof,
or it has some kind of physical meaning that is captured by the conglomerate of thermometers, independent of their uncertainty, regardless of how unlikely that seems.
I’ve never seen anything to suggest that any critic of excess precision believes funny calculations are producing the result, only that the calculations are not applicable to the particular data, without being able to quite explain why, and certainly without any success in convincing anyone who believes the results of those calculations.
Tim wrote
“Calculations are not measurements. And the intermediate values only go one digit further than your precision.”
Certainly the calculations are not measurements, but it is not the case that intermediate values only one digit more than the value’s precision are used in calculations. There are many instances where floating-point calculations are done on integer values because of the type of calculation, when there is a series of calculations using the output of the previous calculation as input to the next. Done otherwise, these can quickly overrun the bounds of the data, leading to more error than necessary. The final result should always be rounded back to the integer.
One example is the output of analogue to digital sampling. Most converters produce integer output, some particular finite value that, like the general thermometer, is a single integer value for an interval of continuous signal. Any particular bit depth sets the maximum range of sample values that can be encoded. Floating point improves on that limitation. The converter output is often subject to considerable enhancement calculations to produce a more usable result. Quantization errors are much larger when limited-size intermediate values are used. Data can be lost in a multiplication that raises the result above the capacity of the integer sample.
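As a side note, the point about intermediate precision is easy to sketch in Python. The sample value and gains below are invented, not taken from any real converter; the comparison only shows that rounding every intermediate step to an integer can end up at a different final integer than carrying full precision and rounding once at the end.

# Hypothetical 16-bit sample passed through two made-up gain stages
sample = 12345
gains = [0.3, 3.3]

x_int = sample
for g in gains:
    x_int = round(x_int * g)      # quantize every intermediate value

x_float = float(sample)
for g in gains:
    x_float *= g                  # keep full precision through the chain
x_float = round(x_float)          # round back to an integer only at the end

print(x_int, x_float)             # the two paths give different integers here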
Another example is interest calculations on money. The money is only defined to two decimal places (0 to 99 cents in the US system). Two decimals is the starting and the ending point. Once upon a time, when interest on bank deposits really existed and the interest on my deposit was supposed to be calculated daily, I asked why I could not get the same answer as the bank. I was using what was supposed to be a legitimate financial formula. The answer was that the calculation formula was correct but it was necessary to calculate all steps to 12 decimal places to get the right answer. That turned out to work.
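For what it is worth, the bank example is easy to reproduce in a few lines of Python. The balance, rate, and day count below are made up, and this is not any bank’s actual formula; the sketch only shows that rounding each daily interest step to cents can give a different end-of-year balance than carrying full precision and rounding to cents once at the end.

from decimal import Decimal, ROUND_HALF_UP

balance = Decimal("1234.56")              # hypothetical starting balance
daily_rate = Decimal("0.06875") / 365     # hypothetical 6 7/8 percent annual rate

# Carry full precision through every daily step, round to cents only at the end
full = balance
for _ in range(365):
    full += full * daily_rate
full = full.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

# Round each day's interest to cents as you go
daily = balance
for _ in range(365):
    daily += (daily * daily_rate).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

print(full, daily)                        # compare the two end-of-year balances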
Re statistics textbooks:
I have tried to get an insight into the basic problem, but nothing I’ve read so far actually answers the question, at least in a straightforward way I can follow. There are plenty of discussions about ways to calculate this or that, but the basic consideration of applicability is elusive. Perhaps it is necessary to put in four years of study, or eight years, or whatever, in order to have any hope of understanding the issue fully, but that is why I wrote up the example of the ocean heat content controversy.
In that case, proponents showed how to get the same answer as the published paper, using methods and calculations exactly as presented in source textbooks. It was only when some rules were provided, also from standard sources, making it clear that the proponents’ methods were not applicable to that kind of data, that the controversy stopped and the paper was withdrawn from the journal in which it had been published.
The question seems easy enough but there are strident proponents on both sides. The calculation formulas used are certainly published and correctly used, at least in a mechanical way. The question of whether they are truly valid for the purpose never seems to get settled. The issue arises frequently in regard to ‘climate’ related data. Do you know of any place that makes a clear statement that differentiates the valid application of ways that can legitimately improve precision or accuracy from invalid applications, not by drawing conclusions about related aspects but directly and explicitly?
While I can understand the idea that the uncertainty of the general weather thermometer, +/-0.5C cannot be calculated away, it also seems perhaps reasonable that many separate measurements all around the country, while not providing any greater precision or accuracy for any one location can tell something more exacting about the overall territory.
Andy, “ it also seems perhaps reasonable that many separate measurements all around the country, while not providing any greater precision or accuracy for any one location can tell something more exacting about the overall territory.”
It’s not reasonable to think so, Andy. It’s naive, at best.
The measurement uncertainty in the data is as large, or larger than, the trend. That fact makes the data physically useless for that purpose.
If the workers in the air temperature field paid attention to data quality, e.g., the size of measurement error, they’d have nothing to say about trends in the climate.
That, perhaps, explains why they are so adamant about not paying attention to proper scientific rigor.
The same neglect of proper rigor is evident throughout all of consensus climatology.
No matter your desire to know, Andy, if you work with bad data you’ll only get chimerical results.
Bad data contaminated with systematic error can behave just like good data. I know that from personal experience. Serious care is required when working with measurements.
“In that experiment we are of course saying it is possible to measure to the higher precision at the site so both thermometers work to the limit of their inherent capability.”
Again, precision of the sensor is not the same as precision of the measurement system. Argo floats use sensors capable of 0.001C precision, but the uncertainty of the float’s temperature measurement is still about 0.5C.
“For the case of the two thermometers, the objection about possible site limitations is quite irrelevant. It merely avoids the question, which is still open.”
The argument is not open. You keep wanting to say that precision of measurement can be increased by calculation. It can’t.
“In regard to the trend calculation, if you do the calculations to two decimal places and record the result for each year, the trend in the numbers is right there in front of your eyes.”
This is the argument of a mathematician or computer programmer, not a physical scientist or engineer. You can’t calculate past the precision and uncertainty of the instrument. Using your logic, a repeating decimal would be infinitely precise and accurate. It’s an impossibility.
“The issue only has to do with that series of calculated yearly results, which does have a trend … for some reason.”
You want to add significant digits to your result via calculation. It’s not possible.
“A trend of numbers in the answers from year to year exists. ”
No, it doesn’t – except in the minds of those who think significant digits are meaningless.
“You can look at it as many times as you want, or recalculate it as many times as you want, and it is still there.”
I’m sorry if it offends your sensibilities but you can’t extend precision through calculation. Trying to identify a trend in the hundredths digit when the base precision is in the tenths digit is impossible.
“It seems very unlikely that it is the uncertainty in the measurements itself that produces the trend,”
The uncertainty doesn’t produce the trend. It only identifies whether or not you can actually see the trend.
“Certainly the calculations are not measurements but it is not the case that intermediate values only one digit more than the value’s precision are used in calculations.”
More digits are used by mathematicians and computer programmers, not by physical scientists and engineers. Just because a calculator can give you an answer out to 8 decimal places in a calculation doesn’t make it useful if the base precision is only 1 digit.
“There are many instances where floating-point calculations are done on integer values because of the type of calculation, when there is a series of calculations using the output of the previous calculation as input to the next. Done otherwise, these can quickly overrun the bounds of the data, leading to more error than necessary. The final result should always be rounded back to the integer.”
Again, this is the viewpoint of a mathematician or computer programmer. It simply does no good to go more than one digit past the base precision when you are doing intermediate calculations. The uncertainty of the first number can’t be changed. As Pat Frank pointed out, every time you use an uncertain answer as the input to another calculation the uncertainty grows. Iteration is no different than averaging independent measurements as far as uncertainty goes.
“One example is the output of analogue to digital sampling. ”
First, converting from analogue to digital is *NOT* averaging independent measurements. It’s a false analogy. Second, no depth of bit field can ever recreate the *exact* analog signal. You can get closer and closer with more bits but you will never get there.
“The answer was that the calculation formula was correct but it was necessary to calculate all steps to 12 decimal places to get the right answer. That turned out to work.”
If that is true then they had to be using fractional interest rates, e.g. 6 7/8 percent, that resulted in something like what I mentioned, repeating decimals. That simply doesn’t apply to precision of physical measurements and the uncertainty of such.
“I have tried to get an insight to the basic problem but nothing I’ve read so far actually answers the question,”
Then you haven’t read Dr. Taylor’s textbook or looked at the various links I have provided, e.g., to the NIST web site and the GUM.
“The question seems easy enough but there are strident proponents on both sides. ”
Both sides? If they are against using systematic uncertainty in the combining of independent measurements then they are as anti-science as it is possible to get. I was taught this in my physics and electrical engineering labs back in the 60’s. It’s not like it hasn’t been around forever.
“Do you know of any place that makes a clear statement that differentiates the valid application of ways that can legitimate improve precision or accuracy from invalid applications”
The answer is that you CAN NOT improve precision of a measurement device except through using a better device. Perhaps by using a micrometer that clicks when enough force is applied to the object being measured instead of an older type device where the force applied has to be manually estimated. But you still can’t calculate more precision than the instrument provides. Neither can you calculate away uncertainty. Uncertainty only grows when adding independent populations, e.g. temperature measurements.
“while not providing any greater precision or accuracy for any one location can tell something more exacting about the overall territory.”
Sorry. It can’t. It truly is that simple. You can’t calculate away uncertainty and you can’t calculate in more precision. Even averaging a daily temp for the same site using a max measurement and a min measurement increases uncertainty about the average because you aren’t measuring the same thing. Sqrt(2) * 0.5 = +/- 0.707. You can’t calculate that away and you can’t increase the precision no matter what calculations you do. And it doesn’t get better by using more independent measurements from more sites.
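For readers following the arithmetic, the root-sum-square rule Tim describes (and the 100-station figure that comes up later in the thread) can be reproduced in a couple of lines of Python. This is only a sketch of the stated rule, with a helper function name of my own choosing, not a claim about which rule is appropriate for temperature averages.

import math

def rss(uncertainties):
    # Root-sum-square combination of independent uncertainties, per the comment above
    return math.sqrt(sum(u * u for u in uncertainties))

print(rss([0.5, 0.5]))      # sqrt(2) * 0.5 ~= 0.707, the max/min example
print(rss([0.3] * 100))     # sqrt(100) * 0.3 = 3.0, the 100-station example later on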
You’ve explained with clarity until you’re blue in the face, Tim. So has Jim.
So has everyone who understands the difference between precision and accuracy, and who knows the importance of calibration and uncertainty.
College freshmen in Physics, Engineering, and Chemistry come to understand the concepts without much difficulty.
And yet, it remains opaque to all these intelligent folks, many with Ph.D.s, who thrash about, keep a furious grip on their mistaken views, and will not accept the verdict of rigor no matter the clarity or cogency of description, or the authoritative sources cited.
They’ll never accept the verdict of good science, Tim. Because doing so means their entire story evaporates away, leaving them with nothing. Nothing to say. No life filled with meaning.
No noticeable CO2 effect on climate. No whoop-de-doo 20th century unprecedented warming trend. No paleo-temperature narrative. Nothing.
They’d have to repudiate 30 years of pseudo-science, admit they’ve been foolishly hoodwinked, that their righteous passions have been expended on nothing, and that they would then need to go and find new jobs.
No way they’ll embrace that fate. Their choice is between principled integrity plus career crash, or willful ignorance plus retained employment.
Commenter angech is the only person I’ve encountered here who showed the integrity of a good scientist/engineer by a change of mind.
The reason remains to post the correction for the undecided readers, and for people looking for an argument from integrity.
But consensus climate scientists, and those committed to Critical Global Warming Theory, will never, ever, move. Their very lives depend on remaining ignorant.
I don’t believe using thermometers accurate to only whole numbers of degrees can give true information beyond the whole number. A change of at least 1C is necessary to detect any change. The uncertainty cannot be overcome by computing to a greater precision result, whether 1, 2, or 50 decimal places. I’m not trying to argue otherwise.
The fact of that uncertainty does not prevent someone, anyone, from making multi decimal place calculation with the data, however. The extra precision part of the answer doesn’t mean anything, it should be rounded away. I don’t disagree.
All the temperature-keeping agencies do calculations to produce results to two-decimal-place precision. They keep tables and graphs to display their results. Their claim is, I believe, that the average ‘global temperature’ has risen by somewhere around 0.70C since 1980, based on data that has certainty only to 1C.
Even if, in fact, the global average, whether or not a global average has any real meaning, has risen by 0.7C, their data and calculations cannot detect it. Therefore, why don’t their calculated anomalies go up and down from year to year without going anywhere? That is the question I’ve been trying to ask.
It’s ok to say ‘I have no idea’ if that is the case, but to say the calculations don’t come out the way they do is another thing entirely. The data is available, the calculations can be done in the same way by anyone who can access the data and understand the process, and then the results can be compared against the agencies’ output.
Since non-agency people are not reporting different results from that process, the most probable truth is that calculations to multiple decimal points do show a (meaningless) rise of 0.70C (or whatever amount they are claiming). Why does their anomaly rise most of the time rather than fall or, more reasonably, stay level over the years?
That is the whole of the matter. I am not trying to claim validity for such calculations, I am asking why the consistency?
******
“For the case of the two thermometers, the objection about possible site limitations is quite irrelevant. It merely avoids the question, which is still open.”
“The argument is not open. You keep wanting to say that precision of measurement can be increased by calculation. It can’t. “
The question I ask here was
“Does this make the difference, in this circumstance, for any particular measurement, an error of the standard thermometer or a measure of its particular uncertainty?”
By difference I mean the difference in readings between those two thermometers, and only those two thermometers. It is not about averaging with other sites or averaging multiple readings. It is only about these two thermometers at one site, for each reading. While a bit off the main topic, it still seems an interesting question.
*******
In regard to the thermometers themselves, based on one of Pat Frank’s response, I think my ignorance may be greater than I guessed. I know that with the older analogue thermometers, it is possible to interpolate between the markings and attempt to make a more precise reading (I did not write “accurate”).
However, I thought that the more modern electronic thermometers with +/-0.5 uncertainty were like some thermometers I have. The house thermostat thermometer, for instance, displays to the nearest whole degree. That means no matter how accurately the sensor is calibrated, the uncertainty of any displayed reading is +/-0.5.
What does the recorded output from these electronically recorded weather station thermometers, look like? Is it a whole number or are one or more decimal places recorded (regardless of their accuracy)?
“Therefore, why don’t their calculated anomalies go up and down from year to year without going anywhere? That is the question I’ve been trying to ask.”
If the anomaly is within the uncertainty range how do you know what the anomaly is doing?
“Why does their anomaly rise most of the time rather than fall or, more reasonably, stay level over the years?”
Lots of people ask that question. Many who have looked at the raw data say it doesn’t just go up and up. In fact, depending on the interval looked at, it probably goes down.
“By difference I mean the difference in readings between those two thermometers, and only those two thermometers.”
You used an uncertainty interval for one thermometer and the sensor resolution for the other. Two different things. What is the uncertainty range for the thermometer with the sensor having a 0.001C resolution?
“What does the recorded output from these electronically recorded weather station thermometers, look like?”
I just dumped the Nov 2020 data for the GHCN station CONCORDIA ASOS, KS US, WBAN: 72458013984 (KCNK), using the PDF option.
The temperature data is recorded in that NOAA database to the units digit. No decimal places at all.
This is a record from a GHCN file:
USC00029359201907TMAX 300 H 294 H 283 H 278 H 278 H 283 H 283 H 272 H 289 H 311 H 311 H 300 H-9999 -9999 317 H 317 H 317 H 311 H 311 H 306 H 317 H 311 H 289 H 267 H 267 H 289 H 300 H 311 H 311 H 300 H-9999
It is showing TMAX for July 2019. Temps are in tenths of a degree C; 300 = 30.0 degC.
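For anyone who wants to decode lines like that one, here is a short Python sketch. It assumes the standard GHCN-Daily fixed-width layout (an 11-character station ID, year, month, element code, then 31 day slots of a 5-character value plus three flag characters, with -9999 marking missing days and temperatures stored in tenths of a degree C); the line as pasted above may have lost some of its fixed-width spacing, so treat this as illustrative only.

def parse_dly_line(line):
    # Assumes the documented GHCN-Daily .dly fixed-width record layout
    station = line[0:11]
    year = int(line[11:15])
    month = int(line[15:17])
    element = line[17:21]              # e.g. TMAX
    values = []
    for day in range(31):
        start = 21 + day * 8           # 5-char value + 3 flag chars per day
        v = int(line[start:start + 5])
        values.append(None if v == -9999 else v / 10.0)   # tenths of degC -> degC
    return station, year, month, element, values

# For the July 2019 TMAX record above, the first value, 300, becomes 30.0 degC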
Andy,
here is some info I found on ASOS automated stations:
“Once each minute the ACU calculates the 5-minute average ambient temperature and dew point temperature from the 1-minute average observations (provided at least 4 valid 1-minute averages are available). These 5-minute averages are rounded to the nearest degree Fahrenheit, converted to the nearest 0.1 degree Celsius, and reported once each minute as the 5-minute average ambient and dew point temperatures. All mid-point temperature values are rounded up (e.g., +3.5°F rounds up to +4.0°F; -3.5°F rounds up to -3.0°F; while -3.6°F rounds to -4.0°F).”
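The mid-point rule in that passage is ordinary round-half-up (exact .5 values go toward the next higher value), which is a one-liner in Python; the function name is mine, and the test values are the ones from the quote.

import math

def asos_round(temp_f):
    # Round to the nearest whole degree, exact .5 rounded upward,
    # matching the examples in the quoted ASOS passage
    return math.floor(temp_f + 0.5)

print(asos_round(3.5))     # 4
print(asos_round(-3.5))    # -3
print(asos_round(-3.6))    # -4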
FMH-1 (the Federal Meteorological Handbook) specifies a +/- 0.6degC accuracy for -50degF to 122degF, with a resolution of a tenth of a degree C, for temperature sensors (read: the temperature measuring system, not the actual sensor itself).
It’s difficult to find actual uncertainty intervals for any specific measuring device but it *will* be higher than the sensor resolution itself.
I forgot to mention that if you are rounding the F temp to the nearest degree and then calculating C temperature to the tenth, you are violating significant digit rules at the very beginning of the process.
50F = 10C
51F = 10.6C
52F = 11.1C
Think about it. If the uncertainty of the temperature is +/- 0.6degC then 10C could actually be between 9.4degC and 10.6degC. Similarly 10.6degC could be between 10degC and 11.2degC. You’ve already started off calculating more precision than the uncertainty allows!
It shows how uncertainty simply never gets considered in so much of the stuff we take for granted!
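The arithmetic behind those three conversions is easy to check; the short Python sketch below (function name mine) just converts whole-degree Fahrenheit readings to Celsius reported in tenths, which is where the roughly 0.6 C steps come from.

def f_to_c_tenths(temp_f):
    # Whole-degree F reading converted and reported to the nearest 0.1 C
    return round((temp_f - 32) * 5 / 9, 1)

for f in (50, 51, 52):
    print(f, "F ->", f_to_c_tenths(f), "C")

# Successive whole-degree F values land about 0.6 C apart, the same size as
# the +/- 0.6 degC accuracy figure quoted from FMH-1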
Tim,
Thank you for finding the information on recorded values from different kinds of sites. I’m going to have to figure out what some of those abbreviations and numbers mean, but you have provided useful information from which to start to get a better understanding. The rules for rounding up at the ASOS automated stations leave too much to the imagination; they may imply that .1 through .4 round down, but don’t actually say so. Also, the GHCN values clearly are to a tenth of a degree, even though the decimal point isn’t printed. Of course, what you wrote about the digits is important to know in order to understand the values.
Tim wrote:
“Therefore, why don’t their calculated anomalies go up and down from year to year without going anywhere? That is the question I’ve been trying to ask.”
“If the anomaly is within the uncertainty range how do you know what the anomaly is doing?”
Now you make it obvious that you are just being obstinate, perhaps to mock me, perhaps because you don’t want to admit that you don’t have any idea WHY there is a trend. I can’t believe that I’ve been so unclear that someone’s view filter prevents understanding that I’m talking about something, however controversial, that is widely done, recorded, and available, regardless of whether or not it fits certain rules he believes should apply. Those calculated results are what the agencies use to proclaim “hottest” whatever period they want people to believe is going to soon cook them. I don’t think they just make the values up on the instant. They calculate them.
I thought we were talking about the global yearly average, and I am aware that, as stipulated earlier, it sometimes goes up and sometimes goes down, but on the whole it has been going up from year to year (in the second decimal place). Yes, I mentioned 10,000 thermometers as a hypothetical, and that number is published as now being used in the US, but I tried to be clear that the domain was not restricted. Any selected location or limited region might indeed be going down overall, or not varying, without changing the direction of the upward trend of the global average.
I offer the tentative hypothesis that the controversy over the degree of precision for a global average exists due to a difference between measuring and counting. One side quotes rules for combining multiple measures, the other side quotes rules for calculations about sampling. Both sides have standard textbooks from which they take their rules but, apparently, the sources don’t make a clear enough assertion about how the rules apply in enough circumstances.
While it isn’t beyond the bounds of possibility that the major agencies all know they are doing something wrong but intend to deceive, it is more likely that most just use the rules they believe are correct for the data. Almost everyone else just uses the results as presented, without questioning. On the other hand, politicians are simply parasites masquerading as human beings, and one can’t expect proper human behavior from them.
For instance, if we can agree there is such a thing as an average temperature of the planet’s land surface, regardless of whether or not it means anything, that planet average becomes more fully known by sampling more widely over the entire globe. If multiple values can exist over multiple locations, as they do for temperatures, and especially if the variations cannot be ascribed to a simple rule, as they cannot for temperatures, then more samples are more likely to provide more meaningful information. For instance, if it were possible to simultaneously measure the temperature within each square foot of the entire globe’s land area, it should be possible to derive a much better numerical average than from one or two measures from one or two randomly selected spots.
My tentative hypothesis says one side of the controversy uses the rules of sampling statistics to calculate a very precise value, the other side says that no matter how many conditions apply, the calculation precision must be limited to that of the least accurate individual measure.
Tim wrote:
“You used an uncertainty interval for one thermometer and the sensor resolution for the other. Two different things. What is the uncertainty range for the thermometer with the sensor having a 0.001C resolution?”
More refusing to address the question with made up excuses. I did not specify two different qualities of thermometers, I specified two thermometers of differing accuracy and precision.
“Standard” weather thermometers are widely used. What they are and what they do is known. USCRN thermometers are used in their special sites and what they do is known. Properly place a standard thermometer in the same site with a USCRN thermometer and set it to record at the same time as the high precision, high accuracy USCRN thermometer. There is no reason to be befuddled about what the two thermometers are doing.
“Now you make it obvious that you are just being obstinate, perhaps to mock me, perhaps because you don’t want to admit that you don’t have any idea WHY there is a trend.”
No mocking. In order to discern a trend you must know the true value or see an increase/decrease outside the uncertainty interval. It is unknown if the stated measurement is the true value. Since the uncertainty interval is as wide on the negative side as it is on the positive side, how do you know whether the true value is less or more than the stated value? If one reading is 50deg +/- 0.5deg and the next is 50.1deg +/- 0.5deg, then how do you know if there is a true trend upward or not? If the first reading is 50deg +/- 0.5deg and the next is 49.9deg +/- 0.5deg, then how do you know if there is a negative trend or not? The only valid statement you can make is to say you simply don’t know if there is a trend, either up or down.
If you get a series of measurements, 50-50.1-50.2-50.3-50.4 each with +/- 0.5 uncertainty then the climate scientists of today will say there is a trend upward when in actuality you simply don’t know.
I know that is hard to accept but it *is* how metrology actually should work for physical scientists and engineers.
“Those calculated results are what the agencies use to proclaim “hottest’ whatever period they want people to believe is going to soon cook them. I don’t think they just make the values up on the instant. They calculate them.”
You simply can’t calculate greater precision or calculate away uncertainty. There is no “making it up” involved – other than ignoring the precepts of metrology.
“that planet average becomes more fully known by sampling more widely over the entire globe.”
You keep ignoring the question: if the planetary averages are always within the interval of uncertainty, then how do you discern whether there is a change in the planetary average? If you can’t discern whether there is a change or not then you can do all the sampling you want and it won’t help. Sample size only applies if you are trying to minimize ERROR when measuring the same thing multiple times using the same measurement device. Your sample size in the case of global temperature is ONE. Each measurement is independent with a population size of ONE. You are combining multiple populations, each with a sample size of one. You do that using root sum square for uncertainty, as indicated in *all* the literature.
“One side quotes rules for combining multiple measures, the other side quotes rules for calculations about sampling.”
Those who say statistics can calculate away uncertainty are *NOT* physical scientists or engineers and are ignoring *all* the literature on uncertainty. They are primarily mathematicians and computer programmers who believe that a repeating decimal is infinitely precise!
” then more samples are more likely to provide more meaningful information. For instance, if it were possible to simultaneously measure the temperature within each square foot of the entire globe’s land area, it should be possible to derive a much better numerical average than from one or two measures from one or two randomly selected spots.”
Nope. Each measurement is an independent population of size one with its own uncertainty. When you combine independent populations the uncertainty adds as root sum square. Combining more independent populations only grows the uncertainty. Forget sample size. The only sample size you get for each population is ONE.
“My tentative hypothesis says one side of the controversy uses the rules of sampling statistics to calculate a very precise value, the other side says that no matter how many conditions apply, the calculation precision must be limited to that of the least accurate individual measure.”
When the population size is one there are no sampling statistics that apply. Remember, you can have precision on a device out to the thousandths digit, but if the uncertainty in that measurement is in the tenths, e.g. +/- 0.1, then it doesn’t matter how precise your measurement is. Precision and accuracy are not the same thing. There *is* a reason why physical scientists and engineers go by the rule that you can’t have more significant digits than the uncertainty. It’s because you simply don’t know if the precision gives you any more of an accurate answer than a less precise device.
“More refusing to address the question with made up excuses. I did not specify two different qualities of thermometers, I specified two thermometers of differing accuracy and precision.”
No, you gave the uncertainty of one device and the precision of the other. What is the uncertainty associated with the thermometer that reads out to the thousandths digit?
“Properly place a standard thermometer in the same site with a USCRN thermometer and set it to record at the same time as the high precision, high accuracy USCRN thermometer. There is no reason to be befuddled about what the two thermometers are doing.”
The federal government says the thermometers only need to be accurate to +/- 0.6degC. They can read out in thousandths but it doesn’t make them more accurate. You keep confusing precision and accuracy.
from the NOAA site:
“Each USCRN station has three thermometers which report independent temperature measurements each hour. These three observed temperature values are used to derive a single official USCRN temperature value for the hour. This single value is sometimes a median and sometimes an average of various combinations of the three observed values, depending on information about which instruments agree in a pairwise comparison within 0.3°C”
So it would seem that at least a +/- 0.3C uncertainty interval would apply here. That’s better than +/- 0.5degC, but when combining the independent readings of multiple stations that +/- 0.3C grows by root sum square. Combine 100 of these stations and you get (+/-10)*0.3 = +/-3degC for a combined uncertainty interval. So how do you identify a 0.1-degree trend within a 3-degree uncertainty?
“No mocking. In order to discern a trend you must know the true value or see an increase/decrease outside the uncertainty interval. It is unknown if the stated measurement is the true value.”
I never said the trend was meaningful. You can’t mistake that. They calculate, they get a value that increases most years over the previous years. You can’t not understand that. Therefore you are “mocking me” by selectively taking pieces of what I write and pretending something else is what I meant.
This is the same tactic as alarmists who refuse to acknowledge data that does not agree with their proclamations. Or those clergy who refused to look through Galileo’s telescope because they knew the truth, which was something different from what Galileo, and others, said was there to be seen. This comparison is not meant to conflate the trend calculated from temperature measurements of too low certainty with the new universal truths Galileo discovered, only to compare the methods used to deny them.
“Your sample size in the case of global temperature is ONE. Each measurement is independent with a population size of ONE.”
No, there are many thousands of daily surface samples. From the satellites there are millions of daily samples. They exist. So, pretend that “samples” is not an appropriate label and say “measurements”. It doesn’t make any difference. No average can be calculated from only one sample. Averages, and variances, etc. can easily be calculated from multiple samples. That fact is completely independent of either the accuracy of the calculated average (low) or the meaning of the calculated average (probably none in terms of anything else). If the average is only valid to the whole number, it is still the average of many measurements made over a wide temperature range. You keep trying to turn it into such a silly argument by pretending I can’t understand anything.
“No, you gave the uncertainty of one device and the precision of the other. What is the uncertainty associated with the thermometer that reads out to the thousandths digit?”
You make up specious arguments about what some thermometers can and cannot provide and pretend not to understand that the question is about two thermometers with different accuracy, one of which provides significantly more accurate and precise measurements than the other. NO, that is not a misunderstanding on my part of the difference between accuracy and precision; it is a stated condition of the question I posed.
You throw in spurious statements about some particular other thermometers, pretending that it isn’t possible to get better information from some thermometers than from others. If you believe the USCRN stations are simply a fraud and there is always only one degree of accuracy and certainty possible, no matter what instruments people make, or how they use them, why not just say so and be done with it.
You are insisting on some very narrow definitions, perhaps because of concentrating on uncertainty, and pretending the general definitions don’t exist, when they do, regardless of whether uncertainty is high or low. You have to understand by now, and perhaps from the very beginning, that I am not arguing against the rules governing uncertainty. Perhaps that seems fun, but the fun is running dry. Thank you for engaging, but it seems impossible to get any further under these conditions. We are running around the same circle again and again.
“They calculate, they get a value that increases most years over the previous years.”
Which only matters if they can actually see the increase. If the increase is within the uncertainty interval then all their calculations are moot. It is a mathematical figment of the imagination, it is a phantom, a chimera. If they would actually pay attention to the uncertainty of their base data then this would be obvious. But they have to ignore uncertainty in order to actually have something on which to base their claims.
Suppose you go out and trap a bunch of muskrats so you can measure their tails and get an average value for all muskrats around the world. All you have is a ruler marked in inches. But you estimate down to the tenth of an inch and come up with an average value in the hundredths. Then you do the same thing next year and get an answer 1 hundredth longer than last year. Then the third year you get an answer .01 inch longer! Should we believe you have actually identified a trend in the length of muskrat tails? Or is the uncertainty of your answer out to the hundredths digit too large to be sure?
“No, there are many thousands of daily surface samples.”
Each one is an independent population with a population size of one. You are measuring different things using different devices in each case. The central limit theorem only applies with multiple samples of the same thing using the same measuring device. In that case the measurements are all correlated and make up a probability distribution that can be subjected to statistical analysis. That simply isn’t true of a thousand different independent measurements.
” No average can be calculated from only one sample. Averages, and variances, etc. can easily be calculated from multiple samples.”
Now you are getting to the crux of the issue. You are correct: you can’t calculate an average from one sample, nor can you define a probability distribution for a population of one. Averages and variances *ONLY* apply to a population having more than one member, which defines a probability distribution.
1. there is no probability distribution for uncertainty. 2. Go look up how you combine independent populations, each with an uncertainty interval associated with it.
“If the average is only valid to the whole number, it is still the average of many measurements made over a wide temperature range. You keep trying to turn it into such a silly argument by pretending I can’t understand anything.”
You are the same as so many others. You keep wanting to define uncertainty as ERROR. That lets you try to turn independent measurements of different things into an ERROR issue with a defined probability distribution. And then the central limit theorem can be used to more accurately calculate the mean, which then gets read as the TRUE VALUE.
“pretend not to understand that the question is about two thermometers with different accuracy, one of which provides significantly more accurate and precise measurements than the other.”
Can you show me a measuring station today that has an accuracy of +/- .001deg? I can show you sensors that can read out to that precision, but that is not a statement of the uncertainty associated with the measurement device itself. As I pointed out, even the Argo floats, with a sensor that can read out to .001deg, are widely accepted as having a +/- 0.5deg uncertainty! If you can’t show me a station with a +/- .001 uncertainty then your hypothetical is meaningless in the real world.
“If you believe the USCRN stations are simply a fraud”
I don’t believe they are a fraud. But even the federal government expects no more than +/- 0.6degC accuracy from them. Do you deny that? That uncertainty has to be taken into account in *anything* that uses that data, and it is *NOT* taken into account by most climate scientists.
“pretending the general definitions don’t exist,”
The only one pretending here is you. You are pretending that you can treat independent measurements of different things using different devices in the same manner as multiple measurements of the same thing using the same measurement device. Not only is a global average meaningless in and of itself, trying to identify one down to the hundredths digit when the uncertainty interval that goes along with that average is so much wider than that makes it a meaningless exercise.
The fallacy of doing that was taught to me in my physics and EE labs 50 years ago. Somewhere along the line, that apparently is no longer taught.
Tim,
Posting to agree.
I’ve stopped responding to the “uncertainty” champions here for the moment, but I’m still reading, and am still stronger than ever in support of “inspection of raw direct measurement’s organic trend over a long period” trumps “fuss and obsess over accuracy to get averages.”
wl-s, “ [I] am still stronger than ever in support of “inspection of raw direct measurement’s organic trend over a long period” trumps “fuss and obsess over accuracy to get averages.”
Then you’re fated to talk nonsense, wl-s.
But, after all, that’s what all of consensus climate so-called science has been doing for 32 years: talking nonsense.
You’re in common company.
Analyzing raw data for a single station is far better than trying to do it for multiple stations. But you must still consider the uncertainty associated with doing so. You can’t average today’s max temp with tomorrow’s max temp without considering the uncertainty of both. You can’t say tomorrow is warmer than today unless tomorrow’s temperature has an uncertainty interval that does not overlap today’s uncertainty interval. If the uncertainty intervals overlap at all then you simply don’t know if tomorrow is warmer than today.
If you are talking about a federal measurement station with a +/- 0.6C uncertainty you are talking about close to +/- 1 degF.
A reading of 90degF +/- 1degF would span the interval of 89degF to 91degF. So if August 15, 2019 had a temp of 90degF +/- 1degF, then the August 15, 2020 temp would have to be more than 92degF +/- 1degF to know whether the 2020 date was hotter than the 2019 date.
When you graph the max temp per year, per month, or per day you must include the uncertainty bar with each data point. If the uncertainty bars overlap at all then how do you know which year was warmer? How do you discern a trend?
If 2015 max temp was 100degF +/- 1degF (99F to 101F) and for 2020 was 98degF +/- 1degF (97F to 99F) then you could be sure you were seeing a difference (but just barely). Reading temperatures to the tenth of a degree really doesn’t make any difference if the uncertainty interval is +/- 1degF. You still have to be outside the +/- 1degF interval to discern a difference.
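A tiny Python sketch of the overlap test being applied in these examples follows; the numbers are the ones from the comment, and the rule is simply that two readings can only be called different when their uncertainty intervals do not overlap. The function name is mine.

def intervals_overlap(a, b, u):
    # True when the intervals a +/- u and b +/- u overlap; per the argument
    # above, a difference can only be claimed when they do not
    return abs(a - b) < 2 * u

print(intervals_overlap(100.0, 98.0, 1.0))   # False: 99-101 vs 97-99, distinct "but just barely"
print(intervals_overlap(90.0, 91.0, 1.0))    # True: 89-91 vs 90-92, no difference discernible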
@Tim Gorman
I withdraw my agreement. You are afflicted with the “AverageItWithModel” fallacy, same as Andy and Pat.
Then, you show yourselves completely blind to the one attack that can wreck the consensus. What side are you on?
@Patrick Guinness Frank
My questions to all three of you. Without stipulating one ounce of your failed logic, Please answer this:
How did you arrive at the margin of error? What is your certainty on the uncertainty?
The three of you play into the hands of TheConsensus by adopting their method, while attempting to defrag its results. This leaves their premise intact. Nice.
wls,
My calculations are done using the data in the Federal Meteorological Handbook No. 1, Appendix C, showing the acceptable uncertainty interval for meteorological measuring stations. Even the supposed high-quality stations are only required to meet that uncertainty interval, +/- 0.6degC, to be certified.
Section 10.4.2 states: “Temperature shall be determined to the nearest tenth of a degree Celsius at all stations.”
10.5.1 Resolution for Temperature and Dew Point states:
“The reporting resolution for the temperature and the dew point in the body of the report shall be whole degrees Celsius. The reporting resolution for the temperature and dew point in the remarks section of the report shall be to the nearest tenth of a degree Celsius. Dew point shall be calculated with respect to water at all temperatures.”
Appendix C, section 2.5 is where the standards for temperature can be found.
As Pat has pointed out, there are all kinds of uncertainty associated with any measuring station. Obstructions in the aspiration fan filter, dirt and grime buildup preventing the aspirated air from properly reaching the sensor, even a mud dauber building a nest in the louvers of the station can all affect its uncertainty interval. Ice and snow buildup can do the same. FMH-1 only requires annual inspection and re-calibration. Lots can happen in a year.
Error is not uncertainty. Uncertainty is not error. Resolution is not accuracy. Accuracy is not resolution. Significant digits always apply with any measurement.
Learn those few rules and you are on your way to understanding why even averaging something as simple as maximum and minimum temperatures has a larger uncertainty interval than either temperature by itself.
wl-s, I arrived at my per-site uncertainty by use of the comprehensive field calibrations of Hubbard & Lin (2002), Realtime data filtering models for air temperature measurements, Geophys. Res. Lett., 29(10): 1425; doi: 10.1029/2001GL013191.
My first paper is published here, second one here. You can find there the whole analytical ball of wax.
All your air temperature trend lines have ±0.5 C uncertainty bars around every single air temperature point, wl-s. Your dismissals notwithstanding.
None of my work involves averaging with models, wl-s. It’s all empirical.
You have given no evidence of understanding the systematic error that corrodes the accuracy of USHCN temperature sensors.
How is it that you, who claim a working knowledge of meteorological stations, apparently know nothing of the impacts of insufficient wind speed and shield heating from solar irradiance on the accuracy of air temperature readings?
It’s basic to the job, and you dismiss it in ignorance.
Patrick,
There are 1200 stations in RAW USHCN. You contend all 50 million recordings of TMAX they have made over 120+ years contain “systematic error.”
You cite an AVERAGE of uncertainty. Some of the stations were way off kilter and some spot on within a tiny tolerance.
Please provide me with a list of 100 stations that were calibrated by some mechanism that you champion and found to have an error 10x less than that pulled from Hubbard. I will graph the recordings from those 100, and make a video showing the sine curve of each, one by one.
Since the above is a joke — you will never do that — how about sending me a list of four stations that have a spectacularly low incidence of uncertainty. Or even one station.
Sorry, I meant to reply to Pat Frank, not “Patrick.”
Provide a list of stations, wl-s, that did not employ Stevenson screens (1850-1980) or MMTS shields (1980 and thereafter) and that were not exposed to sun and wind.
The sensors enclosed in all screens exposed to wind and sun are subject to the measurement errors Hubbard and Lin found to be caused by wind speed and irradiance.
The calibration experiments exposed the problem, wl-s. The problem exists and corrupts measurements wherever a field station is sited.
The cold waters of science, wl-s. Denial of the reality of measurement error is to sit in a warm little pond of fantasy.
In that event, you get to play with numbers but without any larger meaning. Another description of the mud slough that is consensus climatology.
wl-s, “You contend all 50 million recordings of TMAX they have made over 120+ years contain ‘systematic error.’”
Yes, I do. It’s a conclusion forced by calibration experiments conducted to test the effects of wind speed and irradiance on the accuracy of temperature sensor measurements.
Wait til I publish my next paper on sensor measurement error, wl-s. The benthos will gape below you.
The work is done. I hope to get to write it up next year, after other present projects are concluded.
The only thing you are proving is that the 50 million recordings don’t agree with your MegaStations. You are basically riding the coattails of the scandal revealed by Anthony Watts. Nice that you confirm the issues with some of the stations. This approach, however, does not negate the import of a consistent 120-year record for a station, which, even though possibly “inaccurate,” still reveals the curving organic trend and would reflect abnormal warming if detected.
And by obsessing like a religion over the uncertainty, and finding the average, you play into the hands of Alarmists. They are on board with voiding USHCN and GHCN RAW and instead valorizing their models of temperature.
The USCRN-type MegaStations are useless in looking at the question “is there any abnormal warming” since there are only 140 of them, recording for a decade or so on average. They will be wonderful in 120 years, after two cycles of the sine curve have been observed by them in direct measurement. Please post back then.
Meanwhile, you evade and denounce the fact that 1200 long observational records ought to show, one by one, abnormal warming, if there is any. They do not, on individual examination or in the amalgam. That is the actual refutation of Alarm. Not your pointless obsession with wind screens and shields, and “averaging.”
wl-s, “You are basically riding the coattails of the scandal revealed by Anthony Watts.”
Wrong again, wl-s. Anthony’s work involved siting issues. My work is completely independent.
The instruments Hubbard and Lin used for their calibration experiments were perfectly sited and maintained. Wind speed and irradiance were the only major sources of measurement error.
Their work and mine are experimentally orthogonal to Anthony’s survey. Nothing in my results depends in any way on anything Anthony has done.
You clearly haven’t read their work. You haven’t read mine. You don’t understand either one. And you don’t know what you’re talking about.
The two are related: “Data from USHCN is botched because of XXX reasons.” You both say that. You’d know that both are subsumed under my claim, if you thought abstractly, in essentials. You’ve shown monumental failure to think that way. You can take comfort that your blindness to that modality has shot down my coattails claim … in your rigid gaze.
I don’t care about “your” work or the claim of a rock-solid, certain, immutable, all-encompassing uncertainty number Hubbard came up with.
The basis of my claim you have sidestepped, only attacking it from your fallacious worldview. That is weak. A good intellect in contention who cannot reflect back the logic of the opponent, from within the opponent’s worldview, prior to refuting it, signals an empty ammunition box.
For any others still reading:
USHCN RAW contains data from 1200 stations. Each recorded with sufficient consistency per tolerance, even if “off” by a constant error. Even if there is a tiny “ding” when they bought new gauges in 1962. Even if a new parking lot was built next to the station in 1988. Even if there was never a shield around the gauges, then there was consistently never a shield, and the consistency was not damaged. The power of so many direct recordings, relentlessly amassed, makes the ‘problems’ at the stations irrelevant. The organic flow of TMAX graphed over 40,000 recordings at the station through 120+ years is a rock-solid signal. We have no other rock, anywhere.
When observing the 1200 curves, one by one, a clear picture of nature’s impulse builds. There is a 60-year cycle to the sine wave. We are on the downslope of a wave now, nearly at the bottom. That is not abnormal. Neither was the upslope from the 1970s. However, Mann and Hansen rode the wave up, and shouted that it would shoot to the sky and burn up the earth. Not only did that not happen, TMAX at nearly all the stations began descending, and has continued to descend. Alarmists have stonewalled the downtrend.
That is what the hard data says.
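For readers who want to see what this station-by-station inspection might look like in practice, here is a minimal sketch, assuming synthetic annual TMAX values in place of a real USHCN station file; the sine_model function, the 60-year starting guess, and the 30-year slope window are illustrative assumptions, not anything taken from the comment.

```python
# A minimal sketch (not the commenter's code): fit a ~60-year sinusoid to one
# station's annual TMAX series and report the direction of the fitted curve
# over the last 30 years. Synthetic data stand in for a real station record.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
years = np.arange(1900, 2021)
# Synthetic "station": a 60-year cycle plus noise, purely for illustration.
tmax = 18.0 + 0.4 * np.sin(2 * np.pi * (years - 1935) / 60.0) \
       + rng.normal(0.0, 0.3, years.size)

def sine_model(t, mean, amp, period, phase):
    """Constant level plus one sinusoidal cycle."""
    return mean + amp * np.sin(2 * np.pi * (t - phase) / period)

p0 = [tmax.mean(), 0.5, 60.0, 1940.0]            # starting guesses
params, _ = curve_fit(sine_model, years, tmax, p0=p0)
print(f"fitted period ~ {params[2]:.1f} yr, amplitude ~ {params[1]:.2f} C")

# Sign of the fitted curve's slope over the most recent 30 years.
recent = np.arange(1991, 2021)
slope = np.polyfit(recent, sine_model(recent, *params), 1)[0]
print("recent direction:", "down" if slope < 0 else "up")
```

Run over each of the 1200 station files, a loop like this would produce the per-station curves the comment describes; whether those curves mean anything, given the measurement uncertainty, is what the replies below dispute.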
wl-s, “Each recorded with sufficient consistency per tolerance, even if “off” by a constant error.”
Not a constant error, wl-s. A deterministic and variable error. Produced by uncontrolled environmental conditions. Non-random error. Error unknown in magnitude and sign.
Error infesting the entire air temperature measurement record. Error you can’t bring yourself to investigate.
Your sine waves travel through the mean of physically ambiguous points.
You have hard numbers. To you they look like hard data. They’re not.
As to so-called climate change, my paper on the uncertainty in air temperature projections, recently updated to include the CMIP6 climate models, demonstrates without doubt that climate models have no predictive or diagnostic value and that none of those people — Mann, Hansen, the entire IPCC — know what they are talking about.
The entire CO2 emissions charade: disproved.
“The power of so many direct recordings, relentlessly amassed, makes the ‘problems’ at the stations irrelevant. The organic flow of TMAX graphed over 40,000 recordings at the station through 120+ years is a rock-solid signal. We have no other rock, anywhere.”
If the signal is within the uncertainty interval then no trend can be established because you never know what the “true value” of any specific data point is.
Pat Frank,
Okay, truce. You are defragging the Alarmists your way, through their methods.
I am approaching it by rejecting their methodology out of hand.
I still dislike that you need to make me wrong out of hand, instead of saying “oh, I see what you did there,” and then acknowledging it or refuting it on its own merits.
There can be an “and,” you know.
wl-s, agreed.
However, I do not feel a need to make you wrong. I do see what you did. If the measurement data were more accurate, your approach would be fine.
However, the measurement data are not accurate enough to support the method you’ve chosen.
It’s not that the data are wrong. It’s that we don’t know they are right. The difference is between error (which we do not know) and uncertainty (which we do know).
I’m a long time physical methods experimental chemist, wl-s. I struggle with systematic measurement error and data accuracy regularly. I work from that perspective. The air temperature record is not fit for climatology.
Pat Frank,
I accept your agreement of truce, even though you accompany it with a rejecting attack, and a denial that you need to make me wrong while doing it.
Here’s my comeback: you are completely wrong that your points (from your POV) invalidate my claim.
You also threw your credentials. “…long time physical methods experimental chemist …”
Here’s mine: for 35 years I have written software for, serviced, and validated a Quality Assurance department at a subcontractor in the aerospace industry. You know, where tolerances start at ten-thousandths?
I respect precision, quite possibly on a scale that would leave you breathless. I contend with the prospect of AOG over my shoulder every day. That stands for “Aircraft On Ground,” a red-flag designation for remedial action the seriousness of which would blow your hair back.
That does not change my assessment of what looking at 1200 sine curves of TMAX shows. The curving trends are not invalidated by the tolerance needed to answer “Does any abnormal warming appear on any of the waves?”
Do what you like, wl-s. The issue is not precision. The issue is accuracy.
I didn’t throw my credentials, as you say. I referenced my experience working with measurement data; making measurements and evaluating them. Accuracy is key. The air temperature record does not have it.
You write of “abnormal warming.” How do you, or anyone else, know what is abnormal, with respect to warming?
Are you familiar with Heinrich and Dansgaard-Oeschger events?
They occur periodically. Each of them is a rapid warming followed by slow cooling. There were about 25 D-O events during the last glacial period. The climate warmed several degrees C per century, i.e., much faster than now.
The Holocene has had about 8 warming events that were more rapid than now. The Medieval Warm Period was warmer than now — something we know from the position of the northern tree line then, versus the present.
So, what is abnormal? A warming rate several times what we presently observe has been normal in the recent geological past.
Given the known extreme fluctuations of the climate over the last 20,000 years, quite apart from the major glaciations, there’s no way normal or abnormal can be detected in a 150 year temperature record.
Given the polar tropical climate during the Tertiary and the snowball Earth prior to the Cambrian, does abnormal have any meaning at all with respect to air temperature?
Systematic error and systematic uncertainty are not the same thing. Error is not uncertainty and uncertainty is not error.
How do you know a station is spot on when the specified uncertainty of all stations is +/- 0.6degC?
Tim
a) as I have shown ten times here, I don’t care if the station is ‘spot on’;
b) requiring a station to be ‘spot on’ is unnecessary to answer whether the raw data show abnormal warming;
c) you don’t know if a specific station is ‘spot on’ or not, since you are imposing a generalized construction of error and uncertainty, which may or may not apply. Many of the 1200 stations are “spot on”; all are “on” sufficiently for the test;
d) you apparently are as unable or unwilling as Pat and Andy to reflect my point of view in order to refute it. That leaves it standing;
a) We know. You’ve made yourself very clear, wl-s.
b1) what is necessary is sufficient accuracy to detect the change of interest
b2) abnormal is undefined (so is normal)
c) “don’t know” = analytical uncertainty; Tim’s exact point.
d) we have examined your POV and found it wanting, with reasons given referenced to professional texts.
Pat Frank, why are you replying to a post I made to Tim? Moreover, you agreed to a truce. Moreover … well, right at this point, if I were to dishonorably spit on a truce like you are doing, I’d trash your abcd into the mud. I won’t do that, easy as it would be.
“I’d trash your abcd into the mud.”
You’d just disparage and reject it, wl-s. You’ve proved or disproved nothing. You’ve only asserted.
You broke the truce, wl-s, with your baseless accusation of willful denial made against me: “you apparently are as unable or unwilling as Pat and Andy to reflect my point of view in order to refute it.”
Rules of comment around here include that anyone can respond to any comment, no matter the addressee.
“TMAX at nearly all the stations began descending, and has continued to descend. Alarmists have stonewalled the downtrend.
That is what the hard data says.”
I just graphed my own personal Tmax data for the month of July in 2018, 2019, and 2020. It definitely shows a downtrend, and the year-to-year changes are larger than the +/- 0.5degC uncertainty interval of my weather station. The highest July Tmax was 100degF in 2018, 96.8degF in 2019, and 94.2degF in 2020. These are different enough that their uncertainty intervals do not overlap, so a trend is visible.
The problem comes in when you start averaging stations together. If you average my data with a similar station’s for two years then the uncertainty interval becomes +/- 0.7degC, and if you average three stations then the uncertainty becomes about +/- 0.9degC, which is about +/- 1.7degF. All of a sudden the uncertainty intervals overlap. The same thing applies if I just average my own three Tmax readings.
If the climate scientists would say x number of stations show an uptrend and y number of stations show a downtrend, and leave the averages out of it, they would be making a true statement. But to say that the average of 140 stations (with a combined uncertainty of +/- 7degC) shows a 0.01degC trend is just plain wrong. With a +/- 7degC uncertainty interval they would barely be able to discern a 10degC change.
If you do as you say and look at each station individually and give each one a plus or a minus trend and then count up the pluses and the minuses then we could trust what they say. When they try to average them then what they say simply can’t be trusted.
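A minimal sketch of the arithmetic described in the comment above, assuming the commenter’s root-sum-square convention for combining station uncertainty intervals (it reproduces the figures quoted, without endorsing the convention), followed by the station-by-station up/down tally he proposes; the trends list is placeholder data.

```python
# Sketch of the arithmetic described above: root-sum-square combination of
# per-station uncertainty intervals (the commenter's convention), and a simple
# up/down tally of per-station trends instead of an average.
import math

def rss(uncertainties):
    """Root-sum-square combination of individual uncertainty intervals."""
    return math.sqrt(sum(u * u for u in uncertainties))

print(rss([0.5, 0.5]))        # ~0.71 C for two +/-0.5 C stations
print(rss([0.5, 0.5, 0.5]))   # ~0.87 C for three (quoted above as about 0.9)
print(rss([0.6] * 140))       # ~7.1 C for 140 stations at +/-0.6 C each

# Count trend signs station by station (placeholder trend values, degC/decade)
# rather than averaging the stations together.
trends = [-0.02, 0.01, -0.03, -0.01, 0.02]
ups = sum(1 for t in trends if t > 0)
downs = sum(1 for t in trends if t < 0)
print(f"{ups} stations trend up, {downs} trend down")
```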
Yes. Stop averaging. Look at the station sine waves one by one.
Here’s a start. Hilariously, there is AN ANOMALY!
It’s Berkeley.
Wouldn’t you know!
Except for Berkeley[!] and one other station, the sine wave for the early-middle of the 20th century is at the top, and the direction of TMAX is descending over the past 30 years.
And … don’t average anything!
No uncertainty bars on your temperature plots, wl-s. LiG thermometers are graduated in 1 C or 1 F increments. Their field resolution is ±0.25 C or ±0.25 F.
The systematic error in each record is unknown, but is known to be present and likely of order ±0.5 C.
Those are Fahrenheit thermometers, so converting the estimated uncertainty due to systematic error to ±0.9 F, the total uncertainty in those temperatures is sqrt[(0.25)^2+(0.9)^2] = ±0.93 F.
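As a worked check of that quadrature sum, using only the figures given in the comment (±0.25 F field resolution and ±0.5 C, i.e. about ±0.9 F, of systematic uncertainty):

```python
# Quadrature combination of the two uncertainty components quoted above.
import math

resolution_F = 0.25                 # LiG field resolution, +/-0.25 F
systematic_F = 0.5 * 9.0 / 5.0      # +/-0.5 C converted to Fahrenheit = 0.9 F
total_F = math.sqrt(resolution_F**2 + systematic_F**2)
print(round(total_F, 2))            # -> 0.93
```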
Some of the series show excursions larger than that width. But interpretive caution is necessary.
I’m not trying to give you unnecessary trouble, wl-s. It’s just that doing science demands attention to exact detail. It can’t be escaped.