Ever since the beginning of the global warming debate, now labeled “climate change,” there has been one immutable yet little-known fact: All of the temperature data stations used to make determinations about the state of Earth’s temperature are controlled by governments.
In June 1988, when Dr. James Hansen, then-director of NASA’s Institute for Space Studies in Manhattan, went before the Senate Energy and Natural Resources Committee to say that “global warming has begun,” he was using temperature data collected by governments worldwide from a weather station network that was never intended to detect a “global warming signal.”
In fact, Dr. Hansen had to develop novel statistical techniques to tease that global warming signal out of the data. The problem is, these weather station networks were never designed to detect such a signal in the first place. They were actually designed for weather forecast verification, to determine if forecasts issued by agencies such as the U.S. Weather Bureau (now the National Weather Service) were accurate. If you make temperature and precipitation forecasts for a location, and there is no feedback of the actual temperatures reached and the rainfall recorded, then it is impossible to improve the skill of forecasting.
The original network of weather stations, called the Cooperative Observer Program (COOP), was established in 1891 to formalize an ad hoc weather observation network operated by the U.S. Army Signal Service since 1873. It was only later that the COOP network began to be used for climate monitoring, because climate observations require at least 30 years of data from weather stations before a baseline “normal climate” for a location can be established. Once the Cooperative Observer Program was established in the United States, other countries soon followed, duplicating the U.S. network’s design on a global scale.
However, the COOP network has several serious problems for use in detecting climate change on a national and global scale. The U.S. temperature readings made by volunteer COOP observers are rounded to the nearest whole degree Fahrenheit when recording the data on a paper form called a B-91. When comparing such coarsely recorded nearest whole degree temperature data to the claims of global warming, which are said to be about 1.8°F (1.0°C) since the late 1800s, the obvious questions of accuracy and precision of the COOP temperature data arise.
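The size of that recording round-off is easy to quantify. Below is a minimal Python sketch (illustrative only, not part of any COOP documentation; the sample readings are made up) comparing the worst-case ±0.5°F error of a whole-degree reading with the roughly 1.8°F change cited above. Whether and how such errors cancel when many readings are averaged is a separate question.

```python
# Minimal sketch: size of the round-off introduced when a reading is recorded
# to the nearest whole degree Fahrenheit, compared with the cited 1.8 F change.
def rounding_error_f(true_temp_f: float) -> float:
    """Error introduced by recording a temperature to the nearest whole degree F."""
    return round(true_temp_f) - true_temp_f

claimed_warming_f = 1.8
worst_case_f = 0.5  # a single reading can be off by up to half a degree either way
print(f"worst-case round-off: +/-{worst_case_f} F "
      f"({worst_case_f / claimed_warming_f:.0%} of the cited {claimed_warming_f} F change)")

for t in (71.2, 71.4, 71.7):  # made-up readings, purely illustrative
    print(f"true {t} F -> recorded {round(t)} F, error {rounding_error_f(t):+.1f} F")
```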
Even more concerning is that more than 90% of the COOP stations in the United States used to record climate data have been found to be corrupted by local urbanization or contamination effects over time, with hundreds of COOP stations severely compromised by being placed next to air conditioner exhausts, jet exhaust at airports, and the concrete, asphalt, and buildings that have sprung up near stations. All of these heat sources and heat sinks do one thing and one thing only: bias the recorded temperatures upwards.
The crux of the problem is this: the NWS publication “Requirements and Standards for Climate Observations” instructs that temperature data instruments must be “over level terrain (earth or sod) typical of the area around the station and at least 100 feet from any extensive concrete or paved surface,” and that “all attempts will be made to avoid areas where rough terrain or air drainage are proven to result in non-representative temperature data.” However, as detailed in this report, these instructions are regularly violated, not just in the U.S. network, but also in the Global Historical Climate Network.
This isn’t just a U.S. problem; it is a global problem. Examples exist of similarly compromised stations throughout the world, including Italy, the United Kingdom, China, Africa, and Australia.
With such broad corruption of the measurement environment, “The temperature records cannot be relied on as indicators of global change,” said John Christy, professor of atmospheric science at the University of Alabama in Huntsville, a former lead author on the Intergovernmental Panel on Climate Change.
The fact is that all global temperature data are recorded and compiled by government agencies, and the data are questionable due to corruption issues, rounding, and other adjustments that are applied to the data. In essence, the reported global surface temperatures are a mishmash of rounded, adjusted, and compromised readings rather than an accurate representation of Earth’s temperature. While scholars may claim the data are accurate, any layman can surmise, given all the problems that have been pointed out, that they cannot possibly be accurate, only an estimate with high uncertainty.
Only one global temperature dataset independent of government compilation and reporting methods exists, and that is the satellite-derived global temperature data from the University of Alabama in Huntsville (UAH), which are curated by Dr. John Christy and Dr. Roy Spencer.
But, even the UAH satellite dataset doesn’t give a full and accurate picture of global surface temperature because of limitations of the satellite system. At present, the system measures atmospheric temperature of the lower troposphere at about 26,000 feet (8 kilometers) altitude.
To date, there is only one network of climate-capable weather stations that is accurate enough to fully detect a climate change signal. This is the U.S. Climate Reference Network (USCRN), a state-of-the-art automated system designed specifically to accurately measure climate trends at the surface. Since going into operation in 2005, it has not found any significant warming trend in the United States that can be attributed to climate change.
Unfortunately, the data from the USCRN network are buried by the U.S. government, and are not publicly reported in monthly or yearly global climate reports. It has also not been deployed worldwide.
Given the government monopoly on the use of corrupted temperature data, the questionable accuracy, and a clear reluctance to make the highly accurate temperature data from the USCRN available to the public, it is time for a truly independent global temperature record to be produced.
This isn’t “rocket science.” Given that governments are spending billions of taxpayer dollars on climate mitigation programs, doesn’t it make sense to get the most important thing – the actual temperature – as accurate as possible? Given today’s technology, shouldn’t we confirm and validate the need for that spending with a network and data collection system that does not rely on a 100-year-old design? After all, if we can send a man to the moon, surely we can measure the temperature of our own planet accurately.
Anthony Watts (awatts@heartland.org) is a senior fellow for environment and climate at The Heartland Institute.
This post originally ran in Issues & Insights.
I just did an article on how climate science (erroneously) arrived at one of its most basic parameters. I think it is fun and revealing.
https://greenhousedefect.com/basic-greenhouse-defects/the-anatomy-of-a-climate-science-disaster
That earth energy budget has issues.
Incoming 341 W/m² (correct)
Reflected 101.9 W/m² (solar heat in stratosphere) correct
Clouds and atmosphere 79 W/m² (49 W/m² compression heating + 30 W/m² clouds)
Reflected by surface 23 W/m² (351 (summer) − 328 (winter) = 23 W/m²)
161 W/m² absorbed by surface (161 × 2 = 322 W/m² (winter))
17 W/m² thermals
Evaporation 80 W/m²
80 + 17 + 23 + 40 = 160 W/m²; + 30 W/m² + 49 W/m² = 239 W/m² outgoing (160 W/m² on the low side).
175 W/m² summer (350 W/m²), 164 W/m² × 2 = 328 W/m² (winter).
396 W/m² is incorrect, as it is based on flat-earth measurements.
396/2 = 198; /4 = 49 W/m² compression heating.
333 W/m² − 198 (compression heating) + 135 (CO2) (incorrect), as clouds that reach the stratosphere near the tropics remove 30 W/m².
Compression heating: 1.4 × 1.27 kg × (334.4 m/s)² (111,823) = 198,957; /1007 ≈ 198 W/m².
“Compression heating: 1.4 × 1.27 kg × (334.4 m/s)² (111,823) = 198,957; /1007 ≈ 198 W/m².”
Heating occurs during compression . After compression is complete , heating stops .
Compression of the atmosphere stopped ages ago …..
😉
No, that is not the problem..
It’s fairly close though, since the energy per fixed measured volume at a specific altitude does not change, because the controlling factor is gravity regulating pressure. There it is.
slindsayyulegmailcom: “That earth energy budget has issues.”
Hmm…
slindsayyulegmailcom: “396 W/m² is incorrect, as it is based on flat-earth measurements.”
Well there’s your problem.
wrong
ES,
in your “fun and revealing” analysis you made the assumption that the ocean exhibits Fresnel reflection. But this is not the case, mostly because waves, ripples, and foam tilt the surface away from horizontal.
As evidence, I point out that the ocean horizon is a dark line in the far distance, completely unlike a mirror reflector as the Fresnel equation predicts for a perfectly flat surface. An internet search will provide a few papers with correct test results, and unfortunately many who incorrectly assume the Fresnel curve for the purposes they are investigating. But you are investigating the emissivity of the planet, so you should use the “right stuff”.
BTW, I do appreciate the work you have put into your site, sharing your calcs with others. I have been too lazy to do one out of fear of critics, I suppose.
You confuse two things.
One is doing all this at infinite resolution, considering cold-skin effects, waves and wind speeds on a global scale, different ocean temperatures which will affect emissivities a little bit (unless below freezing), salinity, measurements from all possible angles, and so on. If you can do all that, go ahead.
The other thing is completely messing up at the basics. That is what the article is about.
The point seems to be that there *is* some level of reflection not accounted for. How much there is can be debated but it shouldn’t be ignored.
fear of being wrong
Your “conclusion” is almost entirely a character attack on modern climate science and also one specific scientist, rather than an actual conclusion.
Yeah, I was not quite happy with it. Maybe I should edit it. Yet it is not an attack on Gavin Schmidt. If it is an attack, it is on Wilber et al., because they deserve it. Also, Trenberth et al. is a joke. They just baselessly assume a surface emissivity of one, double down on it, and then (ab)use Wilber et al. as a reference. And while this completely screws up the whole “science,” Gavin Schmidt sits on top of it and is like “hey, where is the problem”..
Of course this is comedy!
I recommend:
Write a plain-English summary and put that paragraph at the start of the article. Delete the last paragraph. Focus on your science, or focus on your politics, but not both in the same article.
This climate change scaremongering junk science is not comedy to me.
It is a tragedy that the best climate in 5000 years — now — has been spun as the beginning of an imaginary climate emergency since the early 1970s, and many people believe that.
It’s easy to criticize the climate predictions because enough climate science knowledge does not exist (not even close) to have any hope of making a correct long term climate prediction, except with a lucky guess. And that is assuming it will ever be possible to predict the climate in 50 to 100 years — I have no logical reason to believe that will ever be possible.
Climate scaremongering is politics, not science.
We climate realists have been arguing that the consensus climate science is wrong since the late 1970s, but that strategy has no effect.
The (junk) science of the IPCC is just right to scare people and allow leftist governments to control them. The IPCC does not care if their climate predictions are right. They only care that their predictions scare people.
Many people would make a distinction between the physics of WG1 and what the rest of the IPCC is doing. Then there is the politics, the activists and so on. I have no interest in discussing these because there is no point and it is boring.
The issues of WG1 however are interesting and it is right at the core of things. It is the rabbit hole worth exploring, but no one, or barely anyone is doing it. At least not properly.
Maybe too many on the critical side believe this would be a “pet project” 😉
long term climate prediction,
1. Air temperature will vary with latitude; it will be cold at the poles.
We climate realists have been arguing that the consensus climate science is wrong since the late 1970s, but that strategy has no effect.
The (junk) science of the IPCC is just right to scare people and allow leftist governments to control them. The IPCC does not care if their climate predictions are right. They only care that their predictions scare people.
climate realists are a recent fabrication, they have no agreed upon positions
the strategy of declaring others wrong never has any lasting effect
“the strategy of declaring others wrong never has any lasting effect” Then why do you constantly do it?
to illustrate the point. nobody is convinced by assumptive closes
They just baselessly assume a surface emissivity of one
use the figures from Spencer, you know the ones used in UAH
Don’t understand why you got down checks. I read your article, at least most of it, and I could find nothing wrong with it. Seems like you stepped on some climate alarmist’s toes!
It’s a total trainwreck innit.
Just very quickly and looking at your ‘fool-you’ emissivities table, I took myself off and asked ’emissivity of sand’
Sand is a major part of what makes a desert, and deserts cover about 10% of Earth’s total surface.
Google told me this: “Firstly, emissivity of the samples increases gradually with temperature. For biogenic crust, the emissivity increases from 0.9660 at 25°C to 0.9729 at 45°C. For sand it changes from 0.9388 to 0.9552.”
In the ‘fool-you’ table I don’t see any figures of less than 0.97.
So straight off we see why deserts can be Hot Places
wait until they find out that we can’t actually see more than half the photons that are entering the atmosphere and cannot detect a little more than 1/3 of them…
The ‘surface’ emissivity is not a primary issue.
The effective radiating surface observed from space is primarily composed of the cloud layer, i.e. the condensed full-spectrum radiating surfaces of liquid and ice. The equilibrium cloud cover comprises about two-thirds of the effective radiating surface. The terrestrial/oceanic surface below the cloud deck does not contribute to radiative balance. The ‘surface’ emissivity issue, if there is one, can only be a minor issue. If one is concerned with emissivity, it is the two-thirds of the radiating surface composed of clouds where one must focus.
Such diagrams depicting only 30 Wm-2 full spectrum IR radiating from the cloud deck to space appear to be erroneous.
The article already addressed your fallacy.
The article in no way addresses the issue, I’m afraid.
The albedo is a consequence of the cloud fraction, of course. The cloud fraction is a consequence of the radiative equilibrium process in the dynamic condensing atmosphere.
The albedo is not an external ad-hoc parameter. it is coupled to the thermodynamic equilibrium process.
The all sky spectral emissivity observed from space is about 0.7, representing an average effective radiating surface temperature of 278K.
This value is perfectly consistent with the astronomical factors of Solar Luminosity and distance from sun, irrespective of albedo.
Ok, then once again..
Ah, you are constraining yourself to the Ramanathan style greenhouse definition.
Which is obviously absurd, but it may help you to communicate with Dr Schmidt. How is that going, anyway?
In reality, the optical depth in the gap between the Earth and Cloud is irrelevant.
So it’s all a bit silly, but nibbling around with surface emissivity is an interesting curiosity.
JCM,
Only weighing in here to say that ES’ point 1 is the IPCC’s greenhouse definition, as well. From AR6, that would be 398 – 239 = 159 W/m^2. (Grain of salt optional).
Is there any other definition?
The definition would be the surface temperature with and without green house gasses. The question would be how to handle water vapor.
so the GHE = 390 W/m² − 240 W/m² = 150 W/m², and almost constant
I’d say the Ramanathan ratio of surface flux and OLR might appear to be constant, not absolutes.
But it’s ridiculous. In the diagram 40 goes straight out the window, so there is nothing greenhousy about that stuff.
Now it’s only 350-240 (+ and – Schaffer’s emissivity nibbling).
If you are looking at radiative balances you are not looking at something that controls surface temperature.
Top image in attached depicts the ToA radiation imbalance over the past 16 years. The middle image is the temperature change globally over that period and the bottom chart plots the temperature change against energy absorbed (positive) or released (negative). Temperature change is uncorrelated to the energy uptake anywhere on the globe.
There is no reason why incoming radiation energy and outgoing radiation energy have to be in balance. In fact modern human civilisation is built on the energy that biological processes have accumulated over the millennia. Hansen’s missing heat could very well be on the ocean floor, peat bogs and coral reefs.
The only way to understand the surface temperature is to understand how energy is transferred, dissipated and stored in the Earth system. These processes have far more impact on surface temperature than anything to do with radiation. Earth is not static like a black-body globe, and its orientation to its primary energy source is constantly changing, as are its internal energy transfer, energy dissipation and energy storage.
RickWill is correct.
everyone, and I mean everyone, misses that such energy diagrams show the energy imbalance exclusively exists in the surface budget. There is zero zilch nada imbalance in the atmosphere.
Net 161 SW – 396 upward LW + 333 downward LW = 98
The balancing flux of H + LE = 97.
There is a net flux into the surface of +1.
The atmosphere shows no positive imbalance, whatsoever.
It’s so ridiculous it’s comical.
No anomalous atmospheric “heat trapping”.
No mid tropospheric hot spot.
The “heat trapping” is happening below the terrestrial/ocean surface. Full stop. Net in > Net out of the surface.
Any wonder why the officially presented TMT observations are not matching model output?
DUH. It is because that is NOT where the “heat trapping” is occurring.
The atmosphere is perfectly balanced:
239 SW – 169 “atmospheric emission” – 30 “cloud emission” – 40 “window flux” = 0
Helllooooo??? anybody home????
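For readers checking the arithmetic in the comment above, it can be verified with a few lines of Python. This is only a sketch using the W/m² values quoted in this thread (Trenberth-style diagram numbers), not a new dataset:

```python
# Sketch checking the budget arithmetic quoted above, using only the W/m^2
# values cited in this thread (Trenberth-style diagram numbers).
surface_sw_absorbed = 161
surface_lw_up = 396
surface_lw_down = 333
thermals = 17        # sensible heat flux H
evaporation = 80     # latent heat flux LE

surface_radiative_net = surface_sw_absorbed - surface_lw_up + surface_lw_down
print("surface radiative net:", surface_radiative_net)                             # 98
print("balancing flux H + LE:", thermals + evaporation)                            # 97
print("net flux into surface:", surface_radiative_net - (thermals + evaporation))  # +1

# Atmosphere side, as quoted: absorbed SW vs the outgoing LW components.
toa_sw_absorbed = 239
atmospheric_emission = 169
cloud_emission = 30
window_flux = 40
print("atmospheric imbalance:",
      toa_sw_absorbed - atmospheric_emission - cloud_emission - window_flux)       # 0
```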
NASA puts out the same diagram with numbers that balance everywhere. That chart at the top is wrong.
the chart you’re referencing is only meant to depict an equilibrium state.
It’s balanced everywhere in the sense that it complies with the 1LOT. However, it is not balanced at TOA since 341.3 W/m2 – (101.9 W/m2 + 238.5 W/m2) = 0.9 W/m2. But the 0.9 W/m2 shows up at the surface as a retention component.
Mr. Schaffer, you seem to CONSTANTLY spam threads here with your off-topic pet projects. Stop it, or I’ll put you on moderation. You’ve been warned more than once. No further warnings will be issued.
Anthony
Sorry for that, I will try to not do it again. Although I am not aware of any warnings.
Thank you. Comments were left, but they may not have been seen. It’s fine to post these things on topic related threads.
Dear Anthony,
I thought this post was about temperature measurements not the surface energy balance, which for the most part is an artificial construct made up of lots of moving parts.
I believe that in relation to temperature measurements, that rather than building a massive new database, there is a need for a set of protocols that allow any dataset to be investigated from first principles. This has been the focus of my work at http://www.bomwatch.com.au.
Although an increasing number of individual investigations have been published and more are in the pipeline, over the last decade I have looked closely at some 500 long- and medium-term datasets from all climate regions in Australia. With the exception of some sites where Stevenson screens were over-exposed to gale-force winds, sea spray, fog, driving rain, etc. that caused thermometers to be wet much of the time, methods were found to be robust and replicable.
The methodology has been outlined as case studies in several reports (e.g. https://www.bomwatch.com.au/climate-data/climate-of-the-great-barrier-reef-queensland-climate-change-at-gladstone-a-case-study/), and expanded-on in my latest series on homogenisation of Australian temperature records (https://www.bomwatch.com.au/data-homogenisation/).
It would be interesting to test the same protocols using maximum temperature and rainfall data from sites in the US, Europe and other places. If you or anyone else are interested I can be contacted at scientist@bomwatch.com.au.
Yours sincerely,
Dr Bill Johnston
It may be off topic, but the article and subsequent discussion is an interesting one and not discussed nearly enough.
What is of concern is what Mr Greene said “We climate realists have been arguing that the consensus climate science is wrong since the late 1970s, but that strategy has no effect”
I don’t agree it has no effect; it is only by questioning the basic science that AGW can be held to account.
Like many, I see that something is OFF with the whole AGW science. It clearly goes back to its inception, but I can’t get to the bottom of it as we only ever see fleeting debates before they are buried in the plethora of new articles.
The debate around the so-called consensus should be a highlighted constant. Like this article or Mr Schaffer’s contribution, such discussions might be best promoted by being regularly served up as their own category each month. A bit like how we look forward to Lord Monckton’s monthly update about the pause.
I don’t agree it has no effect; it is only by questioning the basic science that AGW can be held to account.
40 years of CAGW scaremongering
40 years of increasingly hysterical climate scaremongering
So what effect did a science debate actually have?
Not that there were two sides debating much.
The basic science of AGW is fine except for false claims to know the exact effects of CO2 in the atmosphere, beyond the likely truth of “small and harmless.” I know some commenters here reject almost all consensus climate science. I reject the CAGW scaremongering (90%) but accept the AGW science (10%) as being in the ballpark of reality.
Of course AGW is more than just CO2 emissions, such as:
Air pollution
All greenhouse gases
Water vapor positive feedback
Albedo changes from land use changes, UHI
and dark soot (pollution) deposited on Arctic ice and snow.
Errors in historical temperature data that may overstate prior warming, deliberately or unintentionally
Unknown manmade causes of climate change.
There is a difference between unknown and ignored. We know we are heating the planet with aviation induced cirrus. It has been around in the literature for a while.
I seriously doubt the current administration would have any interest.
If we apply Biden’s Build Back Better philosophy to the weather station network, it will get worse (I call it Bidet’s Build Back Baloney).
The House of Representatives might be interested.
“Since going into operation in 2005, it has not found any significant warming trend in the United States that can be attributed to climate change.”
The NOAA does make USCRN available – WUWT displays the NOAA graph on the front page. But it gives just the same results as the larger group of ClimDiv stations:
Yes.
Both show no warming.
I make the warming rate of USCRN to be 0.31°C / decade.
ClimDiv over the same period shows 0.23°C / decade.
Of course it isn’t significant because it’s only 17 years of variable local data, but that does not mean it shows no warming. It amounts to 0.96°F over the last 17 years.
Look at Nick’s USCRN graph. How in frick do you come up with +0.31 per decade instead of −0.05 per decade? Or how about the meandering data is not predictive AT ALL, so is insignificantly different from ZERO temperature change?
I feed the data into an lm function and it tells me. In case you hadn’t noticed, it’s the same data in the USCRN graph. If you want an indication of why there is a rise, try comparing the first half of the data with the second.
Average anomaly :
2005-2013 = –0.12°C
2014-2022 = +0.37°C
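For anyone wanting to check this kind of number, the calculation is an ordinary least-squares fit, equivalent to R’s lm with a single predictor. A minimal Python sketch follows; the four anomaly values are placeholders only, since the real monthly USCRN/nClimDiv series would have to be downloaded from NOAA:

```python
import numpy as np

def trend_per_decade(decimal_years, anomalies_c):
    """OLS slope of anomaly (deg C) versus time, expressed per decade."""
    slope_per_year, _intercept = np.polyfit(decimal_years, anomalies_c, 1)
    return slope_per_year * 10.0

# Placeholder values only, to show the mechanics; not the USCRN data.
years = np.array([2005.0, 2010.0, 2015.0, 2020.0])
anoms = np.array([-0.2, 0.0, 0.3, 0.4])
print(f"trend: {trend_per_decade(years, anoms):.2f} C/decade")

# The first-half vs second-half comparison described above.
half = len(anoms) // 2
print("first-half mean:", anoms[:half].mean(), "second-half mean:", anoms[half:].mean())
```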
I can corroborate Bellman’s calculations. I get the exact same result. And when using the monthly data I get +0.26 C/decade and +0.34 C/decade for nClimDiv and USCRN respectively.
I’d say both 0.26 and 0.34 round to 0.3, which makes both trends similar, in my interpretation. Thanks for the specific trends.
An absolutely perfect example of why the NIST TN1900 EX. 2 methods should be followed.
Take a close look at all these graphs. What do you think the variance in the data distribution of the anomalies should be? The anomalies alone have a large variance at the scale used. That variance, along with the average (mean), describes the range of values the anomalies could take.
Three other issues.
1) These anomaly values are computed from the difference of two random variables. The variance of the difference in those random variables is the sum of each of the variances (see the sketch after this list). It will be larger than the variance in the anomaly distribution simply because of the scaling. However, the anomaly should carry the variation of the data distributions used to calculate it.
2) Why is it everyone in climate science ignores basic statistical and scientific practice? An average (mean) should never be quoted without also quoting the variance or standard deviation of the data distribution used to calculate the mean!
3) The shaded error distribution in Bellman’s plot is only for the errors in the linear regression line. It tells you how accurate the line itself is in portraying the data. It IS NOT the variance of the distribution which is what affects the average value.
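On point 1 above: for independent random variables, the variance of a difference is indeed the sum of the individual variances, which is easy to confirm numerically. A small Python sketch with synthetic, purely illustrative numbers:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
# Synthetic stand-ins for a monthly value and its baseline, with known spreads.
monthly = rng.normal(loc=15.0, scale=2.0, size=n)    # variance 4
baseline = rng.normal(loc=14.5, scale=1.5, size=n)   # variance 2.25
anomaly = monthly - baseline                         # difference of two random variables

print("var(monthly) + var(baseline):", monthly.var() + baseline.var())  # ~6.25
print("var(anomaly):                ", anomaly.var())                   # ~6.25 when independent
```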
” The anomalies alone have a large variance at the scale used. ”
That’s why the trend is not significant. That’s why the confidence interval is so large. The uncertainty of the trend depends on the size of the variation, just as the uncertainty of an average depends on the variation in the data.
It’s the same reason why there is so much uncertainty in the pause. If you think of each monthly value as a random variable around the trend, there is a huge range of plausible trends over the last 8 years or so.
“These anomaly values are computed from the difference of two random variables”
Which as I keep trying to explain to you is irrelevant when talking about the rate of change. The base value is fixed. It doesn’t matter how much of an error it has, that error will be the same for every data point. It will not affect the trend.
“An average (mean) should never be quoted without also quoting the variance or standard deviation of the data distribution used to calculate the mean!”
Tell that to Monckton and Spencer. But don’t confuse the standard deviation of the data with the uncertainty of the mean.
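The point about the base value being fixed is easy to demonstrate: adding the same constant (a baseline error) to every anomaly shifts the intercept but leaves the fitted slope untouched. A minimal Python sketch with synthetic data:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(200) / 12.0                        # 200 months, in years
y = 0.02 * t + rng.normal(0, 0.3, size=t.size)   # synthetic anomalies, true slope 0.02 C/yr

slope_original, _ = np.polyfit(t, y, 1)
slope_offset, _ = np.polyfit(t, y + 0.7, 1)      # same series with a constant baseline error

print(f"slope without offset: {slope_original:.4f} C/yr")
print(f"slope with +0.7 C offset: {slope_offset:.4f} C/yr")  # identical slope
```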
JG said: “An absolutely perfect example of why the NIST TN1900 EX. 2 methods should be followed.”
Are you saying that you now accept type A evaluations and the 1/sqrt(N) rule?
Have you *read* 1900? Where does the 1/sqrt(N) come from? From taking an average? Or from finding the standard deviation of the sample means?
TG said: “Have you *read* 1900?”
I’m the one that notified you about it.
TG said: “Where does the 1/sqrt(N) come from?”
It tells you exactly where it comes from.
“I’m the one that notified you about it.”
That doesn’t mean you read it and understood it. Please list out *ALL* of the assumptions Possolo used in analyzing the maximum temperature at one location.
My guess is that you can’t. Or won’t!
“Therefore, the standard uncertainty associated with the average is u(τ) = s/√m = 0.872 °C.”
And EXACTLY what is that parameter? How closely the average you calculated approaches the population mean? Or is it the measurement uncertainty associated with the stated values?
My guess is that you don’t have a clue and don’t care to have a clue.
TG said: “Please list out *ALL* of the assumptions Possolo used in analyzing the maximum temperature at one location.”
Possolo says ti = T + Ei where ti is a measurement, T is the measurand, and Ei is an independent random variable with a mean of 0 and gaussian.
TG said: “And EXACTLY what is that parameter?”
Possolo says that it is the uncertainty of the measurand (the monthly mean temperature) and includes the uncertainty contributed by 1) natural variability 2) time of observation variability and 3) the components associated with the calibration and reading of the instrument.
What is interesting about this example is that the model defines a single measurand T being the monthly average maximum temperature and uses ti measurements to quantify it. That is interesting because the individual measurements are clearly of different things due to the differing environmental conditions from day-to-day, but yet are still treated in the model as if they were for the single measurand T.
This is an unequivocal refutation of 1) the argument that the GUM only works when the measurements are of the same thing and 2) that you cannot use the 1/sqrt(N) rule on measurements of different things. It is unequivocal because each of the ti are of different things and yet 1/sqrt(N) was still used.
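For readers following along, the arithmetic being argued over is short. Below is a sketch of the type A evaluation as described in this thread for NIST TN1900 Example 2: average the m daily Tmax readings, take the sample standard deviation s, use u = s/√m, and expand with a Student’s t factor on m − 1 degrees of freedom. The temperature values here are made-up placeholders, not the 22 readings from the NIST example, so the output will not reproduce the 0.872 °C quoted above.

```python
import numpy as np
from scipy import stats

# Placeholder daily Tmax readings (deg C) -- NOT the NIST TN1900 E2 data.
tmax = np.array([22.1, 24.3, 25.0, 21.8, 26.4, 23.9, 25.7, 24.8,
                 27.1, 23.2, 22.9, 26.0, 24.4, 25.3, 23.7, 26.8,
                 24.1, 25.9, 22.6, 27.4, 24.9, 25.5])
m = tmax.size

tau_hat = tmax.mean()             # estimate of the monthly mean maximum temperature
s = tmax.std(ddof=1)              # sample standard deviation of the daily readings
u = s / np.sqrt(m)                # standard uncertainty of the average (type A)
k = stats.t.ppf(0.975, df=m - 1)  # coverage factor for ~95 % (Student's t, m-1 dof)

print(f"tau_hat = {tau_hat:.2f} C, s = {s:.2f} C, u = {u:.3f} C, "
      f"95 % interval = {tau_hat:.2f} +/- {k * u:.2f} C")
```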
Why do you *always* leave out the context?
“This so-called measurement error model (Freedman et al., 2007) may be specialized further by assuming that E1, . . . , Em are modeled as independent random variables with the same Gaussian distribution with mean 0 and standard deviation σ. In these circumstances, the {ti} will be like a sample from a Gaussian distribution with mean τ and standard deviation σ (both unknown).”
“The {Ei} capture three sources of uncertainty: natural variability of temperature from day to day, variability attributable to differences in the time of day when the thermometer was read, and the components of uncertainty associated with the calibration of the thermometer and with reading the scale inscribed on the thermometer. Assuming that the calibration uncertainty is negligible by comparison with the other uncertainty components, and that no other significant sources of uncertainty are in play, then the common end-point of several alternative analyses is a scaled and shifted Student’s t distribution as full characterization of the uncertainty associated with τ.”
The basic assumption is to ignore measurement uncertainty and use the variation in the stated values as a measure of the uncertainty of the average.
It’s what you and the rest of the climate alarmists ALWAYS do. Assume the measurement uncertainty is random, Gaussian, and cancels – or that it is negligible.
Neither are appropriate assumptions when you are combining measurements from different devices measuring different things!
“That is interesting because the individual measurements are clearly of different things due to the differing environmental conditions from day-to-day, but yet are still treated in the model as if they were for the single measurand T.”
Exactly! Possolo is assuming that the 22 measurements are of the same thing – Tmax. It makes things so much easier if you do that, just like assuming all measurement uncertainty is negligible. *NEITHER* are valid assumptions for calculating a global average temperature!
Do you accept that you can average different temperatures or not?
Do you accept that the GUM can be used on measurements of different things or not?
Do you accept that the uncertainty of the average of different temperatures scales by 1/sqrt(N) or not?
Do you accept NIST TN1900 or not?
Temperature is an intrinsic property and cannot be averaged. Temperature measurements are measurements of different things. Temperature measurements have uncertainty equivalent to variance in a random variable, and therefore the variance must be propagated onto any average. So no, temperature measurements simply can’t be averaged. The average tells you nothing about the real world. Just as the average value of a 4″ board and a 6″ board is 5″, and that 5″ doesn’t describe anything that exists in the real world, neither does an averaged global temperature.
Even Possolo had to make assumptions that no uncertainty exists and that successive measurements of different things can be equated to successive measurements of the same thing in order to come up with an average and a standard deviation.
The uncertainty of the average does *NOT* scale by 1/sqrt(N). You keep confusing how close you can get to the population average as being the uncertainty of that population average.
You can’t even make sense of the fact that your SEM can be ZERO while the uncertainty of the population mean can be huge!
If the mean of the sample means is equal to the population mean then your SEM is zero. But that population mean can be far from the true value.
Why is that so hard for you to understand? You can only know the accuracy of the population mean by propagating the uncertainties of the distribution elements onto the mean. You want to ignore that simple fact of metrology so you can assume the SEM is the uncertainty of the mean. It simply isn’t.
Do you deny that the sample mean can equal the population mean? Do you deny that in that case there is no uncertainty associated with your calculated sample mean? Do you deny that the population mean can be inaccurate?
TG said: “Temperature is an intrinsic property and cannot be averaged.”
Possolo computed an average temperature in NIST TN1900 E2.
TG said: “Even Possolo had to make assumptions that no uncertainty exists”
Possolo calculated the standard uncertainty as u(τ) = s/√m = 0.872 °C in NIST TN1900 E2.
TG said: “The uncertainty of the average does *NOT* scale by 1/sqrt(N).”
Possolo used a type A evaluation of uncertainty requiring a division by sqrt(N) in NIST TN1900 E2.
TG said: “You can’t even make sense of the fact that your SEM can be ZERO while the uncertainty of the population mean can be huge!”
It’s not my calculation. It was from Possolo in NIST TN1900 E2.
Now stop deflecting and diverting…
Do you accept that you can average different temperatures or not?
Do you accept that the GUM can be used on measurements of different things or not?
Do you accept that the uncertainty of the average of different temperatures scales by 1/sqrt(N) or not?
Do you accept NIST TN1900 or not?
In other words you either don’t understand or don’t want to understand what Possolo did in 1900.
Nor do you understand the GUM. You can’t even get the definition of a “measurand” correct. A measurand is *NOT* a collection of different things. It is ONE thing.
TG said: “In other words you either don’t understand or don’t want to understand what Possolo did in 1900.”
I understand that…
1) He computed an average temperature.
2) He assessed the uncertainty of that average.
3) He did so despite the measurements being of different things.
4) He did so by applying 1/sqrt(N).
TG said: “Nor do you understand the GUM.”
It doesn’t matter if I understand the GUM or not. In this context it only matters that Possolo understand it. He’s the one that formulated example E2.
I will say that I believe I understand it enough that I do not challenge NIST TN1900 E2. It looks like Possolo followed the procedure correctly.
TG said: “You can’t even get the definition of a “measurand” correct.”
I think a measurand is a “particular quantity subject to measurement” (JCGM 100:2008 B.2.9). I take it you disagree?
TG said: “A measurand is *NOT* a collection of different things. It is ONE thing.”
I’m glad to hear that you at least accept this. Are you willing to accept NIST TN1900 E2 and all that goes along with it as well?
You miss the whole point with your bias. You appear afraid that something is going to upset the GAT applecart.
“Do you accept that the uncertainty of the average of different temperatures scales by 1/sqrt(N) or not?”
Only if the stipulations made in TN1900 are met.
– same shelter
– same month
– same Tmax
If you want to combine with other stations then you must generate your own assertions and justifications.
The TN has several criteria such as independence (not correlated) and others. Here are some examples.
“The assumption of independence may obviously be questioned, but with such scant data it is difficult to evaluate its adequacy (Example E20 describes a situation where dependence is obvious and is taken into account). The assumption of Gaussian shape may be evaluated using a statistical test. For example, in this case the test suggested by Anderson and Darling (1952) offers no reason to doubt the adequacy of this assumption. However, because the dataset is quite small, the test may have little power to detect a violation of the assumption.”
“The equation, ti = τ + Ei, that links the data to the measurand, together with the assumptions made about the quantities that figure in it, is the observation equation. The measurand τ is a parameter (the mean in this case) of the probability distribution being entertained for the observations.”
“Adoption of this model still does not imply that τ should be estimated by the average of the observations — some additional criterion is needed. In this case, several well-known and widely used criteria do lead to the average as ‘optimal’ choice in one sense or another: these include maximum likelihood, some forms of Bayesian estimation, and minimum mean squared error.”
“The associated uncertainty depends on the sources of uncertainty that are recognized, and on how their individual contributions are evaluated.”
Possolo doesn’t say that.
Possolo doesn’t say that
This has to be a joke right? E2 is literally averaging different Tmax measurements. Literally.
You are a cult member in the religion of Climate Alarmism.
Of course Possolo says all three!
“Exhibit 2 lists and depicts the values of the daily maximum temperature that were observed on twenty-two (non-consecutive) days of the month of May, 2012, using a traditional mercury-in-glass “maximum” thermometer located in the Stevenson shelter in the NIST campus that lies closest to interstate highway I-270” (bolding mine, tpg)
You’ve never actually even read TN1900, have you? You just keep spouting the same religious dogma and never look left or right.
No. He didn’t list all 3. He didn’t even list one of the 3. Your argument is one of affirming a disjunct. You are claiming that because NIST TN1900 E2 is for the scenario of a single shelter for a single month, it necessarily follows that it is impossible to apply the GUM technique to different scenarios, including ones with different shelters and different months. That is a logical fallacy, because Possolo did not say the GUM cannot be applied to other scenarios.
And your claim about the 22 Tmax values being of the same thing is absurd. Not even the most contrarian of contrarians is going to be convinced that Tmax values on different days are the same. They are different in every sense of the word…literally.
Go away troll. You have no idea of what you are talking about.
“JG: same shelter
Possolo doesn’t say that.”
From TN1900:
“in this Stevenson shelter,”
“in that shelter.”
______________________________________________
“JG: same month”
“Possolo doesn’t say that”
From TN1900:
“The daily maximum temperature τ in the month of May, 2012”
“thirty-one true daily maxima of that month”
________________________________________________
“JG: same Tmax”
“This has to be a joke right? E2 is literally averaging different Tmax measurements. Literally.”
“Temperature is an intrinsic property and cannot be averaged.”
So why do you insist we should be using the average daily temperature?
“So why do you insist we should be using the average daily temperature?”
As usual you have no understanding of the real world or even mathematics outside statistics.
Degree-days are *NOT* an average! Degree-days are what I advocate for, not “average daily temps”.
After two solid years you can’t even understand this distinction. You are hopeless.
TG said: “After two solid years you can’t even understand this distinction.”
It is my understanding that you are discussing the method from http://www.degreedays.net.
There is no distinction. Using the integration method HDD = Σ[if(T < 65, 65 – T, 0) * interval, begin, end]. So if the data is hourly your interval is 1/24 days. Mathematically this is equivalent to averaging since Σ[if(T < 65, 65 – T, 0) * 1/24, 1, 24] is the same as Σ[if(T < 65, 65 – T, 0), 1, 24] / 24 which is an average of the 24 values. It would be the equivalent of saying Σ[x_i, 1, N] / N is an average but Σ[x_i/N, 1, N] is not which is obviously absurd.
I’ll also remind you that https://www.degreedays.net/calculation says to “calculate the average number of degrees…” as one of the steps.
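To make the equivalence concrete, here is a small Python sketch of the two heating-degree-day forms being compared (65 °F base, hourly data; the hourly temperatures are placeholders): the “integration” form with an interval of 1/24 day and the plain average of the hourly deficits give the same number.

```python
# Sketch of the two HDD formulations discussed above (base 65 F, hourly data).
hourly_temps_f = [52, 51, 50, 49, 49, 48, 50, 53, 57, 60, 63, 66,
                  68, 69, 68, 66, 63, 60, 58, 56, 55, 54, 53, 52]  # placeholder day

base_f = 65.0
deficits = [max(base_f - t, 0.0) for t in hourly_temps_f]

hdd_integration = sum(d * (1.0 / 24.0) for d in deficits)  # integration form, interval = 1/24 day
hdd_average = sum(deficits) / 24.0                         # plain-average form

print(hdd_integration, hdd_average)
assert abs(hdd_integration - hdd_average) < 1e-12          # identical by construction
```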
You were talking about daily averages not any sort of Degree-days. E.g. when you said a 1.5 degree difference was HUGELY different.
And, without wanting to drag this out longer, the fact you can add degree-days is precisely why your claim about being unable to average temperatures is bogus.
The GUM does not address how to propagate measurement uncertainty of different things. The term “same measurand” appears too many times to count.
The GUM does address methods for determining experimental uncertainty of the same (or similar) things when necessary. They specify that a coverage factor is appropriate in order to convey the range of possible experimental values.
Why “experimental”? Some “same” things can not be measured repeatedly. Results of a chemical reaction, coronal discharge across an insulator, length a spring stretches when a force is applied, many other things. Multiple “experiments” are required to determine a value to be expected.
TN1900 uses this fact to develop an expanded experimental standard deviation that encompasses the values that can be expected for Tmax at a given station over a month. Tmax averaged for the single month is considered experimental MEASUREMENTS of the same thing. Tmax and Tmin are two different things from two different distributions. Their average DOES NOT result in a measurement of either.
As TG has pointed out, using average temperature WITHOUT adequately propagating variance leads one to conclude that the climate in the Sahara desert is the same as in Dubuque, Iowa since they can have the same average yet their variance is drastically different. That’s not a good metric!
Like it or not, averages like Tavg or a monthly average of Tavg are NOT measurements. Tavg is a statistical parameter, NOT a measurement. An average of Tavg over a month is not a measurement; it is an average of statistical parameters. All of these statistical calculations should have a number of the same parameters, most importantly a mean μ and a variance σ². Why do we never see a variance from you?
If you would spend less time cherry picking equations meant for measurements and really learn about and appreciate what a measurement actually is and how to treat it, you would realize that playing with statistical parameters is NOT a substitute for actually dealing with measurements.
“The GUM does not address how to propagate measurement uncertainty of different things.”
Apart from all the sections explaining combined uncertainty.
You keep going on about the uncertainty in a volume of a cylinder combining the uncertainties of the height and of the radius. That’s propagating the measurements of two different things.
“Tmax averaged for the single month is considered experimental MEASUREMENTS of the same thing.”
So why can’t you accept the average of multiple stations at different location as being experimental measurements of the same thing – i.e. the surface average? And if you can’t accept that, then why not just treat the average as an exercise in statistics rather than insisting it’s uncertainty be treated as a measurement uncertainty?
If it’s any help, here’s how TN1900 defines a measurement.
“Tmax and Tmin are two different things from two different distributions. Their average DOES NOT result in a measurement of either.”
Why do you think this is relevant. A daily average temperature is not meant to be a measure of either the max or min. Do you apply this logic to any combined measurement, such as area or velocity?
Height and width are two different things from two different distributions, and their product DOES NOT result in a measurement of either.
Distance and time are two different things from different distributions, and their quotient DOES NOT result in a measurement of either.
You guys have a totally off base knowledge of measurements. I grew up with a master mechanic. I learned the need for accurate measurements when repairing high horsepower motors and transmissions for tractors, combines, trucks, etc. Poorly done measurements results in return work with no income.
To address your comments. You are talking about measurements used to determine a single measurand. You are measuring THE SAME THING multiple times even the individual pieces of the measurand.
What you keep referring to are different things. An average of different things is NOT a measurement. As Tim tried telling you an average of 6′ and 7′ boards is not a measurement! The average does not exist.
Tmax and Tmin at a station are measurements. Tavg is not a measurement. It is a statistical parameter called μ (mean) and it has a variance.
(A = 1/2bh) is a measurement made up of two individual measurements OF THE SAME THING. You don’t take a “b” measurement from one triangle and an “h” measurement from another different triangle and find an area of either triangle. You can use the definition of an average and claim it is a function, but it IS NOT a measurement function it is a math function.
“Poorly done measurements results in return work with no income.”
Good for you. But you can’t put everything into the same box. To use one of the many clichés that keep cropping up, not everything can be hammered in just because all you’ve got is a hammer. What is possible in a mechanical workshop will not work in a different profession.
“You are measuring THE SAME THING multiple times even the individual pieces of the measurand.”
So you have been saying for the last two years. But you still won’t define what you mean by THE SAME THING.
“What you keep referring to are different things.”
Which things? The argument keeps drifting.
“An average of different things is NOT a measurement.”
Then stop insisting it be treated as one. You keep wanting to have it both ways, insisting the global average has to be subject to all the rules used to measure engine parts, but then insisting it’s not a measurement at all.
“As Tim tried telling you an average of 6′ and 7′ boards is not a measurement! The average does not exist.”
Yes, he’s told me that ad nauseam, and I keep telling him I disagree. For some reason you think just endlessly repeating it will somehow prove your point.
And now you have the extra problem of persuading me that maximum temperatures taken on different days are all “the same thing” and just aspects of the same average maximum temperature, but that two boards are completely different things, and can never be seen as random variations about the average length of a board.
Does any day in TN1900 have the actual average monthly maximum value? Does that mean it doesn’t exist, and if it doesn’t exist does that mean there is no use in measuring it?
“Tmax and Tmin at a station are measurements.”
Measurements of what? In the TN1900 example, you are not treating them as measurements of the actual temperature on the day, but as measurements (with random error) of the true average May maximum daily temperature.
“Tavg is not a measurement.”
The question is, is it a measurand? It’s a function of two things you have measured. What makes you think you cannot do what GUM 4.1.1 says
In this case, X1 = TMax, X2 = TMin, and f is the function (X1 + X2) / 2.
Why is that any less of a measurement than when f is the average of 22 TMax’s?
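For concreteness, here is what GUM-style propagation through that measurement function looks like when the inputs are treated as independent (an independence assumption that is challenged further down the thread). The uncertainty values are made up for illustration:

```python
import math

def tavg_with_uncertainty(tmax, u_tmax, tmin, u_tmin):
    """f(Tmax, Tmin) = (Tmax + Tmin)/2 with a GUM-style combined standard
    uncertainty, assuming independent inputs (both sensitivity coefficients are 1/2)."""
    tavg = (tmax + tmin) / 2.0
    u_c = 0.5 * math.sqrt(u_tmax**2 + u_tmin**2)
    return tavg, u_c

# Made-up example values, purely illustrative.
tavg, u_c = tavg_with_uncertainty(tmax=28.0, u_tmax=0.5, tmin=14.0, u_tmin=0.5)
print(f"Tavg = {tavg:.1f} C, combined standard uncertainty = {u_c:.2f} C")
```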
“It is a statistical parameter called μ (mean) and it has a variance.”
If it’s a parameter, what is the population, and how does it have a variance? A population mean doesn’t have a variance; variance is a parameter.
“(A = 1/2bh) is a measurement made up of two individual measurements OF THE SAME THING.”
Which wasn’t the question I was asking. I was asking about your statement
Do you think that TMax and TMin for a specific day are not individual measurements of the same thing? How is that different from the breadth and height of a specific triangle being measurements of the same thing?
“You don’t take a “b” measurement from one triangle and an “h” measurement from another different triangle and find an area of either triangle.”
But I’m not doing that. I’m taking max and min values from the same day, not from random different days. Or possibly I’m measuring the average maximum and minimum values for a specific month. But in either case they are being taken from the same thing – temperatures at a specific station over a specific unit of time.
“You can use the definition of an average and claim it is a function, but it IS NOT a measurement function it is a math function.”
Go on then. Explain to me how you can tell the difference between a measurement function and a non-measurement function.
I suspect the answer will be that anything you don’t like will be the evil non-measurement type and only things you do like will be the true measurement functions.
“That’s propagating the measurements of two different things.”
But you do NOT average the height and width together to come up with some hokey AVERAGE.
Both height and width are measured in units of length, let’s use feet.
Barrel 1 has a height of 60′ and a width of 40′. Their average is 100/2 = 50′.
Barrel 2 has a height of 70′ and a width of 30′. Their average is 100/2 = 50′.
Barrel 3 (actually a pipe) has a height of 99′ and a width of 1′. Their average is 100/2 = 50′
Does that average of 50′ tell you *anything* about the barrels?
Add ’em all up and you get 300′. Divide by 6 (the number of elements) and you get 50′ as an average.
Now, make those units degK, or lbs, or anything you want. Does the average tell you anything?
Why do you think it tells you something when you are using “degrees”?
When you have different things there is no true value. And if you don’t have a true value then the average tells you nothing useful. Just like the “global average temperature” tells you nothing useful.
Statistical descriptors are *not* the real world no matter how much you wish it were so.
“But you do NOT average the height and width together to come up with some hokey AVERAGE. ”
There go those goalposts again. I said nothing about averaging height and width. I was asking why Jim thought it mattered that “Tmax and Tmin are two different things from two different distributions. Their average DOES NOT result in a measurement of either.”
Why is that a problem with an average but not with a product?
“Both height and width are measured in units of length, let’s use feet. ”
Why use feet? Because it used to be traditional?
“Barrel 1 has a height of 60′ and a width of 40′. Their average is 100/2 = 50′.”
Are these 2-dimensional barrels? And why on earth do you want to know that average?
“Does that average of 50′ tell you *anything* about the barrels? ”
That’s the question I was asking. I mean yes, it’s going to tell you something, but it’s not very useful.
“Add ’em all up and you get 300′. Divide by 6 (the number of elements) and you get 50′ as an average.”
Gosh, the average of three things with the same value has the same average. What a coincidence.
“Now, make those units degK, or lbs, or anything you want.”
What’s a degK?
“Why do you think it tells you something when you are using “degrees”?”
What particular angles of the barrels are you measuring? It obviously wouldn’t make sense to measure the width and height in °C.
“When you have different things there is no true value.”
You ignored my second example of combining distance and time. Are they different things, and does that mean velocity is not a true value?
Again, a definition of “different things” would be a help. Are two different lengths of wooden boards “different things”? Are two different maximum temperatures measured on different days, but at the same place using the same instrument, different things?
“And if you don’t have a true value then the average tells you nothing useful.”
Which must be a shock to all the people who make use of these untrue values all the time.
“Statistical descriptors are *not* the real world no matter how much you wish it were so.”
Of course they are not the real world, but they are descriptors of the real world. Barrel 1 has an area of 2400 square feet. That’s a descriptor, not the real world. But knowing it might be useful.
“There go those goalposts again.”
Got caught with your pants down again, didn’t you! Now you are trying to wriggle your way out of it — TROLL!
Me: “Height and width are two different things from two different distributions, and their product DOES NOT result in a measurement of either.”
TG: “But you do NOT average the height and width together to come up with some hokey AVERAGE.“
Me: I didn’t say you average them.
TG: Stop wriggling, you were wrong, TROLL.
It isn’t up to me to accept it. TN1900 lists the things necessary to make certain assumptions. It is up to you to do the same.
Let’s list some of them.
As I have pointed out before, Tmax and Tmin are highly correlated, and that contamination destroys any further assumption that the Law of Large Numbers and Central Limit Theorem conclusions apply to further averages.
In order to “uncorrelate” those random variables they must be transformed. Here is a lesson about transforming correlated random variables. Not a simple task.
3.7: Transformations of Random Variables – Statistics LibreTexts
Have you done any of these to generate the proofs necessary to justify “averaging” separate temperature stations?
Perhaps a solution is to make one large random variable consisting of all Tmax and another for Tmin, then use the TN1900 methods for calculating the mean and expanded experimental uncertainty. Although the above criteria need to be addressed.
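The effect being described (correlation between Tmax and Tmin changing the spread of anything built from them) is easy to see numerically. A small Python sketch with synthetic values; the correlation of 0.8 and the standard deviations are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
rho = 0.8                               # assumed Tmax/Tmin correlation, illustration only
sd_tmax, sd_tmin = 2.0, 3.0
cov = [[sd_tmax**2, rho * sd_tmax * sd_tmin],
       [rho * sd_tmax * sd_tmin, sd_tmin**2]]
tmax, tmin = rng.multivariate_normal([25.0, 12.0], cov, size=n).T

tavg = (tmax + tmin) / 2.0
var_if_independent = 0.25 * (sd_tmax**2 + sd_tmin**2)
var_with_correlation = 0.25 * (sd_tmax**2 + sd_tmin**2 + 2 * rho * sd_tmax * sd_tmin)

print("sample var(Tavg):          ", tavg.var())
print("expected if independent:   ", var_if_independent)     # 3.25
print("expected with correlation: ", var_with_correlation)   # 5.65
```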
“It isn’t up to me to accept it. TN1900 lists the things necessary to make certain assumptions. It is up to you to do the same.”
Sorry but I’d sooner try thinking for myself. I don’t think TN1900 is meant to be a prescriptive set of rules, it’s intended to help you understand how to make your own evaluations. It’s intended as a guide to evaluating and expressing measurements. And it’s intended for NIST engineers and scientists, not for statisticians. It suggests that if you are using observation models (e.g. example 2), you should work with a statistician.
It’s a guide, not a list of commandments.
“Let’s list some of them.”
This will take some time.
Which is a problem because that’s not what the uncertainty is telling you. If the mean is defined as the mean of the 31 days, where is the uncertainty if you have all 31 days? The assumption of the example is not that the mean is defined as the mean of 31 days, it’s that 31 days are a random sample from random temperatures fluctuating around the true mean.
Which is a bad model in this case. The distributions are not Gaussian and certainly not independent. How much of an effect that has on the modeled uncertainty is another question.
It continues “However, because the dataset is quite small, the test may have little power to detect a violation of the assumption.”.
There is just not enough data to rule out the possibility that the distribution is Gaussian. But from my own observations, I’d say there was good evidence that in general maximum daily temperatures in May are not Gaussian.
All very well, but I suspect you can just take it as read that maximum likelihood leads to the assumption that the mean for a single sample is the best estimate of the mean. This might be more useful when applied to averaging multiple stations, where you don’t necessarily assume the observed average is the true average – hence the need for adjustments.
Yes. See my point about lack of independence above.
I’m surprised the example doesn’t suggest Monte-Carlo methods in this case. It seems to be the preferred method in other examples.
“If the mean is defined as the mean of the 31 days, where is the uncertainty if you have all 31 days?”
There is none if you do as you usually do and just assume that all measurement uncertainty is random, Gaussian and cancels. That seems to fix everything every time as far as you are concerned.
“The assumption of the example is not that the mean is defined as the mean of 31 days, it’s that 31 days are a random sample from random temperatures fluctuating around the true mean.”
So what? This example is not meant to be a real world analysis. It is an EXAMPLE of how to treat experimental results. Why do you and bdgwx continue to ignore the assumptions Possolo set out?
“I’m surprised the example doesn’t suggest Monte-Carlo methods in this case. It seems to be the preferred method in other examples.”
I don’t think you truly understand what Monte Carlo techniques can tell you! Monte Carlo techniques are *NOT* a substitute for real world experiments with real world measured values. Get out of your statistical box for once.
“There is none if you do as you usually do and just assume that all measurement uncertainty is random, Gaussian and cancels.”
As I’ve said before, the only uncertainty would be measurement uncertainty. Apologies for not dotting every i.
But the point was to compare the idea of a mean as defined by the 31 days, and what the technique of exercise 2 does, which suggests there would be a lot more uncertainty than just measurement uncertainty. SD / √31.
“So what? This example is not meant to be a real world analysis.”
I wish you'd tell Jim that.
“Why do you and bdgwx continue to ignore the assumptions Possolo set out?”
I’m not ignoring any assumptions, but as you say it’s only meant to be an example.
“Monte Carlo techniques are *NOT* a substitute for real world experiments with real world measured values.”
They are a substitute for the approximations of the statistics you fail to understand. They are particularly useful when the approximations used in the standard equations can not be assumed. See the Simple Guide you keep talking about:
I said: “Monte Carlo techniques are *NOT* a substitute for real world experiments with real world measured values.”
And you go off on a tangent about statistical approximations.
“They are a substitute for the approximations of the statistics you fail to understand. They are particularly useful when the approximations used in the standard equations can not be assumed”
unfreakingbelievable.
Go on then. Explain how you use real world experiments with real world measured values to estimate the uncertainty in the daily maximum temperatures of a single station.
From the GUM:
4.2.3
Note 1
“””””… The difference between s^2(q_bar) and σ ^2(q_bar) must be considered when one constructs confidence intervals (see 6.2.2). In this case, if the probability distribution of q is a normal distribution (see 4.3.4), the difference is taken into account through the t-distribution (see G.3.2). “””””
You’ll see this referenced in TN 1900. It gives you the sections of the GUM from which its conclusions are made.
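To make that concrete, here is a minimal sketch of the route the GUM describes there: the experimental standard deviation of the mean plus a t-distribution coverage factor for n - 1 degrees of freedom. The six repeated readings are invented purely for illustration.

# Sketch of GUM 4.2.3 / G.3.2: s(q_bar) with a Student-t coverage factor.
# The readings are illustrative only.
import numpy as np
from scipy import stats

q = np.array([25.1, 24.8, 25.4, 25.0, 24.9, 25.3])  # repeated observations
n = len(q)
q_bar = q.mean()
s_q = q.std(ddof=1)               # experimental standard deviation s(q_k)
s_qbar = s_q / np.sqrt(n)         # s(q_bar), the Type A standard uncertainty
k = stats.t.ppf(0.975, df=n - 1)  # coverage factor at ~95 % for n-1 dof
print(f"q_bar = {q_bar:.3f}, u = {s_qbar:.3f}, U95 = {k * s_qbar:.3f}")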
You need to reconcile in your mind the difference between:
1) measurement uncertainty – measuring the same thing, multiple times, with the same instrument, vs.
2) experimental uncertainty – determined by multiple trials under repeatable conditions.
NIST, in TN1900, declared the measurand to be the average Tmax temperature during a MONTH.
That is not the same as declaring and propagating the measurement error of each reading made throughout the month.
NOAA recommends averaging Tmax and Tmin for the month separately. That is what TN1900 did. It is what everyone does when examining temperatures, even yourself. THAT MAKES THE MEASURAND THE MONTHLY AVERAGE.
Nobody that I see ever uses daily temperatures as the base being examined. Therefore, daily temps are not the measurand being sought. Both you and bdgwx have said that the "functional description" of the measurand is the monthly mean of daily temperatures. Don't change the goalposts now.
NIST couldn’t have provided a better example than TN1900 for determining the mean AND the expanded experimental uncertainty of a monthly temperature.
The GUM even addresses this, as TN 1900 illustrates. The total expanded experimental uncertainty is the appropriate value that shows the dispersion of values surrounding the measurand.
As I have pointed out many times, ALL AVERAGES MUST INCLUDE A VARIANCE (UNCERTAINTY) TO BE AN APPROPRIATE STATISTICAL DESCRIPTION OF THE UNDERLYING DISTRIBUTION.
Those variances don’t disappear after the first average except in climate science.
The fact that the expanded experimental uncertainty dwarfs the measurement uncertainty should be good news to you because it removes the arguments about the measurement uncertainty of each measurement. It doesn’t remove measurement uncertainty, but removes it from consideration by making it negligible.
“The fact that the expanded experimental uncertainty dwarfs the measurement uncertainty should be good news to you because it removes the arguments about the measurement uncertainty of each measurement. It doesn’t remove measurement uncertainty, but removes it from consideration by making it negligible.”
You mean that thing I've been trying to tell you and yours for the past two years, and kept being called all sorts of names because I was using the stated values rather than propagating the measurement uncertainties? Yes, it's good news that you finally accept that sampling is usually a much bigger uncertainty than the measurements. It would be even better if I thought you would stick to it, rather than claiming next week you only agreed to it "in order to hoist me with my own petard".
The issue with this, and I disagree with Jim that you can ignore measurement uncertainty, is that for it to work your uncertainty *MUST* be totally random and cancel. If there is *any* systematic uncertainty then ALL the stated values will be offset from the true value and any population average you calculate will be offset by at least the same amount and probably even more. If that systematic uncertainty is calibration drift then it will add to any trend you find and you'll be unable to subtract that out because it is unknowable.
You keep saying you don’t assume all measurement uncertainty is random, Gaussian, and cancels but you DO IT EVERY TIME. If you would actually read TN1900 for understanding you would see that Possolo ASSUMED no systematic uncertainty. In other words, all measurement uncertainty in the example would be random, Gaussian, and cancel! Over the short time period of a month that might, and I emphasize MIGHT, be justifiable. It is *not* justifiable over a longer period.
I already posted what Taylor has to say about this and, again as usual, you just ignore it and go on your merry way.
Taylor, p. 110: "If there are appreciable systematic errors, then σ_x̄ gives the random component of the uncertainty in our best estimate for x:
δx_ran = σ_x̄
If you have some way to estimate the systematic component δx_sys, a reasonable (but not rigorously justified) expression for the total uncertainty is the quadratic sum of δx_ran and δx_sys."
You ALWAYS want to assume that δx_sys is zero. EVERY SINGLE TIME. That way you can use the variation of the stated values as the uncertainty of the mean instead of having to do the hard work of propagating ALL of the total uncertainty!
You can deny it till the cows come home but it is as obvious as the nose on your face that you always assume δx_sys = 0.
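For the record, the combination Taylor describes is just a sum in quadrature. A minimal sketch with made-up numbers; the point is that a non-zero δx_sys does not shrink as you average more readings.

# Taylor-style combination of random and systematic components (toy numbers).
import math

dx_ran = 0.12   # e.g. s / sqrt(n) from repeated readings (deg C)
dx_sys = 0.30   # e.g. an estimated calibration offset bound (deg C)

dx_total = math.sqrt(dx_ran**2 + dx_sys**2)
print(f"total uncertainty ~ {dx_total:.2f} C")  # ~0.32 C, dominated by dx_sys

Set dx_sys = 0 and you are back to the random-only result, which is exactly the assumption being argued about here.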
“If there is *any* systematic uncertainty then ALL the stated values will be offset from the true value and any population average you calculate will be offset by at least the same amount and probably even more.”
If you ever calmed down and read what I say, you’d know I agree with you on this. Systematic uncertainty will by definition remain after averaging.
“If you would actually read TN1900 for understanding you would see that Possolo ASSUMED no systematic uncertainty.”
Funny that. Just like I did in the other thread, and had all manner of abuse thrown at me.
Rest of rant ignored.
“As I have pointed out many times, ALL AVERAGES MUST INCLUDE A VARIANCE (UNCERTAINTY) TO BE AN APPROPRIATE STATISTICAL DESCRIPTION OF THE UNDERLYING DISTRIBUTION.”
Take your finger off the shift key and say once and for all which variance you are talking about – the variance of the measurements or the variance of the mean? Then explain why you want the variance stated rather than the standard deviation.
The variance of the sample means is *NOT* the variance of the population. They are two entirely different things. The variance of the sample means tells you how close you are to the population mean. The variance of the population tells you what to expect from the distribution. THEY ARE NOT THE SAME!
Finally. Yes, that's the point. Now all we have to do is figure out what Jim means when he keeps going on about stating the variance.
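A quick synthetic illustration of that distinction, with toy numbers that have nothing to do with any real station:

# Variance of the individual values vs. variance of the sample means (toy data).
import numpy as np

rng = np.random.default_rng(2)
population = rng.normal(loc=25.0, scale=4.0, size=1_000_000)  # sigma^2 = 16

n = 31
sample_means = rng.choice(population, size=(20_000, n)).mean(axis=1)

print(population.var())    # ~16      : spread of the individual values
print(sample_means.var())  # ~16 / 31 : spread of the means, roughly 0.52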
“As I have pointed out before, Tmax and Tmin are highly correlated”
You keep repeating it – never explain why you think it’s relevant.
What measurements and distributions are you talking about?
If you just want TAvg for a given day, the fact that they are correlated is irrelevant. In fact it’s good that they are correlated, as it implies TAvg is a useful value.
If you are talking about the average of TAvg over a month, or any other average, it’s irrelevant because you are averaging TAvg, not TMax and TMin. TAvg is just a single value which is derived from two correlated values.
“As TG has pointed out, using average temperature WITHOUT adequately propagating variance leads one to conclude that the climate in the Sahara desert is the same as in Dubuque, Iowa since they can have the same average yet their variance is drastically different.”
But that’s nothing to do with the measurement uncertainty.
Quoting the average maximum temperature for a given month along with the expanded uncertainty interval won’t tell you the variation in the individual daily maximums. Nor will it tell you anything about the average daily temperature, or the diurnal range, or how much the month warmed up, or how humid it was.
If the point is to determine CLIMATE then the average tells you nothing you can use to determine climate! It doesn’t matter what the measurement uncertainties are!
Of course the average temperature does not determine climate (or CLIMATE). You would need a whole book to determine all the complexity of a climate. It’s one indicator of one small part of the climate.
“It doesn’t matter what the measurement uncertainties are!”
Then why do you keep arguing about them?
But in this case the specific claim was about propagating variance. “using average temperature WITHOUT adequately propagating variance leads one to conclude that the climate”
So, what variances do you want to “propagate”, and how will that allow you to determine the climate.
“It’s one indicator of one small part of the climate.”
It’s not an indicator of ANYTHING having to do with the climate.
When two totally different climates can have the same mid-range temperature NO ONE will believe that mid-range value tells you anything!
You think average temperature has nothing to do with climate?
“When two totally different climates can have the same mid-range temperature NO ONE will believe that mid-range value tells you anything!”
You keep getting the logic in this backwards. It doesn’t matter that two different climates can have the same temperature, the question is if the same climate can have two different average temperatures.
It’s like arguing that if two very different computers can have the same CPU, then knowing what the CPU is will tell you NOTHING about the computer.
“You think average temperature has nothing to do with climate?”
I know it doesn’t. The average temp in Las Vegas and Miami can be *exactly* the same and yet they have different climates. There is a LOT more that goes into climate than just temperature. It’s why enthalpy is the *correct* value to look at, not temperature.
“It doesn’t matter that two different climates can have the same temperature, the question is if the same climate can have two different average temperatures.”
I’m not getting *anything* backwards. Did you actually read this before you posted it? The issue is that the avg temp is not a good proxy for climate. The question is not whether two different climates can have different averages, the issue is that two different climates can have the same average. That means that you can’t use temp to identify climate consistently! If the proxy is not a consistent indicator then it is useless.
Sorry, I forget you can only see words when they are written in block capitals.
You think average temperature has NOTHING to do with climate?
“The question is not whether two different climates can have different averages”
My question was if two IDENTICAL climates can have different averages.
“That means that you can’t use temp to identify climate consistently!”
And if I’d claimed you could, that would be a fair argument.
“If the proxy is not a consistent indicator then it is useless.”
You really need to get out of your binary box. NOT PERFECT does not equal USELESS.
“My question was if two IDENTICAL climates can have different averages.”
Of course they can! The three main climates of boreal, temperate, and sub-tropical allow for huge ranges of average temperature while still being within the climate classifications.
It’s the same with plant hardiness zone classifications. Each zone has a wide average temperature range.
As usual you are doing nothing but demonstrating your complete lack of awareness of the real world. Do you *ever* go outside or just remain in your basement all the time?
“Of course they can! The three main climates of boreal, temperate, and sub-tropical allow for huge ranges of average temperature while still being within the climate classifications. ”
You’re talking about classes of climate there. Two places can be in the same climate classification, but not have the same climate.
“Do you *ever* go outside or just remain in your basement all the time?”
Could you for once try to make an argument without these childish insults.
“You’re talking about classes of climate there. Two places can be in the same climate classification, but not have the same climate.”
More of your cognitive dissonance. Two locations are subtropical, but one doesn't have a subtropical climate and the other one does. Unfreakingbelievable.
You’ll literally say anything, won’t you?
“Could you for once try to make an argument without these childish insults.”
You *deserve* a childish insult to a childish comment that two locations with subtropical climate classification can have different climates. Subtropical *is* the climate in that case.
“…cognitive dissonance…Unfreakingbelievable…childish insult to a childish comment…”
Do you really not see the difference between two regions belonging to the same classification of climate and two regions having identical climates?
The category of subtropical climates covers a range of different subcategories and even amongst subcategories the climate will vary.
Do you think everywhere in yellow has an identical climate?
https://en.wikipedia.org/wiki/Subtropics
The subtropical climate in Spain is not the same as the subtropical climate in Florida. And there are huge differences in climate between different regions of Spain.
https://www.britannica.com/topic/classification-1703397
“Do you really not see the difference between two regions belonging to the same classification of climate and two regions having identical climates?”
I think *I* was the one that pointed this out to you. Miami and Las Vegas can have the same mid-range temperature but vastly different climates! And, as usual, you tried to argue against that assertion.
Now here you are arguing for it!
That’s called COGNITIVE DISSONANCE! Or, probably better, being a TROLL!
“I think *I* was the one that pointed this out to you”
Yet you are claiming that all sub tropical climates are the same – as when you said:
“Miami and Las Vegas can have the same mid-range temperature but vastly different climates!”
And you’ve forgotten my point, which is that two locations can’t have identical climates.
BTW, Miami and Las Vegas do not have the same average temperatures. Miami is about 4-5°C warmer than Las Vegas.
“And, as usual, you tried to argue against that assertion.”
You’re either lying or suffering from severe memory loss. I’ve never argued against the idea the two locations can have the same temperature but different climates.
“Yet you are claiming that all sub tropical climates are the same – as when you said:”
Both are sub-tropical. *YOU* are the one trying to claim that they aren’t.
If you don’t like this definition of climate then provide your own!
No, I’m saying they are all of the sub-tropical category. I’m saying that does not mean they have identical climates.
“If you don’t like this definition of climate then provide your own!”
https://en.wikipedia.org/wiki/Climate
So you think TN1900 is in error? Why do you think the variance of the experimental results is used to determine the values of the mean that can be expected?
“So you think TN1900 is in error?”
I said nothing about the holy text. I was responding to your claim that
I stand by my assertion. Whatever you think you are saying there has nothing to do with measurement uncertainty.
“Why do you think the variance of the experimental results is used to determine the values of the mean that can be expected?”
It's standard statistics. You are taking a sample from a random distribution (in this case the assumed random distribution of all possible maximum temperatures in that month). Your sample average (in this case 22 assumed independent daily values) has the best likelihood of being the true mean value. But there is uncertainty because of the random nature of the samples. In another universe you could have a different mean purely because each day is assumed to give a random maximum value from the population.
In order to assess the extent of that uncertainty you take the standard deviation (not the variance) of the population distribution and divide it by the square root of the sample size. However, you don't know the standard deviation of the population, so you have to estimate it from the standard deviation of your sample – hence you need to know the standard deviation (not the variance) of your "experimental results".
Now, what I think you are trying to say in the first part is that, if everything else is equal, you could look at a quoted uncertainty for a particular monthly measurement and reverse engineer it to figure out what the variance was in the data. But it seems a pointless exercise when you could just look at the variance in the data.
Using the measurement uncertainty as a proxy for variation in the data is problematic because there may be many factors that lead to the uncertainty estimate. For one thing, it depends on the number of days you have data for.
“””””Quoting the average maximum temperature for a given month along with the expanded uncertainty interval won’t tell you the variation in the individual daily maximums”””””
It doesn’t tell you the variance directly. Remember you calculate the variance as part of the procedure. Whether it was quoted directly or not, the SD was stated. An SD of ±4.1 is a variance of 16.8, a pretty large number.
That is not what an expanded experimental uncertainty (EEU) is for. An EEU is a statement that you are confident at a given % that additional repeats of the experiment will fall within the range specified.
In TN1900 the range is, at 95% coverage, (23.8°C, 27.4°C), or 25.6 ±1.8°C. If you read this closely, it doesn't say that measurement uncertainty doesn't exist, but that it is negligible compared to the variation in the repeated measurement.
The lesson you should take from this is that averages have variance and it is important in assessing the range of values to be expected.
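For what it's worth, the numbers quoted in this thread do hang together arithmetically: with s of about 4.1 °C over n = 22 days, a Student-t factor for 21 degrees of freedom gives roughly the ±1.8 °C at 95 % coverage quoted above. A minimal check, using only the figures already stated here:

# Check: s ~ 4.1 C, n = 22, t-factor for 21 dof -> expanded uncertainty ~1.8 C.
import math
from scipy import stats

s, n, mean = 4.1, 22, 25.6
u = s / math.sqrt(n)              # ~0.87 C, standard uncertainty of the mean
k = stats.t.ppf(0.975, df=n - 1)  # ~2.08
U = k * u                         # ~1.8 C expanded uncertainty
print(f"{mean} +/- {U:.1f} C -> ({mean - U:.1f}, {mean + U:.1f}) C")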
“It doesn’t tell you the variance directly.”
Yes, that’s my point. And if you want to know the variance for some reason, you want to look at that, and not try to guess it from the uncertainty of the average.
Using the uncertainty to deduce the standard deviation of the data is very indirect. You need to know how the uncertainty was calculated, you need to know the sample size, and you need to know what other uncertainties have been included in the stated figure.
“An SD of ±4.1 is a variance of 16.8, a pretty large number.”
That's 16.8 square degrees. I've no idea how you would determine that was a large number. How big do you expect the variance to be? How big is a square degree?
We went through this before, but I just don’t get why you are interested in variance rather than standard deviation. Variance is just a means to an end. It’s not a figure that gives you any meaningful information. There’s a reason why variance is written as σ².
“if you read this closely, it doesn’t say that measurement uncertainty doesn’t exist, but that it is negligible compared to the variation in the repeated measurement.”
Yes, it says what I’ve been trying to tell you and your kin for the last two years. When you are sampling from variable data, the measurement uncertainty is usually negligible. You usually assume that any measurements you make have a smaller uncertainty than the range of things you are measuring.
“The lesson you should take from this is that averages have variance…”
Averages don’t have variances is the lesson I’d prefer to take. A distribution has a variance. The sampling distribution of a mean has a variance. But the average doesn’t.
“…and it is important in assessing the range of values to be expected.”
That depends on the purpose of the measurement. As the book says, all measurements are intended to support decision-making. And you need to tailor what and how you measure to support the decision being made.
Looking at the average maximum temperature for one May at one station, as in example 2, what is the point of the measurement? What decision-making does it support?
If the point is to determine the most extreme temperatures you are likely to see, then you want to know the standard deviation or prediction interval of the sample. If the point is to tell you whether on average that location is warmer or cooler than another location, you want to know how certain the average is.
Absurd. Every single example in Annex H on Examples is regarding the assessment/propagation of measurement uncertainty of different things. There's even an example where different temperature measurements are used as inputs to a measurand.
That is an odd statement considering Possolo calls the monthly average temperature a measurand in NIST TN1900 E2. And JCGM 100:2008 B.2.9 defines a measurand as “particular quantity subject to measurement”
And yet the GUM and NIST TN1900 use statistical techniques when dealing with measurements prolifically.
“””””Absurd. Every single example in Annex H on Examples is regarding the assessment/propagation of measurement uncertainty of different things. There's even an example where different temperature measurements are used as inputs to a measurand.”””””
You are so full of crap. You can’t even cherry pick properly!
H.1.2 Mathematical model -> "arithmetic mean of n = 5 independent repeated observations." This compares an end gauge block to a known standard: "The length of a nominally 50 mm end gauge is determined by comparing it with a known standard of the same nominal length. The direct output of the comparison of the two end gauges is the difference d in their lengths:"
What do you think machinists do everyday? You would use this same procedure in determining a correction factor when calibrating a device.
The whole purpose is to determine the difference between two things, i.e., that is the measurand.
H.2.2 Obtaining simultaneous resistance and reactance
“””””Consider that five independent sets of simultaneous observations of the three input quantities V, I, and φ are obtained under similar conditions”””””
“””””H.3.1 The measurement problem
A thermometer is calibrated by comparing n = 11 temperature readings tk of the thermometer, each having negligible uncertainty, with corresponding known reference temperatures tR, k in the temperature range 21 °C to 27 °C to obtain the corrections bk = tR, k − tk to the readings. The measured corrections bk and measured temperatures tk are the input quantities of the evaluation. A linear calibration curve is fitted to the measured corrections and temperatures by the method of least squares. “””””
These are not examples of measuring different things. I didn't even bother to look at the remaining ones!
They are examples of experimental uncertainty!
Have you ever taken a high-level lab course and done repeatable experiments to find an expected range of results? That is what makes the world go around.
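Since the H.3 example keeps coming up, here is a minimal sketch of that kind of exercise: fitting a straight-line calibration curve to corrections b_k = tR,k - t_k by least squares. The eleven readings below are synthetic stand-ins, not the actual H.3 data, so take it only as an outline of the procedure.

# Toy version of a GUM H.3-style least-squares calibration fit.
# The readings and reference temperatures are synthetic, not the H.3 values.
import numpy as np

rng = np.random.default_rng(4)
t = np.array([21.0, 21.6, 22.2, 22.8, 23.4, 24.0, 24.6, 25.2, 25.8, 26.4, 27.0])
t_ref = t + 0.15 + 0.01 * (t - 24.0) + rng.normal(0, 0.02, size=t.size)
b = t_ref - t                                   # corrections b_k = tR_k - t_k

t0 = 24.0                                       # reference temperature for the fit
slope, intercept = np.polyfit(t - t0, b, deg=1)
resid = b - (intercept + slope * (t - t0))
s = np.sqrt(np.sum(resid**2) / (len(t) - 2))    # standard deviation of the fit
print(f"b(t) ~ {intercept:.4f} + {slope:.4f}*(t - {t0}) C, s = {s:.4f} C")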
If you think V, I, and φ are the same thing then you are clearly working with a different definition of “same thing” as compared to everyone else.
Do you have a cognitive impairment?
How did you get that conclusion from:
“Consider that five independent sets of simultaneous observations of the three input quantities V, I, and φ are obtained under similar conditions””
This doesn’t say that V, I, and φ are the “same thing”.
It speaks to V1 to V5, I1 to I5, and φ1 to φ5.
All different values and all different from each other.
Statisticians are not physical scientists. You are trying to explain something physical to a blackboard mathematician. As amply demonstrated in the recent threads, it's impossible for them to grasp the fundamental concepts of reality.
It’s the same old meme EVERY SINGLE TIME! All measurement uncertainty is random, Gaussian, and cancels thus leaving the stated values as 100% correct. Then you can statistically analyze the stated values with no complicating factors.
Keep repeating this lie enough times and maybe somebody will believe you.
The issue is less whether TN1900 is ultimately "correct" and more about following a standard procedure for determining a better mean and expanded experimental uncertainty as specified in the GUM.
Several friends on Twitter have been convinced that GAT has big statistical problems and one is better served by looking at local and regional temperatures instead.
They have been analyzing different locations around the globe and find many, many locations with little to no warming. Almost enough to question the whole global warming scenario. Up to now they have found nowhere that non-UHI stations show the 2 to 3 degrees of warming that would be needed to produce an average of 1 to 1.5 degrees of warming!
The article here at WUWT on the Japanese location showing no warming was no surprise. That was already found. As more and more stations are found with little or no warming it will become harder and harder to keep insisting that warming is occurring and accelerating.
And there it is. The fact that you have to put the word correct in quotes is also telling.
Way to cherry pick. You would make a great liberal journalist. You have made zero effort to understand what EXPERIMENTAL uncertainty truly is.
The argument ISN'T ABOUT whether it is correct or not.
It is accepted internationally as an appropriate method for determining expected values when repeated measurements of separate trials under repeatable conditions are made. The expanded uncertainty delivers an adequate interval of acceptable values.
If you want to argue that expanded experimental uncertainty is not correct, you need to take your argument to NIST and the international body, Working Group 1 of the Joint Committee for Guides in Metrology (JCGM/WG 1).
If you want to use it to combine means and variances from separate stations, feel free. Just be prepared to justify why you think all the necessary requirements for independence and distribution are met.
And as I keep saying I fully accept NIST TN1900 E2 and consider it correct without any reservations whatsoever.
But the gist I get from your comments is that you not only question its correctness, but you question whether 1/sqrt(N) was even used at all in the example and the fact that temperature measurements made at different times are different.
They use USCRN to correct the shit ClimDiv. There's no way they can reproduce the same results; it's just common sense, Nick.
Not really. USCRN (the network itself) is not used to make adjustments to nClimDiv. The nClimDiv adjustments are all done by the pairwise homogenization algorithm. Of course there is no obvious reason why PHA would exclude USCRN stations in the neighbor list. They are stations too not unlike the other GHCN stations. But the network itself isn’t used to make the corrections. The reason why nClimDiv matches USCRN so well is because PHA is effective.
[Menne & Williams 2009]
[Vose et al. 2014]
[Hausfather et al. 2016]
the pairwise homogenization algorithm
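To be clear about what "pairwise" means here, a toy illustration of the core idea (this is my own simplification, NOT the PHA code, which uses SNHT-style change-point tests and many neighbors): a break at the target station shows up as a step in the target-minus-neighbor difference series because the shared regional weather cancels.

# Toy pairwise-difference break detection on synthetic series (not the PHA).
import numpy as np

rng = np.random.default_rng(3)
months = 240
regional = np.cumsum(rng.normal(0, 0.1, months))   # shared regional signal
target = regional + rng.normal(0, 0.3, months)
target[120:] += 0.8                                # artificial break at month 120
neighbor = regional + rng.normal(0, 0.3, months)

diff = target - neighbor
# crude change-point search: pick the split that maximizes the mean shift
scores = [abs(diff[:k].mean() - diff[k:].mean()) for k in range(24, months - 24)]
k_best = int(np.argmax(scores)) + 24
shift = diff[k_best:].mean() - diff[:k_best].mean()
print(f"break detected near month {k_best}, estimated shift {shift:+.2f} C")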
I can’t take numbers seriously that are not pasteurized too.
I'm not sure what you mean here. Did you mean not pasteurized, or just pasteurized? And what is this "pasteurize" method you speak of? I've never heard of it and it isn't mentioned in the Menne & Williams 2009 publication.
Only because NOAA has used the short-term USCRN data that exists to “adjust” the crappy COOP based ClimDiv data.
It's a useless exercise, because there's no fix for any of the crappy COOP data pre-2005, and thus trends from COOP data (USHCN/GHCN) are biased upwards.
The point I’m making is that NOAA/NCEI never reports USCRN in any public report/press release.
You pointed out the poor weather station siting in 2009.
You pointed out no improvement about 10 years later.
NOAA obviously did not care.
So can such an organization be trusted to provide an accurate national average temperature?
Now we have USCRN and NClimDiv almost the same. As if to prove siting does not matter. But that makes no sense.
Most people would assume NOAA is using USCRN to “fix” NClimDiv
I will put on my leftist thinking cap and analyze that belief.
(1) NOAA wants to show as much warming as possible to the general public.
(2) NOAA does not want two averages that look different, because people would ask too many questions.
(3) So NOAA probably decided to choose the US average with the most warming, and then “fixed” the other average to closely match the one with the most warming.
My guess is that NClimDiv showed a faster warming rate than the rural USCRN network. Therefore, NOAA probably “fixed” USCRN to more closely match NClimDiv, rather than “fixing” NClimDiv to more closely match USCRN.
That does not sound very honest.
But NOAA employees are government bureaucrats
Mainly leftists
So should we expect honesty from them?
“But NOAA employees are government bureaucrats
Mainly leftists
So should we expect honesty from them?”
Yes, they are just employees. So why should you expect skulduggery from them? Why would they lie to …? There isn’t even anything to be gained by lying.
And hundreds of them are involved. They operate under the scrutiny of FOI, Inspector-general etc. Someone would have blabbed by now.
And it would be such a useless exercise. As a managerial matter, what sense would it make to set up such an elaborate exercise as USCRN, only to throw the results away and replace them with nClimDiv? Or vice versa?
If you put on your rational thinking cap, you will see that you have a relatively small area covered by two systems with far more stations than are needed. They are going to get a very accurate average, and so the two systems will agree with each other, because they are accurately measuring the same thing.
“But NOAA employees are government bureaucrats
Mainly leftists
So should we expect honesty from them?”
Because, commencing in 1993, under the watchful eye of Neville Nicholls, using the excuse of data homogenisation, Australian scientists within CSIRO and the Bureau of Meteorology, most recently Blair Trewin, have been fiddling the data to find warming that does not exist. See my latest series of reports at https://www.bomwatch.com.au/data-homogenisation/.
It is now 2023, so these people have been cheating for 30 years.
Trust CSIRO, not likely!
All the best,
Bill Johnston
As someone who studies Australian temperature data in detail, and does a very good job, I appreciate reading Mr. Johnston’s opinion on government bureaucrats.
Truth is not a leftist value.
Bill Johnston: "excuse of data homogenisation"
Stations moved. Time-of-observations changed. Instrument packages changed. I think it is a stretch to dismiss those as “excuses”.
Yes I am well aware of all that bdgwx, I should have used the word “guise” rather than excuse.
At Townsville Queensland for instance, the local Bureau of Meteorology Garbutt instrument file told the story of how they negotiated with the Royal Australian Air Force to move the weather station to a mound on the western side of the runway from about 1965 to 1969. Observations commenced at the new site on 1 January 1970.
Letters on the file show the site moved, aerial photographs and satellite images show it moved, careful analysis of the data shows it moved, but they said "Observations have been made at Townsville Airport since 1942. There are no documented moves until one of 200 m northeast on 8 December 1994, at which time an automatic weather station was installed".
(https://www.bomwatch.com.au/data-quality/climate-of-the-great-barrier-reef-queensland-climate-change-at-townsville-abstract-and-case-study/) They did a similar thing at Rockhampton, Cairns, Marble bar, … all over the place.
A blatant black and white lie comes in only two colours and since we started exposing them on http://www.bomwatch.com.au, Blair Trewin et al have gone very quiet.
All the best,
Bill Johnston
I think it is a stretch to call station moves, time-of-observation changes, instrument changes, etc. "guises" as well.
Yes, they are just employees. So why should you expect skulduggery from them? Why would they lie to …? There isn’t even anything to be gained by lying.
In your fairy tale world, Mr. Stokes, all government employees are honest and no leftists ever have bias, or intentionally deceive. You must be living in an alternative universe. If governments and their employees were honest, why would this website exist?
The prediction of a coming climate emergency is not honest, because the claim of being able to predict the long term climate trend is not honest.
If anyone has ever studied government bureaucracy they should have learned that its goal is not the best product but the expansion of the bureaucracy. They exist to find problems that require more and more people in order to provide a solution. The solutions are best when more people are required.
Why more people? That is how people are promoted and gain higher and higher classifications which means what? MORE MONEY.
Lots of stuff gets glossed over, like scientific rigor, since that is not the fundamental incentive at work. Where do you think the adage "good enough for government work" came from?
How do you know a lefty bureaucrat is lying? His/her lips are moving. Remember, "whatever the cost"!
Nick,
A true mathematician at work. Concentrate on the SEM that describes the interval within which the mean of a distribution may lie rather than the variance of the data used to calculate the mean. If you know the SEM, tell folks how you calculate the standard deviation of the population of temperature data.
Has anyone examined the distribution of the sample means to see if it is Gaussian (normal)? That is a must if the SEM is to have any value.
Has anyone calculated the variance in the data distributions used to calculate means used to calculate the anomalies?
Has anyone considered why Significant Digit rules are ignored consistently when reporting measured physical quantities? At least Richard Greene got close when he rounded the means from 2 decimal digits to only 1 decimal digit. He should be commended.
RG said: “NOAA probably “fixed” USCRN to more closely match NClimDiv, rather than “fixing” NClimDiv to more closely match USCRN.”
That is an extraordinary claim. Can you provide extraordinary evidence to support it?
I wrote “probably”
I commented that it seemed suspicious that NClimDiv and USCRN were so similar. I speculated logically that one or both averages could have been “fixed” to have that result. If both averages are really so similar, NOAA would appear to be claiming that weather station siting does not matter. I believe that implication is false.
The code that makes the adjustments to nClimDiv (and USHCN and GHCN) is available here. I actually managed to get it to run on a Linux VM I had several years back. I never found any evidence that it was doing anything nefarious. And the USCRN station data is published in near real-time so I don’t know how the “fixing” would even work nevermind how it would go unnoticed.
Climate emergency is an extraordinary claim, do you have extraordinary evidence to support it? Thought not!
No. But then again I’m not the guy to ask since I don’t advocate for emergency/catastrophe/doom style hypothesis.
“Only because NOAA has used the short-term USCRN data that exists to “adjust” the crappy COOP based ClimDiv data.”
When and how do they do that? The system is very transparent. The results are published almost as soon as read. There is a NOAA site here which posts readings within the hour. It just wouldn’t be possible to adjust one lot to the other on that timescale, with results trickling in.
Nick “Mr. Government” Stokes declares deceptions by government employees are not possible. Proof of this is that USCRN data are published quickly? How is that proof of anything?
Mom: “Son, did you break that window in your room?”
Son immediately replies “I didn’t do it”
Mom says: “That was a very quick response, son,
so I know you are telling me the truth”
Based on Nick Stokes “logic”
Because the results come digitally from AWS to post. There isn’t time for the people you think sit with a thumb on the scale to intervene.
In fact, anyone who tried to fiddle with the posted data would get into serious trouble. A lot of the sites are airports. Plenty of people, from pilots down, would be up in arms if they thought the NOAA was not simply and accurately reporting what was observed. Which they do.
In fact, inference about climate is a very small part of the motivation for weather reporting.
AW said: “Its a useless exercise, because there’s no fix for any of the crappy COOP data pre-2005, and thus trends from COOP data (USHCN/GHCN) are biased upwards”
According to Hausfather et al. 2016, USHCN-adj matches USCRN pretty well, but if anything it is actually biased downwards. Now, obviously the comparison is only for the overlap period. But it's not unreasonable to hypothesize that since USHCN-adj is biased downwards during the overlap period then it may be biased downwards prior to the overlap period as well, since PHA is applied equally to both segments of data.
Zeke H. is the same guy who ignored the most common ECS with RCP 8.5 climate model predictions. He then claimed if the climate models used TCS and RCP 4.5, that would cut the ECS RCP 8.5 warming rate in about half, and prove that climate models are actually very accurate. That is a deceptive argument even if not actually false.
Your comment does not seem to address anything I said. And if the authors of the publication are a problem for you then don’t read the paper and instead download the USCHN-adj and USCRN data yourself and do your own comparison. That’s what I did.
Mr Watts makes an excellent point in stating that the weather stations were set up for weather forecasting. Why do we see so many weather stations at airports and airfields? Because pilots need to know what the weather is like. It is abuse to try to use these weather stations as indicators of any change in the climate since they have been subject to huge changes over the decades. London's main airport Heathrow was a field until the 50s and the first permanent buildings were not constructed until the start of the 60s – now there are 5 terminals. You can also throw in the change from piston engines via turboprops to turbojets and turbofans.
Weather stations at airfields are necessary to work out the required take-off speed and runway length, because these are temperature dependent.
Independent? In today’s world?
That seems to be a pipe dream. Science for science’s sake? Money for god’s sake
So, what does our “Stable Climate” look like?
Where is it represented in the data?
If Climate is an average of a 30 year period of weather, when has it ever been stable between concurrent 30 year periods to establish the “Stable Climate”?
30 years certainly isn’t enough to determine sea level rise acceleration.
Actually, not a lot.
If we are talking about 'Climate Change', where it is the average weather over 30 years, here is a thought. Why don't we break down all the areas of the world into something like 5 main zones and further subdivide these into a total of 30 to 35 subsections. Then, on a map, interpolate onto a 0.5° longitude × 0.5° latitude grid.
Then after 25 or 30 years see where this 'Climate Change' is happening and how bad it is. We rate the changes as 'GOOD' for areas that have More Life and 'BAD' for areas that have Less Life. Let's call it a Köppen Classification Map, because that is just what Wladimir Köppen did starting in 1884.
Here is the map showing where the major Köppen type has changed at least once in a 30-year period during 1901-2010. http://hanschen.org/koppen/img/koppen_major_30yr_1901-2010.png
http://hanschen.org/koppen/img/area_major_1901-2010.png
In graph form for the changes. A & C have more life, i.e. Good – B Dry (includes cold dry) is still better than D (Snow) and E (Polar) i.e. Bad.
So the Good areas changed by less than half a percent, and the Very Bad Polar area is down 3.5%, versus the Just Bad Dry up 3% and Snow down 1.5%. Don't see much Catastrophic going on.
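Just to illustrate the sort of rule set Köppen used, a grossly simplified sketch of my own: it uses monthly mean temperature only, ignores the arid B class and every subcategory, and uses the commonly quoted thresholds, so treat it as a cartoon of the real scheme, not the scheme itself.

# Grossly simplified major Koppen class from monthly mean temperatures (deg C).
# Real Koppen also needs precipitation (for the arid B class) and subcategories.
def koppen_major(monthly_t):
    coldest, warmest = min(monthly_t), max(monthly_t)
    if coldest >= 18.0:
        return "A (tropical)"
    if warmest < 10.0:
        return "E (polar)"
    if coldest > -3.0:
        return "C (temperate)"
    return "D (continental/snow)"

# illustrative monthly means for a mid-latitude location
print(koppen_major([-2, 0, 5, 11, 17, 22, 25, 24, 19, 12, 5, 0]))  # C (temperate)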
I guess commenting rule number one is to never disagree with the owner of the website, or you may never be commenting again. I comment on the articles I read regardless of who wrote them, and I don’t censor myself — we have the social media and leftist biased mainstream media to do censorship.
Summary:
(1) Climate predictions are not based on data. There are no data for the future climate. And no human has ever demonstrated the ability to make accurate long term climate predictions. Climate predictions are merely speculation falsely presented as science.
(2) Actual data, such as a more accurate US average temperature statistic in the future, will not affect scary climate predictions
(3) The belief that the USCRN network is very accurate is speculation, not a proven fact.
NOAA claims it is accurate. That’s not good enough for me.
Details.
It would be nice to have well-sited US weather stations and accurate measurements with very little infilling. But that would not necessarily change the US national average temperature by much, if at all. That average is a statistic that only NOAA could generate and verify. The NClimDiv and USCRN averages are whatever NOAA tells us they are. And they are very similar. You just have to trust NOAA government bureaucrats. Do you trust government bureaucrats? I don't.
Let’s say that US government bureaucrats did have a very accurate US average temperature, and so did every other government in the world. I doubt if that would change the climate change scaremongering. The same governments would make the same claims to scare people, as they have been doing for 40 years.
What if the surface average warming declined to become more like the UAH satellite numbers? Would that reduce climate scaremongering? I say no.
Consider the relatively flat global temperature trend in the past eight years. Did that slow down climate change scaremongering? No. Climate scaremongering is more hysterical than ever.
USCRN
It seems that everyone here claims USCRN is an accurate network and some are disturbed that USCRN is not used for the global average; NClimDiv is. I say that fact is irrelevant. For reasons I can't explain logically, the NClimDiv and USCRN numbers are very similar. So using USCRN for a global average temperature would have virtually no effect.
Now comes the puzzle.
USCRN supposedly has great weather station siting, while NClimDiv is mainly haphazard weather station siting. So why are both US averages so similar? That does not make sense. Just a coincidence?
The important question is do you trust NOAA? Based on their non-response to the weather station siting problems reported here in 2009, I say NOAA can not be trusted.
And if NOAA can not be trusted, then it is not logical to assume the US average temperature from the USCRN network should be trusted. Good data from a bad organization? I don’t think so.
“”I guess commenting rule number one is to never disagree with the owner of the website, or you may never be commenting again. “”
I think you do Mr Watts a disservice there. What’s the point of commenting if you cannot express your ideas?
I have had many comments involving Ukraine disappear from this website. They were published, and later disappeared. I assumed a Moderator didn't like them. Charles told me a computer program made them disappear after a delay. They never returned. I would assume it would be polite to not criticize the owner of the website. I then criticized his idea in this article, although trying to be more polite than I usually am.
As this is “The world’s most viewed site on global warming and climate change”
Were your many comments involving Ukraine about the climate of Ukraine, or the politics regarding the aggressive invasion of Ukraine ???
If the latter I’m glad they disappeared, plenty of other sites for political ranting; most of us come on here to learn & discuss all things climate related.
One article was about the effect of the Ukraine war on energy and the other article spawned a comment discussion of the Ukraine war that I joined. The Ukraine War and sanctions on Russia are frequently referenced in articles on energy prices and energy supplies.
Shouldn’t we be discussing matters scientific? That’s what this site is about
GFY.
You were using banned words and your comments were automatically trashed. We went over this via email. You slandered moderators, accusing them of censoring and abuse. I even removed those words from the banned list, because the world has changed since they were put there.
You were proven wrong and you still have a stick up your ass about it
Again GFY..
At this point your whining can only be attributed to a denial to face reality or outright lying.
Why you were incredibly rude, Charles Rotter, escapes me.
I complained in the comment section that quite a few of my long comments on Ukraine were published and then later mysteriously got deleted. I NATURALLY assumed a Moderator did not like my comments on Ukraine, because a computer program would ban the comment immediately and it would never get published in the first place.
You investigated, thank you, and discovered the word “genocide” in several of my comments was a banned word. And in one comment I explained that the N a z i genocide led to the 1948 Genocide Convention — I should have written “German” instead.
For some reason there was a long delay until the computer program deleted my comments, except the one with the “N” word in it.
Mr Rotter reported to me via e-mail that this censorship was not done by a Moderator, and then he removed “genocide” from the banned word list. That was all I ever wanted to know.
What I did not want, and did not deserve, was being repeatedly insulted by Mr. Rotter in most of the e-mails sent to me.
And this Rotter comment is even worse.
Complaining about censorship, and assuming a delayed removal of comments was done by a Moderator, is not an offense that deserves such insults.
Charles Rotter, I have taken repeated insults from you in e-mails, for no logical reason, and today is even worse. You are extremely rude, and I deserve an apology, but I doubt if you are capable of that.
And if this becomes the last comment I am ever allowed to make here, which would not surprise me, other people can read your extremely rude comment above, and see I am telling the truth.
You insinuated intentional censorship today after having been demonstrated to be wrong in the past. You deliberately misrepresented what occurred.
You are disingenuous and deceitful and then get all verklempt when I call you out on your bullshit. You will not be banned. You will not be censored. You are simply full of shit about how moderation is done on this site.
“You insinuated intentional censorship today after having been demonstrated to be wrong in the past.”
Now you are lying in addition to being extremely rude.
I had too many comments concerning Ukraine disappear after being published
That is censorship whether done by a computer or Moderator.
Other comments that were pro-Ukraine were not deleted.
My comment above SPECIFICALLY stated:
“I have had many comments involving Ukraine disappear from this website. They were published, and later disappeared. I assumed a Moderator didn’t like them. Charles told me a computer program made them disappear after a delay. They never returned”
I specifically stated today that I had originally thought a Moderator was deleting my comments, but found out later it was a delayed reaction of a computer program.
In fact, in your first email to me, Mr. Rotter, you stated that some moderators were overzealous and you would investigate.
A few emails later you determined that the word genocide was causing the computer to delete my Ukraine comments, although not immediately when I posted the comment, but after a significant delay.
Your emails repeatedly insulted me and claimed I was lying to initially assume some unknown moderator was involved. I asked you to lose my email address and stop writing me, but that did not stop you. And apparently you are still on the warpath.
My primary point is that several long comments were censored.
I complained about the censorship in other comments.
Censorship happened — that is a fact.
I assumed a Moderator was responsible because that was a logical assumption.
But it does not matter whether a moderator censored, or a computer censored — many of my Ukraine related comments showed up and then later disappeared.
They disappeared forever.
Assuming that a moderator was responsible when a comment was not immediately blocked for a banned word, was published, and was then censored hours later, is not a lie or a slander, as you falsely claim.
It was a logical assumption of how my comments were disappearing.
I do not deserve to be called a liar for assuming a moderator was involved.
And you, Charles Rotter, remain an incredibly rude person.
I'd like to know if you have ever used the childish abbreviation GFY in any comment here before, or were you saving your uncontrolled anger for me? GFY hurled at me for the "capital offense" of complaining that many of my comments were disappearing after they had been published, which they were? That is not normal behavior of a civilized person.
Suck it up you disingenuous whiny putz.
Your tone policing has no authority here.
Your performative outrage makes you look really small.
What charles said.
What Anthony and Charles said.
Richard Greene, the level of comprehension of science displayed in your comments is poor. You seem to favour word plays on the data and conclusions of others, with little quoted research of your own.
Personally (and my view does not matter much) you remind me of the literary scene of French harridans grouped around the guillotine, crying (in their language) “Off with his head”. Over and over. That is not doing much for the advancement of science.
Geoff S
Too bad you do not have the courtesy to at least quote ONE SENTENCE from ANY of my comments here in the past five years as an example of my so-called POOR comprehension of science.
Instead, you hurl a series of generic character attacks of the type that is most typical of leftists.
I would imagine that few commenters here do their own original climate science research, much less peer-reviewed published research, so they don't have their "research" quoted.
Do you character attack them too, for making comments you do not agree with?
Or do you only character attack commenters you do not agree with?
If you consider it okay for Charles to verbally attack me repeatedly in emails and now here, for truthfully stating my anti-Ukraine comments were getting posted and later disappearing, that is also unacceptable behavior.
Most of my Ukraine related comments were in response to an article about how Russia and the Ukraine war affected energy.
A few were added to a string of EXISTING comments about Ukraine in an unrelated article. I never started a debate about the Ukraine war after any article that was not about the Ukraine war. But I did notice that pro-Ukraine comments did not mysteriously “disappear” like my anti-Ukraine comments did.
**********************************************************************
I will do you and Charles a great favor, because I am a nice person, and make this the last article on this website that I comment on.
**************************************************************************
Now you can celebrate.
You managed to get rid of me.
All it took were the three rudest letters
in the alphabet from “hostile” Charles:
GFY
And with this goodbye,
I am giving Charles another opportunity
to insult me, which he seems to love,
and get the last word in too.
Richard, it’s not an airport. No need to announce your departure, particularly since no one is on the plane with you.
w.
We are not a Ukraine discussion site, we are a CLIMATE AND WEATHER discussion site. If the fact that the moderation system removes useless off topic comments such as yours upsets you, well then tough noogies.
I almost always disagree with comments made by Nick Stokes, but his comments are always published. Nick knows better than to engage in wildly off-topic comments.
Note my comments admonishing the same sort of behavior from Mr. Schaffer at the top of this thread. You aren’t getting any special treatment.
Stay on topic, be courteous, and you’ll have no problems.
Review the policy page: https://wattsupwiththat.com/policy/
I have to give praise where it is due. You probably wouldn’t agree with my posts either, but without fail all of my posts have been published AFAIK. The only thing I’ve noticed that sets off the moderation is when I post links to peer reviewed publications. I do that prolifically and I understand it is not the nature of the content that is triggering it, but the just the simple act of including too many hyperlinks. And with the new login mandate I’m finding that I’m actually getting moderated less often to begin with. So yeah, big thanks for allowing me to put my voice into the mix here.
4 links or more in a comment triggers moderation.
My Ukraine comments were mainly in response to an article whose main point was the effect of the Ukraine war on energy prices. Many of the comments after that article were about Ukraine
A minority of my comments on Ukraine were ADDED in RESPONSE to EXISTING comments on Ukraine even though the article was not about Ukraine. My initial complaint was that I noticed my anti-Ukraine comments mysteriously disappeared after being published, but the pro-Ukraine comments did not disappear.
Only after an investigation was Charles able to determine that it was a computer moderator deleting my comments, mainly for the word "genocide", not a human moderator. And for that initial assumption I have been repeatedly character attacked.
That makes no difference to me.
All I knew were that my comments
were disappearing, only on one subject.
*********************************************************
Charles attacking me today with GFY
is over the top, unjustified anger.
I have decided to stop posting comments here
to prevent vile comments like that from Charles again.
**************************************************************
NOTE: Charles, for the second time, I am asking you
to please lose my email address, and forever stop
sending me emails that include insults.
You ain’t worth it Nancy.
Richard said:
You’re repeating yourself. You said that above. Go already. Don’t go away mad. Just go away.
w.
You missed the biggest issue of all – a global "average" tells you nothing about what is happening with the climate. Two different climates can have the same average – which is actually a mid-range value, a median if you will. Two different Tmax's and Tmin's can give the same mid-range value. So how do you conclude anything about climate from a mid-range value?
Climate science should join agricultural science and HVAC engineering in using integrative heating/cooling degree-days. Then you can tell what is happening to the actual climate.
‘Climate science should join agricultural science and HVAC engineering in using integrative heating/cooling degree-days.’
Interesting idea!
nClimDiv already includes CDD and HDD.
Are you related to a man named Tevye? You keep hollering about TRADITION! The CDD and HDD used in nClimDiv are calculated the *OLD*, traditional way. It is not the newest methodology using integration of the entire temperature curve.
When is climate science going to join the 21st century?
I’m okay with using a different method. Can you explain how you would do it for the nearest station to me, St. Charles Elm Point (USC00237397), for the year 1899?
You are *still* trying to justify using the same old traditional method!
There is *NO* reason why you can’t do the old way and the new way jointly for those stations that have the capability.
The *only* reason I can think of for not doing so is that it might point out something that the climate alarmists don’t want to come to light!
Stop deflecting and diverting. What is the CDD/HDD for the year 1899 at St. Charles Elm Point (USC00237397) using your method?
What’s your point? There is no data in 1899 to be used in the integrative method of calculating hdd/cdd!
So what? Does that mean we should *never* collect data that *is* useful in doing so? That we should never stray from the traditional methods?
Tevye: “This isn’t the way it’s done, not here, not now.
Some things I will not, I cannot, allow.”
Someday you should watch the movie “Fiddler on the Roof”.
TG said: “There is no data in 1899 to be used in the integrative method of calculating hdd/cdd!”
Then how am I supposed to use the integrative method on historical data?
TG said: “Does that mean we should *never* collect data that *is* useful in doing so? That we should never stray from the traditional methods?”
This is your strawman. You and you alone own it. Don’t expect me to defend your arguments, especially when they are unreasonable.
OMG! I’ve told you this TWICE already. YOU DON’T. You run the two methods in parallel! Why is that so hard for you to understand?
“This is your strawman. You and you alone own it. Don’t expect me to defend your arguments, especially when they are unreasonable.”
The only thing unreasonable here is the continued use of the argumentative fallacy of Appeal to Tradition.
Let me repeat your argument at its base: Tevye: “This isn’t the way it’s done, not here, not now. Some things I will not, I cannot, allow.”
You and Tevye both are hidebound and resemble the establishment against Galileo.
TG said: “OMG! I’ve told you this TWICE already. YOU DON’T. You run the two methods in parallel! Why is that so hard for you to understand?”
How do you run the two methods in parallel on data from 1899?
TG said: “The only thing unreasonable here is the continued use of the argumentative fallacy of Appeal to Tradition.”
I’m not appealing to tradition here. Anybody who tracks my posts knows that I’m a proponent of improvement. If there is a better way then let’s do it. Let’s use better HDD/CDD methods in domains where it is possible.
TG said: “Let me repeat your argument at its base: Tevye: ‘This isn’t the way it’s done, not here, not now. Some things I will not, I cannot, allow.’”
That’s not my argument. That’s your argument. I never said anything about sticking with tradition; it was first mentioned by you in this post. Like I said before, don’t expect me to defend your arguments, especially when they are unreasonable and crafted as strawmen.
Dear Tim,
Against what base? Why not simply deduct the dataset average and track residuals using a 31-day running mean?
Speaking as an agricultural scientist (which I am), how will degree-days in Alaska relate to degree-days in Hawaii, or Marble Bar, allegedly the hottest place in Australia?
Yours sincerely,
Bill Johnston
Use anomalies!! /sarc
In essence, that is what degree-days are. If a common base were used globally, integrating the 5-minute temperature data would give you an anomaly (heating or cooling degree-days) relative to that common base.
The real issue is that it would remove the incentive to adjust temperatures quite so much. The difference between 72 and 72.145 would disappear. In general, even plain old integer temps would suffice.
The key is that a consensus would be required to set the base temperature to be used. That is not much different from the question posed here about what the best temperature for the globe is.
If you are an ag scientist then you should be able to define the base. It’s what growing degree-days are based on. Why should this be so hard to do?
Integrate the entire temperature profile from sunrise to sunrise, sunset to sunset, sunrise to sunset and sunset to sunrise, or from 0000GMT to the next 0000GMT.
The value of the integration will allow you to differentiate between two locations with different climates where the use of a mid-range value will not.
How does UAH manage to handle varying time-of-day observations as its satellites travel around the earth? Using local degree-day integrations would be child’s play compared to that.
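For readers wondering what “integrating the entire temperature curve” would actually involve, here is a minimal sketch. The assumptions are flagged in the comments: 5-minute samples, a synthetic sinusoidal diurnal cycle standing in for real data, and the same illustrative 65 °F base; this is not NOAA’s or anyone’s operational code. It shows that integrated HDD/CDD separate the two example days that the mid-range method scores identically.

```python
import math

BASE_F = 65.0           # illustrative base; a real network would have to agree on one
SAMPLES_PER_DAY = 288   # 5-minute data
DT_DAYS = 1.0 / SAMPLES_PER_DAY

def integrated_degree_days(temps_f, base=BASE_F, dt_days=DT_DAYS):
    """Integrate the full temperature trace into (HDD, CDD) in degF-days,
    classifying each short interval by its midpoint temperature."""
    hdd = cdd = 0.0
    for t0, t1 in zip(temps_f, temps_f[1:]):
        mid = (t0 + t1) / 2.0
        if mid >= base:
            cdd += (mid - base) * dt_days
        else:
            hdd += (base - mid) * dt_days
    return hdd, cdd

def synthetic_diurnal(tmin, tmax, n=SAMPLES_PER_DAY):
    """A smooth sinusoidal day used purely as stand-in 5-minute data."""
    amp, mean = (tmax - tmin) / 2.0, (tmax + tmin) / 2.0
    return [mean - amp * math.cos(2.0 * math.pi * i / n) for i in range(n + 1)]

print("desert ", integrated_degree_days(synthetic_diurnal(45.0, 95.0)))
print("coastal", integrated_degree_days(synthetic_diurnal(65.0, 75.0)))
# The desert day accumulates both heating AND cooling degree-days (~5.6 / ~10.6);
# the coastal day only cooling (~0 / 5.0) -- yet both have the same 70 F mid-range.
```

The same loop would run unchanged on real sub-hourly records wherever a station actually logs them, which is the point of the “run both methods in parallel” suggestion above.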
Thanks Tim (and Jim). [Are you guys related, by the way?]
I asked “Why not simply deduct the dataset average and track residuals using a 31-day running mean?”; and,
“Speaking as an agricultural scientist (which I am), how will degree-days in Alaska relate to degree-days in Hawaii, or Marble Bar, allegedly the hottest place in Australia?”
– genuine questions.
A base of 0 °C, say for wheat in Victoria (Oz), would accumulate well off-scale out in the desert at Marble Bar?
Which of course is why they use anomalies. But why not anomalies relative to the dataset mean?
Remember too that pre-AWS we don’t have 1- or 5-minute data; high-frequency AWS data has to be bought for $$$. Rightly or wrongly, Tav is (Tmax + Tmin)/2, and as I indicated elsewhere, only two of the 1440 1-minute samples are important – Tmax and Tmin. It would be handy to have RH and DP-T as well, but we have to buy that too.
Anyway, I have to move on.
Cheers,
Bill Johnston
If we never start collecting and using 1-minute or 5-minute data, then we will never move on from the old, inaccurate mid-range calculation. The mid-range value is simply not a statistically or metrologically adequate protocol. It may have been the best we used to have, but we’ve been able to do better for 40 years.
Tevye from “Fiddler on the Roof”: “This isn’t the way it’s done, not here, not now. Some things I will not, I cannot, allow.”
RG said: “For reasons I can’t explain logically, the NClimDiv and USCRN numbers are very similar.”
USHCN-FLs.52j is similar to USCRN.
USHCN-raw is not as similar to USCRN.
The reason FLs.52j is similar is that it uses the PHA to correct the biases from time-of-observation changes, instrument package changes, station relocations, etc.
Interestingly, USCRN shows more warming than USHCN-adj, which is leading some to hypothesize that the adjusted record is still biased low.
Hausfather et al. 2016
RG said: “USCRN supposedly has great weather station siting, while NClimDiv is mainly haphazard weather station siting. So why are both US averages so similar? That does not make sense. Just a coincidence?”
Same answer as above. Like USHCN-FLs.52j, the newer nClimDiv dataset has the PHA applied to it to remove the biases.
“That average is a statistic that only NOAA could generate and verify.”
There’s nothing stopping you downloading all the USCRN data and producing your own average.
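In that do-it-yourself spirit, here is a minimal sketch of a naive USCRN average. Everything about the file layout is an assumption for illustration (the local folder name, the column names, the CSV format, the -9999 missing flag), and it takes a plain unweighted mean of stations, whereas NOAA grids and area-weights, so the result would not match any official number.

```python
# Minimal sketch of a do-it-yourself USCRN average. Assumptions (not NOAA's
# procedure, not the official file layout): monthly station files have been
# downloaded from the NCEI USCRN page into ./uscrn_monthly/ as CSV, each with
# a "YYYYMM" date column and a "T_MONTHLY_MEAN" temperature column in deg C.

from pathlib import Path
import csv
from collections import defaultdict

DATA_DIR = Path("uscrn_monthly")   # hypothetical local folder
MISSING = -9999.0                  # common NOAA-style missing-value flag

sums = defaultdict(float)
counts = defaultdict(int)

for path in DATA_DIR.glob("*.csv"):
    with path.open(newline="") as f:
        for row in csv.DictReader(f):
            try:
                month = row["YYYYMM"]                # assumed date column
                temp = float(row["T_MONTHLY_MEAN"])  # assumed temperature column
            except (KeyError, ValueError):
                continue                             # skip malformed rows
            if temp == MISSING:
                continue                             # skip missing values
            sums[month] += temp
            counts[month] += 1

# Unweighted mean across all reporting stations for each month.
for month in sorted(sums):
    print(month, round(sums[month] / counts[month], 2), "degC",
          f"({counts[month]} stations)")
```

Whether you then trust the underlying numbers is a separate question, but the arithmetic itself is checkable by anyone.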
They are all NOAA numbers (data). If I download NOAA numbers, they are still NOAA numbers. The government is the only source of the numbers. You are implying I must trust the government. I do not trust the government.
In addition, does NOAA provide the public with the exact computer program they use to calculate a US average temperature? There could be many ways to do that and many possible adjustments to the data.
Can we be confident that NOAA’s raw data are REALLY unadjusted raw data, and assume the measurements were accurate in the first place?
And can we really trust NOAA’s claimed (I think) +/- 0.1 degree C margin of error, even now, much less in the late 1800s?
“Unfortunately, the data from the USCRN network are buried by the U.S. government, and are not publicly reported in monthly or yearly global climate reports. It has also not been deployed worldwide.”
…
Given today’s technology, shouldn’t we confirm and validate the need for it with a network and data collection that is not relying on a 100-year-old system?
If USCRN is willfully ignored (I assume that’s exactly what’s going on) then any “new” network & data collection system will also be willfully ignored.
Article says: “… at least 100 feet from any extensive concrete or paved surface …”
As I and others have pointed out, gases dissipate heat. Our world relies on this happening: it cools our cars, heats our homes, dries our hair, etc. The 100-foot requirement above is a recognition of that capability.
What if? What if there had been an automatic temperature measurement device on Pitcairn Island for the last 5,000 years? Would not that single temperature record suffice for the entire planet? If the planet warms or cools, would it not be reflected on Pitcairn sooner or later, and/or on Tristan da Cunha Island?
That is local climate, not regional or global climate. There are so many factors that go into determining actual heat content that it is impossible for one location to be an indicator for the whole planet.
Once upon a time (twice, actually) it was Snowball Earth, and once dinosaurs roamed above the Arctic Circle. Do we measure the temperature of a human by sticking a thermometer in all orifices and “averaging” the readings? The Little Ice Age temperatures were worldwide, and the current warming is reflected worldwide.
What warming? The “annual average global temp”? Since two different climates can have the exact same mid-range temperature, how do you know the *earth* is warming? And what warming is it that we are seeing? Min temps? Max temps? Both? How do you tell from a mid-range value?
It is now warmer than a couple of centuries ago, during the Little Ice Age, and warmer than the Dark Ages Cooling Period. There have been many cycles over the last 10,000 years, roughly a few centuries of warming followed by a few of cooling, and it is reasonable to assume a continuation of these rough cycles until Nature decides to change them. No one knows for sure that CO2 controls the climate, and no one thoroughly understands climate changes. Heat disperses; it does not accumulate. Sooner or later the planet cools, or warms.
Geophysics studies tell us that the upper and lower mantles of the Earth’s interior are where most of the planet’s internal heat resides.
Large convection cells in the mantle circulate that heat and power the rhythms of the Earth.
There are identified ‘hot spots’ from the heat-generating processes (largely radioactive decay) that go on continuously in the fluid innards of this planet.
How can these not be reflected in the relatively thin (~70 km) crust of the Earth, sitting immediately above the ~7,200°F molten iron cauldron, where we and all life exist?
So “average global temperature” conditions immediately above the crust (including oceans and polar ice caps) is a construct with so many influences and measurement irregularities as to be nonsensical.
While hotspots can and do move around, the total amount of energy coming from the interior stays pretty close to constant.
If you have an infection in your finger, that finger will get hot. But the body as a whole won’t change temperature.
A single human body is small enough that most of the time, a single data point is good enough for estimating the temperature of the whole.
This is not true of the Earth as a whole.
It’s local, sure, and therefore not so great, but maybe no worse than what we’re getting now. It could be useful, if it were possible to have that real data.
Mr. Anti,
Meanwhile, off the island, temps in the Sahara change at different rates as the desert expands or contracts. In any case, air temps are too random and variable. Wouldn’t ocean temps at 10,000′ indicate the “global” temp more accurately? Record and track the internal temp.
UAH maybe?
If there were changes in sea currents, Pitcairn Island could warm or cool, while the Earth as a whole could be doing the opposite.
“Unfortunately, the data from the USCRN network are buried by the U.S. government”
They didn’t do a very good job of burying it. It can all be found here:
https://www.ncei.noaa.gov/access/crn/
You didn’t do a very good job of providing the full context of what I wrote.
“Unfortunately, the data from the USCRN network are buried by the U.S. government, and are not publicly reported in monthly or yearly global climate reports.”
Please, do try to not be a pigheaded commenter.
I’m sure the “pigheaded” commenter (no offense Bellman) can speak for himself, but you did say USCRN was buried by the US government. All I did was enter “USCRN” into google. I went to the very first site. I clicked “Get Dataset” right on the main page and then selected “Data Access” under the Monthly section and Bob’s your Uncle I see all of the data right there. It doesn’t appear to be buried very well to me either. However, I do agree that it does not appear that USCRN is specifically mentioned in the monthly or yearly reports.
Yes, sorry. I should have copied the whole sentence. I read it as two separate clauses: they bury the data, and they don’t use it in the global reports. Rather than: they bury the data by not reporting it in the global reports.
That’s how I read it too. But even if the context were that it is buried, as evidenced by not being mentioned in the reports, I would counter that that isn’t strong evidence of it being buried. After all, the reports are focused on changes relative to a base period and over longer periods of time, and USCRN hasn’t even run long enough to establish a baseline of the typical 30-year duration. Plus, not being mentioned in this-or-that report doesn’t mean it is buried. There are a lot of datasets that NOAA maintains that are not mentioned in those reports.
Ask any reporter the last time they saw USCRN mentioned.
If you didn’t know about USCRN from either your own studies or the writings here, you’d never even know to search for it.
Think whatever you want, but it IS buried.
“Given the government monopoly on use of corrupted temperature data, questionable accuracy, and a clear reticence to make highly accurate temperature data from the USCRN available to the public, it is time for a truly independent global temperature record to be produced.”
I’m interested in who would finance such an operation, and how independence could be ensured.
There are good points in this article about the nature of the existing sources of surface temperature data.
Nevertheless, the problem of reliable attribution remains.
Even if the temperature acquisition system is perfected, so what?
If it shows warming, why did it warm, does it matter, and is it compelling to do anything differently?
If it shows cooling, why did it cool, does it matter, and is it compelling to do anything differently?
Those questions won’t go away.
But we already know enough not to expect emissions of the otherwise harmless non-condensing trace gases CO2, CH4, and N2O to force heat energy to accumulate on land and in the oceans to a harmful extent through what happens in the atmosphere. Watch from space, get a grip on what the atmosphere and ocean circulations do, and move on.
https://wattsupwiththat.com/2022/05/16/wuwt-contest-runner-up-professional-nasa-knew-better-nasa_knew/
Don’t get me wrong – I would not oppose a better system to acquire data. But the bigger problem is harmful policies already taking hold based on unsound attribution.
Data versus predictions
Historical temperature data are not perfectly accurate
Future climate predictions are data-free speculation
Predictions are unrelated to any past temperature trends
Not even just an extrapolation of the 1975 to 2015 warming trend
And the 1940 to 1975 cooling trend is ignored
And the flat 2015 to 2023 trend is ignored
There seems to be little or no correlation between the average temperature trends since 1940 and the very long-term (400-year) ECS predictions.
For example, if IPCC predicted that the temperature trend in the next 82 years would be similar to the prior 82 years, from 1940 to 2023, would that scare anyone?
Actually, climate models do say that for TCS with RCP 4.5
But the IPCC prefers to scare people with ECS and RCP 8.5, which approximately doubles the warming rate relative to TCS with RCP 4.5.
“And the 1940 to 1975 cooling trend is ignored
And the flat 2015 to 2023 trend is ignored”
And the evidence from space (hourly CERES + near-real-time GOES) is ignored/explained away in favor of the “forcing + feedback” framing of the climate system response to GHGs.
I got myself a new little toy.
It’s an Elitech RC4 datalogger and is such a little sweetie.
Its great beauty to me is how diddy/compact it is, that it’s self-powered, that it has a solid-state sensor on the end of a wire, and that you can program it.
Two things especially come together in that:
After some initial familiarisation, on Sunday afternoon I installed it in my plastic-mushroom Stevenson screen, along with another datalogger (Big Hefty Brother, running at 4-minute logs).
Then on Tuesday morning I ’emptied it’ to see what I’d caught – that is the picture you see.
It’s a beaut. Eat yer heart out Leonardo and leave Lisa alone.
We’re all gonna hang this in Notre Louvres from now on.
😀
Two things: (the second one will be in another post)
1/ I wanted to catch The Event that happens in my part of the world, through the night, whenever temperatures drop to near zero Celsius.
I caught it alright – highlighted in pink
Ignore that spike at the very start. That was me still fiddling with it while installing it in my Stevenson screen. Goes to show how sensitive it is, if nothing else.
But I wanted to be absolutely sure that El Sol would never ‘see’ it, and that it was protected by all 3 separate screens that make up My Stevenson Screen.
Despite having run a few recces around the neighbourhood, I am still none the wiser about what causes those wiggles.
Somebody is patently striking up a very big heat machine whenever Jack Frost comes close, but the amazing thing is just how big it must be. There are maybe 5 or 6 Wunderground stations around me within a 2-mile radius and they all see those temperature wiggles.
Just look at it. Holy Kow, it’s moving night-time temps by 3, 4 and 5 Celsius, from a range measured in miles!
Compare to Heathrow Airport, from where all the very best temperature records are set – it’s only 3 miles from one end to the other (east/west) and less than 2 miles north/south.
Takeaway message: if we want super accurate and pristine recordings of temperature, we need to be a damn sight further away than 100 metres from ANY possible heat source or sink.
If interested, the txt file of the actual data is here at Dropbox
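For anyone who wants to poke at a logger dump like that, here is a minimal sketch. The file name, the column names, the timestamp format, the night window and the 3 °C jump threshold are all assumptions for illustration, not the actual Elitech RC-4 export specification.

```python
# Minimal sketch for eyeballing night-time "wiggles" in a datalogger dump.
# Assumptions (not the Elitech export spec): the export was saved as CSV with
# a "timestamp" column like 2024-01-15 03:24:00 and a "temp_c" column in deg C.

import csv
from datetime import datetime

LOG_FILE = "rc4_dump.csv"   # hypothetical export file
records = []

with open(LOG_FILE, newline="") as f:
    for row in csv.DictReader(f):
        records.append((datetime.fromisoformat(row["timestamp"]),
                        float(row["temp_c"])))

records.sort()  # chronological order

# Flag consecutive samples where the temperature jumps by 3 C or more
# during the night hours (22:00-06:00), the window where the wiggles appear.
for (t0, v0), (t1, v1) in zip(records, records[1:]):
    if abs(v1 - v0) >= 3.0 and (t1.hour >= 22 or t1.hour < 6):
        print(f"{t0:%Y-%m-%d %H:%M} -> {t1:%H:%M}  {v0:+.1f} C -> {v1:+.1f} C")
```

That at least puts timestamps on the wiggles so they can be lined up against the nearby Wunderground stations.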
Condensation of frost and dew during the still of night: a release of the latent heat of vaporization back to sensible heat in the depths of the night.
During the day, the opposite, the variation of vapor pressure deficits of turbulent air and ass