Last year the UK Met Office was shown to be inventing long-term temperature data at 103 non-existent weather stations. It was claimed in a later risible ‘fact check’ that the data were estimated from nearby well-correlated neighbouring stations. Citizen super sleuth Ray Sanders issued a number of Freedom of Information (FOI) requests to learn the identity of these correlating sites but has been told that the information is not held by the Met Office. So the invented figures for the non-existent sites are supposedly provided by stations that the Met Office claims it cannot identify and are presumably not recorded in its copious computer storage and archive.
Mr Sanders is understandably unimpressed with the explanation that this vital identifying information is not retained, writing: “Is the general public just supposed to ‘believe’ the Met Office without any workings out evident. To me, and every single scientist who has ever lived, it is imperative to show the data used – ANYTHING LESS IS NOT VALID. No Verifiable Data Source = No Credibility = no better than Fiction.”
Until recently, the Met Office showed weather averages including temperature for over 300 stations stretching back at least 30 years. The data identified individual stations and single location coordinates, but when 103 were found not to exist the Met Office hastily rewrote the title of the database to suggest that the figures arose from a wider local area.
Following the change, Sanders sought FOI guidance about Scole, a temperature weather station in Norfolk that operated for only nine years, from 1971 to 1980. Type in Scole on the new ‘location’ database and it is identified as one of five sites that are the “nearest climate stations to Scole”. Sixty years of average data are given, including 10 years before Scole was actually established. This itself is odd since the Met Office justifies ‘estimating’ data for closed stations to preserve the long-term usability of the data. It would appear a stretch to use this explanation to justify preserving 1960s data from a station that did not open until 1971. Sanders made a simple request and asked the Met Office to reveal the names of the weather stations used in compiling the climate average data for Scole from 1990 to 2020. If the Met Office was unable to supply the full list, he made it as easy as possible and asked for the name of the last station supplying data.
The astonishing claim that the Met Office was unable to help because the information was not held was followed by an explanation that “the specific stations used in regressive analysis each month are not an output from the process”. The unimpressed Sanders observes that the Met Office archives billions of numbers and data items but does not seem to keep a record of its workings out. “So they have no proof whatsoever of how their climate averages were compiled,” he observes.
Sanders also sought similar details about another ‘zombie’ site, namely Manby in Lincolnshire. This actually closed for temperature readings in 1974 but again 60-year averages are currently available. Sanders was intrigued by this site since the CEDA archive that collects Met Office data showed it was still open, a claim also made in an earlier FOI disclosure by the state meteorologist. Again Manby is identified as the nearest climate station when its name is searched on the climate averages site. But the Met Office’s Weather Observations Website shows it is closed and Sanders notes the Met Office has since confirmed that to him. It has been 50 years since an actual temperature reading was taken at Manby but, as with Scole, the Met Office under an FOI request is unable to name any of the ‘well-correlated’ sites supposedly providing data.
It is difficult to understand why the Met Office cannot answer a simple question seeking guidance on where temperature readings were taken. Presumably they would be obtained from the five nearest ‘stations’ identified when a location is entered into the climate averages database. But as the Daily Sceptic has reported in the past, there might be problems with this approach. Cawood in the West Riding of Yorkshire is a pristine class 1 site designated by the World Meteorological Organisation as providing an uncorrupted air temperature reading over a large surrounding area (nearly 80% of Met Office sites are in junk classes 4 and 5 with ‘uncertainties’ of 2C and 5C respectively). Cawood has good temperature recordings going back to 1959. But no rolling 30-year average for Cawood is provided. Instead, the Met Office flags data from five other sites, four of which don’t exist, with the fifth located 27 miles away and 163 metres higher in elevation. Even worse, the location of Norwich brings up five nearby stations, including Scole, none of which exist.
As the Daily Sceptic has noted in the past, the Met Office has only itself to blame for the often trenchant criticism it receives on social media about its temperature collecting operations. It does a fine job of forecasting weather, but activist elements in its operation have weaponised inaccurate temperature recordings to promote the politicised Net Zero fantasy.
Recently, the chief scientist at the Met Office, Professor Stephen Belcher, called for Net Zero “to stabilise the climate”, claiming he saw “more extreme weather” in the Met’s observations. In the UK, he suggested that between 2014 and 2023 the number of days recording 28C had doubled, while those over 30C had tripled compared with 1961-1990. A more extreme weather trend is not something that the Intergovernmental Panel on Climate Change has seen, while observations about more recent hot days might ring truer if they were not based on the increasingly urban heat-ravaged Met Office databases.
And Ray Sanders’ take? “We are regularly told in the mainstream media, particularly the BBC, that we are entering an existential ‘climate emergency’, so how is it nobody wants to discuss the obviously fictional data that is being manipulated to support this ‘argument’?”
Chris Morrison is the Daily Sceptic’s Environment Editor.
The names are:
Bogus 1, 2, 3, n…
There was a time when people got fired for fiddling and extrapolating, now they get promoted.
“Why I returned to the Met Office
…every day I’m reminded of how lucky I am to work for an institution with such a storied past.
…
Looking ahead, I’m genuinely excited about what the future holds, especially with emerging trends in AI, machine learning, and cloud computing.”
https://careers.metoffice.gov.uk/stories/why-i-returned-met-office-innovation-and-partnership
Number crunching Bogus data.
No one should be allowed to gush about their excitement at the prospect of AI without being able to explain (a) what they mean by “AI”, and (b) how their notion of AI does the thing they think it does.
Anyone who attempts to give those two explanations will either stop or reveal themself to be an imbecile.
As for “machine learning”, that is the unfamiliar new name for a lot of stuff we’ve been using for years. And who the heck cares about where their computing is done? I guess if you’re still in primary school it could be exciting—I’d probably have been excited by it when I was about 10…
In truth AI is exceedingly fast pattern matching. Intelligent it is not.
Automated Idiocy.
The way it was programmed.
And yet nations of the world are in a race to be the best in AI.
Never trust Mannipulated data sources.
Likely these zombie datasets are averaged from other zombie data sites
Correct. Although the actual patterns being matched are often unknown.
There are relevant pages on Wikipedia?
Often schizophrenics can recognise nonexistent patterns and insist they exist
It just scrapes all of the Wikipedia pages at once very quickly so you don’t have to go to every relevant page one by one.
Sounds very ~~trustworthy~~ editable.
I am not at all sure of this.
If you read “Game Changer” by Sadler (you need to be able to follow the games at quite a high level, 2000+ rating minimum) you will find Alpha Zero making moves which seem simply incomprehensible, but which some moves later turn out to have rested on what, if it were a human playing, we would call deep insight into the position.
Now, Alpha Zero isn’t programmed in any usual sense to do this, it learned chess itself. It also can’t be doing pattern recognition in any usual sense – it is playing like this in situations it has never seen before and does not have a database of. It’s doing it with a game where the number of possible moves is huge, far beyond any program’s power either to exhaustively analyze or to have in a huge database the resulting positions.
People will say, this is just a program looking to you like a human player of great ability. Well maybe. But Stockfish, which Alpha Zero wiped the floor with, that is recognizable, and its limits are too.
The claim that it’s just pattern recognition seems to rest in the end on defining what it does in that way. It’s not a claim that is testable by observation.
So what do I think it’s doing? I don’t know for sure, but my starting hypothesis is that what it’s doing is very much akin to ordinary human thought, at a very high level.
A possible counter example would be its less good performance and its defects in Go. It did outperform a very strong human player at Go, but it turned out later to have some rather basic vulnerabilities. Don’t know why. If it has any of these in chess I haven’t heard of them.
LLMs are probably just something akin to pattern recognition and extraction and synthesis in some vague sense of the expression. But I am not at all sure that the kind of AI which Alpha Zero represents is. And of course it will have moved on since then, that match with Stockfish was several years back.
All that chess program is doing is, starting with the current board piece positions, running every possible move and counter move, then calculating which has the highest probability of success or eliminating moves with the lowest probability of success, probably both.
It operates as an optimization search algorithm. In this sense it replicates the thought processes of a chess player.
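For anyone wondering what “running every possible move and counter move” looks like in code, here is a minimal, hypothetical sketch: a plain negamax search on the toy game of Nim. It is an illustration of exhaustive game-tree search only; real chess engines (and certainly Alpha Zero, with its learned evaluation) work very differently.

```python
# Illustration only: brute-force game-tree search (negamax) on the toy game of Nim.
# Rules assumed here: take 1, 2 or 3 stones per turn; whoever takes the last stone wins.

def best_outcome(stones):
    """Return +1 if the side to move can force a win, -1 if it must lose."""
    if stones == 0:
        return -1  # no stones left: the other side took the last one and won
    # Try every legal move; the opponent's best reply is -best_outcome(remaining stones)
    return max(-best_outcome(stones - take) for take in (1, 2, 3) if take <= stones)

if __name__ == "__main__":
    for n in range(1, 11):
        winner = "first player" if best_outcome(n) == +1 else "second player"
        print(f"{n:2d} stones: {winner} wins with perfect play")
```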
AI has no current means of discriminating between “truth” and “falsehood”, other than by appealing to the current consensus viewpoint as revealed by Internet-sourced information/publications. And we all know how misleading a “consensus viewpoint” can be (reference “flat Earth”, Newton, Copernicus, Einstein, Planck, Heisenberg, and numerous other great thinkers).
AI has NO independent means of performing experimental testing of any given hypothesis or meme, which is an integral part of The Scientific Method that humans have found to be so useful in expanding science-based truth.
The intelligence of AI is far more artificial than most people recognize.
This isn’t true. Alpha Zero plays moves that in the end result in either win or lose. It doesn’t rely on any consensus about anything.
And how, pray tell, does an AI playing games like chess, shogi, or Go (e.g., Alpha Zero) involve it making decisions about truth or falsehood?
P.S. Making a “false” move in a game, as well as in life, is often called cheating . . . be careful how far you want to run with your assertion because I believe the consensus of humans is that solo “games” and multiple-player “games” should not involve cheating.
Perhaps, or perhaps it reviews all of the recorded games and uses that to create a best case consensus.
Yes with AI they can not only make up imaginary “data” faster, but it provides another ready excuse for the complete lack of transparency – the “proprietary” AI algorithms.
Strativariius, I am pretty sure I saw a preview of this type of alternate reality on the old Rod Serling show “The Twilight Zone”. It didn’t end well.
I couldn’t help noticing that Shatner, Nimoy et al featured in The Scary Door and The Outer Limits…
SF wasn’t just plucked out of the air, but it failed to chart the speed of progress – or dystopian advance. Take your pick.
Computing how clouds work would be quite a trick.
Oh everyone knows the three stations they use for this “data.”
They’re called
Their;
Nether; and
Region
Where the Sun don’t shine.
Come on Nick. Tell us all why we’re wrong!
When I read statements like these, I assume that the writer is hiding something. For examples, “doubled” could mean they went from one instance to two instances over nine or ten years and “tripled” could mean they went from one to three instances over 29 years. Hardly earth-shaking.
Professor Stephen Belcher, Show Me the Data!
It’s a religion.
The stations are as real as the climate crisis.
Every single station is a prophet, the models are the prophecies,
Co2 is Satan and the climate crisis is the Apocalypse.
Trump is Paul turned into Saul.
And the CarbonTax are the indulgences you pay to get rid of your sins.
The fact that a government agency is allowed to manufacture data is mind blowing. What else gets manufactured to justify “problem solving”?
Scientific data is physically measured. This is what so many scientific endeavors are finding when it comes to the replication crisis.
It is fantasy! It is fiction! There is no other way to put it. The people both doing it and allowing it should be dismissed for falsifying official records!
The people both doing it and allowing it should be dismissed for falsifying official records!
But they know best…. /sarc
Please stay 6 feet away.
There was a time when you got fired for data fiddling. Methinks it may be time to get fired at.
They didn’t manufacture it. Pls read their answer. Sanders and the OP either had problems understanding it or they pretended to do so, doesn’t matter. You, as an engineer, a knowledgeable, literate person (/s) are expected to understand these things.
If it wasn’t physically measured, then IT WAS MANUFACTURED. You can claim that the process that manufactured the temperature arrived at an accurate value, but to do that you also need the evidence to prove it. That evidence seems to be missing.
As an engineer, I would never falsely claim I made a physical measurement when I did not. Depending on the risk I MIGHT interpolate between actual physical measurements, but I would always mark it as such and show what physical measurements were used. In other words document the evidence for future review.
Resident troll nyolci’s relation to physical reality and truth is tenuous at best.
Read their answer. You will see what they actually claim, and the evidence. And when you actually understand that, then you’ll be able to assert things about it. Until then you’re just bsing, just like most of you deniers here.
I’ve read their answer, and it’s perfectly clear the Met Office is making up temperatures, and you are lying again (no surprise there).
It’s perfectly clear that you didn’t understand it.
I have a bridge for sale. Only slightly used.
Deniers.
You lose all credibility with that.
None of us deny weather is variable.
None of us deny climate is a running average of weather.
None of us deny the running average, therefore climate, changes.
We argue for science. CO2 has a miniscule effect on the temperature and weather of this planet. We do not deny there is a human element.
What we argue vehemently against is radicalization of civilization that does not benefit human beings.
You deny science, big boy 🙂
Your definition of science fails the sniff test. Activism, now that passes.
I take personal offense to your comment.
What is my definition of science? 🙂 I love when you deniers are bsing.
I love it when an alarmist uses insults to try to silence people all the while avoiding answering a simple question.
I continue to take personal offense with your personal abuse.
Wish I could silence idiots like you who pollute the internet and make even the simplest debate turn into an unhinged bs festival. Unfortunately, idiots like you have the biggest loudspeakers nowadays.
So speaks the Princess Flame War.
That is all you got: a cooked-up reply with nothing of substance in it. Why do you bother coming here if all you have is non-existent data to offer?
Snicker.
Princess Flame War comes here for attention and amusement. She scores points every time she gets someone to respond.
Are you running a brain in for an idiot?
Oh, and here we have the chief cretin. Nice to meet you. I don’t know what is worse, if you pretend not to understand the answer, or if you really don’t understand it. I assume the first but then I have to assume that you consciously take part in this propaganda process, just like very likely Watts and the McCretins. So you do it for pay. If not, then you are just dumb, just like Gorman and the rest here. So which one are you? Please specify it in your answer. Thx.
If you believe you understood that nonsensical “explanation,” then I feel sorry for you.
It is scientific misconduct. What you are saying is that in a medical efficacy study, if you don’t have enough patients, you can create homogenies of pseudo-people for the study to prove the effect. Of course, that is how we got where we are with vaccines and autism, and endocrine disrupters.
Good god… Okay, this is a misunderstanding on multiple levels. First of all, this is not even for proving anything, this is just an FYI style thing, courtesy from the Met Office. But this is not even what the claimed problem here is. That is about some supposed problem with the way the homogenization itself had been done. The thing is that the MetO is using a complicated algorithm (described in a paper) so for a certain location and for a longer duration the series has a non trivial dependence on the series of nearby stations and even the series of the location itself if that existed (even just for a limited period). The algorithm doesn’t record this, this information is irrelevant for the purpose of the site. In other words, they just can’t tell this without long calculations. And the FOI is about information that is actually at hand. BTW the FOI request was obviously malicious.
The thing is that the MetO is using a complicated algorithm (described in a paper) so for a certain location and for a longer duration the series has a non trivial dependence on the series of nearby stations and even the series of the location itself if that existed (even just for a limited period). The algorithm doesn’t record this, this information is irrelevant for the purpose of the site. In other words, they just can’t tell this without long calculations.
You don’t need complicated algorithms or long calculations to create a temperature series. All you need are calibrated thermometers in known locations and honest record-keeping.
Thanks for confirming the accusations of malfeasance.
The point of contention was exactly the fact that there was no calibrated thermometer at a certain location during a very long period in the past, and they wanted to calculate an (approximate) temperature for the location. (I cannot wrap my head around how you deniers are organically unable to understand extremely simple things.)
And what is the uncertainty in an approximate temperature?
Was that uncertainty propagated into an anomaly calculation?
Why do you think they don’t know that? Anyway, this wasn’t even the question. They questioned the need for calculating a temperature instead of measuring it. I just pointed out that you couldn’t measure temperatures directly in the past if there had not been a primary measurement at the location and time in question. You have to approximate that value if you can’t measure it.
Yes. For each and every calculation the result is the average, the uncertainty and the weight (area and time period). A “tuple” of these values. When you use that average further, these variables are used (“propagated”) to those calculations, contributing to that average (resulting in another “tuple”). The whole thing is extremely simple, the only (literally the only, and accidentally a very reasonable) assumption is independence of measurements. Uncertainties don’t have to be Gaussian, don’t have to be of the same distribution etc.
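As an illustration only (not the Met Office’s actual procedure), a minimal sketch of the (value, uncertainty, weight) bookkeeping described above, assuming independent uncertainties and using made-up numbers:

```python
import math

def combine(values, uncertainties, weights):
    """Weighted mean of independent measurements, returned as a new
    (value, uncertainty, weight) tuple. Illustrative bookkeeping only."""
    w_sum = sum(weights)
    mean = sum(w * v for w, v in zip(weights, values)) / w_sum
    # For independent inputs, u_mean = sqrt(sum((w_i * u_i)^2)) / sum(w_i)
    u_mean = math.sqrt(sum((w * u) ** 2 for w, u in zip(weights, uncertainties))) / w_sum
    return mean, u_mean, w_sum

# Three monthly means with +/-0.5 C uncertainty and area weights 2:1:1 (all invented)
print(combine([10.2, 11.0, 9.7], [0.5, 0.5, 0.5], [2.0, 1.0, 1.0]))
```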
The point of contention was exactly the fact that there was no calibrated thermometer at a certain location during a very long period in the past, and they wanted to calculate an (approximate) temperature for the location.
You measure a temperature, you don’t “calculate” it. Why would anyone want an “approximate” temperature anyway?
This is called “making things up”.
For that matter, even this is false, but I know your intention. (You don’t measure temperature, nowadays you measure voltage that is mostly proportional to the temp in a range. With LiG, you actually measure distance that is mostly proportional to the temperature 🙂 )
Hm, what was the temperature in your backyard on the 17th of May, 1989? Do you have a primary measurement for that? Or we only have some stations that are reasonably close?
I have no idea what the temperature was that day. However, I can tell you that the temperature was not the same everywhere in my backyard, and and that I would never use that approximate temperature to calculate an average temperature to 3 decimal places.
Good boy. You are at least trying. Okay, when you hear in the weather report that currently the temp in town X is T, do you call them to tell them it can vary?
Actually, yes. Ever heard the word “microclimate”?
Good on you 🙂
If I were claiming I had that temperature at that date, there would be a measurement, mercury thermometer likely, and a written record.
Claiming stations where none exist and publishing data for those non-existent stations is criminal.
Is there? Is there readily available?
They don’t claim that.
They don’t claim that.
They admit exactly that:
“In order to provide advice and assistance, the long term record is based on observations data at the location, where it is available; any data gaps in the monthly data from this station are filled with estimates obtained by regression relationships with a number of well-correlated neighbouring sites using carefully designed scientific techniques”.
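To make the quoted wording concrete, here is a minimal, hypothetical sketch of a gap being filled by a regression relationship with a single well-correlated neighbouring site. All numbers are invented, and this is not the Met Office’s actual algorithm, which by its own description uses a number of neighbours at once:

```python
# Illustration only: estimate a missing monthly mean at a target station from
# one well-correlated neighbour, via ordinary least squares on overlapping months.

def fit_line(x, y):
    """Least-squares slope and intercept for y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

# Monthly means (degC) for months where both stations reported (invented data)
neighbour_overlap = [4.1, 6.0, 9.2, 12.5, 15.8, 18.1]
target_overlap    = [3.6, 5.7, 8.8, 12.0, 15.1, 17.5]

a, b = fit_line(neighbour_overlap, target_overlap)

# A month where the target station is silent but the neighbour reported 10.3 C
estimate = a * 10.3 + b
print(f"estimated (not measured) value for the gap: {estimate:.1f} C")
```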
Well, they admit exactly the opposite. “any data gaps in the monthly data from this station are filled with estimates” This is not a claim for a station’s existence, this is exactly the opposite. Okay, you are nearing it, you need to take those few steps in the process of comprehension. I can see the progress.
“well-correlated neighbouring sites”
Correlation is not causation. Nor does correlation imply equal. Unless you are a climate scientist.
No one claimed causation. And in this specific case, it’s irrelevant, we are interested in correlation. Tim, we all know thinking is not one of your strengths. Regardless, I ask you (again) to think before you post to avoid ridicule.
You left out the other statement. Correlation does not imply equal!
If correlation does not imply equal then how do you justify substituting a value from one location for one at a different location?
Correlation doesn’t even imply that the slope of the regression lines for two different data sets are the same. Thus it’s not possible to just assume that the anomalies will be the same!
Yet climate science assumes both, that correlation means equal for both temperature and anomaly.
If correlation is established, that justifies substitution. Regardless of whether there is a cause-effect relationship in any direction. (Very likely the correlation at both points is the result of a common and complicated cause.)
Huh, another prospective motto for this site 🙂 The amount of nonsense you can come up with is staggering.
“If correlation is established, that justifies substitution.”
You just stated the same garbage again.
“Regardless of whether there is a cause-effect relationship in any direction. (Very likely the correlation at both points is the result of a common and complicated cause.)”
Why can’t you address the fact that correlation does *NOT* mean equal?
Correlation does not mean equal either in absolute value or in anomaly.
I’ll ask again (and again and again until you address it):
“If correlation does not imply equal then how do you justify substituting a value from one location for one at a different location?”
Who claimed it meant equality? Correlation, if demonstrated, means a strong connection, regardless of its causes, you idiot.
“Correlation, if demonstrated, means a strong connection, regardless of its causes, you idiot.”
Per capita use of margarine is highly correlated with the divorce rate in Maine. You are claiming that there is a strong connection between the two?
You *have* to claim equality if you want to substitute one value in place of another. That’s supposedly the whole idea of “homogenization”, that you can substitute a value in one data set for a value in a different data set. By claiming that homogenization of temperatures is a valid methodology you have to accept the implicit but unstated assumption that the substituted value is equal to the value that should be found in the primary data set.
Otherwise, one is creating new information.
See above. Interpolation etc. is not substitution, but it’s not “new” information.
Interpolation is a method given to researchers to use in their research. Interpolation is not allowed in government provided official MEASUREMENTS. That is creating information that does not exist as a physical measurement and should not be portrayed as such with a station identification.
If the government wants to provide a separate database of interpolated data with geographic identification rather than station identification, that is their prerogative. But it should be identified as manufactured information and it is not physically measured data.
I’m glad you’ve given up on bsing about causation.
No one wants to “substitute”, you genius. Interpolation etc. is not substitution.
More BS. Interpolating a data point at Location1 in order to use it for a data point at Location2 *is* a direct substitution. In order for it to be valid the data point interpolated at Location1 *has* to be equal to the missing data point at Location2 or you’ve created a garbage data point at Location2.
You are in a hole. Stop digging it deeper.
So you admit they manufacture data.
So you admit they manufacture ~~data~~ guesses. Fixed it for you. LOL
That was the update after the missing climate stations were identified and the FOIAs needed an answer.
Is there? Is there readily available?
You really do not read. That or your reading comprehension vacuums.
So you deflect and amuse yourself by spewing insults and carrying on like you are the ultimate answer to life the universe and everything.
Okay, I said if you didn’t have a primary record for a certain data point, you have to interpolate somehow. You said, in your reply, that what if you had. This is a good demonstration for when the question doesn’t make sense.
Okay, I said if you didn’t have a primary record for a certain data point, you have to interpolate somehow. You said, in your reply, that what if you had. This is a good demonstration for when the question doesn’t make sense.
What I said is this:
“If I were claiming I had that temperature at that date, there would be a measurement, mercury thermometer likely, and a written record.”
You are spinning it. I did not say in my reply what if you had.
Such nonsense from The Princess Flame War.
I really don’t understand your problem. If you have a primary measurement, good on you. But we don’t have this luxury for most of the surface of the Earth. So if we want to know the temp there at a certain time, we have to use all the data we have from the neighboring stations.
No, no you don’t. If an individual researcher wants to use some method of creating information to “fill in” what his algorithm requires, then they need to document the procedure and show what has been manufactured.
For a government agency to pronounce temperature data that does not exist as the “official” data is misleading at best. To hide the fact that it is being done is not acceptable under any circumstance. You can not rationalize it away.
If the factors determining the value of interest at Location1 are not the same as the factors determining the value of interest at Location2 then you’ve just created a crap data point for Location2.
if datapoint = a * b * c
and
(a1)(b1)(c1) ≠ (a2)(b2)(c2) then datapoint1 ≠ datapoint2
Then substituting datapoint1 for datapoint2 creates a garbage data point at location2.
So, back to the “manufactured” data.
MET claimed there were stations, some 103 of 300, that do not exist.
MET claimed temperature data from those sites. They lied or they misrepresented.
Had they marked those sites as calculated, rather than pretending it was real measurement data, we would not be having this conversation.
Had they responded to the requests with transparency, we would not be having this conversation.
The MET behavior begs the question: What are they hiding?
Why did they think they needed 300 samples? Wouldn’t 200 be sufficient? Just drop the 103. It shouldn’t change either the average value or the standard deviation of the sample means. If it does then their sampling is garbage to begin with!
Questions abound, Tim.
One good question is the rapid update to the website when these discrepancies were noted.
I could go on but this should be sufficient to show that “manufacturing” temperature data is physically impossible. It only serves to defraud those who think that the temperature data is based on actual physical measurement.
Tim, not again…
What the MetO does is not homogenization. It’s just an informative website. This is your first misunderstanding in a long row. The homogenization you refer to has to be done otherwise the missing data point introduces unwanted artifacts. If you cannot understand this, you’re lost in a debate.
This is so bad it has to be put as a motto to this site. Homogenization is not based on the square root law (which I guess you wanted to refer to). No one has ever claimed that averaging increases accuracy for the individual measurement. It increases accuracy for the average. I don’t want to delve into resolution, that’s another can of worms. Measurement uncertainty per se is eventually a random variable so it’s random. No one says it’s Gaussian, and this is not a requirement. As for canceling, the only (and obviously met) requirement is the independence of measurements.
This strange obsession of deniers with this and their persistent inability to understand these extremely simple things is astonishing. No, temperatures are not extensive, and no one has ever claimed that. When you average temperatures, you (usually implicitly) convert them to an extensive quantity and then convert them back to an intensive one with weighting. As for surface temperature with similar terrain all along, simple area weighting of the average will do this trick. I have calculated this to you and Jim at least 3 times already, and you still don’t get it. Congratulations.
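For what it is worth, the “simple area weighting” being argued about amounts to something like the following sketch (numbers invented; whether the resulting figure is physically meaningful is precisely what this thread is disputing):

```python
# Area-weighted mean temperature over three invented regions.
regions = [
    {"name": "coastal", "temp_c": 11.2, "area_km2": 1500.0},
    {"name": "lowland", "temp_c": 13.4, "area_km2": 4200.0},
    {"name": "upland",  "temp_c":  8.9, "area_km2":  900.0},
]

total_area = sum(r["area_km2"] for r in regions)
# Each region contributes temperature * area; dividing by total area converts back.
weighted_mean = sum(r["temp_c"] * r["area_km2"] for r in regions) / total_area
print(f"area-weighted mean: {weighted_mean:.2f} C over {total_area:.0f} km2")
```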
Yet you said in a prior post;
Which is it, no homogenization or some homogenization?
As to an informative website, has this informative information been used in published scientific papers?
If yes, did the authors know that they shouldn’t be using the information for scientific purposes?
Yeah, my wording was sloppy, sorry. Technically the two are the same or at least extremely similar but this is just an informative website, nothing else. But anyway, I would like to remind you (just as I emphasized in the quoted part) to the fact that the supposed problem here was not even “homogenization” per se but the way it had been done.
No.
Why the fcuk do you think they didn’t know?
“What the MetO does is not homogenization.”
Bullshite! When they substitute temps from one location for another location that is the very definition of “homogenization”. Pairwise homogenization is comparing temperatures with neighboring stations. If there is nothing at the location to use for comparison then the neighboring station reading is substituted.
“The homogenization you refer to has to be done otherwise the missing data point introduces unwanted artifacts.”
More bullshite! In *real* science data points are typically determined based on interpolation from surrounding data points. When there are no surrounding data points there can be no interpolation. It is substituting data points from other data sets that introduces unwanted artifacts!
*Homogenization is not based on the square root law (which I guess you wanted to refer to).”
I didn’t say it was. It doesn’t matter one iota what algorithm is used to determine the value to substitute, IT REMAINS A GUESS. Guesses have in-built uncertainty. It is simply unavoidable.
“No one has ever claimed that averaging increases accuracy for the individual measurement.”
Stop putting words in my mouth. How do you average ONE measurement? I didn’t claim anything about averaging one measurement. The climate meme is that averaging MULTIPLE measurements reduces inaccuracy and increases resolution! That’s the only way temperatures in the hundredths digit can be determined from measurements in the units digit!
“Measurement uncertainty per se is eventually a random variable so it’s random.”
More bullshite! Very few physical measurement devices drift randomly. Most drift in physical measurement devices is caused by heat over time. Heat typically causes an expansion of whatever material is involved, be it the substrate on an integrated chip or the glass in a LIG thermometer. Expansion causes calibration drift in ONE DIRECTION, not in random directions. As usual, you are doing nothing here but demonstrating the fact that you have ZERO experience in the real world of measurements!
“When you average temperatures, you (usually implicitly) convert them to an extensive quantity and then convert them back to an intensive one with weighting.”
How in Pete’s name do you convert an intensive property to an extensive property while retaining the same dimensional description? A mid-range daily temperature has the same dimension as each component – degree of temperature. Averaging mid-range temperatures results in a value that has the dimension of DEGREE OF TEMPERATURE. Temperature is an intensive property. You can’t change that. You have to do something that changes the dimension.
In addition, since temperature is *NOT* an inherent extensive property converting it into one means you have to introduce multiple conversion factors. Things like humidity, pressure, elevation, geography, terrain, wind, etc. That conversion quantity is called ENTHALPY.
Tell us all where climate science has started to use ENTHALPY. Please provide a reference we can all find on the internet.
This type of conversion also introduces a problem with converting it back to a temperature since all of the other factors ARE TIME SENSITIVE. Even if the enthalpy at time T0 is the same as at T1 that doesn’t mean that humidity and pressure remain constant. So the temperature at time T0 can’t be directly compared to the temperature at time T1, only the extensive quantity of enthalpy can be directly compared.
I’ll repeat. You are just showing your lack of experience in the real world of measurements with everything you post!
“As for surface temperature with similar terrain all along, simple area weighting of the average will do this trick.”
More bullshite! What WEIGHTING? How do you weight an AVERAGE? You would weight the components, not the average. You can’t even get this one right!
“ I have calculated this to you and Jim at least 3 times already, and you still don’t get it.”
You’ve never done a single calculation that I remember. Just idiotic words – like you’ve posted here. You’ve not shown how you convert temperature to an extensive value using real world calculations. You’ve not shown how you weight an average with real numbers. You’ve just posted a word salad that makes no sense in the real world we actually live in!
Okay then, let’s call it “homogenization”. It doesn’t change anything.
No, this is substantially complicated. They used the method described in Parry and Hollis (2005).
When you interpolate from “surrounding data points”, you “substitute data points from other data sets” as per definition, you genius 🙂
No one was talking about drift, you fokkin genius. Each and every measurement has its deviation from the actual value. This deviation has multiple elements that may change with time (drift) but the non systematic parts are (universally modeled as) a random variable. Two consecutive measurements of the same quantity will differ due to this.
Oh, okay. Jim was known to be bsing about increasing the accuracy of individual measurements. But what you call a meme here is not a meme, averaging multiple measurements gives you a value that is a much more faithful approximation of the true average than the individual measurements w/r/t the true values. If you don’t understand this simple thing you’re lost in this debate.
How in Pete’s name do you convert an intensive property to an extensive property while retaining the same dimensional description?
No one claimed it would be the same “dimensional description”.
Good god… Temperature is internal energy per molecule per degrees of freedom. So if you multiply it with degrees of freedom (well known) and mass, you have internal energy, an extensive property. If you calculate the sum and then divide again, you get the average temperature for the whole thing. This is so fokkin simple… BTW, most factors cancel out, so under normal circumstances area weighting gives you an extremely good approximation.
Always used it. This calculation is essentially enthalpy (or proportional to that by a constant factor). This is a good illustration how off you are in these things.
Try google “weighted average” 🙂
You have memory problems then, old MoeFoe 🙂
Wrong again. If you want to calculate the internal energy of a body you must multiply the temperature in Kelvin by Universal Gas Constant R, by the degrees of freedom, and by the number of moles, NOT MASS.
Thanks for demonstrating your ignorance.
Yeah. But if you use some material specific constant, you can use mass, too. And for that matter, if the matter stays the same, mass is just as good as mole number. That’s still an extensive quantity that you get, that is proportional to internal energy.
Not true in the least. Temperature is an indication of the kinetic energy (sensible) portion of the internal energy of a molecule. Temperature IS NOT a proxy for the latent energy (latent heat) that is also part of the internal energy but is not sensible. Latent heat of water is a large value and never examined when using temperature as a proxy.
Good god, not again… https://en.wikipedia.org/wiki/Internal_energy#Internal_energy_of_the_ideal_gas U=CvNT where U is internal energy, Cv is a constant, N is mole number. Why the fcuk do you go to a fight without not simply serious preparation but any preparation at all?
Unlike water, an ideal gas does not undergo phase changes, by definition.
for nyolci there is no water vapor in the atmosphere – thus it can be treated as an ideal gas which has no latent heat.
He’s never heard of the steam tables.
Tim, again, comprehension. No one has claimed this (except, of course, for you). The claim was that water vapor was just gas. This claim is true.
“ No one has claimed this”
did you not give this site as a reference?
https://en.wikipedia.org/wiki/Internal_energy#Internal_energy_of_the_ideal_gas
Do you see the last few words? “Internal_energy_of_the_ideal_gas“
If you aren’t claiming the atmosphere is an ideal gas then why did you provide a reference to an article on an ideal gas?
Water vapor is *NOT* an ideal gas. Did you think you could fool us by claiming you said water vapor was just gas?
Oh, so this is the new subject of your masturbation 🙂 Ideal gas is a very good model in practice and it simplifies a lot of stuff here in these debates. But you shouldn’t think that its usage in debates like this invalidates the argument. Of course, in science they use the approximation that is appropriate.
“Ideal gas is a very good model in practice and it simplifies a lot of stuff here in these debates.”
The atmosphere is *NOT* an ideal gas. So an ideal gas is *NOT* a good model in practice applicable to the atmosphere which is the main topic of discussion in these debates.
It’s this kind of garbage that makes climate science into garbage science.
“Of course, in science they use the approximation that is appropriate.”
Trying to approximate the atmosphere by assuming it is an ideal gas with no latent heat is *not* appropriate. It’s this kind of garbage that makes climate science into garbage science.
Wiki. bwahahaha
So the fabricated temperatures are really a heat index temperature calculated using humidity?
Most recorded temperatures do not factor humidity into their temperature records, therefore enthalpy is never accounted for.
I think we should frame this and put it at the top of this site, to show the ignorance and self confidence of the denierfolk here. https://en.wikipedia.org/wiki/Enthalpy Enthalpy is U+PV where U is internal energy, and that, at least for ideal gases (a very good approximation), is CvNT. PV is NRT. So we have Enthalpy = N*T*constant. This is extensive, and it’s still extensive if we use an arbitrary constant. That cancels out anyway during averaging.
You didn’t refute my point at all.
Recorded temperatures do not factor humidity into their records.
Find me a database that has Tmax, Tmin, or Tavg shown as calculated by using enthalpy. Better yet find a global land anomaly ΔT that includes enthalpy.
Tavg is always calculated using enthalpy like quantities as I have showed to you at least 4 times already. Try to comprehend it.
I asked for a temperature database that has recorded temperatures using enthalpy.
You can’t find one can you?
That means none of the global anomaly ΔT temperatures use it either.
Its usage is implicit.
“Its usage is implicit.”
More garbage. If it were “implicit”, i.e. understood but not stated, then it would be obvious that you can’t average temperature because it can’t be determined directly from just the enthalpy value.
Again, this is so bad, it’s good. I’m inclined just to leave it here as it is. Just a hint: here “implicit” doesn’t mean what you think it means. “Implicit” means that when you weight the average, that is essentially the conversion to some extensive quantity (but already mathematically simplified, not spelled out full). I know you won’t be able to understand it but it has to be stated anyway.
“Just a hint: here “implicit” doesn’t mean what you think it means. “Implicit” means that when you weight the average, that is essentially the conversion to some extensive quantity (but already mathematically simplified, not spelled out full). I know you won’t be able to understand it but it has to be stated anyway.”
More BS.
Again, you can’t directly determine temperature from enthalpy because other variable factors exist, especially humidity. A problem you continually fail to acknowledge or provide an answer for. No amount of “weighting the average” can account for not knowing the other factors.
One more time:
latent heat is m[ (cpw * t) + hwe]
“m” (i.e. mass) is an integral part of calculating enthalpy. If you don’t know the mass then you can’t calculate “t” and no amount of “weighting” can fix that problem. And you can’t know “m” unless you know the humidity.
All you’ve shown is how to calculate the enthalpy of an ideal gas! The atmosphere is *NOT* an ideal gas. An ideal gas, by definition, has no latent heat involved.
As usual, the only thing you’ve showed us is how little you understand the real world.
Yeah. Latent heat comes with phase transitions. And during averaging, there’s no mixing, there are no phase transitions.
latent heat is m[ (cpw * t) + hwe]
Latent heat *does* come from the fact that water can have phase transitions. The *amount* of latent heat, however, is directly dependent on the mass of water vapor involved. The only way to average latent heat is to *mix* the water vapor so you get a total mass. Otherwise the “average” is physically meaningless.
It’s like trying to find the “average” height of a herd of mixed Shetland ponies and quarter horses. Yes, you can calculate an average. And yes the average is physically meaningless since you won’t find a single component of the herd that is the average height. It’s a multi-modal distribution.
So is latent heat if you don’t mix the components.
It’s all part of the climate science meme of “numbers is just numbers”. You can therefore average anything and whether it means anything physically is irrelevant! It’s a perversion of the use of statistical DESCRIPTORS. An average is a statistical descriptor, it is *NOT* a measurement. The average, variance, skewness, kurtosis, quartiles, etc are all DESCRIPTORS of a distribution but they are *NOT* the distribution themselves. They are not part of the data set. It’s like saying you have brown hair. The color brown is not the hair itself, it is a DESCRIPTOR of the hair. The map is not the territory.
Wonderful ignorance in basic thermodynamic things. Latent heat is not a property of a state. It’s a property of a reaction (state change). It is irrelevant for averaging, we use “snapshots” there, there’s no state change.
“Wonderful ignorance in basic thermodynamic things. Latent heat is not a property of a state. It’s a property of a reaction (state change). It is irrelevant for averaging, we use “snapshots” there, there’s no state change.”
Of course latent heat is a property of a state. Otherwise the equation for latent heat (m[ (cpw * t) + hwe]) makes no sense at all!
Can you show where m[ (cpw * t) + hwe] is somehow incorrect for calculating latent heat?
Latent heat exists in the atmosphere. That it comes from the evaporation of liquid water doesn’t make it not a part of the state of the atmosphere. The amount of latent heat in a volume of air depends on the mass of water vapor in that volume of air and thus it is not a constant value throughout the atmosphere since the humidity of the atmosphere is not a constant.
Evaporation is a reaction. Temperature is a quantity of a state. A snapshot in time. Evaporation is a reaction, a state change. Latent heat is associated with a reaction, a change in state, not with a state itself. It is incredible that this is the fourth round (or so) and you are unable to understand this. Water vapor and aerosols (liquid water) etc. themselves don’t have latent heat per se. When they change phase, latent heat is emitted or absorbed. Water etc. change the nr of degrees of freedom and this is the only relevant factor for temperature calculations (like averaging). But, for that matter, this change is not that great, so even ignoring it gives us a very good approximation. But scientists take this into account in more detailed studies.
Don’t have latent heat per se. Exactly where did the term latent heat originate dummy?
The TOTAL internal energy of a molecule is a sum of kinetic energy (sensible temperature) and potential energy (latent heat).
I don’t know what you think water vapor emits when it changes back to a liquid. It is the latent energy that it carried with it.
If you can find a resource that describes what you are asserting, POST IT HERE, I want to see it.
“Evaporation is a reaction”
The issue isn’t the constant. The issue is the constant times the mass involved in order to get the total amount of heat generated!
The AMOUNT of heat generated *IS* a property of a volume of air.
You are doing nothing but arguing a red herring in order to avoid having to admit that latent heat exists. All so you can assume the atmosphere is nothing but dry air!
you are in a hole. Stop digging!
“enthalpy like quantities”
ROFL!!
Okay, so we can agree, that temperatures can be averaged with proper weighting, right? Your new problem is humidity, apparently, right? Bad news, for averaging, humidity doesn’t really change anything.
“Okay, so we can agree, that temperatures can be averaged with proper weighting, right?”
NO! Temperatures can’t be averaged. That is more climate science garbage. Temperature is not an extensive value. You can’t just assume that the humidity, wind, pressure, etc. is the same from moment to moment let alone from location to location.
There is no way to *weight* temperature to make it extensive.
Heat transfer in the atmosphere can change temperature, pressure, or volume – or any combination of the three. There is no way to *weight* temperature to account for this.
“Your new problem is humidity, apparently, right? Bad news, for averaging, humidity doesn’t really change anything.”
Humidity is a ratio so is an intensive property. But the *mass* of water vapor, part of the latent heat equation, can be determined directly from the specific humidity and mass *is* an extensive property. So you can average the mass of water vapor. You can *NOT* determine temperature directly from enthalpy, temperature is not a ratio per unit anything associated with the atmosphere.
“No, this is substantially complicated. They used the method described in Parry and Hollis (2005).”
I’m not going to pay for the paper. I will note that the abstract makes *NO* mention of humidity as a variable in any of the interpolation methodology. Humidity is a primary factor in determining temperature of moist air.
The abstract also says: “are incorporated either through normalization with regard to the 1961–90 average climate, or as independent variables in the regression.”
Just how was the 1961-1990 average determined? By “climate” are they speaking of temperature? Where would the pressure, humidity, upper air wind velocity, etc come from in 1961? These values would be needed to make the factors into “independent variables” in the regression *or* they would be just guesses – i.e. primary elements of uncertainty in the results.
“When you interpolate from “surrounding data points”, you “substitute data points from other data sets” as per definition, you genius “
Did you read this before you posted it? If I know the temperature at location L1 was 10deg at time T0 and was 11deg at time T0 + 1sec I can “interpolate” the temperature at T0 + 0.5sec from the “surrounding” data in the data set. I can *NOT* take the temperature at L2 at T0 and T1 and use it to interpolate the data value at L1 at T0 + 0.5sec. L2 data is *NOT* surrounding data, it is independent data.
“No one was talking about drift, you fokkin genius.”
Total uncertainty = systematic uncertainty + random uncertainty. The problem is that if you can’t isolate the random uncertainty element then you can’t tell what the systematic uncertainty is. If you can’t isolate the systematic uncertainty (e.g. calibration drift) then you can’t tell what the random uncertainty is.
Both John Taylor and Philip Bevington, in their tomes on measurement and uncertainty, state that systematic uncertainty can *NOT* be analyzed using statistical methodology. Therefore the random uncertainty can’t be isolated and cancelled either – BECAUSE YOU DON’T KNOW WHAT IT IS!
This is *further* complicated by the fact that even random data can be significantly skewed. Significantly skewed random effects can have asymmetric uncertainties that do not cancel, e.g. -1,+3.
You are trying to defend the climate science meme that all measurement uncertainty is random (it isn’t), Gaussian (it isn’t), and cancels (it doesn’t).
“non systematic parts are (universally modeled as) a random variable”
See what I mean? The implicit but unstated assumption you are making here is that random data is always Gaussian and cancels. You *must* justify that assumption before using it – and climate science (including you) never bother to justify the assumption!
“Two consecutive measurements of the same quantity will differ due to this.”
And they can vary asymmetrically. Assuming they will cancel should be justified explicitly but never is when it comes to climate science.
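A small simulation of the point the two sides keep circling: with independent random errors, the scatter of an average of N readings shrinks roughly as 1/sqrt(N), while a shared systematic offset (for example one-directional calibration drift) is untouched by averaging. All numbers are invented for illustration:

```python
import random, statistics

random.seed(1)
TRUE_VALUE = 15.0   # "true" temperature in degC (invented)
RANDOM_SD  = 0.5    # independent random error per reading
SYSTEMATIC = 0.3    # shared offset, e.g. calibration drift in one direction

def mean_of_n(n):
    readings = [TRUE_VALUE + SYSTEMATIC + random.gauss(0, RANDOM_SD) for _ in range(n)]
    return statistics.mean(readings)

for n in (1, 10, 100, 1000):
    means = [mean_of_n(n) for _ in range(2000)]
    scatter = statistics.stdev(means)             # falls roughly as RANDOM_SD / sqrt(n)
    bias = statistics.mean(means) - TRUE_VALUE    # stays near SYSTEMATIC, does not cancel
    print(f"n={n:5d}  scatter of the mean ~ {scatter:.3f}  residual bias ~ {bias:.3f}")
```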
“I’m not going to pay for the paper. I will note that the abstract makes *NO* mention of humidity as a variable in any of the interpolation methodology. Humidity is a primary factor in determining temperature of moist air.”
Tim, here is the paper:
http://www.rengy.org/uploadfile/file/中文版/资源/文献/2004/Development%20of%20a%20new%20set%20of%20long-term%20climate%20averages%20for%20the%20UK%20%20.pdf
Thank you!
From the paper:
“It is recognized that there are some notable omissions, such as wind speed, wind direction, humidity, visibility, solar radiation, snow depth and days of snow falling. It is expected that some or all of these variables will be addressed in future projects.”
How in Pete’s name do they leave out humidity which is a direct factor in enthalpy and therefore in temperature? How do they leave out solar radiation which is also a direct factor in enthalpy and therefore in temperature?
Much of the difference between coastal and inland stations as far as temperature is concerned are based on humidity, wind speed, and wind direction. How can they quantify any difference if they don’t consider these factors?
“The correlation coefficient is adjusted based on the length of overlapping record between the stations. Linear regression is used to generate the estimates, and a number of neighbours are used to reduce random errors.”
How do they reduce systematic errors such as calibration drift? All this methodology does is spread around measurement uncertainty from one station to the next. Nor are they actually reducing “random error”. They are just increasing the number of highly correlated data points to make things look better!
humidity is not a factor in enthalpy (except for the slightly different constant). Humidity is a factor in a reaction (like mixing), and only if there are phase transitions. Temperature measurements can be regarded as point-like w/r/t time.
Jesus Christ…
You are full of crap.
What is the specific heat of water? How much energy can water vapor in a cubic meter of air at 50% relative humidity absorb as compared to CO2?
As an “expert” you should have these right at hand.
Doesn’t matter. That’s for a state change or reaction. We are talking about timewise pointlike things (temperatures at certain times). Water vapor, etc. only affects the constants (essentially the nr of degrees of freedom), and even that is not a great change, but anyway it can be handled.
Timewise! Really? The last I knew, timewise dependent variables required a gradient function to describe them.
All you are describing is a value at an infinitesimally small point in time. Have you ever heard of integration?
Again, latent heat
m[ (cpw * t) + hwe]
You keep ignoring the “m” piece of that equation!
” We are talking about timewise pointlike things (temperatures at certain times). Water vapor, etc. only affects the constants (essentially the nr of degrees of freedom), and even that is not a great change, but anyway it can be handled.”
Can you read that equation for latent heat at all?
The amount of water vapor in a volume of air is a VARIABLE, it is not a constant. The amount of water vapor in a volume of air, a variable in both space and time, determines how much latent heat exists in that volume of air at any point in time and any location in space.
You can’t just assume the atmosphere is dry air!
But climate science has to make the assumption that all is dry air. The movement of heat from one location to another by latent heat is anathema to the use of temperature alone.
Read this page.
If a body both absorbs and emits heat, then it must also store the energy associated with that heat. That potential energy is important in the transfer of “heat”.
“humidity is not a factor in enthalpy”
Don’t you ever get tired of making idiotic assertions like this?
enthalpy of air = enthalpy of dry air + enthalpy of water vapor
The enthalpy of water vapor is directly dependent on the mass of water vapor which is, in turn, directly related to the humidity ratio!
The humidity DETERMINES HOW MUCH WATER VAPOR YOU HAVE!
That means it *IS* a factor in enthalpy.
“Temperature measurements can be regarded as point-like w/r/t time.”
You are your own worst enemy. That means that at each point in time you are measuring A DIFFERENT THING. How do you average different things and get a physically meaningful value? Does the average height of a herd of mixed Shetland ponies and quarter horses have a physical reality?
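For reference, the standard psychrometric approximation does carry an explicit humidity term: per kilogram of dry air, h ≈ 1.006·T + w·(2501 + 1.86·T) kJ, with T the dry-bulb temperature in °C and w the humidity ratio. A small illustrative calculation with invented numbers:

```python
def moist_air_enthalpy(t_c, humidity_ratio):
    """Approximate specific enthalpy of moist air, kJ per kg of dry air.
    Standard psychrometric form: h = 1.006*T + w*(2501 + 1.86*T),
    where 2501 kJ/kg is the latent heat of vaporisation of water at 0 degC."""
    return 1.006 * t_c + humidity_ratio * (2501.0 + 1.86 * t_c)

# Same 20 degC air, dry versus humid (humidity ratios are invented examples)
print(moist_air_enthalpy(20.0, 0.000))  # about 20 kJ/kg
print(moist_air_enthalpy(20.0, 0.010))  # about 45 kJ/kg: same temperature, very different enthalpy
```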
Yeah. And the enthalpy of water vapor has a different constant. So there’s a slight modification for the combined constant. That’s all. There are no state changes here, no actual mixing.
Please first understand the meaning of this. It means there are no reactions.
“And the enthalpy of water vapor has a different constant.”
No, the vaporization energy of water is a constant. But the enthalpy of a volume of air is dependent on the AMOUNT OF WATER VAPOR in that volume of air. The vaporization energy and the amount of water vapor are two different things entirely!
A lot of bollocks 🙂
Oh, the next step 🙂 First you cry out loud that the uncertainty is ignored. When we point out that it’s not true, you come up with some bs that even the uncertainty (or at least the random element of it) cannot be known. Can you please confirm that we can measure things at all? 🙂
Wrong. The only requirement for this is independence. And they don’t “cancel” per se. The uncertainty decreases for the average.
Again NO, and please understand this at last. This is your meme. The one single requirement is, again, independence. And, again, they don’t cancel each other completely. They get reduced.
“ First you cry out loud that the uncertainty is ignored. When we point out that it’s not true, you come up with some bs that even the uncertainty (or at least the random element of it) cannot be known.”
It’s not BS. It is exactly what both Taylor and Bevington SPECIFICALLY point out in their tomes on measurement uncertainty.
Taylor: “Error in a scientific measurement means the inevitable uncertainty that attends all measurements.”
Taylor: “The best you can do is to ensure that errors are as small as reasonably possible and to have a reliable estimate of how large they are.”
Taylor: “For now, error is used exclusively in the sense of uncertainty, and the two words are used interchangeably.”
Bevington: “Because, in general, we shall not be able to quote the actual error in a result, we must develop a consistent method for determining and quoting the estimated error.”
“Can you please confirm that we can measure things at all?”
You have *NEVER* understood the concept of measurement uncertainty even after multiple attempts by people to explain it to you. You remain willfully ignorant on the subject, the worst kind of ignorance there is.
“Wrong. The only requirement for this is independence. And they don’t “cancel” per se. The uncertainty decreases for the average.”
Independence does *NOT* guarantee a Gaussian distribution. And uncertainty always adds when you are measuring different things with different instruments under different conditions, e.g. temperatures. The uncertainty never decreases, it ALWAYS ADDS. Thus when you are “averaging” temperatures their uncertainties ADD. Dividing by the number of data points only creates an “average uncertainty”. It is not the uncertainty of the average. The “average uncertainty” is meaningless in the real world. It is just like saying the average length of two 2″x4″ boards, one 8′ long and the other 4′ long, is 6′. If you are trying to span a 6′ gap based on the average length being 6′ then one board will be way too short! You’ll wind up in the same position by using the average measurement uncertainty. In fact, it can be DANGEROUS if it involves public safety! If you use the average shear strength of I-beams to design a bridge span it means that some of the I-beams will have a shear strength LESS than the average, thus creating a dangerous situation.
You don’t even understand the difference between repeatability and reproducibility in measurements which apply to measuring the same quantity let alone when you are measuring different quantities.
“This is your meme.”
No, this is YOUR meme. Yours and climate science. It allows you to totally ignore the measurement uncertainty associated with even the daily mid-range temperature let alone the global average temperature.
You won’t even admit that the measurement uncertainty of a daily mid-range temperature for a typical measurement station (individual readings of +/- 0.3C) is between 0.4C and 0.6C. That totally blows out of the water the ability to define an anomaly in the hundredths digit! And that’s just for one day at one location!
Jesus fokkin christ… Again, for the square root law (and the bit more complicated general form) the only requirement is independence.
For that matter, you’ve pulled a big one in this post 🙂 Such a heap of bs…
While this is just a glimpse into your confused and convoluted thinking, I have a question. There’s an instrument, your husband has written about it. Now its readings are actually the average of 6 “raw” readings taken at 10 sec intervals. Tell me please what the measurement uncertainty of this instrument is for the normal reading. You can assume literally anything for the raw readings (that you don’t see in practice). There must be something for the average, otherwise we can’t say anything about the measurement uncertainty of this instrument.
“There’s an instrument, your husband has written about it. Now its readings are actually the average of 6 “raw” readings taken at 10 sec intervals. “
If this instrument is measuring temperatures at 6 different times then it is measuring six different things. You are using the same instrument but you are not measuring the same quantity.
“You can assume literally anything for the raw readings (that you don’t see in practice). There must be something for the average, otherwise we can’t say anything about the measurement uncertainty of this instrument.”
You STILL don’t understand measurement uncertainty. You just remain willfully ignorant!
First off you have to state the measurements properly.
The measurement at time t0 would be T0 +/- measurement uncertainty.
The measurement at time t1 would be T1 +/- measurement uncertainty
….
The measurement at time t5 would be T5 +/- measurement uncertainty.
T0 to T5 are all different quantities.
Let’s average just the first two measurements and assume that u0 = u1 = u.
The average of the stated values becomes (T0 + T1)/2. But the range of possible values becomes (T0 - u) + (T1 - u) = (T0 + T1 - 2u) at one extreme and (T0 + u) + (T1 + u) = (T0 + T1 + 2u) at the other. So the range of possible values has gone from +/- u to +/- 2u.
So your measurement value for the average should be given as
(T0+T1)/2 +/- 2u
The measurement uncertainty of the average has increased. The *average uncertainty* remains 2u/2 = u. Thus the average uncertainty is *NOT* the uncertainty of the average.
Every time you add another data point the measurement uncertainty of the average will go up while the average uncertainty stays the same.
You CAN NOT REDUCE the measurement uncertainty in a data set by averaging. It is truly just that simple. The only time it reduces to zero is if you have a truly random and Gaussian distribution for the measurement uncertainty. If you think there is a partial cancellation then the accepted method of propagating the measurement uncertainty is using root-sum-square. But even root-sum-square addition GROWS the measurement uncertainty!
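For readers trying to follow the arithmetic, here is a minimal Python sketch (entirely made-up numbers: a hypothetical count N and per-reading uncertainty u) of the two propagation conventions being argued over here, worst-case addition and root-sum-square; which of them, if either, applies to field temperatures is exactly what is in dispute in this exchange.

# Minimal sketch: how a per-reading uncertainty u propagates to a sum and to a mean
# under the two conventions discussed above. All numbers are illustrative only.
N = 6          # hypothetical number of readings
u = 0.5        # hypothetical per-reading uncertainty, in degrees C

# Worst-case (linear) addition: the stated uncertainties of the summed readings add directly.
u_sum_linear = N * u
# Root-sum-square addition: used when the individual errors are treated as independent.
u_sum_rss = (N * u ** 2) ** 0.5

# Dividing the sum by N to form the mean also divides its uncertainty by N under
# either convention; whether that step is legitimate for field temperatures taken
# at different times and places is the point being contested here.
u_mean_linear = u_sum_linear / N   # equals u
u_mean_rss = u_sum_rss / N         # equals u / sqrt(N)

print(u_sum_linear, u_sum_rss, u_mean_linear, u_mean_rss)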
Huh, this is perfect. It’s so bad, it’s perfect. But this is a true gem even here:
So you cannot reduce but you can make it null 🙂 But then you can “partially cancel” it with the square root law. Tim, you’re a genius, you can hold three contradictory things in your head without blinking.
“So you cannot reduce but you can make it null”
Your reading comprehension skills are atrocious.
How do you make measurements of different things truly random and Gaussian?
I gave you the quotes from Taylor and Bevington pointing out that *no* measurements are truly random and Gaussian. Therefore you can never get a null value for measurement uncertainty. You can make it small but you can never make it zero.
More bull crap.
Read the attached carefully. It comes from Dr. Taylor’s book, An Introduction to Error Analysis, 2nd Edition.
Note the following requirements for using the divide by √N rule when dealing with measurements.
Dr. Taylor says “we imagine performing a sequence of experiments in each of which [we] make N measurements and compute the averages. We now want to find the distribution of these many determinations of the average of N measurements”
What Dr. Taylor is describing is many samples of the same thing, each sample consisting of N measurements. He also assumes x1, …, xn are all measurements of the same quantity x, so their widths are all the same and equal to sigma_x. In other words, the samples are IID.
Temperature averaging meets none of these requirements. You and others like bdgwx and Bellman really don’t understand metrology because you just cherry pick and try to rationalize using sampling theory and traditional statistics. Measurement uncertainty does not use sampling theory to sample a population. Measurement uncertainty uses probability distributions to provide an internationally accepted method of DESCRIBING how certain measurements are.
In statistics one can assume the data is 100% accurate and therefore no special treatment is required in the calculations. The means of the multiple samples are also 100% accurate. Any error results only from sampling error. Therefore sigma/√N gives an accurate view of the estimated mean’s reliability. This is what Dr. Taylor also assumes in his derivation of the “square root rule”, as you call it.
NIST TN 1900 creates assumptions that do the same thing. You’ll notice bdgwx never answered my questions about the assumptions used in this EXAMPLE from NIST.
NIST says a better model is one that allows for temporal correlations to be incorporated. They assume calibration uncertainty was negligible (resolution). And they assumed no other significant sources of uncertainty occur. In other words, the readings are 100% accurate. Note, they say the error terms are assumed to be independent random variables (Dr. Taylor’s multiple experiments) with the same Gaussian distribution with mean 0 (ZERO) and the same standard deviation. Read Dr. Taylor’s derivation again to see the relevance.
The reliability-of-the-mean calculation does not inform one of the measurement uncertainty as defined in the GUM. Measurement uncertainty is the dispersion of the observed measurements attributed to the measurand. There is no way that the decreasing interval of the standard deviation of the mean can be used to erase this dispersion or to make it smaller. The GUM, NIST, ISO, etc., all require that the standard uncertainty of the mean also declare the degrees of freedom used to calculate it. That is so the dispersion of the observations can be recovered as SDOM × √n. This is seldom disclosed in climate science.
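As a concrete illustration of that SDOM × √n relationship, here is a minimal Python sketch with synthetic, made-up readings of a single quantity; it simply shows that multiplying the standard deviation of the mean back up by √n returns the dispersion of the individual observations.

import random, statistics

random.seed(0)
n = 30
# Hypothetical repeated readings of one quantity: spread (sigma) of 0.5 around 20.0 C.
readings = [random.gauss(20.0, 0.5) for _ in range(n)]

sd = statistics.stdev(readings)   # dispersion of the individual observations
sdom = sd / n ** 0.5              # standard deviation of the mean (standard error)

# Multiplying the SDOM back up by sqrt(n) simply returns the observation-level
# dispersion, i.e. the spread of the readings themselves rather than of their mean.
print(round(sd, 3), round(sdom, 3), round(sdom * n ** 0.5, 3))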
Long story short, you don’t understand much from this. All the above are just assumptions for the current example. These are not preconditions. Furthermore, you very often confuse similar (or similarly sounding) things like independence of measurement or temporal independence (of a variable with itself).
While “traditional statistics” is just an application of probability theory, I have never “used” statistics here, to the extent that you accused me of being too mathematical. Furthermore, in the past you specifically rejected probabilistic description as non-applicable, especially w/r/t uncertainty (which is just a random variable; NB this is a term from probability theory).
THESE ARE PRECONDITIONS FOR DR. TAYLOR’S DERIVATION TO HOLD! Any mathematician should be able to understand that. However, you obviously don’t.
Show the math that refutes this derivation or a reference that shows a refutation. Otherwise, it remains true and your assertion is worthless.
Go look up measurement uncertainty in NIST’s Statistical Handbook and see what preconditions are required.
Right, the “square root rule” is something a non-mathematician would know.
Remember, a random variable doesn’t create a probability distribution. A random variable is a container that holds the results of an experiment. Those results will define the probability distribution.
In essence, to use the “square root rule” in measurement uncertainty, each input value should be an independent random variable. Read NIST TN 1900 Example 2 again and find out how they make assumptions that treat each measurement as an IID sample.
“Especially w/r/t uncertainty (which is just a random variable,”
Again we see the meme that all measurement uncertainty is random, Gaussian, and cancels.
Random variables can be asymmetric, skewed, multi-modal, and decidedly non-Gaussian. But that ruins the ability to assume it all cancels out!
“ But what you call a meme here is not a meme, averaging multiple measurements gives you a value that is a much more faithful approximation of the true average than the individual measurements w/r/t the true values.”
Bullshite! If I take a data set where all data is inaccurate the average will inherit that inaccuracy. Averaging won’t reduce it in any way, shape, or form. Once again, you are implicitly assuming that the measurements will result in a totally random and Gaussian distribution where you can assume cancellation. Can you EXPLICITLY justify that assumption for temperature measurements where you are measuring different things using different measurement devices? Please show us how you do that!
“If you don’t understand this simple thing you’re lost in this debate.”
You simply don’t understand metrology at all. You can’t even tell the difference between measuring the same thing multiple times under the same environment using the same measurement device and making single measurements of different things under different environments using different measurement devices.
Your ignorance on the subject just shines right through in everything you assert!
“No one claimed it would be the same “dimensional description”.”
So you think you can have intensive and extensive values with the same dimensionality describing the same thing?
You aren’t showing *any* understanding of the basic gas laws at all. The pressure exerted by those molecules depends on the volume in which they exist. It’s the simple PV = nRT. These factors do NOT “cancel” out! And when it comes to moist air it’s even more complicated.
You don’t even understand enthalpy of moist air. The enthalpy is the enthalpy of dry air + the enthalpy of the water vapor. And the enthalpy contributed by the water vapor is a function of pressure and volume. What do you think steam tables were developed for?
And calculations? Where is your calculation of the enthalpy of moist air at 10000 feet of altitude?
I don’t think you get what cancels out. Okay, slowly again. X1 = N1*T1*c where “c” is a constant, N1 is the amount of air in one area, T1 is its temperature, and X1 is an extensive quantity. X2 = N2*T2*c. Their temperature, taken as a whole, is (X1+X2)/(c*(N1+N2)). “c” cancels. The result is (N1*T1+N2*T2)/(N1+N2), a weighted average of the temperatures. The weights can be (and are) approximated very well. And you can see that picking a specific “c” here is not even needed.
Good god… in Thermodynamics, dry air and water vapor are just gases with known (and the same) nr of degrees of freedom. As an illustration, https://en.wikipedia.org/wiki/Enthalpy does not mention humidity, mind you. Humidity is kinda special because of the phase transition that it may produce. But for averaging, where there is no actual mixing, it is irrelevant.
En = U + PV. U = Cv*N*T, PV = R*N*T, so En = (Cv+R)*N*T. N is the amount of gas in moles (the number of molecules divided by Avogadro’s number, about 6*10^23). Cv and R are constants. See? This simple it is.
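For concreteness, a minimal Python sketch (made-up amounts N1, N2 and temperatures T1, T2, and an arbitrary constant c) of the cancellation claimed in the preceding comment: the combined temperature (X1+X2)/(c*(N1+N2)) comes out the same whatever c is.

# Illustrative only: two parcels of air with made-up amounts and temperatures.
N1, T1 = 2.0, 285.0   # amount (arbitrary units) and temperature (K) of parcel 1
N2, T2 = 3.0, 295.0   # amount and temperature of parcel 2

def combined_temperature(c):
    # Extensive quantity X = c * N * T for each parcel, summed, then converted
    # back to a temperature for the combined amount of air.
    X1, X2 = c * N1 * T1, c * N2 * T2
    return (X1 + X2) / (c * (N1 + N2))

# Any positive c gives the same weighted mean temperature: c drops out.
print(combined_temperature(1.0), combined_temperature(718.0))
print((N1 * T1 + N2 * T2) / (N1 + N2))   # the weighted average written directly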
“(N1*T1+N2*T2)/(N1+N2)”
The c only cancels if it is a constant! If N1 and N2 are different then c can’t be the same for both. Your “c” is actually Cv, the specific heat for a constant volume. Perhaps you should go read up on specific heat, especially as it relates to the atmosphere where pressure changes? Again, learn how to read the steam tables.
“nr of degrees of freedom”
The degrees of freedom have to do with the rotational and vibrational modes of the molecules, not the humidity of a volume of air. The number of degrees of freedom changes as the temperature changes, e.g. the degrees of freedom for CO2 are different in the troposphere and the stratosphere. Stop using chatgpt and learn the basics.
“As an illustration, https://en.wikipedia.org/wiki/Enthalpy does not mention humidity”
So what? This just shows that you actually don’t know the subject and are just cherry picking quotes that you think support your assertions.
For moist air: h = ha + hw
h = total enthalpy
ha = sensible heat = cpa * t
hw = latent heat = m[ (cpw * t) + hwe]
cpa = specific heat of dry air at constant pressure
cpw = specific heat of water vapor at constant pressure
hwe = evaporation heat of water
m = the mass of water vapor, i.e. humidity dependent
Expanding:
h = (cpa * t) + m[ (cpw * t) + hwe]
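A minimal Python sketch of that expanded formula, using the usual textbook psychrometric constants (cpa ≈ 1.006 kJ/kg·K, cpw ≈ 1.86 kJ/kg·K, hwe ≈ 2501 kJ/kg at 0C) and made-up humidity ratios for m; it simply shows that the water-vapour term moves the total enthalpy even when the temperature is unchanged.

# Moist-air enthalpy per kg of dry air, h = cpa*t + m*(cpw*t + hwe), in kJ/kg,
# with m taken as the humidity ratio (kg of water vapour per kg of dry air).
# Constants are the usual textbook values; the m values below are made up.
CPA = 1.006    # specific heat of dry air at constant pressure, kJ/(kg*K)
CPW = 1.86     # specific heat of water vapour at constant pressure, kJ/(kg*K)
HWE = 2501.0   # latent (evaporation) heat of water at 0 C, kJ/kg

def moist_air_enthalpy(t_c, m):
    """t_c: dry-bulb temperature in C; m: humidity ratio, kg water vapour per kg dry air."""
    return CPA * t_c + m * (CPW * t_c + HWE)

# Same temperature, different humidity ratios: the latent term dominates the difference.
print(moist_air_enthalpy(25.0, 0.005))   # fairly dry air, roughly 38 kJ/kg
print(moist_air_enthalpy(25.0, 0.015))   # more humid air, roughly 63 kJ/kg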
“Humidity is kinda special because of the phase transition that it may produce.”
Humidity is kinda special because it determines the mass of water vapor involved – which determines the latent heat which is a factor in the total enthalpy!
” See? This simple it is.”
Only because you have no idea about finding sensible and latent heat and their contribution to the total enthalpy of moist air.
The enthalpy of the atmosphere is a constantly changing factor even at a single location and volume, let alone globally. It is wildly chaotic and non-linear which, as usual, means the climate models have to parameterize the factor using some kind of a guess at an average value. It’s part of the reason why even the IPCC recognizes climate as a chaotic, non-linear process!
Nice!
So “c” is a constant ‘cos Cv is a constant, right 🙂 Actually, c is not Cv but it doesn’t even matter here. For our discussion, this is just a constant, and the resultant quantity is extensive. BTW, this is from wiki:
Please keep this in mind.
This is, characteristically, wrong. Or mostly wrong. Degrees of freedom depends mostly on the phase (or phases) of the matter, and on the matter itself (eg. helium has less, O2 has more in its gas form). This is why humidity may be important for air, water may change phases during reactions. Furthermore, the constants in the CxNT formula may be a bit different for different bodies of air because of the presence of liquid and a bit different materials. But the most important point here is that averaging is not real mixing, so we don’t have to deal with the phase changes that may really complicate the picture. We may have to play around the constants that are a bit different (while just using one constant is a very good approximation itself).
I’ve never used chatgpt or any other ai thing (except for the github copilot).
“ Cv is a constant, right”
But the volume of a parcel of air changes as it rises! Therefore Cv, which is based on a constant volume, is not a constant for that parcel of air!
“Degrees of freedom depends mostly on the phase (or phases) of the matter, and on the matter itself “
Word salad, pure and plain!
An ideal gas, which is what you continue to reference HAS NO PHASE CHANGE!
“water may change phases during reactions”
Meaning your continued reference to an ideal gas is simply garbage. And now you are trying to back away from that!
“Furthermore, the constants in the CxNT formula may be a bit different for different bodies of air because of the presence of liquid and a bit different materials.”
What in the hell do you think everyone has been trying to tell you? Moist air has latent heat. An ideal gas does not. But your entire argument has been based on an ideal gas!
” But the most important point here is that averaging is not real mixing, so we don’t have to deal with the phase changes that may really complicate the picture.”
You can’t average an intensive property. Thus all your word salad on the enthalpy of an ideal gas!
Did you not read *anything* I’ve posted? Have you gone to look up the steam tables yet? In essence you are trying to justify finding enthalpy using the ideal gas law – when the atmosphere is *NOT* an ideal gas!
“We may have to play around the constants that are a bit different”
It’s not just the “constants”! It’s the mass of water vapor that determines the latent heat! Again, did you not read anything I posted?
m[ (cpw * t) + hwe]
What in Pete’s name do you think “m” is? I defined it for you! “m is the mass of water vapor, i.e. humidity dependent”
When we average temperatures, we use the state at hand, from a point in time, no reactions, no rising air. A temperature applies to a certain point in time. You somehow cannot comprehend this.
As is typical in climate science, you don’t bother to write a gradient equation for the changing environment. Averages tell everything when you pick the point you want to emphasize. That isn’t how nature works.
Why are you using Cv? The correct parameter is Cp.
That aside, Cv and Cp are not constant. They change with pressure and temperature.
Root Sum Squared (RSS) is what I believe the reference is to.
RSS is used to calculate a combined uncertainty u_c from a number of different input quantities, each with its own uncertainty. That is why an uncertainty budget is useful.
Averaging does not increase accuracy of the average.
Sheesh.
Claiming that air temperature averages have tiny “uncertainty” through invoking sigma/root(N), the essence of “all measurement uncertainty is random, Gaussian, and cancels”, is also Fake Data. It is another aspect of the fraud.
Since the standard deviation of the mean is obtained by dividing the standard deviation by the square root of the number of observations, you can recover the SD by multiplying back by that same square root.
It is interesting how little variation there is in global temperatures, even anomalies.
But with all this verbiage you still haven’t explained how the Met Office can provide figures for sites that do not exist
See Perry and Hollis (2005). The link to the Sanders idiot’s site is in the article, and the link to the MetO answer is there; you can see the reference.
You realize this is nothing more than the argumentative fallacy of a False Appeal to Authority? It’s nothing more than name dropping. *SHOW* us how its done, preferably using data we all have access to. BE SPECIFIC.
It wasn’t even an “appeal to authority” (even if I take you seriously which is hard). There was a persistent question about “making up” things (inventing). I just pointed out that these temperatures weren’t “manufactured” arbitrarily, they are a result of a complicated interpolation process described in the paper.
The only poster using the word arbitrarily is you, the Princess of Flame Wars.
The temperatures were manufactured.
The significance is, there is no means provided to validate the calculations.
What was written in the paper is basic handwaving.
Reread what? I reread the article. I read all the links.
State their answer.
Oh, so you didn’t understand that. Good boy.
Re-read the paragraph where they reference Perry and Hollis (2005). That’s the key. Read it slowly, look up all the words in the dictionary even if you think you know them. Maybe you don’t.
“In order to [provide] advice and assistance, the long term record is based on observations data at the location, where it is available; any data gaps in the monthly data from this station are filled with estimates obtained by regression relationships with a number of well-correlated neighbouring sites using carefully designed scientific techniques”.
Couldn’t be clearer: the Met Office is interpolating temperatures from non-existent stations.
Do the words “data gaps” mean anything to you?
Wrong again. You’ve failed to understand this not very complicated thing. Try harder!
Why should I take anything you say seriously? You have been exposed several times as a liar.
This is what you want to think but I see how butthurt you are.
Never engage in a battle of wits with the unarmed. They never know when they have lost.
Questions from the document you posted.
Read Perry and Hollis (2005). You will have some answers. Read what they reference. etc.
It didn’t disappear. This is an automated calculation. It wasn’t output. The whole purpose of this is just being informative, it’s not even a “scientific result” (while the calculation has the necessary rigor).
Questions from the document you posted.
Me thinks you work for the Met Office. There can be no other explanation for your rants.
How is this not a major scientific scandal akin to “Climategate”? Temperatures in the Met Office dataset from non-existent stations should be expunged completely, and any and all publications whose analysis included these bogus values should be retracted.
The leadership of the agency that perpetrated the fraud should be terminated and prosecuted if possible.
When I managed a major independent laboratory we occasionally found a fault with equipment or procedure after a report had been issued. We had to immediately withdraw the report and either refund the fees or redo the project at our expense. It would have been a firing offense to ignore the problem or try to cover it up.
Rick, exactly on point.
That goes well beyond your lab.
Calibration.
In my case acceptance and qualification testing.
It is clear the path from data to results can not be obstructed, must be direct, or validation and verification are impossible.
Mistakes happen. In your lab, same as me in engineering, if we make a mistake, we own it, admit it, and correct.
I see none of that over the past 5 decades of this climate modelling fiasco.
The UK Civil Service is very practiced at making the information match the Minister’s desired outcome 🙂
UK Government Ministers are preprogrammed to accept everything a Civil Servant tells them.
This was very clear in the evidence to the Post Office Limited Horizon Inquiry.
Nothing mindblowing.
The government and its agencies have manufactured data to justify wars
and conspired at the highest levels with other governments and agencies to bomb another country (Iraq), and co-conspired with the media to perpetuate the lie (100% support from the MSM),
and killed a whistleblower (Dr Kelly).
They have withheld data about mass child rape (Rotherham, Rothington, Newcastle and a dozen other cities), called truth-tellers racist conspiracy theorists, and eventually marginalized child rape with the euphemism ‘grooming’.
There are signs all over the place for decades.
It’s just business as usual.
And keep in mind: A parasite with a degree will do anything to keep their job.
Same MO during Covid
It’s quite possible a computer program selects what data to use, in which case it is entirely plausible that they cannot answer the question simply.
Try asking for the process specification or the code instead.
Programmed accordingly….
Good luck with that
Mr GM. You have clearly not improved your perception of reality since you came off NALOPKT
lol -no excuse can be too cheap for guys like you?
Expert1 :
“How can we avoid accountability and fool the average GrimNastyZombie”
Expert2:
” Let’s outsource the process to a program or AI? That way we can play our fake game forever.
And if things get exposed,it’s AI’s fault and we the innocent victims with the best of intentions”
Expert1:
“But if we use this excuse for the 100th time GrimNastyZombie might get angry?!”
Expert2
” No.We used this strategy already 300 times and GrimNastyZombie is not only perfectly fine with it.
He actually wants such an excuse from us.
No matter what.
He will not only eat and promote the lie.
He will even invent new lies to protect us and his delusion.”
Nonsense. I understand the process perfectly well which is why I asked for the inputs NOT the outputs. How difficult is that to understand?
“Try asking for the process specification or the code instead.”
And you think they would have complied??
This is essentially what the MetO claimed in its reply. FOI is about information at hand. This particular piece of info had not been recorded; they would’ve had to do a lot of calculations.
The MetO actually gave that information in the reply. Perry and Hollis (2005).
Summary: good old Sanders is either an idiot or he maliciously “misunderstands” not too complicated things.
Your problem is that you accept things as credible based on the logical fallacy of appeal to authority.
You dismiss the attempts to independently verify and validate that which is published.
Therefore, your point of view is soundly unscientific.
You are in fact a Science Denier.
But they are the authority, mind you. Should I accept as credible an idiot in a blog?
They have falsified data and you are accepting that they always tell the absolute truth?
In my work, independent verification and validation are vital.
You are clueless. You accept anything you are told that matches your self-deceptions.
So I am an idiot in addition to a denier. Cool
Richard Feynman: “Science is the belief in the ignorance of experts.”
Perry and Hollis (2005) exclude stations based solely on the size of regression residuals, that is, the difference between the predicted value and the observed one. But this assumes that the observed value is accurate by default. Most UK stations do not meet WMO Class 1 siting standards, as WUWT has emphasized for several years now.
The paper also relies on some other questionable assumptions. It applies a single set of regression coefficients across the entire UK to generate the 1 km x 1 km grid estimates. This assumes geographic predictors like proximity to the sea have the same effect on temperature everywhere, which clearly ignores regional dynamics.
Being near the sea may moderate temperatures differently in the southeast compared to the northeast, yet the model treats these influences uniformly.
The accuracy of the gridded maps is evaluated using RMSE. But again, this is calculated using withheld data from the same corrupted stations, just as with the regression residuals used for station exclusion. If the underlying observations are systematically corrupted then the validation exercise simply measures consistency with flawed data, not accuracy relative to the true climate signal.
Also, the validation metric itself is questionable. The authors withhold 6 years of monthly data from 20 stations, but monthly averages smooth over any potential physically meaningful errors.
For example, if the regression underestimates lapse-rate cooling by 1C on windy days, and there are only five such days in a month, that error is mostly smoothed out in the monthly mean.
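To make the first point concrete, here is a purely hypothetical Python illustration (made-up station values, not the actual Perry and Hollis procedure) of screening stations on regression residuals alone: the test only measures disagreement with the fitted relationship, and says nothing about whether the flagged observation, or the ones retained, were accurate in the first place.

# Purely hypothetical illustration of residual-based screening (made-up numbers,
# NOT the Perry and Hollis method itself): fit a simple linear relation between a
# predictor and the observed station values, then flag stations whose residuals
# exceed an arbitrary threshold.
stations = {
    # name: (predictor value, observed monthly mean temperature in C)
    "A": (1.0, 9.8),
    "B": (2.0, 10.9),
    "C": (3.0, 12.1),
    "D": (4.0, 14.9),   # reads high: could be a bad site, or a genuine local effect
    "E": (5.0, 14.0),
}

xs = [v[0] for v in stations.values()]
ys = [v[1] for v in stations.values()]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sum((x - xbar) ** 2 for x in xs)
intercept = ybar - slope * xbar

threshold = 1.0   # arbitrary cut-off in degrees C
for name, (x, y) in stations.items():
    residual = y - (intercept + slope * x)
    # A large residual only says the station disagrees with the fitted line;
    # it says nothing about whether the observation itself was accurate.
    verdict = "EXCLUDE" if abs(residual) > threshold else "keep"
    print(name, round(residual, 2), verdict)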
So can we put away the “primary criticism” that the MetO makes up stations and data? Because now you criticize how they interpolate. This latter problem may be a good candidate for a paper. Go try, submit one.
No. The fact remains that it is manufactured information with no traceability whatsoever.
Just because the location WAS a physical station is no excuse for propagating its ghost existence. If its ghosted existence is vital and necessary for the public, then it should have a physical existence.
Truthfully, any location anywhere could be chosen, even places where no station ever existed following this logic.
If you have a sufficient number of samples then dropping one sample should make little difference in the variance of the data and little difference in the standard deviation of the sample means. In other words there simply isn’t any justification for making stuff up!
Climate science tries to justify making stuff up because “we need long records”. No, you do *NOT* need long records. All you need is sufficient samples at any point in time to generate an acceptable standard deviation of the sample means.
I’ve even seen it stated that if you don’t have data for a location and you don’t infill it with something then you are assuming that location is the average value. SO WHAT? When you calculate an average of a number of samples you are basically assuming that average value applies everywhere – i.e. every sample is the same value. Otherwise the average is meaningless and useless!
If your sample size is so small that dropping one value out of the data set significantly changes the average or the standard deviation of the sample means then your sampling is garbage to begin with!
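A minimal Python sketch of that point, with synthetic made-up values: in a reasonably large sample, dropping one value barely moves the mean or the standard deviation, so nothing statistical is gained by inventing a replacement for it.

import random, statistics

random.seed(1)
# Synthetic "station" values: 200 made-up monthly means around 10 C.
values = [random.gauss(10.0, 2.0) for _ in range(200)]

full_mean, full_sd = statistics.mean(values), statistics.stdev(values)
dropped = values[1:]   # simply drop one value instead of infilling a made-up one
drop_mean, drop_sd = statistics.mean(dropped), statistics.stdev(dropped)

# The shifts are tiny compared with the spread of the data itself.
print(round(full_mean - drop_mean, 4), round(full_sd - drop_sd, 4))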
Interpolation means you have data points to interpolate between. Substituting data interpolated from a different data set into the one under scrutiny *is* not acceptable. You may as well just pull it out your backside.
Again, correlation is not causation. Correlation does not imply equal. Unless you are the MetO.
You miss the point. Those are interrelated.
“Go try, submit one.”
From what I’ve observed over the years in the climate blogosphere, publishing contrarian papers often comes at a professional cost. Errors are dissected with unusual intensity, arguments are misrepresented, and the researchers themselves are targeted rather than just their work.
It’s no coincidence that most publishing skeptics are either retired, in senior roles, or outside academia entirely. They do it once they have little left to lose.
I prefer to offer criticism anonymously. After all, WUWT is structured as a form of public peer review.
If someone finds my input useful, great. If not, that’s fine too. I’m not willing to risk ruining my life over a paper that’s unlikely to change minds.
“Errors are dissected with unusual intensity, arguments are misrepresented, and the researchers themselves are targeted rather than just their work.”
You just described nyolci’s methodology.
POOMA, except they spell it arses?
Summary of the above article:
From the UK MET Office: “What, you want factual information . . . sorry, we don’t deal in that.”
Anticyclonish
As I was going up a stair
The Met – a temp that wasn’t there
It wasn’t there again today
Oh how I wish it’d go away!
with apologies to William Hughes Mearns – Antigonish.
Any claims about maximum temps from electronic thermometers housed in Stevenson screens during fine weather need to be treated with caution, because during fine weather the warming of the Stevenson screen by the sun can cause the temperature readings to be overstated by between 1C and 3C relative to the real shade temperature. As has been made clear to me since I have been recording temperatures with a LIG thermometer in open shade.
Today has been typical of what happens during such weather conditions.
Here in Scunthorpe as of 3.50pm I have recorded a temperature of 18.6C, yet the weather stations on Hatfield Moor and Thorne Moor are recording 21.2C and 19.9C.
While the Met Office weather station at Waddington is recording around 21C and a weather station in Lincoln is showing a temp of 20.7C.
This is what constantly happens whenever there is fine sunny weather.
Interesting. Over the years the Climate experts have said that using temperature anomalies solves the problem you’re identifying. I haven’t heard anomalies used as an argument lately, perhaps they’ve given up on that?
By having all of the thermometers housed in Stevenson screens outside in the sunshine, this overstated warming of daytime temperatures is no longer an anomaly; rather, it becomes the benchmark for claimed daily maximum temperatures.
What’s needed is a major rethink on this matter. There needs to be on-site testing comparing an electronic thermometer in a Stevenson screen alongside an electronic thermometer in open shade. As it’s my view, this is the only way it will be shown just what an issue this has become. The Stevenson screen may have done the job back in the late 19th century, but that’s no longer the case with modern electronic temperature recording.
I’d go further on this. The comparison reference should be an electronic thermometer in a Stevenson screen in open shade.
There can be considerable radiance from a sunlit scene into a shaded area.
Yes l would agree with that.
These screens need to be in the shade due to the advancements in temperature recording.
Anomalies don’t fix anything. You use (Tmax+Tmin)/2 to find a daily mid-range temperature. Those daily mid-range temperatures are then averaged to get a monthly “average” temperature. Those monthly “average” temperatures are then used to create a baseline “average” temperature. Then current daily mid-range temperatures are subtracted from the baseline to get an “anomaly”. If Tmax or Tmin is incorrect at the start then that propagates throughout the whole chain of “averaging” and the anomalies wind up being inaccurate.
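For concreteness, a minimal Python sketch of that chain with made-up numbers and a deliberately simplified one-month baseline: a constant error introduced in Tmax at the measurement stage is carried straight through the mid-range, the monthly mean, and the anomaly.

# Minimal trace of the chain described above, with made-up numbers: daily
# mid-range -> monthly mean -> baseline -> anomaly. A constant error added to
# Tmax at the measurement stage is carried through every later step.
def monthly_mean(tmax_list, tmin_list, tmax_error=0.0):
    mids = [((tmax + tmax_error) + tmin) / 2 for tmax, tmin in zip(tmax_list, tmin_list)]
    return sum(mids) / len(mids)

# Hypothetical month of readings (30 identical days keeps the arithmetic obvious).
tmax = [20.0] * 30
tmin = [10.0] * 30

baseline = monthly_mean(tmax, tmin)                  # baseline built from unbiased data
current = monthly_mean(tmax, tmin, tmax_error=0.5)   # same weather, but Tmax reads 0.5 C high
print(current - baseline)                            # the 0.5 C Tmax error shows up as a +0.25 C anomaly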
Climate science is based on the assumption that averaging always increases accuracy. It’s garbage, pure and utter garbage.
Climate science meme 1: All measurement uncertainty is random, Gaussian, and cancels.
Climate science meme 2: Numbers is just numbers, no need for significant digits when measuring physical phenomena.
Climate science meme 3: Averaging can increase resolution of measurements.
Are the Stevenson screens made of whitewashed wood? Or are they more modern materials. Is it possible that there is a difference in temperatures read depending upon the material used to make the screen?
Here is a good paper.
https://www.researchgate.net/publication/249604736_On_the_USCRN_air_temperature_system
It used to be whitewash. In the US some years ago they changed to latex. One study estimated as much as a 1.5C error was introduced with that change.
And then add the digital thermistor data sampling rate to the equation.
Yes replacing the large wooden screens with smaller screens made of plastic has made the matter worse.
Due to the smaller screen being warmed more quickly by the sun, but also because over time these plastic screens lose their bright white colour and become more of a creamy white, which causes them to warm up more quickly in sunlight.
No matter what the station enclosure is made of aging will introduce measurement uncertainty. Even with modern plastics the level of exposure to direct sun will change the reflectivity of the enclosure material thus changing the accuracy of the measurements. Different stations with different sun exposure will change differently thus causing an asymmetric spread of measurement uncertainty. Plastics simply don’t get *more* reflective with age. Neither does any other material. This makes the climate science meme that all measurement uncertainty is random, Gaussian, and cancels nothing more than garbage. You simply can’t “average” that asymmetric measurement uncertainty out of existence. Especially not at the resolution of tenths or hundredths of a degree Celsius.
I’ve explained this to you before.
It is the same synoptic situation … an easterly off a cold North Sea. Places to the west of you will be warmer by dint of a longer land track after coming in from the Lincs coast.
The Lincoln stations (Waddington, Cranwell) are further south and the air affecting them had a shorter sea track and was quicker to warm once overland.
Conversely, places further north need a longer land track to reach a similar temp.
Try to think of the met setup before jumping to erroneous conclusions.
“This is what constantly happens whenever there is fine sunny weather.”
No it doesn’t !
The air over the UK is rarely homogeneous, and quite often over England too.
And yet they average temperatures.
Taxed is beyond reason.
Facts do not penetrate.
He’s single-handedly discovered a major error dontcha know.
The fact you are getting so rattled by my posts has convinced me that I am onto something here.
So trot on
My statement is based on long term study, not one day’s weather.
Every time there is fine sunny weather my LIG thermometer records lower daytime temps than the local weather stations. It does not matter what the wind direction is or what time of the year it is. There is a constant overstating of the daytime temperature readings.
It’s been consistent enough to convince me that having modern electronic thermometers housed in Stevenson screens outside in the sunshine is a seriously flawed way of trying to obtain a true recording of the shade temperature.
“Try to think of the met setup”
You do know that because of the incompetence of past Met employees, a very large proportion of UK surface sites are totally unfit for the purpose of “climate”, being totally contaminated by urban effects and bad site placement..
So incompetent are these past Met officers that even those sites installed in the last couple of decades are mostly class 4 and 5!!
Yes, which is why I put in the FOI re artificially aspirated screens. They refused to respond, but then said as additional information that they did not use them anywhere. Wait for my follow-ups on that. I now have access to comparative data from some non-Met Office artificially aspirated screens directly alongside MO ones.
I have pointed out several times that RTDs over time all drift in the same direction. Guess which direction that is. They need to be calibrated regularly.
Which, in turn, brings up the issue of how the calibration is done outside of a calibration lab.
There is a way to approximate the calibration.
Get a freshly calibrated device and take it on site.
Make a series of measurements using the field and the cal units side by side.
That is something that was once done when the changeover from mercury to electronic thermometers was happening.
It’s not perfect, but it is better than guessing.
The other choice, obviously, is to remove the sensor, take it to the cal lab, then reinstall it. The downside is measurement loss during cal.
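A minimal Python sketch (invented paired readings) of the side-by-side check described above: the mean difference estimates the field sensor’s present offset, and the scatter of the differences shows how repeatable the comparison is; as noted, it says nothing about past drift.

import statistics

# Invented paired readings, degrees C: (field sensor, freshly calibrated reference).
pairs = [(15.3, 15.0), (16.1, 15.8), (14.9, 14.7), (15.6, 15.2), (15.0, 14.8)]

diffs = [field - ref for field, ref in pairs]
offset = statistics.mean(diffs)    # best estimate of the sensor's present-day offset
spread = statistics.stdev(diffs)   # scatter of the comparison itself

# This only benchmarks the sensor today; it says nothing about what the drift
# was when past data were recorded.
print(round(offset, 2), round(spread, 2))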
You still don’t know the calibration drift for past data. Measuring the calibration drift today doesn’t mean the same calibration drift existed a year ago. It could have been more or less a year ago, depending on the external microclimate. You are probably correct that this is the best possible way, but it simply doesn’t remove the measurement uncertainty in the data.
That is true. It only gives you a present time benchmark.
It would need to be done periodically to get an estimate of drift.
You are correct. The measurement uncertainty is not addressed by this simple procedure.
So the UK is going Net Zero and bankrupting its economy based on fake data that they know is fake because they faked it. WTF? When can we expect the barricades and guillotines?
One temperature station is in Kew Gardens London, behind a public toilet on a concrete base surrounded by metal cabinets.
Completely false. I’ve just visited Kew, and it’s nowhere near any toilets, it’s in the middle of a grassy area with no concrete or metal cabinets to be seen.
Here’s a photo:
Not intended as a challenge, but did you search everywhere in the vicinity?
I do not know the area. I process by analysis of alternatives. Is it possible there are 2 stations in the area?
On the other hand, I do not trust the media. If a publication stated what MD contends, then it is anecdotal, not definitive, until independent verification is conducted.
So telling that you want me to prove a negative, yet don’t ask M. Dack to provide any evidence for his absurd claim.
No. I didn’t search behind every public toilet for hidden weather stations, as I didn’t want to get arrested. I think it requires a high level of conspiracy thinking to suppose the Met Office have a well-sited station which they ignore in favor of a station hidden behind a public toilet.
Besides, I think the claim comes from a story about a recent record being rejected by the Met Office, because of temporary toilets in place for VE celebrations.
I was NOT asking or telling you to prove a negative or anything else.
I asked a yes/no question.
Your last statement matches my last 2 statements.
Make UK Great Again, without all the deception and advocacy science cheating.
It’s an updated excuse from the 1990s of ‘the computer did it’.
Here we go again – if not Homewood it’s Morrison wasting energy and in outrage over the MetO doing something that is not climate compatible.
The data is not for investigating climate or to provide exactitude to any casually interested party.
It’s for people who would like to know as closely as possible what the weather was on a particular day at a particular location, and NOT for inserting into a global climate GMST series (as if it would make any difference even so).
This place seems to think that the MetO is there purely to provide 101% verifiable data for the likes of Homewood/Morrison as if they are in any way important. The person/s and the data required.
Its primary remit is to provide the UK public with any weather information of interest … and it’s NOT at all related to verified climate-quality data.
That is but a small part of the MetO’s undertakings.
Sorry.
That’s why more than one “location” is on a beach … because people go there and want to know conditions.
That anyone with a single brain cell of intelligence would have grokked from what the MetO say that it is of scientific quality and use (i.e. integratable into long-term climate series) is beyond me…. oh, wait!
From the MetO….
“These maps enable you to view maps of monthly, seasonal and annual averages for the UK. The maps are based on the 1km resolution HadUK-Grid dataset derived from station data.
*Locations displayed in this map may not be those from which observations are made. Data will be displayed from the closest available climate station, which may be a short distance from the chosen location. We are working to improve the visualisation of data as part of this map.
Where stations are currently closed in this dataset, well-correlated observations from other nearby stations are used to help inform latest long-term average figures in order to preserve the long-term usability of the data. Similar peer-reviewed scientific methods are used by meteorological organisations around the world to maintain the continuity of long-term datasets.
Also, this oft-regurgitated myth gives the faithful something to exercise their anger on …
“(nearly 80% of Met Office sites are in junk classes 4 and 5 with ‘uncertainties’ of 2C and 5C respectively).”
Necessarily, as is explained by the UKMO:
https://www.metoffice.gov.uk/weather/learn-about/how-forecasts-are-made/observations/observation-site-classification
“WMO Siting Classifications were designed with reference to a wide range of global environments and the higher classes can be difficult to achieve in the more-densely populated and higher latitude UK. For example, the criteria for a Class 1 rating for temperature suits wide open flat areas with little or no human influenced land use and high amounts of continuous sunshine reaching the screen all year around, however, these conditions are relatively rare in the UK. Mid and higher latitude sites will, additionally, receive more shading from low sun angles than some other stations globally, so shading will most commonly result in a higher CIMO classification – most Stevenson Screens in the UK are class 3 or 4 for temperature as a result but continue to produce valid high-quality data. WMO guidance does, in fact, not preclude use of Class 5 temperature sites – the WMO classification simply informs the data user of the geographical scale of a site’s representativity of the surrounding environment – the smaller the siting class, the higher the representativeness of the measurement for a wide area……”
And no, it’s not feasible to tow the UK to the west of Iberia, or to depopulate it and raze trees and buildings, to accommodate the myth…..
“but continue to produce valid high-quality data. WMO guidance does, in fact, not preclude use of Class 5 temperature sites – the WMO classification simply informs the data user of the geographical scale of a site’s representativity of the surrounding environment – the smaller the siting class, the higher the representativeness of the measurement for a wide area……”
Anthony, please stop trolling.
Bollocks
Ant is saying that Met office like using temperature data which is totally unrepresentative of the surrounding area. 😉
Yes, in what sense does a weather station surrounded by human modified landscapes or subject to low sun angles produce high quality data?
If the surrounding conditions remain stable, the station may capture trends and anomalies over time, but those measurements still fail to represent the broader region’s true atmospheric state. The data might be precise, but it isn’t accurate.
Y
Thanks for confirming what we all know, that the Met Office should STFU about Climate Change and stick to forecasting the weather.
+10
Please stop talking about Met surface sites as if they are relevant to anything to do with climate.
Because of the incompetence of the Met and its officers, a large number of those surface sites are in a totally unfit for purpose state of disrepair.
““(nearly 80% of Met Office sites are in junk classes 4 and 5 with ‘uncertainties’ of 2C and 5C respectively).”
At least you got that correct.
Accepting data that the Met themselves state is mostly from class 4 and 5 sites shows that it may not actually be total incompetence, but a deliberate attempt to manufacture fake warming.
Let me add that those uncertainties are added to the regular uncertainty of the station.
For example, if they match U.S. ASOS at ±1°C, the total uncertainty would be 3°C and 6°C. Six degrees for God’s sake!
And that will not be a +/- thing.
Most of these sites are likely to err very much on the “hot” side.
A very skewed uncertainty.
And remember, a large proportion of their sites will have that highly skewed large uncertainty…
It’s ridiculous that the Met Office and its workers let their sites get into such an appalling state.
And even more ridiculous that they still pretend the data is anything but junk.
But, but, but it is the longest continuing temperature record in the world!
/s
“Locations displayed in this map may not be those from which observations are made. Data will be displayed from the closest available climate station, which may be a short distance from the chosen location.”
Define “short distance”. I’ve seen differences as high as 27F from two places 13 miles apart.
That is perfect for a 1 km grid, eh?
/s
“however, these conditions are relatively rare in the UK. Mid and higher latitude sites will, additionally, receive more shading from low sun angles than some other stations globally, so shading will most commonly result in a higher CIMO classification – most Stevenson Screens in the UK are class 3 or 4 for temperature as a result but continue to produce valid high-quality data.”
This is nothing more than a rationalization that is basically “inaccurate data can be high-quality data”.
It’s garbage!
Except they do.
Funny how they call them “climate stations” not “weather stations.”
Chew on that for a bit.
Why not? The Met Office is simply ensuring that it will never be contradicted by its own made-up data.
We should not be surprised. Climatologists like Mikey have entire careers based on just making stuff up. MBH98, anyone? The most useful statistical tricks for the global warmers are the methods which will take any sort of data and produce whatever results you want. Mikey and his Nature Trick showed everyone how it’s done.
Data is observed and then recorded. The Met office needs retraining on what observation means. Data is either acceptable for purpose or is discarded as useless. People who cite Met Office data in their studies should retract.
If unacceptable, it still needs to be retained, but with the appropriate annotation including why it is not acceptable.
“ris·i·ble
adjective
such as to provoke laughter.”
Been meaning to look it up, I can’t be the only one.
“Until recently, the Met Office showed weather averages including temperature for over 300 stations stretching back at least 30 years. The data identified individual stations and single location coordinates, but when 103 were found not to exist”
300-103 = 197
I wonder about the other 197.
A British version of AW needs to run a surface station project to survey them. I wonder how many have been visited since installation?
We all know what the problem is with MET, it is their leadership. If I had the authority I would request the Met Office employees to list the people responsible for the shenanigans taking place there. If a sufficient number don’t respond I would fire 5% of the employees from the top on down. If there is no change I would fire the next 5% and so on.
IIRC, Anthony Banton claims to be a former Met Office manager.
That would explain a lot of the Met Office’s problems.
Where are the references to text books and serious papers that allow conventional statistics about error and uncertainty to be used on made-up numbers?
Surely, stats can only be applied in commerce (as opposed to academic research) when based on properly measured actual observations. Are made up numbers illegal if not used properly? Geoff S
Anthony Banton endorsed the Met Office upthread, suggesting that weather stations affected by altered landscapes and sunlight angles somehow still provide high quality data.
One has to wonder what definition of “high quality” he is working with.
From Google AI:
That explanation is easily accessible. I just typed “what is a high quality measurement?” into the Google search bar.
Those are the qualities of measurement that I was trained in during my engineering degree.
There are qualities of measurement that I used daily for 50 years and counting.
Most Met data is only useful for “climate” propaganda.
Built-in warming at a large proportion of Met weather sites.
They don’t want to discuss it because it might reveal the absolute fraud the MET office has been involved in!
DStayer,
What is holding you back? Before I retired, if we found prima facie evidence for fraud by anyone who seriously affected our operations, we would go straight to the best lawyers we could find.
Have you been threatened?
Are you not too sure of yourself?
You have to have a deal of self confidence, as in being called big head, to succeed this way.
Geoff S
A bit of reverse psychology should be used on them. Ask them what they would think of someone trying to sell them some product, claiming it to be thoroughly tested by several laboratories, when actually only one or two tests had been performed. Would they still buy the product knowing the claimed tests were a lie?
Like Chinese inverters?
I wish to officially protest nyolci.
Personal attacks.
Use of “denier” in almost every post and intended to encompass everyone who visits WUWT.
This is akin to racial profiling and it is unacceptable.
Then there is use of cherry picking snippets that nyolci responds to ignoring the full comment.
This is apparently done so he/she can be amused by deflecting and detracting from a legitimate discussion.
For the rest of you, do not feed the trolls.
nyolci is not contributing anything worthwhile and is apparently only engaging to get a response for entertainment only.
nyolci makes Plato’s Sophists look like rank amateurs.
The sophistry is strong in this one.
Nick Stokes and Simon run nyolci close for mendacity and sophistry.