Guest “geological scale and context” by David Middleton
2023 has been a hot year… We have the makings of a super-El Niño and an unprecedented injection of water vapor into the upper atmosphere stacked on top of a general warming trend since 1978, if not since the nadir of the Little Ice Age. So, it should come as no surprise that we have seen satellite-era record high temperatures this summer and early fall.
As a geologist, I always have to apply scale and context to everything.
Scale
Temperature anomaly records are great tools. They are the only way to accurately describe how global temperatures are changing over time. However, they lack scale. They lack a frame of reference.
It is a common adage that when a geologist takes a photograph of a person, that person is simply there for scale. Other scale references include: camera lens covers (rendered obsolete by smart phones), quarters, rock hammers, spouses and action figures (on April Fools Day only). The key is to come up with a reference that is relatable. And what temperature reference is more relatable than a thermometer?

Here’s an enlarged version:

Tenths of a degree don’t really stand out on thermometers.
Context
In geology, context refers to the setting. How do observations fit into the overall setting? Granite is a very common rock on Earth. If I find a football-sized (real football, not soccer) granite rock in Albuquerque, New Mexico, it wouldn’t deviate from the setting. If I found any sized granite rock on the Big Island (Hawaii), I would instantly know that a smart-ass geologist intentionally put it there.
Irrespective of whether or not any of the recent warming has been caused by anthropogenic activities, it’s fairly easy to put that warming into context.
Terando et al., 2020, will help me demonstrate this. It features a variation of one of my favorite climate models.


If the models are reasonably accurate, the early 20th century warming can be explained by natural forcing mechanisms, whereas some or all of the warming since about 1975 cannot be explained by natural forcing mechanisms alone. That said, the models only incorporate known, reasonably well-understood forcing mechanisms. Judith Curry illustrated this concept quite well…

Let’s assume arguendo that all of the warming since 1975 is due to anthropogenic greenhouse gas emissions. What would this mean? It’s about 0.8 °C warmer now than it was in 1975 (the last time the models didn’t require an anthropogenic component). Here’s UAH 6.0 overlaid on the Terando et al., 2020 model:

1974-1975: The Context




Assuming the climate models are valid, fossil fuel emissions saved us from “The Ice Age Cometh.”
WTF do Hawaii, Albuquerque and granite have to do with this? Anyone?
Reference
Terando, A., Reidmiller, D., Hostetler, S.W., Littell, J.S., Beard, T.D., Jr., Weiskopf, S.R., Belnap, J., and Plumlee, G.S., 2020, Using information from global climate models to inform policymaking—The role of the U.S. Geological Survey: U.S. Geological Survey Open-File Report 2020–1058, 25 p.,
https://doi.org/10.3133/ofr20201058.
https://www.nsstc.uah.edu/climate/2023/august2023/GTR_202308AUG_v1.pdf
“At this point, it appears this influence will be minor…”
While one can’t say it isn’t playing role, neither can the other party claim that it is.
It is playing a role. At this point, a minor role was all it took to elevate the September temperature a few hundredths of a degree above the previous record monthly temperature of the satellite era.
When you are a wet sponge, a tiny bit of warming can be so, so terrifying!
Dave, excellent point; however, looking at the broader UAH data, I think the argument that H-Tonga shows only a minor effect is incorrect. Note that the Arctic Ocean had its 65th warmest September (i.e., it’s cold) and the Antarctic land and ocean are 15th warmest. This violates normal “polar enhancement,” in which the polar regions warm at double or more the global rate (because those regions – particularly the Arctic – are the heat-exhaust end of the planetary heat engine). This flags something abnormal having caused the abrupt warming elsewhere.
The fact tropical oceans are the 8th warmest reinforces the fact that the highs globally weren’t caused by the usual heating in the Intertropical zone. Indeed, these anomalies suggest that the cooling of the last 9 years continued in the polar regions but were abruptly attenuated elsewhere.
Here is my forecast: cooling will follow Oct-Nov-Dec and the 9 year cooling trend will resume. Like the 2015 el Niño, there is not a lot of warm water to sustain it. Cold water slanting into the central equatorial region from cold ‘blobs’ in the western temperate zones of both hemispheres, dilutes the warm water.
I also think the water vapor effect is currently being underestimated.
Underestimated, or down played?
This doesn’t compare like with like. You’re referring to two different months. The September 2023 record is +0.45C warmer than the previous warmest September. That’s a fair bit more than a few hundredths of a degree. And many of those previous Septembers occurred during El Ninos.
How much warmer must it have been 1000 years ago for trees to grow where there are now glaciers?
Still waiting. !!!
Waiting for what??
“How much warmer must it have been 1000 years ago for trees to grow where there are now glaciers?”
The answer you keep running away from, little cockroach.
The entire point of temperature anomaly time series is to compare like with unlike.
The problem is that the anomalies don’t tell you much about climate, either global or local.
You start with a daily mid-range value in creating all the averages used up the chain. Different climates can give the very same mid-range value meaning you have lost your ability to relate anything to climate. A cool climate with a small diurnal range can have the same mid-range value as a warm climate with a large diurnal range.
That doesn’t change when you group the daily mid-range values into a monthly average or the monthly averages into an annual average. Different climates giving the same mid-range values give you the same average all the way up the chain!
In essence the “global average temperature” isn’t even a good *index* let alone a good actual average temperature.
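As a minimal sketch of that point (the station values below are hypothetical, chosen only for illustration, not taken from any real record):

```python
# Two hypothetical stations with very different diurnal ranges produce the
# same daily mid-range value, so the mid-range alone cannot tell the two
# "climates" apart.

def daily_midrange(t_max, t_min):
    """Return the daily mid-range value (Tmax + Tmin) / 2."""
    return (t_max + t_min) / 2.0

cool_small_range = daily_midrange(t_max=22.0, t_min=18.0)  # 4 C diurnal range
warm_large_range = daily_midrange(t_max=35.0, t_min=5.0)   # 30 C diurnal range

print(cool_small_range, warm_large_range)  # both print 20.0
```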
Models reproduce surface warming in response to the Hunga Tonga eruption, to the point that a temporary increase above +1.5 ºC has been cautioned:
Are you going to admit your famous gatekeeper hypothesis has now been falsified?
My still not sufficiently famous hypothesis explains a lot more about recent climate than the human emissions hypothesis.
As with any climate hypothesis, it does not predict volcanic eruptions, but it can interpret their results.
Are you going to admit that there is no scientific evidence of CO2 warming, anywhere. !
For the record, I’d have to say I don’t care for the CO2 maniacs or what they have to say.
But I also don’t like coolists.
Australian coolistas don’t care that koalas are about to go extinct.
Nothing to do with human CO2
CO2 provides growth for koala food.
It is the anti-CO2 stooges that want to deny life to the biosphere.
Where did that idea come from? The extinction, that is.
“The extinction, that is.”
Neighbour’s kids love looking at the pair that live in their trees.
Parents not as happy when the koalas get “horny” at midnight !! 😉
38% of the koala population has died in the past 5 years.
Again, where did that come from?
I have no koala in this fight, but found this. I am open to data based rebuts. Not so much to ad homs on the source.
https://www.worldwildlife.org/stories/east-coast-koalas-newly-listed-as-endangered#:~:text=Koala%20numbers%20have%20halved%20in,impacts%20of%20the%20climate%20crisis.
Koalas are always welcome… 🍻
Thanks, Bob. Yeah, they’ve had problems for years, largely due to people building houses and disrupting their food trees and having to brave predators more on the ground.
The bushfires in late 2019 hit them hard as well – the lack of timber cutting or hazard reduction burns in a managed fire ecology is a serious problem.
So have 33% of dogs and cats
When Australia’s NATURAL bushfires (unless lit by humans) hit the vast east coast forests… yes, you can get a tragic loss of koala numbers.
But koalas evolved in exactly those sort of conditions.
The numbers will increase gradually.
Perhaps better clearing of forest junk layers would help the situation.
But the greenies are the ones responsible for stopping that happening properly.
Dude the animals in Australia are crazy. Hopefully I’ll get to go and see a koala up close before they go extinct from the green policies.
Every morning I go for a bushwalk with dog.
Estimate about 60/40 chance of seeing a wallaby or roo each day. (maximum at one time was 7)
Koalas are a regular visitor to the neighbours’ trees… and there are plenty of koala and other small animal skats on the walking trails.
The big issue now is the huge amount of dry grass, scrub and eucalypt undergrowth.
Primed for another bad bushfire season.
Of all the hypotheses I’ve seen, Javier’s has to be the best. He leaves no loose ends in his book. It’s definitely a graduate-level book, not one for the public casually interested in climate. I’m still trying to read it lol.
My next book, to be published in a month, is written for a general audience. Still lots of data and evidence supporting what I say. With over 400 papers cited and over 100 figures showing the data. No acronyms are used except IPCC and GHGs. It should be a much easier read.
I hope it will be available from somewhere else than Amazon.
Lol.
Is that you Griff?
Looks like pretty good modelling!
“Is +0.8 °C since 1975 a problem?”
Not yet. The right question is: it will go on, so when will it become a problem? And how can we stop it?
The question is… When will the proposed solutions cease to be far worse than the potential future problem?
The trouble is, the proposed ‘solutions’, besides being ruinously expensive and highly damaging to the environment, are farcical. Giant inefficient wind turbines and toxic solar panels, seriously? Not to mention, severely undermining food production and other farming practices.
Stokes won’t answer.
Modelled, assumption driven, vs massive urban fabrications….
Scientifically meaningless junk !!
WUWT posted it, not me.
So you agree completely with what I said.
Thanks.
Yep. 1958 was not half a degree cooler than 2001
Yep,
The balloon data gives a much better idea of temperature trends than urban and airport surface sites could ever do.
You can clearly see the colder period where the chicken-littles were screaming about a new ice age…
Pity the data doesn’t go back to the 1930s, 40s.
“Pity the data doesn’t go back to the 1930s, 40s.”
It is. It would probably be game over. We know that global temps dropped for 30 years from the late 30’s. This graph does not even show 1/3 of that.
Why would we want to?
Pretending we can stop totally natural climate change.
You have to be a totally brain-washed anti-scientist to want to, and to think that we can.
If you can affect AGW then you can affect any GW. So what is the right temperature? I’m sure Alberta and Aruba have different answers.
I’d love to go to Canada. I know a lot of people here hate the cold, but I love white Christmases. Does anyone here live in the northern stretches of Canada?
Experiencing the cold makes you appreciate the warm so much more.🤔
And seriously, Nick.
When you have enough “parameters” ie fudge factors,..
… and a pre-determined FAKE temperature series…
… you would have to be pretty incompetent if you couldn’t make them somewhere close to each other.
How can we stop it? The same way we can stop volcanic eruptions, earthquakes and tsunamis. In other words, we can’t control any of these natural phenomena. However, lots of ancient societies did try very hard to keep the climate stable (as well as trying to prevent plate tectonics from triggering devastating catastrophes). Plenty of blood was spilt (both animal AND human) but no amount of sacrifice and human suffering could make one iota’s worth of difference.
Looks are deceiving. It shows a warming trend all the way from 1885 to 1963. It misses the late 19th-century cooling and the mid-20th-century cooling while reproducing a very weak early 20th-century warming.
Yes, the temperature profile is all wrong. That’s the problem with bogus Hockey Stick charts.
The Hockey Stick “hotter and hotter” temperature profile looks nothing like the profiles of the regional written surface temperature records.
The bogus Hockey Stick charts are a fraud perpetrated on the people of the world.
It’s a crime against humanity.
The bigger problem is that it’s shown as a single value. Totally bogus.
Gosh!
So just because a few virtue signalling, venal and generally incompetent folk get all anxious about this trivial (and entirely beneficial) fraction of a degree, we must trash the economy of the West by spending Trillions on stuff that never has and never will really work?
Just like Maurice Strong, Christiana Figueres, and Ottmar Edenhofer advised us, and for the same reason?
The model fit over 1910–1960 is very poor in both Figure 3 and Figure 4.
The only place the model fits is post-1975 – but that can be easily tailored, as it’s a single period of warming. Fitting two periods of warming shows up the flaws in the models.
Fitting three periods of warming, the models start to look appalling. Here’s an overlay of CMIP6 (dark brown), HadCRUT4 (as used in AR6 SPM Fig 1; green), UAH6, and sea level and glacier retreat (the latter two calibrated as warming), all baselined to the period 1961-1990.
CMIP6 is clearly running too hot, over-warming post-1995, BUT more importantly CMIP6 (and the input forcings) shows no warming trend at all until 1910.
So what caused all that sea level rise and glacier retreat in the 19th century? And no, soot has already been discounted in the literature…
“Looks like pretty good modelling!….”
No, it looks like pretty good fraud.
It’s too good to be true – so it probably isn’t true.
How can a model be that accurate over 140 years when they don’t even know the effects of clouds on the long term climate?
Over the long term, climate models show up to three times more warming than observed.
This model is dated 2020 – so they already knew the answer. Any fool can predict the past.
A model based purely on the physical laws is impossible, due to low precision, lack of understanding and the chaotic nature of weather and climate.
The model is “accurate” because it contains huge numbers of fudge factors, otherwise known as “parameterisation” – something called a “dark secret” by a modeller. As they already knew the answer in 2020, it’s obvious that the parameter values will be endlessly adjusted until they get a good match – a kind of Darwinian evolution as described by Willis.
Because of this, “proofs” that show model runs with and without the effect of CO2 are completely meaningless. And completely fraudulent.
Apart from anything else, it’s blindingly obvious that much of the modern warming occurred because the planet was emerging and recovering from the Little Ice Age.
There’s only one way to evaluate a climate model. Do the model runs in, say, 1980 and wait until 2010 (30 years). You have to be patient, but fortunately we have serious super computer runs from the seventies.
When you do this, you find that the models run up to three times warmer than reality.
They have net zero predictive skill.
Chris
If the error bars are accurate, would you explain why the blue line isn’t possible.
The shaded region represents the 95% interval for individual model results – it doesn’t mean that any line you can draw within its boundaries is equally likely. All it says is that 95% of the models didn’t go higher or lower. If your blue line represented an actual model run, that model would become an extreme outlier immediately after 2020 when the trend vastly exceeds the 95% envelope.
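For what it’s worth, the kind of envelope being described can be sketched like this (the model runs below are synthetic and purely illustrative; summarizing the spread with 2.5th/97.5th percentiles is my assumption here, since ensembles are sometimes summarized with min/max or ±2σ instead):

```python
import numpy as np

# 30 synthetic "model runs", each a 140-year anomaly series with a trend plus noise.
rng = np.random.default_rng(0)
years = np.arange(1880, 2020)
runs = 0.01 * (years - 1880)[None, :] + rng.normal(0.0, 0.15, size=(30, years.size))

# The shaded envelope: for each year, the 2.5th and 97.5th percentiles of the
# individual runs. 95% of the runs fall inside it, but it is not a probability
# distribution for any single future trajectory.
lower = np.percentile(runs, 2.5, axis=0)
upper = np.percentile(runs, 97.5, axis=0)
ensemble_mean = runs.mean(axis=0)
```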
“The shaded region represents the 95% interval for individual model results”
Meaning it is assumed that the model outputs have no measurement uncertainty. The outputs are 100% accurate.
NO MODEL OUTPUT IS UNCERTAINTY FREE. If the inputs have uncertainty then so does the output.
Why is that uncertainty never shown?
That is nowhere assumed. In a multi-model ensemble, the structural and calibration differences between models will swamp uncertainty arising from measurement error in observational datasets used to initialize the models, so illustrating model spread by showing the 95% envelope of individual model runs is a good way to show the range of model uncertainty. But it is not really a probability distribution, so you have to be careful in interpreting it as such.
“That is nowhere assumed. In a multi-model ensemble, the structural and calibration differences between models will swamp uncertainty arising from measurement error in observational datasets”
Oh MALARKY! This is the kind of logic that says wild outliers swamp the measurement uncertainty of the rest of the data.
The problem is that EACH of the models has its own uncertainty. Therefore it is unknown what the actual limits are for the ensemble!
If the measurement uncertainty for the daily mid-range value is +/- 0.7C then that uncertainty will grow with each daily mid-range added to the data set and it will grow even further when those daily mid-range values are grouped into annual totals. Add in hundreds of stations and the uncertainty grows even more.
If the models are then tuned to that data the models will inherit the measurement uncertainties associated with the data.
Again, each individual run of a model will have uncertainty. If you then create a data set from those runs those uncertainties will ADD in the sum. Saying you are using the 95% envelope of the runs assumes that the output of each run is 100% accurate with no inherent uncertainty in each run. The measurement uncertainty interval for the grouped outputs will be WIDER than the interval between the highest and lowest run output.
Unless those runs generate outputs with differences in the thousandths digit then there is no way they are accurate to the hundredths digit. And even with differences in the thousandths digit the outputs of the runs can be wildly inaccurate. That’s the problem with the SEM, it tells you how precisely you have located the population mean but it tells you absolutely nothing about how accurate that population mean is.
Uncertainty in the dataset used to initialize the model does nothing but produce a base-state error that is propagated through – you just wind up in a place that is some constant value too high or too low. What produces uncertainty in the models is the details of how they’re constructed: the choices made, the assumptions, how things are parameterized, etc. You can examine each of these things for an individual model and try to reduce them, but you can also compare the spread in model results to assess the range over which these structural differences produce different outcomes. Both approaches give you good information that you can use to understand model uncertainty better.
In the image in question, we are simply comparing the range of individual model outputs, which gives a good sense of the range of model uncertainty (but can’t be assumed to be a probability distribution).
“Uncertainty in the dataset used to initialize the model does nothing but produce a base-state error that is propagated through – you just wind up in a place that is some constant-value too high or too low.”
That’s just not true. If it was then every run using the same inputs would give the same outputs.
Models are tuned to hindcasting data. Any uncertainty in that data will, as you say, be propagated into the model and come out the far end. The issue is that the models are iterative. An uncertainty in one step’s input grows in its output and then feeds the next iterative step. It’s exactly like compound interest at a bank.
“structural differences”
These are *NOT* propagated input uncertainties. They ADD to the propagated input uncertainties. And if your model has 100 iterative steps then the uncertainty compounds 100 times!
No one is saying that models don’t provide information. The issue is that if that information doesn’t match reality then of what use is the information?
The typical BS noise and smoke generated by the climate tools: “everything cancels!”
Not necessarily. Some of the equations being solved by climate models for fluid mechanics don’t have analytical solutions and are approximated using numerical methods. So they are not necessarily idempotent.
A base-state error propagates through as a difference in the base-state. It doesn’t grow with each iteration, it maintains a constant value. If the global mean started as 5 degrees too high, say, in step 2 it will be 5 degrees too high, same for step three, and so on.
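The two positions can at least be made concrete with a toy linear iteration. This is only a caricature (the update rule, coefficients, and offset are all made up for illustration), not a claim about how any actual GCM behaves:

```python
# Toy iteration x[t+1] = a * x[t] + b, run twice: once from the "true" initial
# state and once from a state offset by +5. Whether the offset persists,
# decays, or compounds depends entirely on the feedback factor a.
def run(x0, a=1.0, b=0.1, steps=10):
    xs = [x0]
    for _ in range(steps):
        xs.append(a * xs[-1] + b)
    return xs

true_run = run(0.0, a=1.0)
offset_run = run(5.0, a=1.0)

# With a = 1.0 the +5 offset is carried through unchanged at every step;
# with a > 1 it would grow each iteration, and with a < 1 it would shrink.
diffs = [o - t for o, t in zip(offset_run, true_run)]
print(diffs)  # all 5.0 when a = 1.0
```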
That’s a great question, because no model, of any kind, will ever exactly match reality. Otherwise it wouldn’t be a model, it would just… be reality.
Wrong—you need to learn some elementary uncertainty.
Not only do comments such as yours add nothing to the conversation, they actively detract from it, for everyone. Please reflect on that as you contemplate your next interjection.
Are you now the self-appointed moderator?
You are clueless about metrology—how do you know the true values?
He doesn’t! In order to know the errors he has to know the true values. If he knows the true values then why bother with the models? Job security for programmers?
They might as well be reading goat entrails!
“Not necessarily. Some of the equations being solved by climate models for fluid mechanics don’t have analytical solutions and are approximated using numerical methods. So they are not necessarily idempotent.”
If that is true then the output of the equations contain uncertainty that is propagated into the next iteration run of the model. You are assuming without justification that all those uncertainties are random, Gaussian, and cancel. You simply cannot KNOW that unless you also know what the true value should be and what the errors are. If you know that then what use are the models?
“A base-state error propagates through as a difference in the base-state. It doesn’t grow with each iteration, it maintains a constant value. “
Oh, malarky AGAIN! Each iteration has as its base state the output of the previous iteration. If each iteration is just outputting what is input, then of what use is the model? If it is outputting something other than what is input, then the base-state error at the input gets reflected in the iteration output, which then gets propagated into the next iteration, which, in turn, compounds the output error caused by a base-state error.
You are trying to blow the same smoke up everyone’s backside that Stokes does. First the models output are not uncertain and then they are. Then the model inputs don’t have any uncertainty and then they do.
Pick one story and stick to it!
You don’t know this, you just hope it might be true.
It’s the same old meme: All measurement uncertainty is random and Gaussian and therefore cancels!
And then push the downvote button without reading or thinking!
It’s telling how obsessed you are with these buttons. You completely forget that this is fundamentally a fawning forum for you and yours. Ending up in the red here channels Goldwater telling Nixon to resign. I.e., good advice….
Heh.
blob has been reading goat entrails, again.
The usual hand-waving, typical of the climastrologers, who know nothing about real metrology.
What does it matter.!
The temperature series IS NOT REAL.
The models do not represent REALITY.
It is all just non-science fiction.
They represent someone looking into a cloudy crystal ball and “seeing” what the future will bring. No different than a carnival fortune teller.
Very good modelling, little Nicky – you coloured it all in and stayed between the lines this time! Pre-2000 model estimates appear far too perfectly aligned with the observations to be mere coincidence; then the models start going haywire – a sudden surge upwards whilst observations head towards the lower model boundary. So you want an award for models that can be forced into a rough approximation of past temperatures before going off the rails into an unrecognisable fantasy?
Warmer is better!
Anyone who doesn’t understand that is an idiot.
“Let’s assume arguendo that all of the warming since 1975 is due to anthropogenic greenhouse gas emissions. What would this mean? It’s about 0.8 °C warmer now than it was in 1975 (the last time the models didn’t require an anthropogenic component). Here’s UAH 6.0 overlaid on the Terando et al., 2020 model:”
UAH 6.0 ESTIMATES the Anomaly at 5-7km above the earth.
Terando does not.
apples meet oranges! granite meet gypsum
Surface stations measure all the MASSIVE urban and airport contamination, mal-manipulated to over-ride the slightly more stable rural data.
It has ZERO probability of giving anything even remotely realistic as a “global” temperature.
Plus, for “arguendo”, there just won’t be enough data for a few centuries.
Arguendo does not mean argue till the end.
Correct. It means, “for the sake of argument, let’s assume something is right/correct/accurate.”
It would simply mean that urban populations and activity have increased since 1974 whilst air temperatures have remained roughly the same.
Setting aside the red herring fallacy…
The only reliable surface data set, HadCRUT4, hasn’t been updated since December 2021. Here’s HadCRUT4 and UAH from late 2018…
Where do we measure CO2? Above the mixing layer. Where should we measure the temperature? Above the mixing layer.
As I have seen no convincing explanation for the Little Ice Age, or the Medieval Warm Period, estimating what effects non-anthropogenic factors have is difficult.
Solar variability (or at least its proxies) seems to fit several previous cycles, but falls apart on others. Other factors, like ocean currents, do not have a long enough measurement history.
CO2, given proxies for much earlier epochs, seems to be mostly saturated, even at current levels. So I mistrust all the claims that anyone knows what is going on.
If you believe in billions, then a long enough measurement history might be something for later generations to look at.
“As I have seen no convincing explanation for the Little Ice Age, or the Medieval Warm Period . . . .”
Didn’t Michael Mann get rid of those inconveniences? Climate scientists have no intention of explaining something that they say never happened. I imagine we will have to wait for the next interglacial for any attempts to explain climate.
Tom Halla:
The Medieval Warm Period was a period of very few volcanic eruptions: only 31 eruptions over 300 years, ~950-1250 (13, 7, and 11 per century). The air was largely free of volcanic SO2 aerosol pollution, and temperatures naturally rose.
This was also true of the Roman Warm Period, as well as the Minoan Warm Period.
For the 600-year Little Ice Age, ~1250-1850, there were 144 eruptions (18, 13, 13, 32, 28, and 38 per century, with 38 of them >VEI4). From 1635 to 1850, all but 4 LIA temperature decreases were caused by SO2 aerosol emissions from a known volcanic eruption.
Our warming trend since 1980 has been due to “Clean Air Act” reductions in industrial SO2 aerosol emissions.
Everything that has been going on with respect to temperatures has been due to changing levels of SO2 aerosol pollution in our atmosphere!
“Our warming trend since 1980 has been due to “Clean Air Act” reductions in industrial SO2 aerosol emissions.”
We didn’t have a Clean Air Act from 1910 to 1940. What caused that warming? Warming that was equal to the warming since 1980.
Tom Abbott:
Primarily for the same reason as since 1980: decreases in the amount of industrial SO2 aerosols in the atmosphere.
That span of time includes the 1930s, whose warmth was clearly due to decreased industrial activity and hence fewer SO2 aerosol emissions. There were also many other business recessions, with periods of idled factories, etc., with the same effect.
See my article on Google Scholar “A Graphical Explanation of Climate Change”
“clearly”…
karomonte
“clearly”
Yes. Between 1929 and 1932, there was a 13 million ton decrease in SO2 aerosol emissions, due to the Depression and idled factories, etc. And during the Dust Bowl years of 1935-1938, a heat dome sat over parts of Kansas, Oklahoma, Texas, Colorado, Nebraska, Wisconsin, Iowa, and Minnesota, with the highest temperature of the era (114 deg. F) occurring on July 14, 1936.
(During heat domes, SO2 aerosols within the area settle out, and due to the less polluted air, temperatures soar. Winds at their peripheries can be very high.)
There’s very little evidence that volcanic eruptions affect anything beyond locally.
Here is a journal article that sums up much of the conflicting measurements that people have gathered about climate.
How much has the Sun influenced Northern Hemisphere temperature trends? An ongoing debate
https://iopscience.iop.org/article/10.1088/1674-4527/21/6/131/pdf
Speaking of perspective….
Can anyone make a guess just how much WARMER it must have been 1000 or so years ago…
… for trees to have grown where there are now glaciers ??
It was only warm where those particular trees were growing.
They had active volcanoes underneath them.
Or something . . .
How does a glacier form over an active volcano?
Enquiring minds want to know. 🙂
I was being frivolous, but I reckon that string of 91 volcanoes under west Antarctica has some contribution to the detected melting of ice sheets from the bottom.
Despite the denials from all the “scientists” that it can only be attributed to manmade CO2 in the atmosphere.
“I was being frivolous”
I know 🙂
One word: Iceland 🙂
https://guidetoiceland.is/nature-info/glaciers-in-iceland
In Iceland, there are many volcanoes and many glaciers that have formed on top of active volcanoes.
“How does a glacier form over an active volcano?” Quickly; very, very quickly.
Not in the North West Territories!
Maybe, like people, trees used to be a lot tougher 1,000 years ago! “Merely a flesh wound, I’ll butt you!”
The ice dwarves knit them tree jackets… duh.
Thank you guys..
These ideas are FAR better explanation that any of the AGW stooges has come up with !. 🙂
Within the last couple of years, when forest remains were discovered at altitude in Norway (2500 feet above the current tree line), the scientists researching what was recently uncovered by a melting glacier calculated, from the lapse rate, that sea level temperature at the time that forest grew must have been about 3.5C higher than today.
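As a rough back-of-envelope check of that kind of calculation (the lapse-rate value below is my own assumption, not the researchers’): 2,500 ft is about 0.76 km, and the implied sea-level warming is simply the elevation change times an assumed lapse rate \(\Gamma\),

\[ \Delta T \approx \Gamma\,\Delta h \approx (4.6\ ^{\circ}\mathrm{C\,km^{-1}})(0.76\ \mathrm{km}) \approx 3.5\ ^{\circ}\mathrm{C}, \]

so the quoted 3.5 °C corresponds to a lapse rate toward the moist end of the usual ~4–6.5 °C/km range; the standard 6.5 °C/km would give closer to 5 °C.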
While I can rarely (almost never) find a past WUWT article using the search function, now it might be that WUWT, or the web service provider, isn’t going to allow me to try. Anyone reading my post here will notice that I didn’t (could not) provide a link to the article I mentioned. When I attempted to sign onto WUWT in order to be able to post, and search, I got the following message.
429 Too Many Requests
You have been rate-limited for making too many requests in a short time frame.
First it was no editing allowed, now apparently it is very limited access allowed. This occurred on only the second WUWT browser tab opening since coming to the site, perhaps 5 minutes earlier.
Found this in Alaska, not Norway, from 1,000 years ago.
Ancient Forest Thaws From Melting Glacial Tomb | Live Science
BTW, I’ve also had the message:
429 Too Many Requests
You have been rate-limited for making too many requests in a short time frame.
I think that is another bug that popped up after the last software update. I’ve seen it, too.
Really? Use a light gray line on top of a low-resolution picture of a dirty thermometer? My 61-year-old eyes can’t really see that. Why not use a dark blue or green so it would stand out?
My 64 year old eyes see a red line…
Red-green color blindness is fairly common.
I wondered if that was a possibility.
I see a red line and I want to paint it black.
I should have thought of that… 😉
The “gray” line is the monthly UAH. At this scale, it is completely covered up by the thick red line, the annual average.
AGW and ice ages both take so long to happen that I and all the people I know will be gone. Nuclear ice melters could have the added benefit of drinking water – funny how massive, concentrated power sources seem to be the answer to both problems.
I wonder what happens when every Western once-was-civilisation defaults on the loans for their “we-have-zero” claimate agendas
Not going to be nice for future generations.
Countries who hold US debt will then own even more of this country than they do today.
They don’t own any of this country, and when that debt is defaulted on, they won’t own the debt either.
(They probably won’t actually default on the debt, they will just inflate it away.)
So you believe all the stories about Chinese, Saudi, and foreign citizens buying large amounts of US real estate are just urban myths? Why?
I hope they buy a lot more real estate here in north central Wokeachusetts, a once thriving industrial area, now a classic member of the rust belt- with relatively low real estate values. I keep seeing the high prices on any real estate close to Boston. If those “furiners” buy real estate here and drive up values, I can then sell my modest hut and move to a WARMER area. 🙂
Your claim was that those who own US debt, own parts of this country.
Now you are talking about people who have directly bought property in the US.
Two entirely different things.
When you own debt, the only thing you own is a right to be paid back. However if the debtor defaults, you lose even that right.
The Fed holds the debt. They call it quantitative easing but it’s really just printing money without the expense of the paper. When they need more money they just push a button.
They can’t create too much money or they will get inflation. If they don’t create enough money they will get slow growth or a recession.
If they don’t print enough money, they get deflation.
Both inflation and deflation can cause slow growth or a recession.
Back during the global cooling scare of the 70’s, I remember reading about a suggestion to drop soot over glaciers to speed their melting.
Come now, David! You know that our alarmist posters have time and again shown their complete understanding of all the major ocean cycles and the dynamics of the water vapor in our atmosphere!
Surely by now you are ready to lay down your rock pick, and pick up your very own copy of their video Bible; An Inconvenient Lie!
Why I’ll bet that they can cogently argue that this summer was warmer than the Paleocene Eocene Thermal Maximum, and that we would be far better off as a species if CO2 was back under 200ppm, where it belongs, and mile thick ice sheets covered much of North America, Europe and Asia again! That seems to be the ideal climate that they are striving for, and, since wind and solar energy are essentially free for the taking, we should soon see our utility bills fall to near zero balances as well!
What’s that you say? Oh, never mind!
Idealists of every age hearken back to a once more perfect world- which of course never existed.
JZ,
I think you misspelled “idiots!” Either that or you mixed zealot with idiot and got idealists!
I was an idealist in the ’60s. Then I grew up.
I’ve watched the news and read newspapers since I was young.
I don’t think I was ever an idealist. The cynics got hold of me early.
Well I was a pot head in the late ’60s at the U. of Wokeachusetts, Amherst (one of the stopping off points I believe for Mickey Mann and many other alarmists) – so here in the belly of the liberal beast, it was difficult to avoid the brainwashing. It took years of working in forests with real world salt of the Earth type folks to snap me out of it. People who LOVE fossil fuels. 🙂
Michael Mann’s connection with Amherst is that it was where he was born.
Dropping from around 400 ppm to 200 ppm would probably reduce agricultural production by around 50%.
Keep going.
The $US 200 trillion price tag for stopping warming by 2050 works out to about $US 1 million per family in the developed world or about $US 35,000 per year per family. Most people would probably prefer having an extra $US 1 million in their bank account and a degree or two of warming.
https://www.bloomberg.com/opinion/articles/2023-07-05/-200-trillion-is-needed-to-stop-global-warming-that-s-a-bargain?embedded-checkout=true
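The arithmetic behind those per-family figures seems to be roughly as follows (the 200-million-family count and the ~28-year horizon are my own assumptions for the check, not numbers taken from the article):

```python
total_cost = 200e12          # $200 trillion headline figure
families = 200e6             # assumed number of families in the developed world
years = 2050 - 2022          # assumed spending horizon, ~28 years

per_family = total_cost / families        # ~$1,000,000 per family
per_family_per_year = per_family / years  # ~$36,000 per family per year
print(per_family, per_family_per_year)
```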
The Earth is currently in a 2.56 million-year ice age named the Quaternary Glaciation, in a cold interglacial period between very cold glacial periods. Outside of the tropics, people have had to find ways to keep warm during the winter, and sometimes during the spring and autumn as well.
https://en.wikipedia.org/wiki/Quaternary_glaciation
David,
Great context. I think you should have another line at the far right that says “for $200 Trillion you can be here by 2100”
For $200 trillion…
Liberals take it for granite that CO2 is solely responsible for the interglacial warming occurring after the Little Ice Age. 🙂
The Granite is a hotel in downtown Albuquerque?
Some liberals
The rest of them are willing to use CO2 as an excuse to implement the policies that they have wanted all along.
The geology of Washington State is so complex that when finding any sort of rock anyplace one does not assume that some s-a geologist had visited, but where the exotic terrane came from. Next question — did glaciers or the floods drop an erratic? Eventually, one gets around to s-a geologists, almost as common as mosquitoes but more annoying. 🙂
Good post, and I see a red line too.
Bingo!
David,
I’ve heard of physical geology and petroleum geology; I’ve never heard of SA geology! Is this a new discipline? If so, I may have to go back to school for another degree!
If you are in Albuquerque and you find a football sized piece of granite you should take it back up into the Sandia Mts. from whence it came. But first stop at Rudy’s Texas BBQ and get some smoked brisket or prime rib (weekends only) to fortify you for the trek. I may be a little erratic, but I don’t think your football would be so unless you flew it to Hawaii. From the Rio Grande Rift Zone to the Hawaiian Volcanic Hot Spot; that’s not a hotel on Waikiki, by the by!
We have Rudy’s BBQ here in Texas. We don’t have Little Anita’s breakfast burritos with genuine hatch chili sauce!
I’m now smoking my homegrown, ripe red jalapeños along with a brisket! The first batch two weeks ago came out great, and I hope to have another harvest of ripe jalapeños later this month. I end up with something that is somewhere in between roasted green chiles and chipotle; a mild jalapeño with smoky overtones and a hint of brisket flavor. Yummmmm!
With the next brisket I’m going throw most of the point and a mess of the jalapeños in a pot and make some chili! I know it’s sacrilegious to use brisket for chili, but as a climate realist I’m quite used to heretical thinking. Even worse (or better, depending on your point of view) I’ll probably make it with beans; a crime which may prevent me from ever being able to enter the state of Texas!
I do exactly what you do quite often. With the beans. Tastes even better left over, with corn bread. I usually smoke a pork butt along with it (cheap at Costco), and then the missus pulls and crock pots it. AGW made it too hot to do it in St. Louis until just this week, but we had a cool day earlier, so I used my tri-tip cooker with a sirloin roast and chicken leg/thigh quarters. Linguica and garlic bread on the grill at the last half hour for appetizers, then eat with mex pinto beans, salsa, pre-dressed salad. It’s a whole thing on the Cal central coast, and is the fav church money maker.
Folks, since every thread ends up open here anyway, no whining about ‘jacking it….
Indeed, grew up in ABQ and miss the food
Scale and context are indeed things to be considered with regards to the miniscule changes in temperature in the last few decades, and in the last century or so. I swear, if I hadn’t read about or otherwise seen in various media copious mentions of “climate change” (and its previous iteration, “global warming”), I’d have no idea that anything had changed at all!
This is not unlike anything that changes over such a long time on such a small scale as to render it nearly unnoticeable to the average observer. Imagine a huge old tree, at least a couple centuries old. In the lifetime of an average person, the tree will change very little (in so far as getting bigger/taller), and the only way you might notice changes is if you find a picture of your great grandfather next to the tree when he was a boy.
I’ve been around for more than 50 years, living in the same area, and with regards to seasonal temps, it’s just more of the same-ol-same-ol…. Some winters have barely enough snow to cobble together a tiny snowman, while others leave people doing lots and lots of shoveling. Some summers have several days that top 100F, while others never have even a single day that tops 95F.
For something that is apparently so devastating and dangerous, temperature changes over the last 150 years sure are hard to notice!
Of course, the likely answer (that has been mentioned here at WUWT many times in many articles) is expanding UHI temp data being added into the records. This is where the (miniscule) changes have taken place – in urban areas, and, like trees that have been around for a while, they grow slowly and over the course of many years.
The “hottest year eva” all boils down to concrete, asphalt and other man-made materials that absorb and re-radiate heat.
I wonder what would happen if you took all of the temp readings from stations all over the world and pitched out any that were in (or even near) urban areas?
A number of such studies have found warming rates from 1/3 to 1/2 less than major temperature databases claim.
Trouble is, even so-called rural stations can often have had a significant local warming effect over time.
And, in almost every case, the size of the local effect is very hard to evaluate with any accuracy.
Even CET, which is calculated from relatively good sites, shows a population-based trend when put next to Valentia** (SW Ireland) data.
**very few sites would be as unaffected by local changes as Valentia.
That Valentia record is quite interesting. The Central England data is generally considered to be handed down from Olympus, but there you have it, a lengthy temp record from “as-good” thermometers with .6 or so degrees less warming….probably indicating temp increase due to land use change in CE….
A bit of extra information…
… Population growth in the UK is basically linear since 1800.
Ah, but population distribution has been nothing like uniform during that time!
Correct.. there was probably more growth in the area of the CE data.
typo … !SE Ireland !!!
“For something that is apparently so devastating and dangerous, temperature changes over the last 150 years sure are hard to notice!”
But… but… “the science” tells us it’s an EMERGENCY! Greta Thunberg says we must panic! Al Gore says the oceans are boiling! The seas are rising 2 mm/year! Oh, the humanity! 🙂
Here is a 2021 paper that separated out the rural stations. It also looked at the whole field of climate and found that hardly anything agrees as it should.
How much has the Sun influenced Northern Hemisphere temperature trends? An ongoing debate
https://iopscience.iop.org/article/10.1088/1674-4527/21/6/131/pdf
This is not correct. Models don’t reproduce the early 20th-century warming. According to models, only moderate warming punctuated by volcanic cooling is possible between 1850 and 1963, then intense warming.
Models don’t properly reproduce any 30-year trend prior to 1975. Except for volcanic eruptions, warming and cooling decades are wrong.
Black lines, HadCRUT5. Red line, CMIP6 multimodel average. The warming rate was calculated over a 15-year moving average.
It dawned, inside my head and fairly recently, how to **properly** measure temperature. ##
It is to use a method that is traceable back to First Principles and the Systeme Internationale of units (metres, kilogrammes, seconds)
(OMFG – the standard kilogram in Paris is shrinking – whatever are we supposed to do now?)
Much much better, it does its own averaging, intrinsically and perfectly over very large areas of ground/sea/ocean/ice.
= the same principle that ‘proper’ scientists of hundreds of years ago used to do their science – using their own home-made glass instruments yet still producing repeatable, consistent and unarguable results down to 0.001°C
While state-of-the-art Star Trek Sputnik technology, a-la ‘Spencer’, is only good for plus/minus 2°C at very best – yet gives results to 0.01°C??????
ye straynge tymes indeede
How: Use Barometric Pressure
It’s superb for trend-spotting/interpretation and is dead simple to understand.
First of all tho, adjust your barometer for your height above sea-level (1 millibar per 27 feet) and take average pressure to be 1013 millibar
Interpret: If your average readings are above that figure, you are in a dry place and if the trend is rising, it is getting drier and also hotter
If your average is below 1013mB, you are in a wet place and if the trend is dropping, it is getting wetter and colder.
Simple.
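Here is a literal sketch of that rule of thumb as described above (the station elevation, reading, and trend are hypothetical; the 1 mb per 27 ft conversion and the 1013 mb reference are the numbers given in the comment):

```python
def sea_level_pressure_mb(station_pressure_mb, elevation_ft):
    """Adjust a station barometer reading to sea level at ~1 mb per 27 ft of elevation."""
    return station_pressure_mb + elevation_ft / 27.0

def classify(mean_adjusted_pressure_mb, trend):
    """Apply the interpretation rule: above 1013 mb = dry place, below = wet place."""
    if mean_adjusted_pressure_mb > 1013.0:
        return "dry place" + (", getting drier and hotter" if trend == "rising" else "")
    return "wet place" + (", getting wetter and colder" if trend == "falling" else "")

# Hypothetical station 540 ft above sea level, reading 995 mb, with a rising trend:
p = sea_level_pressure_mb(995.0, 540.0)   # 995 + 20 = 1015 mb
print(round(p), classify(p, "rising"))    # 1015 -> "dry place, getting drier and hotter"
```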
A gorgeous example is attached; See a bone dry and warm Europe under slack winds and high pressure while UK has much lower (and dropping) pressure plus wind, rain, thunder and flood warnings. And it’s cold.
Esp see the offshore winds coming off Spain, France, Germany & Italy – doing nothing else but causing Europe to become ever drier, arid and desertified.
(Colours are indicating wind-speed – NOT temperature)
Doesn’t The Mistral show up beautifully, pouring out off France into the Med
The barometer will average air pressure over a vast area (unless it is insanely windy when you take your reading(s)) and thus be immune to ‘siting issues‘ such as tarmac, air-conditioners, car parks/airports etc etc etc. The air itself does all the averaging.
Not at all dissimilar to what I’ve been doing this last 12 years = measuring the temperature of the ground/soil/dirt, at about 15″ depth. That does an epic job of averaging air temperatures
## Recording air-pressure is much closer to a ‘record of climate’ than any temperature record could ever be.
Primarily because temperature is the output of the climate system: climate causes temperature, not vice-versa.
If anything to do with temperature does ‘cause climate‘, it would be= Temperature Difference.
i.e The temp difference between ‘places’, in all 3 dimensions.
One may argue ‘temp difference in 4 dimensions’ but because the 4th only goes in one direction (apart from inside the GHGE) – there’s naught we can do about it nor even properly measure it.
‘Natural Variation‘ is just a feeble way of trying to turn the clock back.
IOW: Human History often repeats, Natural History never does.
Pressure and temperature are different parameters and should not be confused.
While height above MSL is fixed, to account for mercury expansion, which affects the weight of Hg/cm depth, a Torricelli (mercury) barometer should be adjusted for ambient temperature. It is also incorrect to claim that air temperature and pressure are related (i.e., as they are affected by different processes, they vary independently).
All the best,
Bill Johnston
These types of records are meaningless spot readings that do not show alarmist trend
Where it’s hot somewhere, it’s cold somewhere else
The UK has had a wash out, cool summer this year, hardly indicative of Antonio’s global boiling hysteria
Nature is a wonderful guardian of the dynamic, multi input climate – she does her thing regardless of human activity and will always do so
Humans should be more caring of plastic pollution, something they create and can control, that harms the environment, as are multiple marine / wildlife deaths by useless wind farms
When the green mobsters turn a blind eye to factual, proven human adverse effects, they have lost the moral high ground and room
Private jetting to climate conferences is the ultimate two finger salute to our planet
“Temperature anomaly records are great tools. They are the only way to accurately describe how global temperatures are changing over time. However, they lack scale. They lack a frame of reference.”
They also lack any uncertainty reference. Temperatures taken with measurement devices having measurement uncertainties in the tenths digit can’t distinguish differences in the hundredths digit – those differences in the hundredths digit are part of the great UNKNOWN.
See Judith Curry’s illustration. Do you see “measurement uncertainty” on there anywhere? It’s part of the “Unknown Unknowns”.
Anomalies inherit the measurement uncertainty of the components used to calculate the anomalies. And the measurement uncertainty is *NOT* defined by how precisely you can locate the mean of the data set, i.e. the SEM, which is what climate science uses since it can be made arbitrarily small. You can calculate the global temperature mean down to the millionth digit but it won’t tell you whether the mean is inaccurate by 1C, 5C, 0.1C, or whatever. Nor is measurement uncertainty random, Gaussian, and cancelling across a thousand measurement stations, as climate science always assumes.
Is anyone saying this? I’ve followed the many discussions on this topic you’ve been part of and it often feels like you’re arguing against an imagined version of what people are saying.
YES! When you do not state the measurement uncertainty then you are implying that there is none! The uncertainty is not the difference between the outputs of different models nor is it the difference between the outputs of two different runs of the same model.
The uncertainty of the inputs determines the uncertainty of the outputs.
What *IS* the uncertainty of the model outputs for just one model? Can you state it? Or do you just ignore the uncertainty like the rest of climate science does?
That’s not what I asked, I asked if anyone is making the specific claim you cite – that computing the mean reduces the uncertainty in individual measurements. I’ve never seen such a claim be made, here or anywhere.
“That’s not what I asked, I asked if anyone is making the specific claim you cite – that computing the mean reduces the uncertainty in individual measurements”
No one has claimed that the computing an average reduces the uncertainty in INDIVIDUAL measurements!
What is being said is that the uncertainty in the individual measurements must be propagated onto the average!
Your lack of reading comprehension skills is showing again!
The SEM is *NOT* the measurement uncertainty of the average. The SEM is basically a measure of the sampling error contained in calculating the mean from a sample smaller than the population. The SEM tells you *NOTHING* about the measurement accuracy of the mean you calculate.
Write this on a piece of paper 1000 times. Maybe it will eventually soak in: “THE SEM TELLS YOU NOTHING ABOUT THE MEASUREMENT ACCURACY OF THE MEAN”.
It tells you something about the precision of the estimate. As the sample size approaches infinity, the SEM approaches 0. This doesn’t tell you if there is a systematic bias in your measurements (i.e. you might have a very precise but inaccurate estimate of the mean), but no one you’ve ever argued this topic with disagrees with that assessment. You can never reduce systematic bias by making additional observations of the thing containing the systematic error, you have to identify the bias and remove it.
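That distinction (precision of the mean versus accuracy of the mean) is easy to demonstrate numerically; the bias and noise values in this sketch are made up purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
true_value = 15.0
systematic_bias = 0.5   # assumed: every reading is 0.5 too high
random_noise = 0.7      # assumed: random component of each reading

for n in (10, 1_000, 100_000):
    readings = true_value + systematic_bias + rng.normal(0.0, random_noise, n)
    sem = readings.std(ddof=1) / np.sqrt(n)
    # The SEM shrinks toward zero as n grows, but the mean stays ~0.5 too high:
    print(n, round(readings.mean(), 3), round(sem, 4))
```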
Where do models and air temperature measurements make multiple repetitions of the same quantity?
Showing your lack of metrology background (but typical of climastrology)—bias is UNKNOWN and can’t be corrected (as is done in a calibration, for example).
To know the bias, you have to know the true value, which is UNKNOWN.
Every day, at tens of thousands of locations across the earth, weather stations make a recording of the same quantity – air temperature. If we could blanket the earth with an infinite number of weather stations, each mean temperature estimate would exhibit an SEM of 0.
Well that isn’t true at all. I want to measure the temperature in my house over a few days and get the average temperature. I learn that my thermometer was incorrectly calibrated and always measures 1 degree too high. This is a systematic error. Every measurement I take will be 1 degree too high. I do not need to know the true value of any measurement to know that I need to subtract 1 degree.
WRONG—ten microseconds after a sensor reading is complete, that particular temperature is gone, forever. Each location has a DIFFERENT temperature! The SEM is meaningless, N = 1!
So? I’ve captured the temperature at that moment. That’s the thing I wanted to capture.
And I want the mean of all those different temperatures. Just like if I were measuring the mean height of people in a room, I’d want everybody’s height.
And the SEM only applies if you are averaging the same thing.
You are not.
A temperature is the same thing as a temperature, a human height is the same thing as a human height, and so on.
Does it matter that the temperatures worldwide are not sampled simultaneously?
And the fact remains that for all the noise generated about how the SEM works these miracles — no one uses it nor reports it.
It matters not at all, why would it? The SEM is not being reported because it is functionally nil for the surface temperature products because of the large sample size – it is orders of magnitude smaller than other components of the uncertainty such as systematic error, station bias, and sampling uncertainty due to incomplete spatial coverage. You can read about how uncertainty is estimated for NASA’s index here:
https://dx.doi.org/10.1029/2018JD029522
A “sample” size of 30 is large?
And climastrology bumbles along assuming these are all nil too (or they “cancel” via magic).
But the SEM is *NOT* a measure of the accuracy of the mean of the samples.
If every measurement station took a daily measurement at 0000 GMT, the accuracy of the mean of those measurements is the measurement uncertainty of each individual measurement propagated onto the mean. It is *NOT* how many digits your calculator can handle in calculating the mean!
Once the measurement uncertainty overwhelms the standard deviation of the sample means you are done. Anything past that is part of the Great Unknown. And since the typical measurement uncertainty of the measurement stations ranges from +/- 0.3C to +/- 1.0C you can *NEVER* know anything past the tenths digit. Trying to calculate anomalies in the hundredths digit is gazing into the cloudy crystal ball of a carnival fortune teller and GUESSING at an answer!
No, the temperature at the top of Pikes Peak is *NOT* the same as the temperature at the zoo in Colorado Springs. Each is one sample at a different location. The temp at the top of Pikes Peak is *not* the average of the temp you read there and the temp at the zoo in CS.
Each is an individual sample with one member. Therefore the SEM for each is undefined.
What you are trying to convince us of is that if you measure the heights of 100 Shetland ponies and the heights of 100 Arabians that the you can calculate an average that means something. You can’t. They are different measurands, just like the temp in Las Vegas and the temp in Chicago are two different measurands. It simply doesn’t matter how many measurements you take, they will *never* be the same measurand.
Nothing I’ve said is inconsistent with this, I’m not sure why you’ve phrased it as an objection.
Of course you can, you can say something about the average height of the population of Shetland Ponies and Arabians. If you find a change in that average over time, it reflects a change occurring in the population. If you’re saying we don’t know if the change is coming from the Shetlands or the Arabians, or a combination of both, I agree. But we have as a basic understanding the knowledge that some change worth investigating is occurring.
“A temperature is the same thing as a temperature, a human height is the same thing as a human height, and so on.”
A temperature reading in one place, with specific conditions to that site, is completely irrelevant to a temp reading in a completely different place.
“And the SEM only applies if you are averaging the same thing.”
Idiotic. The SEM is describing a sampling distribution of the mean. There is little point taking the mean of the same thing.
You are NOT sampling!
I’m not doing anything, except explaining that you can take a mean of different things.
Yep. You can take the mean of the heights of Shetland ponies combined with the heights of Arabians. The issue is what you have when you find that mean!
You can, I just don’t know why you want to. It seems to be some horse fetish with you.
Usually when you are taking an average or any other sort of measurement it’s for a purpose.
“The issue is what you have when you find that mean!”
That’s the question you need to ask yourself. Why do you want the average of two different breeds of horses, and no others?
Do as many breeds as you want. From mini-shetlands to Belgians. It doesn’t matter.
If it doesn’t have meaning for horses then it doesn’t have meaning for temperature either. If you can’t assign a meaning to the mean of different things then how does that mean apply to the real world – an issue which you never seem to care about.
Finding a daily mid-range value is no different. What does it mean in the real world? Different max values and min values can give the same mid-range value so what does the mid-range value tell you about climate? It’s no different than what the mean of the heights of shetlands and arabians tells you.
“You can, I just don’t know why you want to. It seems to be some horse fetish with you.”
That’s exactly the point – which you seem to be unable to grasp. Why would you want to find the daily mid-range temperature value when it tells you nothing about climate?
are you back on the bottle?
No point in taking the mean of multiple measurements of the same thing? Only one measurement is needed to define the measurand with 100% accuracy?
Why don’t you stop to think before hitting the “post comment” button?
For once it would be nice if you could make a point without these insulting ad homs.
Yes, you can measure the same thing multiple times in order to reduce measurement uncertainty. I was going to add that, but didn’t want to have to run into multiple sub cases. The point is that when you are talking about the SEM, it’s usually in the context of taking a sample mean, which is usually going to be the mean of different things.
“The point is that when you are talking about the SEM, it’s usually in the context of taking a sample mean, which is usually going to be the mean of different things.”
So you *can* combine the heights of Shetlands, Arabians, quarter horses, and Belgians and find a mean.
Now, tell us exactly what that mean tells you about horses. Tell us what the SEM of your sample tells you.
“And I want the mean of all those different temperatures. Just like if I were measuring the mean height of people in a room, I’d want everybody’s height.”
Apparently you are blissfully (willfully?) unaware of the concept of Intensive Properties. Averaging all those disparate readings together is scientific malpractice.
And the SEM is σ/(n-1). If n = 1 then σ is undefined since you are dividing by zero.
Why can’t CAGW advocates *ever* get the math right?
The real math is inconvenient?
But they will NEVER acknowledge that N is always exactly equal to one.
Why can’t Bellman calculate his confidence intervals correctly?
Because he’ll never admit that he assumes the data used to create the trend line is anything other than 100% accurate.
“And the SEM is σ/(n-1).”
Followed by “Why can’t CAGW advocates *ever* get the math right?”
No. The SEM can be calculated by σ/(√n). If the sample size is 1, then the SEM is σ. Something that should be obvious if you would only learn what these terms mean.
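For reference, a minimal sketch (with invented readings) of where the two formulas in this exchange actually sit:

```python
import numpy as np

# Sketch of the two quantities being disputed, on an invented five-reading sample.
x = np.array([20.1, 19.8, 20.4, 20.0, 19.9])

s = x.std(ddof=1)           # sample SD: sqrt( sum((x - mean)^2) / (n - 1) )
sem = s / np.sqrt(x.size)   # standard error of the mean: s / sqrt(n)

print(f"sample SD: {s:.3f}")
print(f"SEM      : {sem:.3f}")
# The (n - 1) lives inside the sample SD as degrees of freedom; the SEM then divides
# that SD by sqrt(n), not by (n - 1).
```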
We went through this once not long ago. Your memory is poor.
The real definition of the SEM is σ/(n-1). If n = 1 then you are dividing by 0 which is undefined.
“We went through this once not long ago. Your memory is poor.”
You are right, I can’t remember you claiming anything that stupid before. Maybe I just blocked it out of my mind because I was feeling embarrassed for you.
“The real definition of the SEM is σ/(n-1). “
Then just provide a link to a source saying it is the “real definition”.
You seem to be mixing up the equation for the SEM with that for a sample standard deviation, and creating a meaningless hybrid of the two.
You were given the quote from Taylor on this in another thread. Go study Taylor if you need to. I’m not your research assistant. It has to do with the degrees of freedom in a small sample size – i.e. a size of 1 for a temperature measurement!
“I’m not your research assistant.”
Of course. You’ll spend untold hours insulting me and prattling on about Shetland ponies. But you can’t be expected to actually remember the part of your book that justifies this extraordinary claim.
It would be so much easier on both of us if you would just consider the possibility that you might have made a mistake.
I’ll give you a hint: Taylor, Eq 5.45
Again, it is *YOUR* responsibility to study the literature and come to an understanding of how to handle metrology. IT IS NOT MY RESPONSIBILITY TO DO IT FOR YOU!
You are the ultimate cherry-picker, scanning through different documents looking for something, ANYTHING, that at first glance seems to confirm your misconceptions. But you *never* actually study anything to understand the context of what you see.
“I’ll give you a hint: Taylor, Eq 5.45”
Which, to the surprise of none, is the equation for the sample standard deviation. Not the standard error of the mean. You would think that its being labeled as the best estimate of σ would be enough of a clue.
Now my hint: Eq. 5.63. Another hint is that it’s in the section titled “Standard Deviation of the Mean”
You really must stop cherry picking Taylor. Read it for meaning.
It is the standard deviation of the measurements – assuming only random error. THAT *is* the measurement uncertainty. The SEM isn’t needed in this case!
From Page 138: “The numbers x_1, …, x_n are the actual results of N measurements, thus x_1, …, x_n are known, fixed numbers.”
You can’t even read two pages of text and understand the context! If you have the measurements then you can calculate the standard deviation directly. Why would you just take a sample of the measurements, with its attendant sampling error, in order to *estimate* the standard deviation?
You’ve been asked before if the global temperature data is a population or a sample. You never get around to answering. So which is it?
He truly believes that the uncertainty in the UAH is only the little up-and-down wiggles he can see.
I see I’m still living rent free in your head. But it’s still too expensive.
I do not think the “little wiggles” in the data are the uncertainty. I think they are mainly real ups and downs caused by natural causes such as ENSO. What I’ve said is that any random uncertainty in the monthly measurements cannot easily be greater than the standard deviation in the monthly data.
Poor bellcurveman, all he is capable of is regurgitating insults, can’t make up anything original.
Of course you do, this is obvious. Confirmed every time you post your CI graph, calling it the “uncertainty”.
And now you contradict what you wrote just two sentences ago.
Good job.
“What I’ve said is that any random uncertainty in the monthly measurements cannot easily be greater than the standard deviation in the monthly data.”
How do you justify that assumption? Why can’t the measurement uncertainty, which adds with each measurement you include, grow larger than the variation in the stated values?
When the measurement uncertainty in just the daily mid-range value is +/- 0.7C that turns out to be AT LEAST +/- 3.8C for the measurement uncertainty for a month, a range of almost 8C.
A standard deviation of 3.8C would give a variance of about 14C². How many months have a variance of 14C² in their daily mid-range values?
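For readers trying to follow the numbers, here is the arithmetic behind both figures in this exchange, shown without endorsing either treatment; which one applies is exactly what the thread is arguing about:

```python
import math

# The arithmetic behind the two numbers in this exchange (30 days, +/- 0.7 C per day).
u_daily, n_days = 0.7, 30

u_sum_quadrature = math.sqrt(n_days) * u_daily   # root-sum-square of 30 equal uncertainties (~3.8)
u_mean_if_random = u_daily / math.sqrt(n_days)   # uncertainty of the mean if errors are independent (~0.13)

print(f"quadrature sum of 30 daily uncertainties : +/- {u_sum_quadrature:.1f} C")
print(f"mean uncertainty under independence      : +/- {u_mean_if_random:.2f} C")
```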
“the measurement uncertainty, which adds with each measurement you include”
It does not – no matter how many times you repeat that error.
“grow larger than the variation in the stated values? ”
Because each monthly stated value is a combination of both the real-world variation in temperatures and the random measurement uncertainty. The combined variation has to be bigger than either – hence the natural variation and the measurement variation each have to be less than the observed variation.
Of course, that still leaves any possible systematic errors, which is why comparing different data sets is useful.
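A minimal sketch of that variance argument, using invented standard deviations; note it only constrains the random component, which is why the systematic caveat above matters:

```python
import numpy as np

# For independent noise, variances add, so the observed spread bounds the random
# measurement component. A constant offset is invisible to this check.
rng = np.random.default_rng(2)
real_monthly = rng.normal(0.0, 0.30, size=100_000)   # "true" monthly variation, SD = 0.30
noise = rng.normal(0.0, 0.10, size=100_000)          # independent measurement error, SD = 0.10

observed = real_monthly + noise
print(round(float(observed.std()), 3))   # ~0.316 = sqrt(0.30**2 + 0.10**2)
```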
“When the measurement uncertainty in just the daily mid-range value is +/- 0.7C that turns out to be AT LEAST +/- 3.8C for the measurement uncertainty for a month, a range of almost 8C. ”
You’re an idiot. Have I told you that recently? Worse, you are an unteachable idiot. I and multiple other people have explained to you for well over two years why what you’re saying is wrong and couldn’t possibly be right – but you’ll still just repeat the same drivel whilst claiming that you are an expert and everyone who disagrees with you needs an education or to read all your sacred texts for some hidden meaning.
If you really believe that the measurement uncertainty for a monthly average at a single station could have an uncertainty of 8°C, when the individual daily measurements only have an uncertainty of 0.7°C – then I can’t help you any further. Just don’t expect anyone else to swallow it – apart from your relatives and karlo.
Liar.
Lies are pretty much all climastrology has in the tank.
Projection time!
That *is* all you can see when you assume all measurement uncertainty cancels. That meme is so ingrained in his mind that he can’t get around it, he doesn’t even recognize that he is controlled by that meme in everything he asserts.
Typical backtracking. You said the SEM was equal to σ / (n – 1). Now you say the SEM isn’t needed and you really meant the measurement uncertainty. That’s what the sample standard deviation is estimating in this case, the uncertainty of an individual measurement. It is not the uncertainty of a mean.
And your equation is still nonsense. You do not divide σ by (n – 1) to get the standard deviation.
Really, just consider the possibility that you’ve misunderstood something. Rather than lecturing me on how to read a text book.
Nitpick Nick has taught you well.
Of course, you got soured on the GUM when they gored your sacred SEM ox.
Pointing out that σ / (n – 1) is not the equation for the SEM, or for the standard deviation, is hardly nitpicking. The fact that you think it is demonstrates how little you understand the subject you claim to be an expert on.
“You’ve been asked before if the global temperature data is a population or a sample.”
I’m sure I have answered this before. But obviously it’s a sample. You can’t measure an infinite number of points continuously, which is what the population would mean. The population in this case is the entire continuous surface temperature.
But as I also told you many times, it isn’t a simple random sample. You can’t just divide the standard deviation of all measurements by root N to determine the uncertainty.
If it is a sample then all the SEM can tell you is how close to the population average you are. It can’t tell you *anything* about the measurement uncertainty of the population. UNLESS you do as you usually do and just assume that all measurement uncertainty is random, Gaussian, and cancels!
“But as I also told you many times, it isn’t a simple random sample. You can’t just divide the standard deviation of all measurements by root N to determine the uncertainty.”
Then why do you continue to claim that the SEM of the sample is the measurement uncertainty of the mean?
“If it is a sample then all the SEM can tell you is how close to the population average you are.”
Which is what you want to know. The population being the average global anomaly.
“all measurement uncertainty is random, Gaussian, and cancels!”
You still don’t understand what Gaussian means do you?
And you still don’t understand what a systematic error would mean, in terms of measuring from different stations, and taking an anomaly.
Poor bellcurveman didn’t bother to read E.2 of the GUM.
And what do you think E2 adds to this argument? It says nothing about the distinction between SEM and SD. It’s about making realistic assessments of uncertainty, quoting the best estimate – not understating it or overstating it. All good advice – nothing to do with any point you are making.
Really, rather than trying to come up with ever more pathetic name calling – just accept you might have gotten something wrong. The real definition of the SEM is not σ/(n-1).
The point is, uncertainty is quantified by standard deviation, NOT standard deviation divided by something to make it artificially smaller.
Like I wrote, the GUM gored your sacred SEM ox.
“, NOT standard deviation divided by something to make it artificially smaller.”
You are the one defending the claim that the real definition of the SEM is the standard deviation divided by (n – 1). Surely even you can see that that would produce a smaller uncertainty than dividing by √n.
You still haven’t explained the relevance of E2 to the claim that σ / (n – 1) is the real SEM, or why you think the SEM is not an appropriate measure of the uncertainty of a mean, whilst saying the SD is.
“Like I wrote, the GUM gored your sacred SEM ox.”
Yes, you keep repeating nonsense like that. It must be so much easier than actually trying to justify your claims.
The GUM may not like the words “standard error of the mean” but does say that the renamed “experimental standard deviation of the sample mean” is the correct measure of the uncertainty of the mean of repeated measurements.
Shirley, it’s true, bellcurveman cannot read for meaning.
Sorry, Shirley, I was wrong, it was E.4 not E.2:
E.4 Standard deviations as measures of uncertainty
The only expression of uncertainty allowed in bellcurveman-land, because it gets him past the roadblock in the holy quest for destroying Western civilization.
Thanks for the correction. Pity you can’t acknowledge your mistake without insulting me at the same time.
I’m still not sure how E4 helps your argument. Are you thinking of equation E7? That involves dividing something by 2(n – 1), but it isn’t to determine the SEM. It’s there to determine the uncertainty of the SEM when it’s estimated from a sample standard deviation. It’s the variance of the experimental standard deviation of the mean, using the GUM’s definitions.
Of course that’s undefined when n = 1, as is the sample standard deviation. It’s why they say a type B uncertainty can be more reliable than a type A when you only have a small sample size.
“Every day, at tens of thousands of locations across the earth, weather stations make a recording of the same quantity – air temperature”
Temperature is *NOT* a measurand! The temperature in Phoenix is *not* the temperature in Miami! You are *NOT* measuring the same thing!
” If we could blanket the earth with an infinite number of weather stations, each mean temperature estimate would exhibit an SEM of 0.”
You truly don’t even understand what the SEM *is*, do you? In this case you would calculate the POPULATION AVERAGE! There would be no SEM because the population itself would be the sample!
And it *still* wouldn’t get rid of measurement uncertainty because each data point would be a measurement of a different thing! In other words, the population average accuracy would *NOT* reach 100%!
“Well that isn’t true at all. I want to measure the temperature in my house over a few days and get the average temperature.”
That average temperature will inherit the propagated measurement uncertainty of each individual measurement! You would *NOT* be measuring the same thing each time and there is no guarantee that the measurement uncertainty would be random and Gaussian and therefore cancel.
“I learn that my thermometer was incorrectly calibrated and always measures 1 degrees too high.”
How would you know that? Do you have a lab-calibrated thermometer in your house? One that sits in a thermal bath all day? Again, Taylor, Bevington, Possolo, and the GUM all say you can’t identify systematic error with statistical analysis. So how would you know how far off your thermometer is at each measurement? If the humidity and barometric pressures are different at each measurement there is no guarantee that the systematic bias is the same for each measurement!
“I do not need to know the true value of any measurement to know that I need to subtract 1 degree.”
If you don’t know the true value then how do you know how much to subtract?
You are just digging the hole you are in ever deeper. Stop digging.
The measurand is simply the quantity you are interested in determining, in this case the average temperature of Miami and Phoenix.
Correct, assuming no systematic error, the estimated mean would be exactly equal to the population mean, and the standard error would approach 0 as the number of samples approached infinity. I think we are in agreement there.
Scientists spend a tremendous amount of time and effort to work out how to identify systematic error and bias in the network. You can read about the treatment of systematic error in any of the papers published alongside the various temperature indexes, like GISTEMP.
“The measurand is simply the quantity you are interested in determining, in this case the average temperature of Miami and Phoenix.”
You’ve never once read the GUM. It’s obvious when you make statements like this.
From JCGM_200_2012, section 2.3:
“NOTE 3 The measurement, including the measuring system and the conditions under which the measurement is carried out, might change the phenomenon, body, or substance such that the quantity being measured may differ from the measurand as defined. In this case, adequate correction is necessary.” (bolding mine, tpg)
The conditions will be different at each measurement station. The conditions will be different at a single station each time a different temperature measurement is taken.
How do you correct for that when you don’t know what the different conditions *are*?
“Correct, assuming no systematic error, the estimated mean would be exactly equal to the population mean”
You are continuing to dig the hole deeper. Systematic uncertainty would play no role whatsoever. The estimated mean and the population mean would be the same whether there is systematic bias existing or not. Only the accuracy of that mean would still need to be determined!
“Scientists spend a tremendous amount of time and effort to work out how to identify systematic error and bias in the network.”
Climate scientists apparently don’t! They just assume that all error and bias is random, Gaussian, and cancels.
The biggest clue is that it is IMPOSSIBLE to “work out” and identify systematic error. Can’t you read? That’s what all the experts say – there is no way to work out how to identify it. How do you work out that the person mowing around the station changed from cutting the grass at 3″ to 4″? How do you work out that a farmer built a pond last year upwind of the measurement station?
“You can read about the treatment of systematic error in any of the papers published alongside the various temperature indexes, like GISTEMP.”
I’ve read some of those. And they all fly in the face of what Hubbard and Lin found. You simply cannot apply the same adjustment to a group of stations. It *HAS* to be done on an individual basis by going to the station and actually calibrating it. And even then subsequent calibration drift will falsify the adjustment factor over time!
The idea that you can average (they call it homogenize) temperatures over distances greater than 100 miles totally ignores the impact terrain, geography, elevation, etc. can have on the temperature readings. What they do is the same thing *YOU* try to do. They take a group of readings and assume that if they have enough of them, all the measurement uncertainty will cancel and the SEM becomes the measurement uncertainty of the mean!
That meme that “all measurement uncertainty is random, Gaussian, and cancels” just totally permeates everything that is done in climate science. If they did that in medical science there would be no medical research because the researchers would all be sued into bankruptcy! If an engineer designing a bridge assumes that all measurement uncertainty cancels the engineer would probably wind up in criminal court charged with negligence and would wind up in civil court being sued into bankruptcy!
The thermometers will be measuring degrees Celsius regardless of the prevailing conditions at the weather stations (assuming no systematic calibration error in the thermometer is present). I’m not sure why you think the statement you’ve quoted from the VIM contradicts anything I’ve said. In fact section 2.3 quite succinctly states: “measurand: quantity intended to be measured.”
They don’t assume this at all, and I’m quite sure you cannot quote a single scientific source echoing this sentiment.
It is not remotely impossible to identify and eliminate sources of bias or systematic error. It is presumably impossible to identify all sources of bias and systematic error, but that simply means the science is a never-ending pursuit. If you’re trying to suggest that data cannot be useful unless we can be certain it doesn’t contain a single error or element of bias, then we can’t use any data that has ever been collected in the real world.
Every thermometer in the world has the exact same systematic error? Is this what you are claiming?
They are all different instruments, there is a clue here for you.
Evidence that you have not understood a single word Tim has tried to explain to you.
If the population mean is inaccurate then no amount of precision in calculating it can correct the inaccuracy!
“ but no one you’ve ever argued this topic with disagrees with that assessment.”
Of course they do! They do it every time they try to foist off the SEM as the inaccuracy of the mean while ignoring the propagation of measurement uncertainty from the individual components onto the mean.
“You can never reduce systematic bias by making additional observations of the thing containing the systematic error, you have to identify the bias and remove it.”
How do you do that with temperature measurements? Do *YOU* know the systematic bias in the weather station at Forbes AFB in Topeka? If not then how do you remove it?
Do *YOU* know the systematic bias in all of the temperature measurement stations in Buenos Aires? If not, then how do you account for the bias?
*YOU* can’t even quote the variance of the winter temps in the NH vs the summer temps in the SH. How then do you assess any systematic bias in the measurements?
I’ll refer you to this paper, for GISTEMP (see the various other publications for different indexes if you’re interested in those):
https://agupubs.onlinelibrary.wiley.com/doi/10.1029/2018JD029522
You’re asking a good question, you just aren’t the first person to ask it, and you’re assuming that no smart people have come up with good answers.
You are advocating for fraud via data manipulation and manufacturing.
Yes. Nick Stokes has been drivelling about this for years on WUWT. He’s still wrong.
I’ll look forward to you providing a citation for that. What is said is that the uncertainty in the mean shrinks as the number of observations used to compute the mean grows. You can have measurements with a large uncertainty produce a mean that is quite certain if you take a large number of measurements. If the thing we are after is the mean, this is quite helpful.
“What is said is that the uncertainty in the mean shrinks as the number of observations used to compute the mean grows.”
Not this malarkey again! You have just described the standard deviation of the sample means – commonly called the “standard error of the mean”, the SEM. I wish that term had never been coined by the statistics world. It should have always been called the SDOM, the standard deviation of the sample means.
The SDOM *only* tells you how precisely you have located the population mean. That is *only* equivalent to the measurement uncertainty of the average in one of two situations: when all the measurement uncertainty is random, Gaussian, and cancels, or when you are measuring the same thing multiple times with the same instrument under the same conditions.
Once again, all of the recognized experts (Taylor, Bevington, Possolo, the GUM, etc) state that when systematic bias in measurements exist you cannot analyze the data with statistics. Statistical analysis simply will not identify systematic bias in the data.
“You can have measurements with a large uncertainty produce a mean that is quite certain if you take a large number of measurements.”
But you do *NOT* know how accurate that mean is! If every temperature measurement has a systematic bias of +1C, you can calculate the mean of the data down to the ten-millionth digit but that mean will *STILL* be inaccurate by at least +1C! You cannot reduce that systematic bias by averaging!
At the risk of getting into the weeds, each set of data used to create a mid-range daily value has to be considered as a separate sample of the temperature. Each daily mid-range value will have a different uncertainty than the next one, even for the same measurement station. So when you combine those into a monthly average, the measurement uncertainty simply can’t be random and Gaussian and will never cancel. That’s why you must add the measurement uncertainty of those individual daily mid-range values in quadrature to get the total measurement uncertainty of the monthly average.
When I have stated to bellman, bdgwx, Stokes, and now you that the basic climate science assumption is that all measurement uncertainty is random, Gaussian, and cancels, I am speaking the truth. That’s the *ONLY* way the SDOM can become the measurement uncertainty of the average. And that assumption is just WRONG when used for temperature measurement data.
And not a single one of them EVER report even a single SEM from any of the thousands of averages needed for these trendology charts.
Total hypocrites.
There is no assumption, whatsoever, that temperature data contains no systematic bias. Enormous effort is put into trying to identify and remove such biases from the data. This is the point of the adjustments that people on this website constantly decry.
I agree with you that you cannot assume an accurate estimate of the mean just because you have a large sample size – you might have a very precise estimate that is off because of systematic errors in your measurements.
Because they are UNKNOWABLE, and pretending they can be known and doing such to historic data is FRAUDULENT.
“Enormous effort is put into trying to identify and remove such biases from the data. “
No, there actually is *NOT*. Hubbard and Lin found clear back in 2002 that you simply can *NOT* apply regional adjustments to individual measurement stations to account for systematic bias because of the differences in the microclimate at each station.
Is the grass below the station light green or dark green? What is the height of the grass? Is it even grass at all? Or is it sand? Is it concrete? Is it gravel? Is there a lake or pond nearby that increases humidity at one station but not another? Are nearby roads asphalt, concrete, gravel, or dirt? What is the elevation at each station? That will affect all kinds of variables such as prevailing winds. Is the station on the east side of a mountain or on the west side?
These are just a FEW of the variables that would have to be considered. And that doesn’t even account for the physical calibration drift of the measurement station itself, including any electronic sensor and associated circuitry!
“I agree with you that you cannot assume an accurate estimate of the mean just because you have a large sample size – you might have a very precise estimate that is off because of systematic errors in your measurements.”
Then why does climate science never include measurement uncertainty in any of their claims? An anomaly of +0.5C is far different than an anomaly properly stated as +0.5C +/- 0.7C! Properly stated climate science simply can’t tell differences of less than 0.7C!
Only if you are measuring the same quantity!
Why is this so hard to understand?
Once again, like Stokes, bdgwx and others, AlanJ demonstrates his inability to distinguish between accuracy and precision.
He tries to defend applying fraudulent “adjustments” to historic data, then whines when someone calls him out for it.
I’ll take being lumped in with those folks as a strong indication that I’m on the right track, so thanks for the affirmation.
Stokes sophistry is infamous, good job, dude.
The SEM is touted as the end-all-be-all of measurement uncertainty by the climate alarmists, but none of the trend line manufacturers bother to report values of SEM for any of the myriad averages they compute along the way. The UAH certainly doesn’t. And they invariably assume the instrumental uncertainties are zero.
“…but none of the trend line manufacturers bother to report values of SEM for any of the myriad averages they compute along the way.”
Certainly not Lord Monckton and his numerous pauses. Yet when I try to point out the uncertainty in the trend line, you call me a Marxist.
“The UAH certainly doesn’t. And they invariably assume the instrumental uncertainties are zero.”
Nobody thinks satellite data has no uncertainty. Some here will insist UAH data is the most reliable, but that might just be because it shows the least warming.
You STILL haven’t figured it out. The “uncertainty” you speak of is the best fit measurement between the data and the trend line – and it assumes the data is all 100% accurate!
Think of it this way, instead of using a fine point pencil to draw in the trend line, measurement uncertainty in the data would require you to use a magic marker with a 0.5″ width – or maybe even larger – to draw in the trend line.
You would actually have no idea where in that big, wide, black line the actual trend line is. It could be anywhere inside it; its location is actually UNKNOWN!
“Nobody thinks satellite data has no uncertainty. Some here will insist UAH data is the most reliable, but that might just be because it shows the least warming.”
It’s not just the satellite measuring instrument that must be considered. But that is usually what is mentioned when speaking of the uncertainties in the UAH. You must also include the fact that the satellite does *NOT* measure cloud cover at the time and location the irradiance is measured – so you have a large uncertainty in the irradiance measurement just from that. What is actually being measured is the atmosphere and how that relates to surface temp at any specific time and location is also unknown, another source of measurement uncertainty.
If the UAH was presented as an INDEX instead of a temperature, and the anomalies were calculated using propagated measurement uncertainties, then UAH would make more sense. Instead of playing all kinds of number games to convert irradiance to temperature, just present the irradiance measurements along with their associated measurement uncertainty and use *that* as the index. It would make far more sense.
“would require you to use a magic marker with a 0.5″ width – or maybe even larger – to draw in the trend line.”
Or you could do the sensible thing and actually work out the confidence interval and draw or shade the area. You know, the way I keep drawing the Pause and the way Monckton doesn’t.
Where are the SEM numbers?
I don’t see them.
What are you whining about now? What SEM? It’s a linear regression, not a mean.
And just for the record, your confidence interval assumes the UAH monthly numbers have ZERO uncertainty. The CI would look very different if computed with the actual uncertainty intervals.
“And just for the record, your confidence interval assumes the UAH monthly numbers have ZERO uncertainty”
It does not. It assumes random variance about the trend, with adjustments made for the autocorrelation. It makes no assumption as to what caused the variation.
I suppose there’s little point asking you again, if you are complaining that my uncertainty is too small or too big. As always you want to imply I’m underestimating the uncertainty, whilst ignoring it for the purpose of claiming there has been no warming.
I should be surprised that a math major has no idea what I’m talking about, but then I remember the source…
Do you have your battery car yet to prevent “warming”?
“It does not. It assumes random variance about the trend, with adjustments made for the autocorrelation. It makes no assumption as to what caused the variation.”
You are STILL assuming that all measurement uncertainty cancels leaving only the stated values. Then the variance of your stated values defines the uncertainty – i.e. the best-fit index!
If you include the measurement uncertainty of the data then what happens? You wind up having to calculate all the combinations of possible values for the data points and calculating a trend line for each possible combination – meaning your trend line becomes smeared over a wide interval. I. e. a 1/2″ wide marker!
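A toy version of that idea, randomly perturbing within an assumed per-month uncertainty rather than literally enumerating every combination (the series, the trend, and the 0.2 C uncertainty are all invented):

```python
import numpy as np

# Perturb each monthly value within an assumed +/- u measurement uncertainty and
# refit the trend many times, then look at the spread of the fitted slopes.
rng = np.random.default_rng(3)
months = np.arange(120)
series = 0.0015 * months + rng.normal(0.0, 0.15, size=months.size)   # hypothetical anomalies
u = 0.2                                                              # assumed per-month uncertainty

slopes = []
for _ in range(2_000):
    perturbed = series + rng.normal(0.0, u, size=series.size)
    slopes.append(np.polyfit(months, perturbed, 1)[0])   # slope of the refitted line

print("spread of fitted slopes (C per month):", round(float(np.std(slopes)), 5))
```

Whether that spread should be added to the ordinary regression uncertainty, or is already partly contained in it, is what the replies below argue over.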
“You are STILL assuming that all measurement uncertainty cancels leaving only the stated values.”
Stop lying. I make no such assumption.
“Then the variance of your stated values defines the uncertainty – i.e. the best-fit index!”
Just explain what you mean by a “best-fit index”.
“If you include the measurement uncertainty of the data then what happens?”
You can do that if you want. Just put the assessed uncertainty into the equation in place of the standard deviation of the residuals. You can then combine that with the uncertainty derived from the variance of the residuals. But you will be double counting the measurement uncertainty.
“You wind up having to calculate all the combinations of possible values for the data points and calculating a trend line for each possible combination”
That’s effectively what the standard equation does. But you don’t want every possible combination, just the most probable.
“meaning your trend line becomes smeared over a wide interval”
Do you want it to be even bigger than the one I used to demonstrate how meaningless the pause trend is? You can do that – just increase the confidence interval.
“I. e. a 1/2″ wide marker!”
I.e.? 1/2″? What if I just make the graph bigger?
Here’s the pause with a 99.9% confidence interval.
Do you want a cracker now?
You still assume standard stats tells you everything about uncertainty.
IT DOESN’T.
“Stop lying. I make no such assumption.”
Of course you do! You don’t even know when you do it! When you use the stated value by itself in order to calculate the best-fit index to a trend line while ignoring the associated measurement uncertainty that goes with each stated value, YOU ASSUME ALL MEASUREMENT UNCERTAINTY CANCELS, LEAVING ONLY THE STATED VALUES!
“Just explain what you mean by a “best-fit index”
A scatter plot of the residuals. Do the residuals have a pattern? What is the average residual? Is it large compared to the absolute values, or is it small?
Without this you can’t really even tell if a linear trend line is appropriate or not.
See the attached graphic. In the first column the residuals are pretty small and close together. A linear trend line is a good fit. In the second column the residuals appear to be sinusoidal meaning a linear trend line is *not* a good fit for the data. In the third column the residuals are all over the place and are large – there is no way to determine the significance of the slope of the trend line, it might actually really be zero.
Why does climate science never use this method on their data? They ignore it just like they ignore measurement uncertainty!
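For anyone who wants to run the check being described, a minimal sketch on synthetic data (the 0.02 slope and the 0.5 noise level are made up; this is not any real temperature series):

```python
import numpy as np

# Fit a line, compute residuals, and summarize their size; plotting them against x
# is the visual version of the three-column comparison described above.
rng = np.random.default_rng(4)
x = np.arange(100, dtype=float)
y = 0.02 * x + rng.normal(0.0, 0.5, size=x.size)

slope, intercept = np.polyfit(x, y, 1)
residuals = y - (slope * x + intercept)

print("mean residual        :", round(float(residuals.mean()), 4))    # ~0 by construction of least squares
print("residual scatter (SD):", round(float(residuals.std(ddof=2)), 3))
```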
“You can do that if you want. Just put the assessed uncertainty into the equation in place of the standard deviation of the residuals. You can then combine that with the uncertainty derived from the variance of the residuals. But you will be double counting the measurement uncertainty.”
Huh? How are you double counting the measurement uncertainty? The measurement uncertainty is *NOT* the variation in the stated values. If it was you wouldn’t ever need to know the measurement uncertainty! You would be able to ignore any systematic measurement uncertainty – which of course you can’t since it can’t be identified by the variation in the stated values!
“That’s effectively what the standard equation does. But you don’t want every possible combination, just the most probable.”
No, it doesn’t. It can’t if you ignore the measurement uncertainty!
“What if I just make the graph bigger?”
Scaling doesn’t change the relationship. This is the same mathematical idiocy as assuming the variance of an anomaly is smaller than the variances of the components of the anomaly!
This is exactly what he believes, having a value plus an interval for every data point is anathema to him.
The real issue is that he STILL has no idea what measurement uncertainty is; if he did, he wouldn’t write nonsense like this line.
The confidence interval you are speaking of is *NOT* based on the measurement uncertainty.
You are graphing the difference between the assumed 100% accurate data and the trend line.
You *still* haven’t figured out what measurement uncertainty is!
As always, you never seem to realize that the large uncertainty is one big reason for not taking any notice of the “pause”. You never worry when the pause is presented with zero uncertainty, and are quite happy to claim with zero uncertainty that this proves CO2 cannot be responsible for warming. Yet when I try to show how much uncertainty there is a in short time period, you start insisting there should be more uncertainty.
“The confidence interval you are speaking of is *NOT* based on the measurement uncertainty.”
It’s based on the variation in the monthly data. That variation can come from any source, including measurement errors. If the uncertainty in the monthly values was enormous (your claimed multiple degrees) then that should be reflected in the variability of the data.
Of course you’ll then talk about systematic errors. But that makes little sense in the uncertainty of the trend. Any systematic bias will just move the line up or down, the angle remains the same.
And as I’ve said before – the thing you should be looking at is the possibility of a changing systematic error. That could explain why UAH and other data sets have different trends. But it’s still nothing like the uncertainties you keep claiming.
More hand-waving, you have a special talent here.
Your lack of reading comprehension skills is showing again. It is KM and I that have been pointing out that the large measurement uncertainty in UAH makes it unfit for the purpose of identifying global average temperature differences in the hundredths digit.
That does not mean that we can’t use what CoM is doing, we just realize that the actual value is UNKNOWN. Which most in climate science don’t realize!
“It’s based on the variation in the monthly data.”
One more time. Variation in the monthly data is only useful if you assume measurement uncertainty is random, Gaussian, and cancels. Why do you think Possolo, in TN1900, makes the assumptions that he does? No systematic bias. No random error. The same instrument each time. The same environment each time. In essence he is doing what YOU do – assumed all measurement uncertainty is zero!
This is *NOT* a valid assumption in the real world of temperature measurement across the globe with multiple measurements of different things taken by different devices under differing conditions!
“If the uncertainty in the monthly values was enormous (your claimed multiple degrees) then that should be reflected in the variability of the data.”
If you don’t know what the measurement uncertainty is then how does it get included? A measurement should be given as “stated value +/- measurement uncertainty”. If you just drop the measurement uncertainty piece of it then how does it get included in the stated value? Especially if it is systematic bias?
“Of course you’ll then talk about systematic errors. But that makes little sense in the uncertainty of the trend.”
Malarkey! If the data was all from multiple measurements of the same thing taken by the same instrument under the same conditions then this would be true. BUT THAT IS NOT THE CASE in real-world temperature measurement!
“Any systematic bias will just move the line up or down, the angle remains the same.”
NO! Not if the systematic bias is different for each data value! You are right back to assuming that all measurement uncertainty is random, Gaussian, and cancels.
You just can’t get away from that meme, can you?
And never will.
“The “uncertainty” you speak of is the best fit measurement between the data and the trend line – and it assumes the data is all 100% accurate!”
If you actually tried to think about what you were saying instead of cutting and pasting these meaningless catchphrases, it might be possible to figure out what you mean.
The trend line is the best fit to the data (where best fit in this case means minimizing the squares of the errors). But the uncertainty of the slope and the intercept is the range of possible trends that are most likely given the observed data. This depends on the variance of the data and the number of observations.
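For reference, one standard textbook expression consistent with that description, sketched on invented numbers; note it contains only the residual scatter and the number and spread of x values, with no term for stated per-point measurement uncertainty and no autocorrelation adjustment, which is what the rest of this thread argues over:

```python
import numpy as np

# Ordinary least-squares slope and its standard error on made-up data.
rng = np.random.default_rng(5)
x = np.arange(60, dtype=float)
y = 0.01 * x + rng.normal(0.0, 0.3, size=x.size)

slope, intercept = np.polyfit(x, y, 1)
residuals = y - (slope * x + intercept)
s = np.sqrt(np.sum(residuals**2) / (x.size - 2))     # residual standard deviation
se_slope = s / np.sqrt(np.sum((x - x.mean())**2))    # standard error of the slope

print(f"slope = {slope:.4f} +/- {se_slope:.4f} (1 sigma)")
```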
“It’s not just the satellite measuring instrument that must be considered.”
Indeed. There are lots of reasons to be skeptical about satellite data sets, including all the code needed to transform the raw data into a three-dimensional temperature model. But that doesn’t justify just assuming huge errors in any monthly figure. However bad UAH is, it manages to agree reasonably well with all the other data sets.
” Instead of playing all kinds of number games to convert irradiance to temperature, just present the irradiance measurements along with their associated measurement uncertainty and use *that* as the index.”
As I keep telling you and karlo, if you don’t like UAH, you should take it up with Dr Spencer. And then explain the problem to those who keep treating UAH as the only trustworthy temperature data set.
“But the uncertainty of the slope and the intercept is the range of possible trends that are most likely given the observed data. This depends on the variance of the data and the number of observations.”
The variance of the data is only useful if you assume all measurement uncertainty is random, Gaussian, and cancels! This can only happen if you are measuring the same thing multiple times using the same instrument under the same environment with no associated systematic uncertainty.
You say you never assume that measurement uncertainty is random, Gaussian, and cancels but you use the assumption every single time you make a post!
Again, you are doing the best fit to the STATED VALUES of the observed data while assuming there is no measurement uncertainty associated with those observed data elements.
“But that doesn’t justify just assuming huge errors in any monthly figure.”
Of course it does! The uncertainty grows with every measurement added to the data set. The SEM is *NOT* the measurement uncertainty, the measurement uncertainty propagated from the individual measurements is the measurement uncertainty of the average!
“However bad UAH is, it manages to agree reasonably well with all the other data sets.”
That means nothing more than all the other data sets are just as bad!
It’s not my job to take up anything with anyone. I post my objections, that’s all I need to do. No one yet has shown how temperature measurements around the globe, be they from thermometers or satellites, have zero measurement uncertainty. It is just assumed that they have none! That allows the SEM to be used as the measurement uncertainty of the average and it can be made vanishingly small!
“The variance of the data is only useful if you assume all measurement uncertainty is random, Gaussian, and cancels!”
Would someone give Tim a push – his needle’s stuck again.
“Of course it does! The uncertainty grows with every measurement added to the data set.”
Only in your own bizarro world.
“That means nothing more than all the other data sets are just as bad!”
What an amazing coincidence that all these many bad data sets all follow each other so closely.
“No one yet has shown how temperature measurements around the globe, be they from thermometers or satellites, have zero measurement uncertainty.”
Might have something to do with nobody thinking they do have zero uncertainty.
Yes, it’s true, CMoB lives inside your skull, rent free.
The problem with your latest strawman, now blazing away in the night sky, is that Christopher isn’t a trend line manufacturer.
And nice whine, BTW, CMoB had you pegged right from the start line.
“is that Christopher isn’t a trend line manufacturer. ”
Really? I must have imagined all those pause trends, or the realitiometer, or all those presentations claiming there was a strong negative trend.
Hey clueless person, what air temperature data set does CMoB publish?
Idiot.
He doesn’t publish any data set. Just cherry-picks trend lines on existing ones. Do you have a point?
I love how the Green Blob gets triggered by your figures 1 & 2. 😀
What is this post about?
You say
What if the models are wrong, including modeled UAH anomalies?
Is an anomaly of +0.8777 degC at Sydney Airport the same at Marble Bar, Hobart or, say, Townsville? If so, how come temperatures measured at those places show no trend?
Except for changing from 230-litre to 60-litre Stevenson screens, replacing thermometers with automatic weather station sensors, forgetting to clean dust and grime off the screen, increasing the area of tarmac, shifting the site or building a road, and fiddling the data using various forms of homogenisation, what could cause CO2 to warm the climate?
Oh that is right – changing from 230-litre to 60-litre Stevenson screens, replacing thermometers with automatic weather station sensors, forgetting to clean dust and grime off the screen, increasing the area of tarmac, shifting the site or building a road, and fiddling the data using various forms of homogenisation; that’s what.
In other words, what if the underlying, basic arguendo was bollocks?
If no medium and long-term Australian weather station datasets are able to show unequivocal trend, how can it be claimed that temperature anomalies (including UAH) are increasing?
All the best,
Dr Bill Johnston
http://www.bomwatch.com.au
Salute!
As with many previous complaints, why cannot we have actual temperatures in addition to the anomalies?
As an engineer doing data reduction and such for real world systems ( not climate models), it was not hard to see trends and such from the raw data of vibration, pressure, gee, velocity, etc of a system being tested.
The “anomaly” stuff depends on the “baseline” used.
If possible, the actual recorded stuff would be great, and we can all stand it here from when the first thermometers and weather stations recorded stuff. The anomaly crapola is too easy to skew and fool with, IMHO.
Gums whines….
Follow the links in Dr Spencer’s monthly updates to the UAH reports. UAH TLT (The Lower Stratosphere) is an estimated average of temperatures in the air above the surface but below the upper stratosphere.
These are obviously much colder than surface temperatures, which means they are in the negative on the Celsius scale. For instance, the warmest TLT month ever recorded by UAH was July this year, with an average temperature of 266.06 on the Kelvin (K) scale. Zero degrees C is 273.15 K.
So it would be fairly meaningless to publish the absolute value if you’re comparing your data with surface temperatures that are much warmer. That’s why they all use anomalies, including UAH.
Sorry, TLT is The Lower Troposphere, not stratosphere.
Correct-a-mundo.
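To spell out the arithmetic in that explanation (the 265.31 K baseline below is a made-up figure, purely to show the mechanics; it is not UAH’s published baseline):

```python
# The quoted July TLT value (266.06 K) as an absolute temperature versus as an anomaly.
KELVIN_OFFSET = 273.15

july_tlt_k = 266.06   # value quoted in the comment above
baseline_k = 265.31   # hypothetical July baseline mean, for illustration only

print("absolute :", round(july_tlt_k - KELVIN_OFFSET, 2), "C")   # about -7.09 C
print("anomaly  :", round(july_tlt_k - baseline_k, 2), "C")      # +0.75 C, comparable across datasets
```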
“As an engineer doing data reduction and such for real world systems ( not climate models), it was not hard to see trends and such from the raw data of vibration, pressure, gee, velocity, etc of a system being tested.”
Your lack of reading comprehension skills is showing again. Did you not read this statement at all?
I was simply pointing out that the reason anomalies are used in global climate data sets is for ease of comparison.
Why do you think UAH use anomalies rather than absolute temperatures?
Or I should say, they all use absolute temperature (K) in their scientific work (including the dreaded models), but they all convert it to C, or sometimes F in the US, so that we mortals know what they mean in our temperature reality.
It’s not a big mystery and anyone can convert anomalies to whatever scale they like with the available information. It’s just a very confusing way to look at it.
This is a point all sides of the debate should recognize.
Dear David,
The problem remains that while UAH anomalies with their strong El Niño signal appear to show ‘trend’, minus the effect of site and instrument changes (and rainfall), surface temperature data observed at individual weather stations from across Australia does not.
All the best,
Dr Bill Johnston
http://www.bomwatch.com.au
On the topic of anomalies, and a question to anybody interested…
How are the anomaly baselines calculated, and what are their uncertainties?
Dear Old Cocky,
Depends …. I suspect on the desired answer. I suspect also that anomalies are expected not to have uncertainties.
I have multiple layers of analyses, where faults in target-site data are detected (by difference) and adjusted using anomalies calculated for up to 30 other ‘faulty’ datasets.
So, regardless of the (30-year) baseline, how can faulty data for up to 30 neighboring sites, specifically selected on the basis that their first-difference data are highly correlated (Pearsons) with that of the target site, be used to both detect and adjust faults in target-site data?
Also, how can it be that data for Sydney Observatory potentially influence homogenisation of data for Alice Springs (via a few stops in between)?
All that aside, while UAH anomalies with their strong El Niño signal appear to show ‘trend’, minus the effect of site and instrument changes (and rainfall), surface temperature data observed at individual weather stations from across Australia (including Sydney Observatory and Alice Springs) does not.
All the best,
Dr Bill Johnston
http://www.bomwatch.com.au
Only if you happen to know the numbers that were subtracted.
Anomalies inherit the very same uncertainties the base components have. Therefore they are *not* a good tool for comparisons past the last decimal place of the uncertainty.
UAH is actually not a temperature even though it is used as one. It is, at best, an index based on very poor sampling. It has at least as much measurement uncertainty as land and ocean based thermometers. So the anomalies can’t tell you anything past the decimal place of the uncertainty.
UAH has the very same problem that climate models have – it doesn’t handle clouds at all. When it reads the irradiance of the atmosphere it has no way to adjust for the water vapor and clouds in the atmosphere at the point of measurement. Therefore they parameterize things in their algorithms, hoping to get it right – but the climate models don’t get it right and it is unreasonable to expect UAH to do any better.
Most of climate science has never heard of “significant digit rules” or of measurement uncertainty. That is proven by their use of the SEM as a measure of the accuracy of an average value. The SEM simply can’t tell you how accurate the population mean is, that is the function of measurement uncertainty which is totally ignored.
If the daily mid-range value is calculated from thermometers with a measurement uncertainty of +/- 0.5C then the mid-range value will have an uncertainty of +/- 0.7C. Nothing calculated from those mid-range values can have a measurement uncertainty less than +/- 0.7C – except the SEM, which is *NOT* a measurement accuracy index. The SEM is not even considered to be a valid statistical descriptor of a data set. It is only a descriptor of how precisely you have located the population average – and that population average can be wildly inaccurate!
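For readers wanting the arithmetic behind the +/- 0.7 figure, here it is, shown without endorsing which line applies; that is part of the dispute running through this thread:

```python
import math

# A daily mid-range value is (Tmax + Tmin) / 2, built from two readings each carrying
# an assumed +/- 0.5 C uncertainty.
u_reading = 0.5

u_sum = math.sqrt(u_reading**2 + u_reading**2)   # root-sum-square of the two readings, ~0.71 C
u_midrange = u_sum / 2                           # conventional propagation for (Tmax + Tmin) / 2, ~0.35 C

print(f"RSS of the two readings          : +/- {u_sum:.2f} C")
print(f"after dividing by 2 for the mean : +/- {u_midrange:.2f} C")
```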
No Tim,
While well down the WUWT list by now, we have been down this rabbit hole before.
Instrument uncertainty is +/- half the index range. For a Celsius thermometer with a 0.5 deg index, it is +/- 0.25 degC, usually rounded to 0.3 degC.
The uncertainty of an observed T-estimate is instrument uncertainty plus eyeball/observer/transcription uncertainty (which is unknown).
Added to those uncertainties is the uncertainty that the observation truly reflects the temperature of the air being monitored. This uncertainty includes site effects – the state of the screen etc. whether the lawn is well-maintained (not watered) …. Also unknown.
Resolving these uniquely different sources of uncertainty would require a specific experiment where sources of uncertainty were allocated to ‘treatments’ and where replication could be used to test the magnitude of their various effects. I have found no examples of such an experiment.
All the best,
Bill Johnston
http://www.bomwatch.com.au
You ignored just about everything Tim wrote.
BS.
You don’t know what the systematic error is unless you can measure it. At this point of the discussion, think about how that could be estimated – you start.
We have discussed previously (endlessly) that an ‘accurate’ rapid-sampling electronic instrument converts variance into signal, which is different to systematic error. There is always under those circumstances a need to extract the signal from the noise typical of such instruments – remember the issue of time-constant and time-averaging.
At least discuss the issue from the point of view of someone who has been trained-in and has experience in undertaking temperature observations, and with working with electronic instruments.
Ponder my statement:
“Resolving these uniquely different sources of uncertainty would require a specific experiment where sources of uncertainty were allocated to ‘treatments’ and where replication could be used to test the magnitude of their various effects. I have found no examples of such an experiment”
All the best,
Bill Johnston
“which is different to systematic error.”
So what? It still doesn’t eliminate systematic bias or identify it.
Hubbard wrote a paper around 2002 analyzing the in-built uncertainties in PRT temperature stations. They *do* have systematic uncertainty due to component drift and therefore calibration drift. No matter how rapidly you sample it doesn’t remove calibration drift.
“At least discuss the issue from the point of view of someone who has been trained-in and has experience in undertaking temperature observations, and with working with electronic instruments.”
You *MUST* be kidding. This is just the argumentative fallacy of Ad Hominem. I *have* over 60 years experience in measuring things, including temperature, and I have used electronic instruments since I was 13 years old (60 years ago), from lecher wires to grid dip oscillators to analog oscilloscopes to analog computers to all types of modern digital measurement devices such as voltmeters, spectrum analyzers, and oscilloscopes.
I understand measurement uncertainty quite well. You learn it pretty quickly when cutting things like crown molding where a tiny error really stands out when someone is looking at the corner joints!
I still can’t figure out where this guy is coming from.
Nonsense, you have no clue about which you type.
And you assume that I don’t.
You need to look closely at the word: UNcertainty should be a clue for you, it is UNKNOWN.
You are doing the same thing all over again.
The 1/2 interval value is an estimate of the READING error for an analog instrument. It says *NOTHING* about the systematic bias that instrument might have. What is the 1/2 interval uncertainty for a digital readout?
The state of the microclimate *IS* systematic bias and cannot be accounted for with statistical analysis.
U(total) = U(random) + U(systematic)
If U(total) = +/- 0.5C then when combined with a separate measurement taken at a different time the combined total uncertainty will be +/- 0.7C.
If you don’t know either U(random) or U(systematic), which you don’t in an unattended, uncalibrated field temperature measurement station, then all you can do is use U(total) as the propagated uncertainty.
We agree that: “The 1/2 interval value is an estimate of the READING error for an analog instrument.”
However I don’t agree with you claiming: “It says *NOTHING* about the systematic bias that instrument might have. What is the 1/2 interval uncertainty for a digital readout?”
For a calibrated instrument there is no systematic error associated with instrument uncertainty. So what if the ice-bath was systematically biased at the 10th decimal place, which is well beyond the accuracy of the measurement scale?
Further, by definition, the uncertainty of a digital instrument is still 1/2 the interval scale of the succeeding decimal place, otherwise the value you see would be rounded internally by the instrument.
Your statement that “The state of the microclimate *IS* systematic bias and cannot be accounted for with statistical analysis” is also not true.
Of course parameters of the climate can be described statistically!
You are also mistaken in hypothesizing that:
U(total) = U(random) + U(systematic)
In this case,
U(actual) = U(estimate) + (+/- U(random)) + (+/- U(systematic))
For repeated estimates, as systematic is constant by definition (and unknown), it cancels.
The only control over systematic error is repeated calibration of the instrument, and in critical applications (but not weather observations), instruments may be calibrated against known standards weekly or even several times a day.
A weather thermometer is good for years and a careful observer always compares reset values with values reported at the same time by independent instruments.
For 9AM observations, this results in three estimates in total – max & min reset and 9AM dry-bulb.
An observer also checks for bubbles and the effect of wind-shake and apparent deterioration, staining within the capillary for example.
SEM also has a defined statistical meaning.
Kind regards,
Bill Johnston
Dear Tim,
Why do you rehash this stuff, run away then conduct a rearguard action on another day? Hit-fix, nothing better to do?
I don’t think you have ever been trained-in or undertaken weather observations.
Instead of crapping-on, why not just be open and honest that you are in it for the ride, the stir? I have less boorish people to mix with.
You have not justified anything you have said. Nevertheless I am here to help you with your misunderstandings about temperature observations. At http://www.bomwatch.com.au I have shown repeatedly how to detect and correct for systematic uncertainties. The rest is simply up to you.
(Reminds me that running the same stuff through the horse over and over generates the same quality manure!)
Yours sincerely,
Dr Bill Johnston
“I don’t think you have ever been trained-in or undertaken weather observations.”
Another ad hominem. Instead of just making ad hominem attacks, why don’t you show where my assertions are wrong?
I have justified *everything* I have asserted. From the treatments of uncertainty by Taylor, Bevington, Possolo, and the GUM. You haven’t shown where any of those experts disagree with my assertions.
I just posted the applicable quote from Bevington. Here is what Taylor has to say:
“As noted before, not all types of experimental uncertainty can be assessed by statistical analysis based on repeated measurements. For this reason, uncertainties are classified into two groups: the random uncertainties, which can be treated statistically, and the systematic uncertainties, which cannot.” (italics in the text)
Again, if the total uncertainty is a combination of random and systematic error and you don’t know the value of either, then you cannot separate them out and treat them differently. The total uncertainty in a measurement *IS* a sum of the random error and the systematic bias. There is no such thing as U(estimate). U(random) may be estimated. U(systematic) may be estimated. But there is no standalone U(estimate).
And that stuff from the horse is what you claim.
Bill’s insults are always signed with: “cheers, mate”.
“For a calibrated instrument”
This is the problem with your entire post. How many field temperature measurement stations are calibrated before making each temperature measurement?
“For repeated estimates, as systematic is constant by definition (and unknown), it cancels.”
If you are measuring different things then no, it does *NOT* cancel. It adds. If you are building a composite beam to span a foundation, the final error is the accumulated systematic bias of your measuring device. If you use six boards to create the beam, the total measurement uncertainty includes the sum of the systematic bias for each of the six boards. That bias simply can *NOT* cancel.
If you are measuring the same thing then the systematic error still doesn’t cancel. It remains constant. If you make 100 measurements, each having the same systematic bias of 1″ – say x″ + 1″ +/- u(x)″ – then the sum of the measurements becomes 100x″ + 100″ +/- 100u(x)″. Divide by 100 to get the average value and you get x″ + 1″ +/- u(x)″. The systematic uncertainty does *NOT* cancel. The average has the same systematic bias that each individual element has.
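A quick numerical check of that point (the 1-inch offset and the scatter are made-up numbers for illustration): averaging shrinks the random scatter of the mean, but the constant offset carries through to the average untouched.

```python
import numpy as np

rng = np.random.default_rng(2)

true_length = 96.0      # inches, the quantity being measured (hypothetical)
bias = 1.0              # constant systematic offset of the tape, inches (assumed)
u_random = 0.125        # random scatter of a single reading, inches (assumed)

readings = true_length + bias + rng.normal(0.0, u_random, size=100)

print(f"error of the 100-reading average: {readings.mean() - true_length:+.3f} in")
print(f"scatter of individual readings:   {readings.std(ddof=1):.3f} in")
# The average still sits about 1 inch from the true value: the bias does not cancel.
```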
Bevington: “Errors of this type are not easy to detect and not easily studied by statistical analysis.”
Far too many in climate science assume that measurements with systematic bias *can* be studied by statistical analysis, typically by assuming that systematic bias cancels across multiple measurements. The fact is that it does *NOT*.
Imagine what might be the result if Bill was responsible for stress-strain measurements.
Of course the same is true for Stokes, bellcurveman, blob, and all the other chicken little trendologists.
They will never acknowledge this.
Total, complete, unadulterated bullshit.
Do you just make this stuff up as you go along?
Dear karlomonte and Tim,
Some things are worth replying to, and some things are not (i.e., you either understand the problems or you do not). In your haste to attack, here is a list of specific issues you have ignored or not thought about.
Instrument uncertainty is +/- half the index interval. For a Celsius thermometer graduated in 0.5 degC intervals, it is +/- 0.25 degC, usually rounded to 0.3 degC, which it is by definition. It is therefore NOT +/- 0.7 degC. Remember errors are additive, but they can also be negative (e.g., 10 microns + (-0.5 microns)).
You don’t know what the systematic error is unless you can measure it. At this point of the discussion, think about how systematic error could be estimated. You have provided no guidance on how this could be attempted (i.e., you are walking away from the problem that you profess to be most concerned about).
You say “Hubbard wrote a paper around 2002 analyzing the in-built uncertainties in PRT temperature stations. They *do* have systematic uncertainty due to component drift and therefore calibration drift. No matter how rapidly you sample it doesn’t remove calibration drift.”
I have alluded to such problems before; they mainly show up as isolated spikes, and spikes are detectable as spikes.
While I have not detected significant drift in Bureau of Meteorology AWS-probes (i.e., an underlying trend or change that is unrelated to the air being measured), drift relative to a standard is detectable and is only corrected via a calibration protocol appropriate to the circumstances. You have not discussed the overarching problem, which is that ‘accurate’ rapid-sampling probes convert noise (variance) into signal.
Tim, you say you have worked with these things; surely you have come across (and solved) the problem.
The scope to deal with integrating AWS-probes and LIG data comes down to: changing the thermal mass of the instrument (so it behaves the same as the LIG thermometers it aims to replace); using an averaging method to minimise the influence of one-off spikes; or using error-trapping at source. The BoM have experimented with all three methods. Another possible error detection technique is to reject (flag) values that fall outside an error envelope, say 2 times the population standard deviation, calculated using first differences.
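A minimal sketch of that last flagging idea, on invented data (the spike, the noise level and the minute-by-minute series are my assumptions): flag any reading whose first difference falls outside twice the standard deviation of the first differences.

```python
import numpy as np

rng = np.random.default_rng(3)

# One day of 1-minute temperatures with a single injected spike (illustrative only)
minutes = 1440
temps = (18.0
         + 3.0 * np.sin(np.linspace(0.0, 2.0 * np.pi, minutes))
         + rng.normal(0.0, 0.05, minutes))
temps[700] += 4.0                       # one-off spike, e.g. an electronic glitch

diffs = np.diff(temps)
envelope = 2.0 * diffs.std(ddof=1)      # 2 x standard deviation of first differences

# Index of the later of the two points in each out-of-envelope difference
flagged = np.where(np.abs(diffs) > envelope)[0] + 1
print("flagged indices:", flagged)      # the jump into and out of the spike: 700 and 701
```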
While they might use data, most climate scientists have never undertaken weather observations. Likewise, few of those that have observed the weather are also climate scientists. Furthermore, I have never suggested either of you are not proficient in measuring things.
Your replies to me are impolite, accusatory and openly hostile, which under the circumstances and given that our ages and levels of overall experience are probably equivalent, is not appropriate.
Uncertainty is about the instrument, not about how the instrument is used. A biased instrument can still cut perfect circles or crown moulding and it is unlikely that in a workshop every cutter is reset between every job. You say Tim “So what? It still doesn’t eliminate systematic bias or identify it”, I say, how do you know bias exists if you cannot measure it? How do you know that the guy under the bed is going to grab you when you step-out in the dark, if you don’t know he is there?
You say Tim “Nonsense, you have no clue about which you type”. In reality I have developed a series of BomWatch protocols for assessing multiple attributes of long series of daily weather observations, which you and karlomonte are free to browse at http://www.bomwatch.com.au. Perhaps you guys run a site where you have undertaken equivalent work, but then, perhaps not.
You say Tim “You need to look closely at the word: UNcertainty should be a clue for you, it is UNKNOWN.”
Uncertainty is measurable, therefore known, and can be stated in various units depending on the application. It is virtually impossible to publish a paper without providing measures of uncertainty appropriate to the estimator. Because you seem to not understand this, does not mean your assertions are correct.
In statistical parlance a mean consists of x̄ +/- error. Because it can’t be measured, systematic error is presumed to be minimal (i.e., it is fairly presumed that the instrument is calibrated relative to an unvarying standard). You can yell black-and-blue and run around the paddock, but that is just a fact. It is also probably true that when you start your car or truck you don’t undertake a re-calibration exercise across the instrument panel, but perhaps you do.
It seems that anything you don’t agree with you label as an ad hominem. For example, I say: “I don’t think you have ever been trained-in or undertaken weather observations.” You say, without addressing the issue, “Another ad hominem. Instead of just making ad hominem attacks, why don’t you show where my assertions are wrong?”
While I have shown you repeatedly where your assertions are wrong, why don’t you state openly whether or not you have been trained in, and have experience of, undertaking weather observations?
I understand from previous discussions that (from Taylor) “As noted before, not all types of experimental uncertainty can be assessed by statistical analysis based on repeated measurements. For this reason, uncertainties are classified into two groups: the random uncertainties, which can be treated statistically, and the systematic uncertainties, which cannot.” (italics in the text).
In reality, systematic uncertainties can be estimated, either relative to a standard (calibration), or statistically as a step-change in some attribute of the data. Systematic error is by definition uni-directional; otherwise the effect is random. Furthermore, in an ordinary least squares context, systematic (non-random) error is detectable in residuals (i.e., residuals are not iid). Although you might, how many people using Excel stats routinely examine residuals? (I don’t use Excel for stats.)
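A hedged sketch of that residual check on synthetic data (this is my illustration, not BomWatch code; the 0.6-unit step and the breakpoint are invented): fit a single ordinary least-squares line and compare the residual means either side of a candidate breakpoint – an unmodelled step-change shows up as a shift in the residuals.

```python
import numpy as np

rng = np.random.default_rng(4)

n = 100
x = np.arange(n, dtype=float)
# Synthetic series: mild trend + noise + a 0.6-unit step at observation 60
y = 0.01 * x + rng.normal(0.0, 0.2, n)
y[60:] += 0.6

# Ordinary least squares fit of a single straight line
slope, intercept = np.polyfit(x, y, 1)
residuals = y - (slope * x + intercept)

# The unmodelled step shifts the residual means either side of the breakpoint
print(f"mean residual, obs 0-59:  {residuals[:60].mean():+.3f}")
print(f"mean residual, obs 60-99: {residuals[60:].mean():+.3f}")
```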
I have developed protocols that detect systematic changes related to changing from manual rainfall observations to tipping-buckets; that detect differences between observers; that detect the effect on extremes of changing from thermometers to AWS-probes; that detect the effect of metrication on T-measurements; that detect the change from 230-litre to 60-litre Stevenson screens (and plastic screens); that detect when sites are sprayed-out or scalped of topsoil; that detect when site moves occur that are not reported in metadata and so-on.
I am not claiming, Tim, that in addition to your aggressive language skills, you and karlomonte don’t also possess numerical and statistical skills. However, your bluster leaves me unconvinced.
You say emphatically “Again, if the total uncertainty is a combination of random and systematic error and you don’t know the value of either, then you cannot separate them out and treat them differently. The total uncertainty in a measurement *IS* a sum of the random error and the systematic bias. There is no such thing as U(estimate). U(random) may be estimated. U(systematic) may be estimated. But there is no standalone U(estimate).” But you are wrong (see above).
Then, to sound off with his usual flair, karlomonte says “Imagine what might be the result if Bill was responsible for stress-strain measurements.” (Good on you that you have, but I never said I did stress-strain measurements.)
Then he says: “Of course the same is true for Stokes, bellcurveman, blob, and all the other chicken little trendologists”. No childish ad hominems there. Why not toss in racist as well?
Oh …
cheers, mate
Dr Bill Johnston
http://www.bomwatch.com.au
Nice rant, DOCTOR bill.
You still don’t understand uncertainty.
Thanks karlomonte, I am now turning my mind to other things.
cheers, mate
Dr Bill
You should have read the standards instead of pooh-poohing them; the people on the JCGM committee devoted a lot of time and effort to writing them.
Formal metrology is a fairly new discipline, dating back to only the late-1970s or so. But you have lots of companions; climatology as a whole has no desire for the truth.
Dear karlomonte, and I suppose Tim,
You have no idea what I have read or not read. You have also studiously avoided addressing most of the issues raised. I don’t know you but while you appear to have expertise in some of the fields under discussion, I find your arrogance and boorishness to be unhelpful in the extreme. You have essentially imparted no knowledge.
You show no indication that you have personally observed the weather, yet you present as an expert. Do you seriously not understand the various components (sources) of observational error, on which this whole global warming debate depends?
I’m left to surmise that while you have a breadth of experience in fields related to measurements, you are an academic bully, but I’d prefer to be wrong. What have you actually contributed to the discussion? Where have you lodged some analysis that underpins your theory and flat-earth view of data, its acquisition and processing? How would you test the fitness of a dataset for estimating trend or change?
While you say “Formal metrology is a fairly new discipline, dating back to only the late-1970s or so”, meteorological instrumentation (including the development of standards) and statistical theory (including the problem of measurement error and data assessment) commenced some two centuries earlier. Perhaps “formal metrology” has introduced some unique biases of its own.
In all that time, the statistical meaning of terms such as variance, SEM, standard deviation, bias, precision, ‘accuracy’, calibration, confidence intervals (for the location of a line) and prediction intervals (for estimating new values) has not changed.
Unfortunately, formal (post-1974) metrology still has to use historical data as baselines against which trend and change are estimated (if that is the objective).
cheers, mate (as if)
Bill Johnston
http://www.bomwatch.com.au
Ditto.
Yet you are allowed to be arrogant and boorish. One need look no farther than your rant here, or your treatment of JM. This makes you a hypocrite.
Quite ironic that you accuse me of being a flat-earther (and tip-toe around calling me a racist), while it is the trendologists (I won’t satisfy them with a list) who are the real flat-earthers.
What you don’t see (or want to see) is that while metrology uses statistics, it has no treatment of non-random errors that cannot be erased with averaging. This is outside of statistics.
I really don’t care if you were taking temperature measurements for James Cook back in 1636.
You made the outrageous comment: “Of course the same is true for Stokes, bellcurveman, blob, and all the other chicken little trendologists”; to which I replied: No childish ad hominems there. Why not toss in racist as well? Where have I accused YOU of being racist?
If you are such an expert: where have you lodged some analysis that underpins your theory and flat-earth view of data, its acquisition and processing? How would you test the fitness of a dataset for estimating trend or change? How would you assist JM to do the same?
Why was JM going on for a decade making claims about data that were not true? In particular, trends in data that were due to site moves and changes; sites she claimed were in the same place, when they had actually moved; and her search for overlap data that metadata showed did not exist. Why is any of this acceptable in a scientific context? While some interactions became overheated, why did you not call her out?
It is NOT true “that while metrology uses statistics, it has no treatment of non-random errors that cannot be erased with averaging. This is outside of statistics.” Read some of my reports where I outline how to detect and correct non-random (systematic) errors using covariance analysis and other statistical techniques.
If you had read the GUM (JCGM 100:2008, GUM 1995 with minor corrections) Section 2 instead of ruminating about it, we would not be having this ridiculous discussion.
Alluding to some of the previous commentary, Section 3.2.3 states: “Systematic error, like random error, cannot be eliminated but it too can often be reduced. If a systematic error arises from a recognized effect of an influence quantity on a measurement result, hereafter termed a systematic effect, the effect can be quantified and, if it is significant in size relative to the required accuracy of the measurement, a correction (B.2.23) or correction factor (B.2.24) can be applied to compensate for the effect. It is assumed that, after correction, the expectation or expected value of the error arising from a systematic effect is zero” (not zero +/- something, but zero, period – my emphasis).
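In code terms, what Section 3.2.3 describes is simply this (the offset and the readings are assumed numbers for illustration, not from the GUM): quantify the systematic effect against a standard, subtract it, and the expected error from that effect is then zero – what remains is the uncertainty of the correction itself, which still has to be propagated.

```python
# Assumed calibration result: an ice-point check finds the probe reads 0.15 C high.
calibration_offset = 0.15                     # quantified systematic effect, C

raw_readings = [21.37, 21.41, 21.33]          # hypothetical field readings, C

# GUM-style correction (B.2.23): subtract the quantified systematic effect
corrected = [t - calibration_offset for t in raw_readings]

# After correction, the expected error from this systematic effect is zero;
# the uncertainty OF the correction remains and must still be propagated.
print(corrected)
```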
How can you add value to the conversation when you have not even read the stuff you refer to?
Alternatively, which words do you NOT understand?
There is nothing in the GUM reports that I can find that relates to the evaluation of data quality, i.e., whether data are fit for purpose. So what about that important question: do you have a protocol?
Finally, I repeat: Do you seriously not understand the various components (sources) of observational error, on which this whole global warming debate depends?
Cheers, mate (not)
Dr Bill Johnston
http://www.bomwatch.com.au
No rant about Jennifer M. today? But I see that Herr Doktor has found nirvana in cherry picking through the GUM.
As for racism, it was Herr Doktor who tip-toed up to the line, here are your words:
And no, I’m not going to waste any of my time reading your rants.
/ignore nutterbill all