The New Pause lengthens by a hefty three months

By Christopher Monckton of Brenchley

On the UAH data, there has been no global warming at all for very nearly seven years, since January 2015. The New Pause has lengthened by three months, thanks to what may prove to be a small double-dip La Niña:

On the HadCRUT4 data, there has been no global warming for close to eight years, since March 2014. That period can be expected to lengthen once the HadCRUT data are updated – the “University” of East Anglia is slower at maintaining the data these days than it used to be.

Last month I wrote that Pat Frank’s paper of 2019, demonstrating by standard statistical methods that data uncertainties make accurate prediction of global warming impossible, was perhaps the most important ever to have been published on the climate-change question in the learned journals.

This remark prompted the coven of lavishly-paid trolls who infest this and other scientifically skeptical websites to attempt to attack Pat Frank and his paper. With great patience and still greater authority, Pat – supported by some doughty WUWT regulars – slapped the whining trolls down. The discussion was among the longest threads to appear at WUWT.

It is indeed impossible for climatologists accurately to predict global warming, not only because – as Pat’s paper definitively shows – the underlying data are so very uncertain but also because climatologists err by adding the large emission-temperature feedback response to, and miscounting it as though it were part of, the actually minuscule feedback response to direct warming forced by greenhouse gases.

In 1850, in round numbers, the 287 K equilibrium global mean surface temperature comprised 255 K reference sensitivity to solar irradiance net of albedo (the emission or sunshine temperature); 8 K direct warming forced by greenhouse gases; and 24 K total feedback response.

Paper after paper in the climatological journals (see e.g. Lacis et al. 2010) makes the erroneous assumption that the 8 K reference sensitivity directly forced by preindustrial noncondensing greenhouse gases generated the entire 24 K feedback response in 1850 and that, therefore, the 1 K direct warming by doubled CO2 would engender equilibrium doubled-CO2 sensitivity (ECS) of around 4 K.

It is on that strikingly naïve miscalculation, leading to the conclusion that ECS will necessarily be large, that the current pandemic of panic about the imagined “climate emergency” is unsoundly founded.

The error is enormous. For the 255 K emission or sunshine temperature accounted for 97% of the 255 + 8 = 263 K pre-feedback warming (or reference sensitivity) in 1850. Therefore, that year, 97% of the 24 K total feedback response – i.e., 23.3 K – was feedback response to the 255 K sunshine temperature, and only 0.7 K was feedback response to the 8 K reference sensitivity forced by preindustrial noncondensing greenhouse gases.

Therefore, if the feedback regime as it stood in 1850 were to persist today (and there is good reason to suppose that it does persist, for the climate is near-perfectly thermostatic), the system-gain factor, the ratio of equilibrium to reference temperature, would not be 32 / 8 = 4, as climatology has hitherto assumed, but much closer to (255 + 32) / (255 + 8) = 1.09. One must include the 255 K sunshine temperature in the numerator and the denominator, but climatology leaves it out.
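The apportionment above is simple enough to verify in a few lines (a sketch using only the round numbers quoted in the text; the variable names are mine):

```python
# Round figures from the article: 287 K = 255 K sunshine temperature
# + 8 K greenhouse reference sensitivity + 24 K total feedback response.
T_SUN = 255.0    # emission ("sunshine") temperature, K
T_GHG = 8.0      # pre-industrial greenhouse reference sensitivity, K
FEEDBACK = 24.0  # total feedback response in 1850, K

share_sun = T_SUN / (T_SUN + T_GHG)   # ~97% of pre-feedback warmth
fb_to_sun = FEEDBACK * share_sun      # ~23.3 K of feedback answers the Sun
fb_to_ghg = FEEDBACK - fb_to_sun      # ~0.7 K answers the greenhouse gases

# System-gain factor: climatology's apportionment vs. the corrected one.
gain_climatology = (T_GHG + FEEDBACK) / T_GHG                    # 32 / 8 = 4
gain_corrected   = (T_SUN + T_GHG + FEEDBACK) / (T_SUN + T_GHG)  # 287 / 263 ~ 1.09
```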

Thus, for reference doubled-CO2 sensitivity of 1.05 K, ECS would not be 4 x 1.05 = 4.2 K, as climatology imagines (Sir John Houghton of the IPCC once wrote to me to say that apportionment of the 32 K natural greenhouse effect was why large ECS was predicted), but more like 1.09 x 1.05 = 1.1 K.

However, if there were an increase of just 1% in the system-gain factor today compared with 1850 (from about 1.09 to about 1.10), which is possible though not at all likely, ECS by climatology’s erroneous method would still be 4.2 K, but by the corrected method that 1% increase would raise ECS almost fourfold, from 1.1 K to 1.01 x (287 / 263) x (263 + 1.05) – 287 ≈ 4 K.

And that is why it is quite impossible to predict global warming accurately, whether with or without a billion-dollar computer model. Since a 1% increase in the system-gain factor would raise ECS almost fourfold, from about 1.1 K to about 4 K, and since not one of the dozens of feedback responses in the climate can be directly measured or reliably estimated to any useful degree of precision (and certainly not within 1%), the derivation of climate sensitivity is – just as Pat Frank’s paper says it is – pure guesswork.
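The knife-edge dependence on the system-gain factor can be demonstrated numerically (again a sketch with the article’s round numbers; using the unrounded 1850 gain, the 1%-perturbed figure comes out close to 4 K):

```python
# ECS as a function of the system-gain factor, using the article's round numbers.
T_EQ_1850  = 287.0   # equilibrium global mean surface temperature in 1850, K
T_REF_1850 = 263.0   # 255 K sunshine temperature + 8 K greenhouse reference, K
REF_2XCO2  = 1.05    # reference doubled-CO2 sensitivity, K

def ecs(gain):
    """Equilibrium doubled-CO2 sensitivity implied by a given system-gain factor."""
    return gain * (T_REF_1850 + REF_2XCO2) - T_EQ_1850

gain_1850 = T_EQ_1850 / T_REF_1850      # ~1.09

ecs_same_gain = ecs(gain_1850)          # ~1.1 K if the 1850 feedback regime persists
ecs_1pc_more  = ecs(gain_1850 * 1.01)   # ~4 K after a mere 1% rise in the gain
```

The point of the sketch is only that a perturbation far too small to measure in the feedback regime swamps the quantity being estimated.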

And that is why these long Pauses in global temperature have become ever more important. They give us a far better indication of the true likely rate of global warming than any of the costly but ineffectual and inaccurate predictions made by climatologists. And they show that global warming is far smaller and slower than had originally been predicted.

As Dr Benny Peiser of the splendid Global Warming Policy Foundation has said in his recent lecture to the Climate Intelligence Group (available on YouTube), there is a growing disconnect between the shrieking extremism of the climate Communists, on the one hand, and the growing caution of populations such as the Swiss, on the other, who have voted down a proposal to cripple the national economy and Save The Planet on the sensible and scientifically-justifiable ground that the cost will exceed any legitimately-conceivable benefit.

By now, most voters have seen for themselves that The Planet, far from being at risk from warmer weather worldwide, is benefiting therefrom. There is no need to do anything at all about global warming except to enjoy it.

Now that it is clear beyond any scintilla of doubt that official predictions of global warming are even less reliable than consulting palms, tea-leaves, tarot cards, witch-doctors, shamans, computers, national academies of sciences, schoolchildren or animal entrails, the case for continuing to close down major Western industries one by one, transferring their jobs and profits to Communist-run Russia and China, vanishes away.

The global warming scare is over. Will someone tell the lackwit scientific illiterates who currently govern the once-free West, against which the embittered enemies of democracy and liberty have selectively, malevolently and profitably targeted the climate fraud?

Mike
December 2, 2021 10:15 pm

“There is no need to do anything at all about global warming except to enjoy it.”

And indeed I would were it to appear…
Thank you Lord Monckton.

Chaswarnertoo
Reply to  Mike
December 2, 2021 11:32 pm

Yep. I was hoping for a nice warm retirement. Maybe I’ll buy new skis instead.

Scissor
Reply to  Chaswarnertoo
December 3, 2021 5:01 am

In my area, most of the forecasters said that the high temperature record for the date set in 1885 would be challenged. In actuality, yesterday’s high fell short of the record by 3F.

Nothing to see here, and the warm front is being replaced by a cold front, so enjoying the warmth was nice while it lasted.

griff
Reply to  Mike
December 3, 2021 12:50 am

Yes, enjoy the heat dome and record 40C plus temps, enjoy the 1 in 1,000 year deluges sweeping away homes and drowning the subways and cutting off your major cities, enjoy the 100 mph winds cutting off your electricity for days.

Rod Evans
Reply to  griff
December 3, 2021 1:14 am

Do you imagine the examples of weather events you have highlighted are something unique to the 21st century then, griff?

Ron Long
Reply to  Rod Evans
December 3, 2021 1:47 am

Yea, like a 1 in 100 year flood now happens every 3 months? griff depends on fellow trolls not bothering to fact-check his manifesto.

Scissor
Reply to  Ron Long
December 3, 2021 5:04 am

There are likely over 7 billion once in a lifetime events happening every single day.

menace
Reply to  Ron Long
December 3, 2021 7:22 am

you and griffter lack a basic understanding of statistics…

a 1 in 100 year flood in a given location occurs roughly once every 100 years

a 1 in 100 year flood across 10,000s of different locations across the earth may very well occur once every few months

Gary Pearse
Reply to  menace
December 3, 2021 10:09 am

Menace, there is nothing in weather or statistics that says you can’t have three or more 100-yr floods, droughts, etc. within a year at one location. You may or may not thereafter see another for several hundred years. Your understanding of statistics (and weather) is that of the innumerate majority.

Ron Long is a geologist and you can be sure he understands both stats and weather along with a heck of a lot more.

Rory Forbes
Reply to  Gary Pearse
December 3, 2021 10:24 am

That was my understanding as well, Gary. Most people just don’t get statistics … statistically speaking, of course.

Don
Reply to  Rory Forbes
December 4, 2021 4:15 pm

very good!

Joseph Zorzin
Reply to  Gary Pearse
December 3, 2021 11:17 am

I suspect very few rivers are so well studied that it’s precisely known what the “once per century” flood might look like. But a study of the flood plain should suffice for guidance as to what land should not be developed and, if so, what sort of measures can be taken to minimize the risk. Hardly ever done, of course. Instead, often wetlands in floodplains are filled in and levees are built, pushing the flood downstream. Seems like more of an engineering problem, not one of man-caused climate catastrophe – unless you consider bad engineering to be man-caused.

Gilbert K. Arnold
Reply to  menace
December 3, 2021 11:02 am

@menace…. a 100 year flood has a 1% chance of occurring each and every year. It is possible to have more than one 100 year flood in a given year. Read up on recurrence intervals in any good fluvial hydraulics textbook.
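For anyone who wants the recurrence-interval arithmetic spelled out, here is a minimal sketch (illustrative figures only, assuming independent 1%-per-year events at each site):

```python
from math import comb

p = 0.01  # a "100-year" flood = 1% annual exceedance probability at one site

# Chance a given site sees at least one such flood during a 30-year mortgage:
p_30yr = 1 - (1 - p) ** 30                  # ~26%

# Expected number of "100-year" floods per year across 10,000 independent sites:
expected_per_year = 10_000 * p              # 100 per year, with no climate change at all

# Chance a single site sees two or more in one decade (binomial, n = 10):
p_two_in_decade = 1 - sum(comb(10, k) * p**k * (1 - p)**(10 - k) for k in (0, 1))
```

Nothing here is climatology; it is the same arithmetic that makes “once-in-a-lifetime” events a daily occurrence across seven billion lifetimes.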

Don
Reply to  menace
December 4, 2021 4:09 pm

Exactly! As another example, proton decay. If it does decay via a positron, the proton’s half-life is constrained to be at least 1.67×10^34 years, many orders of magnitude longer than the current age of the Universe. But that doesn’t stop science from spending millions of dollars on equipment and installations looking for a decay, if they look at enough protons at once.

Dean
Reply to  menace
December 5, 2021 2:44 am

Just no.

Your understanding of statistics is on par with Griff.

At the same location you can have several 1 in 100 floods in the same decade, even the same year.

Joe E
Reply to  menace
December 8, 2021 7:44 pm

Or also a 1% chance happening every year.

Joseph Zorzin
Reply to  Ron Long
December 3, 2021 11:12 am

I suspect the idiots think- if there is a 1 in 100 year flood SOMEWHERE on the planet most years, then that proves there is a problem. After all, such a flood should only happen once per century on the entire planet. Yes, that sounds dumb but all the climatistas that I personally know think at that level.

Michael in Dublin
Reply to  Rod Evans
December 3, 2021 3:27 am

Rod, Griff is a rabble rouser and not interested in a careful and critical evaluation of various views on climate. He needs to be totally ignored – not even given a down arrow. There are plenty of other contributors that make thoughtful contributions on this site.

Eda Rose-Lawson
Reply to  Michael in Dublin
December 3, 2021 1:14 pm

Absolutely correct: the more people respond, the more he will put forward his stupid observations. Can I suggest that no one responds to him at all in the future, as I believe he writes his endlessly ridiculous comments merely to evoke a response rather than intellectual argument. Let us all ignore him from now on and hope that will make him go away.

Monckton of Brenchley
Reply to  Eda Rose-Lawson
December 3, 2021 10:46 pm

Actually, it is useful that nitwits like Griff comment here: for they are a standing advertisement for the ultimate futility of climate Communism.

Chris Wright
Reply to  Monckton of Brenchley
December 4, 2021 2:55 am

Christopher,
Well said. And thank you so very much for all your tireless work.
I am confident that eventually sanity will return to the world and science. But sadly I probably won’t live to see it.

Let’s hope that at least we’ve reached peak insanity. Ironically, the one thing that may help to reverse the madness is a sustained period of cooling. It’s ironic because sceptics are familiar with the history of climate – unlike clowns such as Biden and Johnson – and they understand how devastating a significant cooling would be.

So, yes, let’s enjoy this mild warming while it lasts.
Chris

Rory Forbes
Reply to  Monckton of Brenchley
December 4, 2021 5:06 pm

You’re right once again. It’s always useful to learn what’s on the minds of your enemies, regardless how limited they may be, because, in the words of C.S. Lewis, “… those who torment us for our own good will torment us without end for they do so with the approval of their own conscience.”
I’m mindful of once well respected scientists, like Stephen Schneider who cast away professional integrity for “The Cause”.
He claimed to be in a moral dilemma where in fact there is none. He said … “we have to offer up scary scenarios, make simplified, dramatic statements, and make little mention of any doubts we might have.” committing the sin of omission and the fallacy of false dichotomy. All he ever needed to do was to follow his own words … “as scientists we are ethically bound to the scientific method, in effect promising to tell the truth, the whole truth, and nothing but”.

Once again I thank you for your integrity and hard work. If we cannot trust those who have the knowledge, where does that leave us?

Alistair Campbell
Reply to  griff
December 3, 2021 2:00 am

Are you here just for everyone else’s entertainment? It certainly seems that way.

Graemethecat
Reply to  griff
December 3, 2021 2:21 am

Griffie-poo, pray tell us when, if ever, these weather disasters did NOT occur. You can’t, of course.

You do know that the annual global death toll due to weather has been declining since the beginning of the 20th Century?

Robert Leslie Stevenson
Reply to  griff
December 3, 2021 3:02 am

Does this also mean Winter temps in England not falling below 10C, keeping at 12 to 15C, say? This will be important when we can no longer heat our homes with gas central heating.

fretslider
Reply to  griff
December 3, 2021 4:44 am

griff, you do know you’re going to die?

Probably sooner rather than later, you’re that wound-up.

Scissor
Reply to  fretslider
December 3, 2021 5:09 am

That fact is used to create fear to manipulate us, and life in many ways is a struggle to come to terms with our mortality.

Philo
Reply to  Scissor
December 3, 2021 3:41 pm

I gave up on mortality around Freshman year in high school. I’d done a bit of reading and came to the conclusion just to ignore it. I believe in God because I can’t see any other way to think about the Universe. There’s really nothing to be gained by thinking too much about mortality.

It’s the truly egalitarian life.

Joao Martins
Reply to  griff
December 3, 2021 4:45 am

Easy, griff, you are pointing to the alternative:

  1. To prevent a drowned subway, just cancel the subway and use only surface transport or walking.
  2. To prevent homes being swept away, just don’t build them where they are prone to deluge; build only on hilltops (and pray to god or cross your fingers, whatever is best for you, so that no super-power will have the idea of cleansing humanity; by the way, is that your fight, to cleanse humanity and let live only the righteous?).
  3. To avoid electricity blackouts, just cancel electricity and return to greasy, smoky oil candles (no animal fat, please, animal farts endanger the climate!).
  4. To avoid 40C plus temps, just go live in deep caves or go farther north (or south, to Antarctica).

Good luck with your climate way-of-life changes!…

Krishna Gans
Reply to  griff
December 3, 2021 5:38 am

All these weather events during the warming pause – so they’re not related to warming, but to the natural variability you have just discovered.

MarkW
Reply to  griff
December 3, 2021 5:54 am

I see griff is still trying to convince people that prior to CO2 there was no such thing as bad weather.

Clyde Spencer
Reply to  MarkW
December 3, 2021 11:41 am

Prior to Adam and Eve eating the Forbidden Fruit, the weather was constant and always like a nice day in Tahiti, and there was only enough CO2 in the air to keep the existing plants in the Garden of Eden alive. It has all been downhill since then! Even the snake has to live in the grass. Alas, we are doomed! [Imaginative sarcasm primarily for the benefit of ‘griffy.’]

philincalifornia
Reply to  MarkW
December 3, 2021 3:31 pm

…. and the idiot lives in England too.

I got out on parole after 23 years. I’m thinking of moving to Costa Rica to stay warm. This Bay Area sh!t just ain’t cutting it.

Captain climate
Reply to  griff
December 3, 2021 7:34 am

Derp

Pathway
Reply to  griff
December 3, 2021 7:54 am

Please show your math.

philincalifornia
Reply to  Pathway
December 3, 2021 3:45 pm

Ditto


Clyde Spencer
Reply to  griff
December 3, 2021 11:28 am

… enjoy the 1 in 1,000 year deluges

Which means that in the approximately 20 thousand years that modern H. sapiens has lived in Europe, there have been at least 20 such deluges. Nothing new! And, inasmuch as most cultures have legends of greater floods, we might well be in store for similar. But it is to be expected, not the result of slight warming.

Winds were more frequent and much more ferocious at the end of the last glaciation because of the cold ice to the north and the warming bare soil exposed by the retreat of the glaciers.

You seem to be doing your hand-waving fan dance based on what you have experienced during your short life, rather than from the viewpoint of a geologist accustomed to envisioning millions and tens of millions of years. It is no wonder that you think that the sky is falling.

Harry Passfield
Reply to  griff
December 3, 2021 11:38 am

I know many posters know this but the pillock, Griff, is laughing at the people who take the trouble to put him/her/it right on CC etc.
You must understand, this idiot’s Mother has to lean his bed sheets against the wall to crack ’em in order to fold ’em for the wash. He’s also a waste of blog space. Please ignore him – even if you enjoy the sport.

meab
Reply to  Harry Passfield
December 3, 2021 12:53 pm

It’s NOT about griffter, every regular here knows he’s been schooled time and time again about his baseless claims yet he persists. It’s about anyone who might be new here so they know that griffter is a despicable liar who parrots discredited BS.

Hatter Eggburn
Reply to  griff
December 3, 2021 12:00 pm

About 8000 years ago, a group of several hundred people in the Himalayas were killed in a hailstorm, leaving them with tennis ball sized holes in their skulls. Weather extremes have always happened from time to time. No evidence they’re increasing now.

Philo
Reply to  Hatter Eggburn
December 3, 2021 3:45 pm

Hadn’t heard about that one. Got a search to go to?

Gene
Reply to  griff
December 3, 2021 12:39 pm

You really need to stop posting your lack of knowledge… Take some time off, and dedicate yourself to getting a real education!

Rory Forbes
Reply to  Gene
December 4, 2021 6:04 pm

Take some time off, and dedicate yourself to getting a real education!

In contemporary England? Surely you jest. England stopped doing education several decades ago.

RickWill
Reply to  griff
December 3, 2021 1:39 pm

Yes, enjoy the heat dome and record 40C plus temps, enjoy the 1 in 1,000 year deluges sweeping away homes and drowning the subways and cutting off your major cities, enjoy the 100 mph winds cutting off your electricity for days.

The stench of desperation in these words!

Philo
Reply to  griff
December 3, 2021 2:57 pm

Sorry old boy, your threatened climate “attacks” have all happened now and then for over 3000 years. There were several pueblo cultures in present-day Arizona/New Mexico; they lived there for centuries and prospered to the degree possible until a super drought around 1000 CE and invaders broke the whole area apart, and the cultures died out.

Those droughts and other climate effects have returned many times over the years.
The last one was more or less in the 1930’s, further north and east. 100mph winds occur regularly, particularly in the mountainous states.

There is no need to look for “human caused” climate change. The natural changes seem to be plenty powerful and it is difficult to find any “climate changes”.

Keep in mind, the UN set up the United Nations Environment Program SPECIFICALLY to evaluate “HUMAN-CAUSED” environmental changes. No science need apply. Apparently, despite all the history, only humans can change the climate. Forget the Sun, currently in a major low point causing many effects on earth: earthquakes, fickle winds (mostly caused by the sun) and waaay more.

Paul
Reply to  griff
December 3, 2021 4:04 pm

All of this shit has happened many times before down through all written history.
It is nothing new, nothing catastrophic, and it sure isn’t unprecedented. No need to tell you to do some research, because you won’t; besides, you already know
that all you are doing is spreading bullshit and lying through your teeth like a flim-flam huckster.

Monckton of Brenchley
Reply to  griff
December 3, 2021 10:44 pm

Climate Communists such as Griff are not, perhaps, aware that one would expect hundreds of 1-in-1000-year events every year, because there are so many micro-climates and so many possible weather records. They are also unaware that, particularly in the extratropics, generally warmer weather is more likely than not to lead to fewer extreme-weather events overall, which is why even the grim-faced Kommissars of the Brussels tyranny-by-clerk have found that even with 5.4 K warming between now and 2080 there would be 94,000 more living Europeans by that year than if there were no global warming between now and then.

Vincent Causey
Reply to  griff
December 4, 2021 1:30 am

I saw that movie too.

HotScot
Reply to  Mike
December 3, 2021 3:57 am

Now THAT’s Cognitive Dissonance if ever I saw it.

CD in Wisconsin
Reply to  HotScot
December 3, 2021 11:46 am

Exactly what I keep telling myself HotScot. Griff needs to take a course in human psychology to understand what is going on in his head.

Philo
Reply to  CD in Wisconsin
December 3, 2021 3:47 pm

I’d bet he’s getting $1 per reply, or some such. He doesn’t even make usable claims.

John Tillman
Reply to  Mike
December 3, 2021 5:45 am

Global cooling trend intact since February 2016.

Monckton of Brenchley
Reply to  Mike
December 3, 2021 10:39 pm

It’s a pleasure! On balance, one would expect global warming to continue, but these long Pauses are a visually simple demonstration that the rate of warming is a great deal less than had originally been predicted.

Philip
December 2, 2021 11:19 pm

There has been no warming but, the science, man. The science. The science of consensus says that we’ve only got a few years left on the clock before earth becomes uninhabitable. Doctor of Earthican Science, Joe Biden says we have only eight years left to act before doomsday. DOOMSDAY!

Chaswarnertoo
Reply to  Philip
December 2, 2021 11:33 pm

Teh siense, you meant?

Tom
Reply to  Philip
December 3, 2021 5:33 am

Isn’t it interesting that the ones least able to fully understand ‘Science’ are the ones proclaiming it the loudest.

CanEng
Reply to  Tom
December 3, 2021 8:22 am

Yes, that is always the way. Ignorance is always demonstrated by those that lack training or rational thought.

Philip
Reply to  Tom
December 3, 2021 12:45 pm

On close inspection, Tom, it is not the science they are proclaiming. The science is untenable. So they proclaim their virtuousness, and your and my ignorance of the necessity of not questioning the science, in order to save the world from mankind’s industrial nature.

Rich Davis
Reply to  Philip
December 3, 2021 5:04 pm

Literally! I mean it. Not figuratively. You know the thing.

Chaswarnertoo
December 2, 2021 11:31 pm

More Goreball warning, as the Earth recovers from the devastating LIA. Record food crops from the delicious extra CO2.

Richard S Courtney
December 2, 2021 11:49 pm

Viscount Monckton,

Thanks for your article which I enjoyed.

Trolls make use of minor and debatable points as distractions, so I write to suggest a minor amendment to your article. Please note that this is a genuine attempt to be helpful and is not merely nit-picking because, in addition to preempting troll comments, my suggested minor amendment emphasises the importance of natural climate variability, which is the purpose of your study of the ‘new pause’.

You say,

Therefore, if the feedback regime as it stood in 1850 were to persist today (and there is good reason to suppose that it does persist, for the climate is near-perfectly thermostatic), …

I write to suggest it would be more correct to say,
‘Therefore, if the feedback regime as it stood in 1850 were to persist today (and there is good reason to suppose that it does persist, for the climate is probably near-perfectly thermostatic), …’

This is because spatial redistribution of heat across the Earth’s surface (e.g. as a result of variation to ocean currents) can alter the global average temperature (GAT). The change to GAT occurs because radiative heat loss is proportional to the fourth power of the temperature of an emitting surface and temperature varies e.g. with latitude. So, GAT changes to maintain radiative balance when heat is transferred from a hot region to a colder region. Calculations independently conducted by several people (including me and more notably Richard Lindzen) indicate the effect is such that spatial redistribution of heat across the Earth’s surface may have been responsible for the entire change of GAT thought to have happened since before the industrial revolution.

Richard
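Richard Courtney’s fourth-power point can be illustrated with a hypothetical two-box planet (a sketch with invented temperatures, not a real-Earth calculation):

```python
# Two equal-area boxes radiating as sigma*T^4; absorbed sunlight held fixed.
T_HOT, T_COLD = 300.0, 260.0                # K; arithmetic mean = 280 K

mean_emission = (T_HOT**4 + T_COLD**4) / 2  # per-unit-area emission, in sigma*T^4 units

# If ocean currents evened the two boxes out, the uniform temperature that
# radiates the SAME total energy is the fourth root of the mean of T^4:
T_uniform = mean_emission ** 0.25           # ~282.1 K, above the 280 K arithmetic mean

# Redistribution alone raised the "global mean" ~2 K with no change in forcing.
delta_gat = T_uniform - (T_HOT + T_COLD) / 2
```

Because emission grows as the fourth power of temperature, evening out the surface lets the planet balance the same absorbed sunlight at a higher mean temperature, which is the redistribution effect described above.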

Jim Gorman
Reply to  Richard S Courtney
December 3, 2021 5:39 am

You make a fine comment to go with CM’s article. Your assertion is one reason why using averages for the GAT makes no sense. An average only makes sense if the actual radiation occurs in that fashion. Otherwise, part of the earth (the equator) receives a predominant amount of the radiation, and it reduces away from that point. Since emission is based on the fourth power of temperature, the temps will also vary based on this factor. Simple averages and “linear” regression, homogenization, etc. simply cannot follow the temps properly.

Monckton of Brenchley
Reply to  Richard S Courtney
December 3, 2021 10:53 pm

Richard Courtney asks me to add a second qualifier, “probably”, to the first, “near”, in the sentence “The climate is near-perfectly thermostatic”. However, Jouzel et al. (2007), reconstructing the past 810,000 years’ temperatures in Antarctica by cryostratigraphy, concluded that in all that time (after allowing for polar amplification) global temperatures varied by little more than 3 K either side of the period mean. The climate is, therefore, near-perfectly thermostatic. Compensating influences such as Eschenbach variability in tropical afternoon convection keep the temperature within quite narrow bounds.

Don
Reply to  Monckton of Brenchley
December 4, 2021 4:29 pm

And owing to the fact that the vast majority – 99.9% – of the earth’s surface is constantly exposed to deep space at near absolute zero, while the Sun occupies such a small area as a heat source, it is remarkable that the climate does keep such good control of temperature. I put that control largely down to clouds, especially at night time in winter.

Monckton of Brenchley
Reply to  Don
December 7, 2021 1:24 pm

Of the numerous thermostatic processes in climate, the vast heat capacity of the ocean is probably the most influential.

marcjf
December 3, 2021 12:07 am

Most voters believe that climate change is real and dangerous because they are force fed a constant diet of media alarmism, supported by dim politicians and green activists. It is so uncool [no pun intended] to be a climate heretic when the religious orthodoxy promotes ideological purity and punishes rational thinking.

Rory Forbes
Reply to  marcjf
December 3, 2021 10:35 am

Most people are uncomfortable living outside the orthodoxy especially when one is exposed to a constant barrage of dogma reinforcing it every minute. The media, now acting as the public relations branch of “progressive” governments, tailor their reporting to suit.

decnine
December 3, 2021 12:08 am

“…the “University” of East Anglia is slower at maintaining the data these days than it used to be…”

Hey, they’ve been really busy. Those “adjustments” don’t do themselves, you know.

Robert Leslie Stevenson
Reply to  decnine
December 3, 2021 3:33 am

UEA is now redundant. Stock markets do the predictions – the FTSE 100 is currently running at 2.7C and needs to divest itself of commodities, oil and gas to reach the magic 1.5C. I don’t know where they imagine the materials will come from for their electric cars, heat pumps, wind farms, mobile phones, double glazing etc. No doubt China will step in to save the day just before we run into the buffers.

HotScot
Reply to  Robert Leslie Stevenson
December 3, 2021 4:02 am

Wind turbines are self replicating organisms dontchaknow. That’s why the electricity they produce is so cheap.

Don
Reply to  HotScot
December 4, 2021 4:32 pm

It is only “cheap” if you live in never-ending wind land; otherwise you have to keep a gas-fired power plant idling in the background, MW for MW.

Clyde Spencer
Reply to  decnine
December 3, 2021 11:50 am

Truth be known, UEA is now only half-fast compared to what they used to be.

griff
December 3, 2021 12:48 am

UAH is a multiply adjusted proxy measurement of the Troposphere, which doesn’t even agree with similar proxy Tropospheric measurements (why do these pages never mention RSS these days?).

I think we have to take it as at least an outlier and quite probably not representative of what’s happening.

Stephen Wilde
Reply to  griff
December 3, 2021 12:59 am

Predictable attempt to discredit the most reputable and accurate measuring system we have.
The irony is that if UAH cannot be relied on then nor can any other system of measurement.

angech
Reply to  Stephen Wilde
December 3, 2021 1:14 am

Would be interested to know the trend from the old pause at its maximum including the new pause.
I imagine it would be Roy’s 0.14 trend, but it might be lower.
So 1997? to 2021?

Bellman
Reply to  angech
December 3, 2021 3:56 am

Depends on when exactly you think the old pause started. Cherry picking the lowest and longest trend prior to 1998, the trend from April 1997 is 1.1°C / decade.

Starting just after the El Niño the trend from November 1998 is 1.6°C / decade.

fretslider
Reply to  Bellman
December 3, 2021 4:54 am

Did you enjoy Antarctica’s second coldest winter on record? I know I did.

TheFinalNail
Reply to  fretslider
December 3, 2021 9:26 am

How was the UK’s 3rd warmest autumn for you, fret? Did you manage to find your usual cloud?

Clyde Spencer
Reply to  TheFinalNail
December 3, 2021 11:55 am

What will you make of the situation if next year it is the 4th warmest Autumn? Or even if it is tied with this year? Do you really believe that the ranking has any significance when depending on hundredths of a degree to discriminate?

meab
Reply to  TheFinalNail
December 3, 2021 1:14 pm

Did you not learn anything, ToeFungalNail? When the difference in temperature is less than the measurement error any ranking of the warmest month, year, or season is bogus. Why do you persist in (fecklessly) trying to mislead?

Don
Reply to  fretslider
December 4, 2021 4:34 pm

-51°C in central Greenland last week, the coldest I have ever seen it!

menace
Reply to  Bellman
December 3, 2021 7:54 am

I assume you mean 0.11 and 0.16 C/decade

The old pause started prior to the 1997 El Niño spike; there was never a pause starting post-spike… indeed, it is that large spike that made the long statistical “pause” possible.

Bellman
Reply to  menace
December 3, 2021 9:22 am

Yes, sorry. Good catch. I was thinking of the per century values.

I’m not sure if anyone sees the irony of claiming that a pause only exists prior to a temperature spike and vanishes if you start after the spike.

Richard S Courtney
Reply to  Bellman
December 3, 2021 10:59 am

Bellman,

Your comment is lame and demonstrates you do not understand the calculation conducted by Viscount Monckton. I explain as follows.

(a)
The “start” of the assessed pause is now
and
(b) the length of the pause is the time back from now until a trend is observed to exist at 90% confidence within the assessed time series of global average temperature (GAT).

The resulting “pause” is the length of time before now when there was no discernible trend according to the assessed time series of GAT.

I see no “irony” in the “pause” although its name could be challenged. But I do see much gall in your comment which attempts to criticise the calculation while displaying your ignorance of the calculation, its meaning, and its indication.

Richard

Bellman
Reply to  Richard S Courtney
December 3, 2021 12:14 pm

I’ve already answered this a couple of times, but no, Monckton’s pause is not based on confidence intervals, nor have I ever seen him claim that it starts at the end. Here’s Monckton’s definition

As usual, the Pause is defined as the longest period, up to the most recent month for which data are available, during which the linear-regression trend on the monthly global mean lower-troposphere temperature anomalies shows no increase.

If you think you understand how the pause is calculated better than me, feel free to calculate when the pause should have started or ended this month and share your results, along with the workings.

For my part, I just have a function in R that calculates the trend from each start date to the current date, and then I just look back to see the earliest negative value. I could get the earliest date programmatically, but it’s useful to have a list of all possible trends, just to see how much difference a small change in the start date can make.
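That procedure can be sketched in Python (an illustration only, not Bellman’s actual R code; the `values` series stands in for a list of monthly UAH anomalies):

```python
import statistics

def trend_per_century(values):
    """OLS slope of monthly anomalies, scaled to degrees per century."""
    n = len(values)
    xbar = (n - 1) / 2
    ybar = statistics.fmean(values)
    num = sum((x - xbar) * (y - ybar) for x, y in enumerate(values))
    den = sum((x - xbar) ** 2 for x in range(n))
    return (num / den) * 12 * 100  # per-month slope -> per century

def pause_start(values):
    """Earliest start index whose trend to the last value is non-positive."""
    for start in range(len(values) - 2):
        if trend_per_century(values[start:]) <= 0:
            return start
    return None
```

Applied to a series that rises and then flattens, `pause_start` returns the earliest month from which the regression trend to the present is zero or negative, which is exactly the “longest non-positive trend” definition quoted above.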

Richard S Courtney
Reply to  Bellman
December 3, 2021 12:47 pm

Bellman,

I object to you blaming me for your inability to read.

I said,
(a)
The “start” of the assessed pause is now
and
(b) the length of the pause is the time back from now until a trend is observed to exist at 90% confidence within the assessed time series of global average temperature (GAT).

You quote Viscount Monckton as having said,
As usual, the Pause is defined as the longest period, up to the most recent month for which data are available, during which the linear-regression trend on the monthly global mean lower-troposphere temperature anomalies shows no increase.

The only difference between those two explanations is that
I state the confidence (90%) that is accepted as showing no change (or “no increase”) as normally applied in ‘climate so-called science’
but
the noble Lord assumes an interested reader would know that.

I assume your claim that you cannot read is sophistry intended to evade the need for you to apologise for having posted nonsense (i.e. you don’t want to say sorry for having attempted ‘bull sh** baffles brains’).

Richard

Bellman
Reply to  Richard S Courtney
December 3, 2021 1:11 pm

The only difference between those two explanations is that
I state the confidence (90%) that is accepted as showing no change

And that’s where your method is different from Lord Monckton’s. And no amount of personal insults will convince me you are right and I’m wrong. I said before, if you want to convince me about your 90% confidence interval approach, actually do the work, show me how that will make January 2015 the start date this month.

Here’s some of my workings, each value represents the trend in degrees per century starting at each month.

2005 2.2 2.3 2.3 2.3 2.4 2.4 2.4 2.4 2.5 2.5 2.5 2.6
2006 2.6 2.6 2.7 2.7 2.7 2.7 2.7 2.7 2.7 2.7 2.8 2.8
2007 2.8 2.9 3.0 3.0 3.1 3.1 3.1 3.2 3.2 3.2 3.3 3.3
2008 3.3 3.2 3.2 3.1 3.1 3.0 2.9 2.9 2.8 2.8 2.8 2.8
2009 2.8 2.8 2.8 2.8 2.7 2.7 2.6 2.6 2.6 2.7 2.7 2.7
2010 2.7 2.9 3.0 3.1 3.2 3.4 3.5 3.6 3.7 3.8 3.8 3.8
2011 3.9 3.8 3.7 3.6 3.6 3.5 3.6 3.6 3.6 3.6 3.5 3.5
2012 3.4 3.2 3.0 2.9 2.9 2.8 2.8 2.7 2.7 2.6 2.6 2.6
2013 2.6 2.7 2.7 2.6 2.5 2.3 2.3 2.2 2.0 2.0 1.9 1.7
2014 1.6 1.6 1.4 1.2 1.1 1.0 0.9 0.8 0.6 0.4 0.2 0.1
2015 0.0 -0.1 -0.3 -0.6 -0.9 -1.1 -1.2 -1.5 -1.8 -2.1 -2.1 -2.3
2016 -2.2 -2.1 -1.5 -1.0 -0.4 -0.1 -0.2 -0.1 0.0 0.2 0.3 0.5

The pause starts on January 2015, because that is the earliest month with a zero trend, actually -0.03.

If you wanted to go back as far as possible to find a trend that was significant at the 90% level the pause would be much longer.

Anthony Banton
Reply to  Richard S Courtney
December 4, 2021 8:46 am

“(b) the length of the pause is the time back from now until a trend is observed to exist at 90% confidence within the assessed time series of global average temperature (GAT).”

Please provide a link to where Monckton says that he uses 90% confidence limits.
He doesn’t.
And, what’s more he doesn’t have to, as denizens don’t require it and he ignores all critics with bluster and/or ad hom.
In short he has blown his fuse and this place is the only one he can get traction for his treasured snake-oil-isms.

Forgot:
(the real driving motivation for his activities, so apparent in his spittle-filled language) …..
Accusing all critics of being communists or paid trolls.
Quite, quite pathetic.
And this is the type of science advocate (let’s not forget, with diplomas in journalism and the Classics) who you support to keep your distorted view of the world and its climate scientists away from reality.

Monckton of Brenchley
Reply to  Bellman
December 3, 2021 11:11 pm

Paid climate Communists such as Bellman will make up any statistic to support the Party Line and damage the hated free West. The UAH global mean lower-troposphere temperature trend from April 1997 to November 2021 was 0.1 C/decade, a rate of warming that is harmless, net-beneficial and consistent with correction of climatology’s elementary control-theoretic error that led to this ridiculous scare in the first place.

Bellman
Reply to  Monckton of Brenchley
December 4, 2021 3:39 am

Ad hominems aside I’d be grateful if you could point out where you think my statistics are wrong or “made up”.

Given that there is currently a lively discussion going on between me and Richard S Courtney about how you define the pause this would be a perfect opportunity to shed some light on the subject – given you are the only one who knows for sure. I say it is based on the longest period with a non-positive trend, whilst Courtney says it is the furthest you can go back until you see a significant trend.

It would be really easy to say Courtney is correct and here’s why, or no sorry Courtney, much as it pains me to say it, the “paid climate communist” is right on this one.

Monckton of Brenchley
Reply to  Bellman
December 7, 2021 1:26 pm

The furtively pseudonymous “Bellman” asks me to say where what it optimistically calls its “statistics” are wrong. It stated, falsely, that the world was warming at 1.1 C/decade, when the true value for the relevant period was 0.1 C/decade.

Bellman
Reply to  Monckton of Brenchley
December 7, 2021 3:04 pm

It was an honest mistake, for which I apologized several days ago when it was pointed out

https://wattsupwiththat.com/2021/12/02/the-new-pause-lengthens-by-a-hefty-three-months/#comment-3402617

Maybe, if you had just asked if it was correct, instead of making snide innuendos, I could have set the record straight to you as well. Unfortunately the comment system here doesn’t allow you to make corrections after a short time, and any comment I add will appear far below the original mistake.

For the record, here’s what the comment should have said

Depends on when exactly you think the old pause started. Cherry picking the lowest and longest trend prior to 1998, the trend from April 1997 is 0.11°C / decade.

Starting just after the El Niño the trend from November 1998 is 0.16°C / decade.

Bellman
Reply to  Monckton of Brenchley
December 4, 2021 3:37 pm

As an aside, I like the fact that Monckton is accusing me of making up statistics to “support the Party Line and damage the hated free West”, when I’m actually using the statistics to support his start date for the pause.

Mark BLR
Reply to  angech
December 3, 2021 5:01 am

Would be interested to know the trend from the old pause at its maximum including the new pause.

Unclear what you had in mind when you wrote the phrase “the old pause at its maximum” here …
… I’ll take it as “(the start of) the longest zero-trend period in UAH (V6)”, which is May 1997 to December 2015.

I haven’t updated my spreadsheet with the UAH value for November yet (I’ll get right on that, promise !), but the values for “May 1997 to latest available value” are included in the “quick and dirty” graph I came up with below.

Notes

1) UAH can indeed be considered as “an outlier” (along with HadCRUT4).

2) UAH trend since (May) 1997 is between 1.1 and 1.2 (°C per century), so your 1.4 “guesstimate” wasn’t that far off.
HadCRUT4 trend is approximately 1.45.
The other “surface (GMST) + satellite (LT)” dataset trends are all in the range 1.7 to 2.15.

3) This graph counts as “interesting”, to me at least, but people shouldn’t try to “conclude” anything serious from it !

[Attached graph: Trends-from-1997_1.png]
Carlo, Monte
Reply to  Stephen Wilde
December 3, 2021 8:23 am

It is important to remember that the UAH lower troposphere temperature is a complex convolution of the 0-10km temperature profile which decreases exponentially with altitude; it is not the air temperature at the surface.

Dave Fair
Reply to  Carlo, Monte
December 3, 2021 9:35 am

And the GHE occurs at altitude in the Troposphere.

Carlo, Monte
Reply to  Dave Fair
December 3, 2021 2:28 pm

Ergo the IPCC tropospheric hotspot.

Gary Pearse
Reply to  Stephen Wilde
December 3, 2021 10:30 am

Stephen, actually, GISS and UAH used to be very closely in agreement until Karl (from which I coined the term “Karlization of temperatures”) ‘adjusted’ us out of the Dreaded Pause™ in 2015 on the eve of his retirement. Mears, who does the RSS satellite temperatures, then responded with his complementary adjustments. It bears mentioning that Roy Spencer invented the method and was commended for it by NASA at the time.

Dave Fair
Reply to  Gary Pearse
December 3, 2021 3:07 pm

Karl added 0.12 C to the ARGO data to “be consistent with the [lousy] ship engine intake data.” This adds an independent warming trend over time as more and more ARGO floats come into the datasets, replacing the use of engine intakes in ongoing data collection.

Karl also used Night Marine Air Temperatures (NMAT) to adjust SSTs. Subsequent collected data have shown NMAT diverging from SST significantly. Somebody should readdress his “work.”

ironicman
Reply to  griff
December 3, 2021 1:17 am

UAH is an honest broker that both sides can agree upon.

Simon
Reply to  ironicman
December 3, 2021 1:23 am

Yep, but it is not a measurement of the earth’s surface, so useful but not the full picture.

Krishna Gans
Reply to  Simon
December 3, 2021 5:45 am

Cooling of the earth shows up 2–3 months later in the lower troposphere.

Derg
Reply to  Simon
December 3, 2021 7:11 am

Exactly Simon, we need to include the temps inside a volcano to get the full picture.

Simon
Reply to  Derg
December 3, 2021 11:09 am

You might think that but I am for going with all the recognised data sets to get the full picture.

Derg
Reply to  Simon
December 3, 2021 11:23 am

Just like Russia colluuuusion 😉

Simon
Reply to  Derg
December 3, 2021 11:32 am

Duh… the one trick pony is now a no trick ass.

Derg
Reply to  Simon
December 3, 2021 1:52 pm

Are you calling yourself an ass?

Russia colluuuusion indeed. Along with your Xenophobia..that one always cracks me up.

Dave Fair
Reply to  Simon
December 3, 2021 3:12 pm

No quarrel with that, but realize the limitations of each dataset. The “Karlized” set should not be used for scientific analyses. Also, RSS needs to explain its refusal to dump obviously bad data. Additionally, their method for estimating drift is model-based, as opposed to UAH’s empirical method.

Captain climate
Reply to  Simon
December 3, 2021 7:40 am

Why would you measure the earth’s surface? Asphalt can get up to enormous temperatures not representative of the air on a sunny day. The entire point of measuring the lower troposphere is that it won’t have UHI.

Dave Fair
Reply to  Captain climate
December 3, 2021 9:37 am

That’s how they originally sold the satellites to Congress.

Simon
Reply to  Captain climate
December 3, 2021 11:10 am

“Why would you measure the earth’s surface?”
Umm, because we live here, or at least I do.

Jim Gorman
Reply to  Simon
December 3, 2021 1:36 pm

Ummm, no. You live in the lower part of the atmosphere, not on the surface! The surface is the ocean and land, i.e. the “solid” part of the planet. The atmosphere is the gaseous part of the planet. The atmosphere is an insulator and it has a gradient from the boundary with the surface toward space. The surface has a gradient in two directions: downward, and upward into the atmosphere.

Dave Fair
Reply to  Simon
December 3, 2021 3:15 pm

Do you live in a rural location or in a city? It makes a big difference in measured temperatures.

Gary Pearse
Reply to  Simon
December 3, 2021 10:44 am

Simon: Happily for you, UAH does agree with the direct measurements of balloon sondes. Agreement with independent measures is, of course, the highest order of validation. The good fellow who invented the method, Dr. Roy Spencer, received a prestigious commendation from NASA back in the days when that meant a lot.

Simon
Reply to  Gary Pearse
December 3, 2021 11:11 am

Look I have no issue with UAH, it is just not the complete picture. It has also had a lot of problems going back so anyone who thinks it is the be all and end all is, well, wrong.

Jim Gorman
Reply to  Simon
December 3, 2021 1:39 pm

And you think any of the lower atmosphere temperature data sets don’t have problems going back? They are probably less reliable because of coverage issues and the methods used to infill.

Mike
Reply to  Simon
December 3, 2021 7:41 pm

“Look I have no issue with UAH” … “(I’m just uncomfortable with what it’s showing)”

Vuk
Reply to  ironicman
December 3, 2021 2:07 am

…. and it is due to natural variability.
When solar magnetic activity is high, TSI goes up and warms the land and oceans. When magnetic activity goes down there is an influx of energetic GCRs, which enhances cloud formation. Clouds increase albedo, which should reduce atmospheric warming, but clouds also reduce heat re-radiation back into space.
The balance between the two is an important factor for atmospheric temperature, and at specific levels of reduced solar activity the balance is tilted towards the clouds’ warming effect.
Hence we find that when global temperature is above average and the amount of ocean evaporation is also above average, during falling solar activity there will be a small increase in atmospheric temperature.

http://www.vukcevic.co.uk/UAH-SSN.gif

After a prolonged period of time (e.g. a Grand solar minimum), the oceans will cool, evaporation will fall and the effect will disappear.

Bellman
Reply to  ironicman
December 3, 2021 3:58 am

“UAH is an honest broker that both sides can agree upon.”

Yet ever since UAH showed a warmer month, one side keeps claiming satellite data, including UAH, is not very reliable. See Carlo, Monte’s analysis:

https://wattsupwiththat.com/2021/12/02/uah-global-temperature-update-for-november-2021-0-08-deg-c/#comment-3401727

According to him, the monthly uncertainty in UAH is at least 1.2°C.

bdgwx
Reply to  Bellman
December 3, 2021 7:24 am

Bellman said: “according to him the monthly uncertainty in UAH is at least 1.2°C.”

Which is odd, because both the UAH and RSS groups, using wildly different techniques, say the uncertainty on monthly global mean TLT temperatures is about 0.2 C. [1] [2]

Carlo, Monte
Reply to  bdgwx
December 3, 2021 8:26 am

Why do you continue to use the word “uncertainty” when you don’t understand what it means?

Comparisons of line regressions against those from radiosondes is NOT an uncertainty analysis.

bdgwx
Reply to  Carlo, Monte
December 3, 2021 8:48 am

I use the word uncertainty because we don’t know what the error is for each month, but we do know that the range in which the error lies is ±0.2 (2σ) according to both UAH and RSS.

Note that I have always understood “error” to be the difference between measurement and truth and “uncertainty” to be the range in which the error is likely to exist.

Carlo, Monte
Reply to  bdgwx
December 3, 2021 9:01 am

Which is a Fantasy Island number, demonstrating once again that you still don’t understand what uncertainty is.

bdgwx
Reply to  Carlo, Monte
December 3, 2021 10:04 am

I compared UAH to RATPAC. The monthly differences fell into a normal distribution with σ = 0.17. This implies an individual uncertainty for each of 2σ = 2·√(0.17²/2) = 0.24 C, which is consistent with the Christy and Mears publications. Note that this is despite UAH having a +0.135 C/decade trend while RATPAC is +0.212 C/decade, so the differences increase with time due to one or both of these datasets having a systematic time-dependent bias. FWIW, the RSS vs RATPAC comparison implies an uncertainty of 2σ = 0.18 C. It is lower because the differences do not increase with time as they do with UAH. The data is inconsistent with your hypothesis that the uncertainty is ±1.2 C by a significant margin.
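The arithmetic behind that figure, assuming the two datasets have equal and independent errors so that the variance of their differences is twice the individual variance:

```python
import math

def implied_individual_2sigma(sigma_diff):
    """Given the std dev of monthly differences between two independent datasets
    with equal error magnitudes, return the implied 2-sigma uncertainty of each:
    sigma_diff**2 = 2 * sigma_ind**2, so sigma_ind = sigma_diff / sqrt(2)."""
    return 2 * (sigma_diff / math.sqrt(2))

print(round(implied_individual_2sigma(0.17), 2))  # prints 0.24, the value quoted above
```

The equal-error assumption is the weakest link: if one dataset is much noisier than the other, the split between the two individual uncertainties changes accordingly.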

Jim Gorman
Reply to  bdgwx
December 3, 2021 1:50 pm

Tell us again how temps recorded in integers can be averaged to obtain 1/100th of a degree. The uncertainty up to at least 1980 was a minimum of ±0.5 degrees. It is a matter of the resolution of the instruments used, and averaging simply cannot reduce that uncertainty.

As Carlo, Monte says, “Comparisons of line regressions against those from radiosondes is NOT an uncertainty analysis.”

Linear regression of any kind ignores cyclical phenomena from 11 year sunspots, to 60 years cycles of ocean currents, to orbital variation.

Even 30 years for “climate change” ignores the true length of time for climate to truly change. Tell me what areas have become deserts in the last 30 to 60 years. Have any temperate-zone boundaries changed? Have savannahs enlarged or shrunk due to temperature? Where in the tropics has it become unbearable due to temperature increases?

Bellman
Reply to  Carlo, Monte
December 3, 2021 9:43 am

Uncertainty: parameter, associated with the result of a measurement, that characterizes the dispersion of the values that could reasonably be attributed to the measurand.

If the uncertainty of a monthly UAH measurement is 1.2°C, then, say, if the measured value is 0.1°C, you are saying it’s reasonable to say the actual anomaly for that month could be between -1.1 and +1.3. If it’s reasonable to say this, you would have to assume that at least some of the hundreds of measured values differ from the measurand by at least one degree. If you compared this with an independent measurement, say radiosondes or surface data, there would be occasional discrepancies of at least one degree. The fact that you don’t see anything like that size of discrepancy is evidence that your uncertainty estimate is too big.
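That consistency check can be illustrated with a quick simulation (hypothetical numbers only: a per-system standard deviation of 0.6 C corresponds to ±1.2 C at 2σ, versus 0.1 C for ±0.2 C):

```python
import random

random.seed(42)

def discrepancy_rate(sigma_each, threshold=1.0, trials=100_000):
    """Fraction of independent measurement pairs of the same monthly value
    that disagree by more than `threshold` degrees."""
    hits = sum(
        1 for _ in range(trials)
        if abs(random.gauss(0, sigma_each) - random.gauss(0, sigma_each)) > threshold
    )
    return hits / trials

# If each system really had ~±1.2 C (2 sigma) uncertainty, >1 C disagreements
# would occur in roughly a quarter of all months; at ~±0.2 C they almost never do.
```

This treats the errors as purely random and independent; a shared systematic bias would not show up in such a comparison, which is the counter-argument made elsewhere in the thread.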

Carlo, Monte
Reply to  Bellman
December 3, 2021 12:17 pm

UNCERTAINTY DOES NOT MEAN RANDOM ERROR!

Jim Gorman
Reply to  Bellman
December 4, 2021 6:34 am

You still have no idea of the difference between error and uncertainty. Uncertainty is NOT a dispersion of values that could reasonably be attributed to the measurand. That is random error. Each measurement you make of that measurand has uncertainty. Each and every measurement has uncertainty. You simply cannot average uncertainty away as you can with random errors. What that means is that your “true value” also has an uncertainty that you cannot remove by averaging.

As to your comparison. You are discussing two measurands using different devices. You CAN NOT compare their uncertainties nor assume that measurements will range throughout the range.

Repeat this 1000 times.

“UNCERTAINTY IS WHAT YOU DON’T KNOW AND CAN NEVER KNOW!”

Why do you think standard deviations are accepted as an indicator of what uncertainty can be? Standard deviations tell you what the range of values was while measuring some measurand. One standard deviation means that 68% of the values fell into that range. It means your measured values of the SAME MEASURAND will probably fall within that range. It doesn’t define what your measurement will be, only what range it could fall in.

Your assertion is a fine example of why scientific measurements should never be stated without including an uncertainty range. Not including this information leads people into the mistaken view that measurements are exact.

Bellman
Reply to  Jim Gorman
December 4, 2021 1:14 pm

Uncertainty is NOT a dispersion of values that could reasonably be attributed to the measurand.

That is literally how the GUM defines it

parameter, associated with the result of a measurement, that characterizes the dispersion of the values that could reasonably be attributed to the measurand.

Bellman
Reply to  Jim Gorman
December 4, 2021 1:31 pm

“As to your comparison. You are discussing two measurands using different devices. You CAN NOT compare their uncertainties nor assume that measurements will range throughout the range.”

I’m not saying compare their uncertainties, I’m saying having two results will give you more certainty. I’m really not sure why you wouldn’t want a second opinion if the exact measurement is so important. You know there’s an uncertainty associated with your first measurement, how can double checking the result be a bad thing?

““UNCERTAINTY IS WHAT YOU DON’T KNOW AND CAN NEVER KNOW!””

I don’t care how many times you repeat this, you are supposed to know the uncertainty. Maybe you mean you can never know the error, but as you keep saying error has nothing to do with uncertainty I’m not sure what you mean by this.

Tim Gorman
Reply to  Bellman
December 5, 2021 9:35 am

I’m saying having two results will give you more certainty.”

Only if you are measuring the SAME THING. This will *usually* generate a group of stated values plus random error, where the random errors will follow a Gaussian distribution and will tend to cancel out. Please note carefully that uncertainty is made up of two factors, however. One factor is random error and the other is systematic error. Random error will cancel, e.g. reading errors; systematic error will not.

If you are measuring *different* things then the errors will most likely not cancel. When measuring the same thing the stated values and uncertainties cluster around a true value. When measuring different things, the stated values and uncertainties do not cluster around a true value. There is no true value. In this case no number of total measurements will lessen the uncertainty associated with the elements themselves or the uncertainties associated with the calculated mean.

“I don’t care how many times you repeat this, you are supposed to know the uncertainty.”

Uncertainty is not error. That is a truism. Primarily because uncertainty is made up of more than one factor. If I tell you that the uncertainty of a measurement is +/- 0.2 can you tell me how much of that uncertainty is made up of random error and how much is made up of other factors (e.g. hysteresis, drift, calibration, etc)?

If you can’t tell me what each factor contributes to the total uncertainty then you can’t say that uncertainty *is* error because it is more than that. Uncertainty is not error.
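The random-versus-systematic distinction drawn above can be shown numerically (an illustrative sketch with made-up values, not a claim about any real instrument):

```python
import random
import statistics

random.seed(1)

TRUE_VALUE = 20.0
SYSTEMATIC_BIAS = 0.3  # e.g. a calibration offset: identical in every reading
RANDOM_SIGMA = 0.5     # e.g. reading error: varies from reading to reading

readings = [TRUE_VALUE + SYSTEMATIC_BIAS + random.gauss(0, RANDOM_SIGMA)
            for _ in range(10_000)]
mean = statistics.fmean(readings)

# Averaging shrinks the random component as 1/sqrt(n), so the mean converges
# to TRUE_VALUE + SYSTEMATIC_BIAS (about 20.3), never to TRUE_VALUE itself.
```

No number of repeat readings with the same instrument exposes the 0.3 offset; only an independent check against a differently biased instrument or standard can.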

Bellman
Reply to  Tim Gorman
December 5, 2021 10:31 am

Only if you are measuring the SAME THING.

In this case, you are measuring the same thing.

Random error will cancel, e.g. reading errors, systemic error will not.

Which is why it’s a good thing you are using different instruments.

If you are measuring *different* things then the errors will most likely not cancel.

Why not.

When measuring different things, the stated values and uncertainties do not cluster around a true value

Of course they do. The true value is the mean, each stated value is a distance from the mean.

Uncertainty is not error.

You don’t know the error, you do know the uncertainty.

If I tell you that the uncertainty of a measurement is +/- 0.2 can you tell me how much of that uncertainty is made up of random error

When you are stating uncertainty you should explain how it was established.

Bellman
Reply to  Bellman
December 5, 2021 10:44 am

Also note that systematic errors are at least as much of a problem when you are measuring the same thing as when you are measuring different things.

Carlo, Monte
Reply to  Bellman
December 5, 2021 12:04 pm

Why are you so desperate to make uncertainty as small as possible?

Bellman
Reply to  Carlo, Monte
December 5, 2021 12:12 pm

I’d have thought it was always going to be a good idea to be as certain as possible. Why would you want to be less certain?

Carlo, Monte
Reply to  Bellman
December 5, 2021 12:24 pm

How in the world did you jump to this idea? I never said or implied this. Instead I’ve been trying to show you how temperature uncertainties used in climastrology are absurdly small, or just ignored completely.

Bellman
Reply to  Carlo, Monte
December 5, 2021 12:47 pm

I was being flippant with your question, “Why are you so desperate to make uncertainty as small as possible?”

Carlo, Monte
Reply to  Bellman
December 5, 2021 1:03 pm

And yet the fact remains, that temperature uncertainties used in climastrology are absurdly small, or just ignored completely. Subtracting baselines does NOT remove uncertainty.

Bellman
Reply to  Carlo, Monte
December 5, 2021 4:59 pm

But it isn’t a fact, just your assertion. You say they are small because you can’t believe they could be so small, and in contrast give what to me seem absurdly large uncertainties.

Then you again make statements like “Subtracting baselines does NOT remove uncertainty”, as if merely you saying it makes it so.

Carlo, Monte
Reply to  Bellman
December 5, 2021 5:59 pm

I’m finished trying to educate you lot, enjoy life on Mars.

Tim Gorman
Reply to  Bellman
December 6, 2021 3:34 am

Also note, that systematic error’s are at least as much of a problem if you are measuring the same thing, than if you are measuring different things”

So what? In one case you will still get clustering around a “true value” helping to limit random error impacts. In the other you won’t.

Bellman
Reply to  Tim Gorman
December 6, 2021 7:42 am

The so what, is it’s a good idea to measure something with different instruments using different eyes.

Jim Gorman
Reply to  Bellman
December 6, 2021 4:42 am

Look at the word you used — error. ERROR IS NOT UNCERTAINTY!

I can use a laser to get 10^-8 precision. Yet the uncertainty is still at least ±10^-9. A systematic ERROR will still give good precision, but it will not be ACCURATE!

Bellman
Reply to  Jim Gorman
December 6, 2021 6:12 am

Sorry if I’ve offended you again, but it was the Other Gorman who used the dreaded word.

“Uncertainty is not error. That is a truism. Primarily because uncertainty is made up of more than one factor. If I tell you that the uncertainty of a measurement is +/- 0.2 can you tell me how much of that uncertainty is made up of random error and how much is made up of other factors (e.g. hysteresis, drift, calibration, etc)?”

Rather than endlessly shout UNCERTAINTY IS NOT ERROR to a disinterested universe, it would be a lot more useful if you explained what you think uncertainty is. I’ve given you the GUM definition and you rejected it. I’ve tried, without success, to establish whether your definition has any realistic use. All you seem to want is for it to be a word that can mean anything you want. You can tell me the uncertainty in global temperatures is 1000°C, but any attempt to establish whether that means you realistically think global temperatures could be as much as 1000°C is just met with UNCERTAINTY IS NOT ERROR.

Tim Gorman
Reply to  Bellman
December 6, 2021 12:46 pm

 I’ve given you the GUM definition and you rejected that.”

Sorry, that’s just not so. If uncertainty was error then the GUM definition wouldn’t state that after error is analyzed there still remains an uncertainty about the stated result.

It is *you* that keeps on rejecting that.

“The concept of uncertainty as a quantifiable attribute is relatively new in the history of measurement, although error and error analysis have long been a part of the practice of measurement science or metrology. It is now widely recognized that, when all of the known or suspected components of error have been evaluated and the appropriate corrections have been applied, there still remains an uncertainty about the correctness of the stated result, that is, a doubt about how well the result of the measurement represents the value of the quantity being measured.”

This is the GUM definition. Please note carefully that it specifically states that uncertainty is not error. All the suspected components of error can be corrected or allowed for and you *still* will have uncertainty in the result of the measurement.

You can tell me the uncertainty in global temperatures is 1000°C, but any attempt to establish if that means you realistically think gloabal temperatures could be as much as 1000°C is just met with UNCERTAINTY IS NOT ERROR.”

All this means is that once the uncertainty in your result exceeds physical limits that you need to re-evaluate your model. Something is wrong with it! In fact, if you are trying to model the global temperature to determine a projected anomaly 100 years in the future, and the uncertainty in your projection exceeds the value of the current temperature then you need to stop and start over again. For that means your model is telling you something you can’t measure! Anyone can stand on the street corner with a sign saying the world will end tomorrow. Just how much uncertainty is there in such a claim? If the sign says it will be 1C hotter tomorrow just how much uncertainty is there in such a claim?

Carlo, Monte
Reply to  Tim Gorman
December 6, 2021 1:07 pm

All the suspected components of error can be corrected or allowed for and you *still* will have uncertainty in the result of the measurement.

And what the climatologists fail to recognize is that the correction factors themselves also have uncertainty that must be accounted for.

Tim Gorman
Reply to  Carlo, Monte
December 6, 2021 1:15 pm

Hadn’t thought of that! Just keep going down the rabbit hole!

Bellman
Reply to  Tim Gorman
December 6, 2021 3:29 pm

“ I’ve given you the GUM definition and you rejected that.”

“Sorry, that’s just not so.”

I was talking to Jim, reminding him he rejected the error free definition of measurement uncertainty, as well as the definitions based on error.

All this means is that once the uncertainty in your result exceeds physical limits that you need to re-evaluate your model.

Really? You can’t accept the possibility that what it tells you is that your uncertainty calculations are wrong?

Bellman
Reply to  Jim Gorman
December 6, 2021 6:16 am

I can use a laser to get 10^-8 precision. Yet the uncertainty still lies with at least +/-10^-9.

You keep confusing precision with resolution. If that’s not clear let me say RESOLUTION IS NOT PRECISION.

A systematic ERROR will still give good precision but it will not be ACCURATE!”

Yes, that’s why I’m saying it’s useful to measure something twice with different instruments, even if their resolution is too low to detect random errors.

Carlo, Monte
Reply to  Bellman
December 6, 2021 6:40 am

Oh yeah, you’re the world’s expert on all things metrology, everyone needs to listen up.

How many of those dozens of links that Jim has provided to you on a silver platter have you studied? Any?

No, you are just like Nitpick Nick Stokes, who picks at any little thing to attack anyone who threatens the global warming party line.

Go read his web site, he’ll tell you what you want to hear.

Bellman
Reply to  Carlo, Monte
December 6, 2021 7:33 am

Oh yeah, you’re the world’s expert on all things metrology, everyone needs to listen up.

I am absolutely not an expert on anything – especially metrology. If I appear to be it’s because I’m standing on the shoulders of midgets.

How many of those dozens of links that Jim has provided to you on a silver platter have you studied? Any?

Enough to know that the ones he posts directly contradict his argument. Honestly the difference between SD and SEM is well documented, well known and Jim is just wrong, weirdly wrong, about them.

No, you are just like Nitpick Nick Stokes, who picks at any little thing to attack anyone who threatens the global warming party line.

I’m flattered that you compare me with Stokes.

Carlo, Monte
Reply to  Bellman
December 6, 2021 7:40 am

The truth finally emerges from behind the greasy green smokescreen…

So Señor Experto, do tell why machine shops all don’t have a battalion of people in the backroom armed with micrometers to recheck each and every measurement 100-fold so that nirvana can be reached through averaging?

Bellman
Reply to  Carlo, Monte
December 6, 2021 10:43 am

Finally, a sensible question, though asked in a stupid way.

Why don’t machine shops all make hundreds of multiple readings to increase the precision of their measurements? I think Bevington has a section on this that sums it up well. But the two obvious reasons are:

1) it isn’t very efficient. Uncertainty decreases with the square root of the number of samples, so the more you do the less of a help each one is. Take four measurements and you might have halved the uncertainty, but to get to a tenth of the uncertainty, and hence that all-important extra digit, would require 100 measurements, and to get another digit is going to require 10,000 measurements. I can’t speak for how machine shops are organized, but I can’t imagine it’s worth employing that many people just to reduce uncertainty by a hundredth. If you need that extra precision it’s probably better to invest in better measuring devices.

2) as I keep having to remind you, the reduction in uncertainty is a theoretical result. Taking a million measurements won’t necessarily give you a result that is 1000 times better. The high precision is likely to be swamped out by any other small inaccuracies.
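The √N scaling in point (1) is easy to check numerically. A toy simulation (all numbers invented: a 10 mm part measured with 0.5 mm random error), showing the empirical spread of the mean of n readings:

```python
import random
import statistics

random.seed(42)
TRUE_VALUE = 10.0   # hypothetical true length, mm
SIGMA = 0.5         # per-reading random error, mm

def spread_of_mean(n, trials=20000):
    """Empirical standard deviation of the mean of n noisy readings."""
    means = [
        statistics.fmean(random.gauss(TRUE_VALUE, SIGMA) for _ in range(n))
        for _ in range(trials)
    ]
    return statistics.stdev(means)

for n in (1, 4, 100):
    print(n, round(spread_of_mean(n), 3))   # ≈ 0.5, 0.25, 0.05
```

Four readings roughly halve the spread; getting the "extra digit" (0.05) really does take 100 readings, as the comment says. This is only the theoretical random-error part; as point (2) notes, systematic effects are untouched by the averaging.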

Jim Gorman
Reply to  Bellman
December 6, 2021 12:46 pm

Efficiency isn’t the point; you are running around the bush trying not to answer the question. The question is whether it can be done with more measurements.

Carlo, Monte
Reply to  Jim Gorman
December 6, 2021 1:10 pm

And he ran away from the inconvenient little fact that no machine shop has a backroom filled with people who do nothing but repeat others’ measurements, not 100, not 20, not 5, not 1.

Bellman
Reply to  Jim Gorman
December 6, 2021 2:21 pm

Efficiency isn’t the point

Really? Wasn’t the question:

do tell why machine shops all don’t have a battalion of people in the backroom armed with micrometers to recheck each and every measurement 100-fold so that nirvana can be reached through averaging?

I’m really not convinced by all these silly hypothetical questions, all of which seem to be distracting from the central question, which is: does uncertainty, in general, increase or decrease with sample size?

Jim Gorman
Reply to  Bellman
December 6, 2021 12:14 pm

Honestly the difference between SD and SEM is well documented, well known and Jim is just wrong, weirdly wrong, about them.

If I am so wrong why don’t you refute the references I have made and the inferences I have taken from them. Here is the first one to refute.

SEM = SD / √N, where:

SEM is the standard deviation of the sample means distribution

SD is the standard deviation of the population being sampled

N is the sample size taken from the population

What this means is you need to decide what you have in a temperature database. Do you have a group of samples or do you have a population of temperatures?

This is a simple decision to make: which is it?

Bellman
Reply to  Jim Gorman
December 6, 2021 12:41 pm

I don’t need to refute the references because they agree with me.

“SEM = SD / √N, where:
SEM is the standard deviation of the sample means distribution
SD is the standard deviation of the population being sampled
N is the sample size taken from the population”

See, that’s what I’m saying and not what you are saying. You take the standard deviation of the sample, which is an estimate of the SD of the population, and then think that is the standard error of the mean. Then you multiply the sample standard deviation by √N in the mistaken belief that this will give you SD.

In reality you divide the sample standard deviation by √N to get the SEM. This is on the assumption that the sample standard deviation is an estimate of SD.

Here’s a little thought experiment to see why this doesn’t work. You took a sample of size 5, took its standard deviation and multiplied by √5 to get a population standard deviation more than twice as big as the sample standard deviation. But what if you’d taken a sample of size 100, or 1000, or whatever? Using your logic you would multiply the sample deviation by √100 or √1000. This would make the population deviation larger the bigger your sample size. But the population standard deviation is fixed; it shouldn’t change depending on what size sample you take. Do you see the problem?
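The thought experiment can be run directly. A sketch with a made-up population (SD ≈ 10): the sample standard deviation stays near the population SD at any sample size, while multiplying it by √N manufactures a "population SD" that grows without bound:

```python
import random
import statistics

random.seed(0)
# Synthetic population with SD ≈ 10 (invented numbers)
population = [random.gauss(50, 10) for _ in range(100_000)]
pop_sd = statistics.pstdev(population)   # fixed property of the population, ≈ 10

for n in (5, 100, 1000):
    sample = random.sample(population, n)
    s = statistics.stdev(sample)         # estimates pop_sd at every sample size
    print(n, round(s, 1), round(s * n ** 0.5, 1))   # s ≈ 10; s·√n keeps growing
```

Whatever n is, `s` hovers around 10; `s * √n` climbs toward infinity as n grows, even though the population it supposedly describes never changed.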

Bellman
Reply to  Bellman
December 6, 2021 12:45 pm

What this means is you need to decide what you have in a temperature database. Do you have a group of samples or do you have a population of temperatures?

This is a simple decision to make: which is it?

Of course you don’t have the population of temperatures. The population is all temperatures across the planet over a specific time period. It’s a continuous function and hence infinite. You are sampling the population in order to estimate what the population mean is.

Tim Gorman
Reply to  Bellman
December 6, 2021 1:13 pm

They use temperatures from one location to infill temps at other locations. So what components of the population don’t you have? Are you saying you need a grid size of 0km,0km in order to have a true population?

Sampling only works if you have a homogeneous population to sample. Are the temps in the northern hemisphere the same as the temps in the southern hemisphere? How does this work with anomalies?

Bellman
Reply to  Tim Gorman
December 6, 2021 1:48 pm

Well yes, that’s what sampling is. Taking some elements as an estimate of what the population is. Of course, as I’ve said before the global temperature is not a random sample, and you do have to do things like infilling and weighting which is why estimates of uncertainty are complicated.

Carlo, Monte
Reply to  Bellman
December 6, 2021 1:15 pm

Of course you don;t have the population of temperatures. The population is all temperatures across the planet over a specific time period.

This is just more jive-dancing bullsh!t, you are not sampling the same quantity. You get one chance and it is gone forever.

Tim Gorman
Reply to  Bellman
December 6, 2021 1:07 pm

In reality you divide the sample standard deviation by √N to get the SEM. This is on the assumption that the sample standard deviation is an estimate of SD.”

If you already have the SD of the population then why are you trying to calculate the SEM? The SEM is used to calculate the SD of the population! If you already have the SD of the population then the SEM is useless!

You have to know the mean to calculate the SD of the population and you have to know the size of the population to calculate the mean. That implies you know the mean exactly: it is Σx/n, where the x are all the data values and n is the number of data values. Thus you know the mean exactly. And if you know the mean exactly, and all the data values along with the number of data values, then you know the SD exactly.

The SEM *should* be zero. You can’t have a standard deviation with only one value – i.e. the mean which you have already calculated EXACTLY!

This is why you keep getting asked whether your data set is a sample or if it is a population!

Carlo, Monte
Reply to  Bellman
December 6, 2021 1:11 pm

I don’t need to refute the references because they agree with me.

DON’T confuse me with facts, my mind is MADE UP!

Bellman
Reply to  Carlo, Monte
December 6, 2021 1:53 pm

What facts? Someone makes a claim that the sample standard deviation is the standard error of the mean. Something which anyone with an elementary knowledge of statistics knows to be wrong and can easily be shown to be wrong. But I’m always prepared to be proven wrong, and that someone has given me an impressive list of quotes, from various sources, except that none of the quotes says that the sample standard deviation is the same as the standard error of the mean, and most say the exact opposite.

Why do I need to read the full contents of all the supplied documents? If you are making an extraordinary claim, don’t throw random quotes at me – show me something that supports your claim.

Jim Gorman
Reply to  Bellman
December 6, 2021 6:56 pm

“The standard error of the sample mean depends on both the standard deviation and the sample size, by the simple relation SE = SD/√(sample size). The standard error falls as the sample size increases, as the extent of chance variation is reduced”
From:
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1255808/

“However, the meaning of SEM includes statistical inference based on the sampling distribution. SEM is the SD of the theoretical distribution of the sample means (the sampling distribution).”
From:
https://www.investopedia.com/ask/answers/042415/what-difference-between-standard-error-means-and-standard-deviation.asp

Here is an image from:
https://explorable.com/standard-error-of-the-mean

[attached image: Polish_20211206_205049580.png]
Bellman
Reply to  Jim Gorman
December 6, 2021 8:06 pm

Yes all three posts are saying exactly what I’m saying. Why do you think I’m wrong?

Jim Gorman
Reply to  Bellman
December 7, 2021 11:56 am

“Someone makes a claim that the sample standard deviation is the standard error of the mean. Something which anyone with an elementary knowledge of statistics knows to be wrong and can easily be shown to be wrong.”

You say in a prior post that the “sample standard deviation IS NOT the standard error of the mean”. Did you read what I posted?

“However, the meaning of SEM includes statistical inference based on the sampling distribution. SEM is the SD of the theoretical distribution of the sample means (the sampling distribution).”

Let me paraphrase, the Standard Error of the (sample) Mean, i.e., the SEM, is the Standard Deviation of the sample means distribution.

Look at the image.

Note 2: The expression (s(qk)/√n) is an estimate of the standard deviation of the distribution of (qbar) and is called the experimental standard deviation of the mean.

If (qbar) has a distribution then there must be multiple samples, each with their own (qbar). By subtracting (qbar) from each value you are in essence isolating the error component.

Lastly, this is dealing with one and only one measurand. What the GUM is trying to do here is find the interval within which the true value may lie. This is important because it acknowledges that random errors quite probably won’t be removed by doing only a few measurements. If they were, the experimental standard deviation of the mean would be zero, thereby indicating that there is no error left. The distribution of (qbar) would be exactly normal.

One should keep in mind that this is only dealing with measurements and error and in no way assesses the uncertainty of each measurement.
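The GUM note being discussed (the experimental standard deviation of the mean for repeated observations of a single measurand) amounts to a three-line computation; the readings below are invented for illustration:

```python
import statistics

# Repeated observations q_k of one measurand (invented readings, mm)
q = [10.02, 9.98, 10.05, 9.97, 10.01, 10.03]

n = len(q)
q_bar = statistics.fmean(q)      # arithmetic mean: best estimate of the measurand
s_qk = statistics.stdev(q)       # experimental standard deviation s(q_k)
s_qbar = s_qk / n ** 0.5         # experimental standard deviation of the mean s(q_bar)

print(round(q_bar, 3), round(s_qk, 3), round(s_qbar, 3))   # 10.01 0.03 0.012
```

As the comment notes, this only characterizes the scatter of these repeated measurements of one thing; it says nothing about the uncertainty attached to each individual reading.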

[attached image: GUM experimental standard deviation.jpg]
Bellman
Reply to  Jim Gorman
December 8, 2021 6:05 am

You say in a prior post that the “sample standard deviation IS NOT the standard error of the mean”. Did you read what I posted?

Yes based on this comment, where you multiplied the standard deviation of a set of 5 numbers by √5 to calculate the standard deviation of the population.

The sample standard deviation is not the same thing as the standard error of the mean.

Let me paraphrase, the Standard Error of the (sample) Mean, i.e., the SEM, is the Standard Deviation of the sample means distribution.

Correct, but the standard deviation of the sample means is not the same as the standard deviation of the sample.

If (qbar) has a distribution then there must be multiple samples, each with their own (qbar).

No, there do not have to be literal samples. The distribution exists as an abstract idea. If you took an infinite number of samples of a fixed size there would be the required distribution, but you don’t need to physically take more than one sample to know that the distribution would exist, and you can estimate it from your one sample.

It would in any event be a pointless exercise because if you have a large number of separate samples, the mean of their means would be much closer to the true mean. There would be no point in working out how uncertain each sample mean was when you’ve now got a better estimate of the mean.
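Both halves of this claim can be illustrated with a toy population (made-up numbers, mean 20, SD 5): the SEM estimated from a single sample sits close to the standard deviation of many actual sample means:

```python
import random
import statistics

random.seed(1)
# Synthetic population; true SEM for n = 30 is 5/√30 ≈ 0.91
population = [random.gauss(20, 5) for _ in range(50_000)]
n = 30

# One sample is enough to *estimate* the SEM: s/√n
one_sample = random.sample(population, n)
sem_estimate = statistics.stdev(one_sample) / n ** 0.5

# The "abstract" sampling distribution made concrete by brute force:
# the standard deviation of many actual sample means
means = [statistics.fmean(random.sample(population, n)) for _ in range(5000)]
sem_empirical = statistics.stdev(means)

print(round(sem_estimate, 2), round(sem_empirical, 2))   # both ≈ 0.9
```

The 5000-sample exercise is never needed in practice; the one-sample estimate lands in the same place, which is the whole point of the SEM formula.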

Carlo, Monte
Reply to  Bellman
December 8, 2021 6:31 am

This nonsense constitutes “debunking” in your world?

Tim Gorman
Reply to  Bellman
December 8, 2021 9:55 am

jg – “If (qbar) has a distribution then there must be multiple samples, each with their own (qbar).”

bell – No, there do not have to be literal samples. 

You *HAVE* to be trolling, right? How do you get a DISTRIBUTION without multiple data points?

The distribution exists as an abstract idea. If you took an infinite number of samples of a fixed size there would be the required distribution, but you don’t need to physically take more than one sample to know that the distribution would exist, and you can estimate it from your one sample.”

Total and utter malarkey! Again, with one data point how do you define a distribution, be it literal or virtual?

Multiple samples allow you to measure how good your estimate is. A single sample does not! There is no guarantee that one sample consisting of randomly chosen points will accurately represent the population. And with just one sample you have no way to judge how representative the sample mean and standard deviation is of the total population.

Look at what *YOU* said: “The sample standard deviation is not the same thing as the standard error of the mean.”

When you have only one sample you are, in essence, saying the sample standard deviation *is* the standard deviation of the sample means. You can only have it one way. Choose one or the other.

It would in any event be a pointless exercise because if you have a large number of separate samples, the mean of their means would be much closer to the true mean.”

“True mean”? You *still* don’t get it, do you?

There would be no point in working out how uncertain each sample mean was, when you’ve now got a better estimate of the mean.”

And, once again, you advocate for ignoring the uncertainty of the data points. If your samples consist only of stated values and you ignore their uncertainty then you have assumed the stated values are 100% accurate.

If your data set consists of the total population, with each data point consisting of “stated value +/- uncertainty”, then are you claiming that the mean of that total population has no uncertainty? That each stated value is 100% accurate? If so then why even include uncertainty with the data values?

If the mean of the total population has an uncertainty propagated from the individual components then why don’t samples from that population have an uncertainty propagated from the individual components making up the sample? How can the population mean have an uncertainty while the sample means don’t?

Bellman
Reply to  Tim Gorman
December 8, 2021 10:16 am

You *HAVE* to be trolling, right? How do you get a DISTRIBUTION without multiple data points?

Total and utter malarky! Again, with one data point how do you define a distribution, be it literal or virtual?

So you didn’t read any of the links I gave you?

Tim Gorman
Reply to  Bellman
December 9, 2021 6:15 pm

I did read your links. And I told you what the problems with them were. And you *still* haven’t answered the question. How do you get a distribution without multiple data points?

Bellman
Reply to  Tim Gorman
December 9, 2021 7:05 pm

Fine, disagree with every textbook on the subject, because they don’t understand that it’s impossible to work out the SEM from just one sample. Just don’t expect me to follow through your tortured logic.

And you *still* haven’t answered the question. How do you get a distribution without multiple data points?

The distribution exists whether or not you have sampled it. It’s what would happen if, and I repeat for the hard of understanding, if, you took an infinite number of samples of a specific size. You don’t actually need to take an infinite number of samples to know it exists – it exists as a mathematical concept.

Carlo, Monte
Reply to  Tim Gorman
December 8, 2021 10:25 am

He’s still pushing this “standard error of the mean(s)”, asserting this is the “uncertainty” of a temperature average.

He will never let go of this.

Bellman
Reply to  Tim Gorman
December 8, 2021 11:07 am

When you have only one sample you are, in essence, saying the sample standard deviation *is* the standard deviation of the sample means. You can only have one way. Choose one or the other.

You are really getting these terms confused. The sample standard deviation is absolutely, positively, not the standard deviation of the sample means. One is the deviation of all elements in the sample, the other is the deviation expected from all sample means of that sample size.

This is why I prefer to call it the standard error of the mean, rather than the standard deviation of the mean (whatever the GUM says), simply because it avoids the confusion over which particular deviation we are talking about.

Tim Gorman
Reply to  Jim Gorman
December 8, 2021 8:23 am

“This is important because it acknowledges that random errors quite probably won’t be removed by doing only a few measurements.”

Bingo!

Jim Gorman
Reply to  Bellman
December 6, 2021 11:16 am

“You keep confusing precision with resolution. If that’s not clear let me say RESOLUTION IS NOT PRECISION.”

OMG! No wonder you have a difficult time. Resolution, precision, and repeatability are intertwined. Resolution lets you make more and more precise measurements, i.e., precision. Higher precision allows better repeatability. Why do you think people spend more and more money on devices with higher resolution if they don’t give better precision?

Bellman
Reply to  Jim Gorman
December 6, 2021 3:40 pm

Intertwined, not the same thing. You know like error and uncertainty are intertwined.

Jim Gorman
Reply to  Bellman
December 6, 2021 7:38 pm

OMG. You still refuse to learn that error and uncertainty are two separate things. The only thing the same is that the units of measure are the same.

Bellman
Reply to  Jim Gorman
December 7, 2021 5:16 am

Explain how there can be uncertainty without error.

Even Tim points out that uncertainty is made up from random error and systematic error. Just because you can have a definition of uncertainty in terms that don’t use the word error doesn’t mean that error isn’t the cause of uncertainty.

Carlo, Monte
Reply to  Bellman
December 7, 2021 8:10 am

Explain how there can be uncertainty without error.

There is no possible explanation that you might accept, so why bother?

Bellman
Reply to  Carlo, Monte
December 7, 2021 9:08 am

Don’t whine.

Tim Gorman
Reply to  Bellman
December 7, 2021 9:06 am

From the GUM: “0.2 The concept of uncertainty as a quantifiable attribute is relatively new in the history of measurement, although error and error analysis have long been a part of the practice of measurement science or metrology. It is now widely recognized that, when all of the known or suspected components of error have been evaluated and the appropriate corrections have been applied, there still remains an uncertainty about the correctness of the stated result, that is, a doubt about how well the result of the measurement represents the value of the quantity being measured.”

Knowing error exists doesn’t mean you know what it is, how large it is, or how it affects the measurement.

Uncertainty is *NOT* error.

Why do you keep ignoring what you are being told? Go away troll.

Bellman
Reply to  Tim Gorman
December 7, 2021 9:34 am

Which is not saying that uncertainty is not caused by error, it’s saying there will always be other reasons for uncertainty as well as error.

Jim Gorman
Reply to  Bellman
December 8, 2021 6:47 am

I didn’t say that standard deviation isn’t made up of several different things. But error and uncertainty are not directly related. For example, if there were no error, i.e., the errors canceled out because they were in a normal distribution, you would still have uncertainty. That is where resolution comes in. There is always a digit beyond the one you can measure with precision.

Bellman
Reply to  Jim Gorman
December 8, 2021 9:29 am

But isn’t that just another error? Depending on your resolution and what you are measuring it might be random or systematic, but it’s still error – a difference between your measurement and what you are measuring.

Carlo, Monte
Reply to  Bellman
December 8, 2021 9:40 am

But isn’t that just another error?

NO! It is uncertainty, the limit of what can be known.

Why is this so hard?

You and bwx are now the world’s experts on uncertainty, but still can’t find the barn.

Bellman
Reply to  Carlo, Monte
December 8, 2021 10:52 am

What do you think error means?

Carlo, Monte
Reply to  Bellman
December 8, 2021 11:03 am

Why do you insist on treating it as error?

Bellman
Reply to  Carlo, Monte
December 8, 2021 1:53 pm

Why do you answer with a question?

Tim Gorman
Reply to  Bellman
December 9, 2021 5:54 pm

As C,M points out, limited resolution is UNCERTAINTY, not error.

Repeat 1000 times: “UNCERTAINTY IS NOT ERROR.”

Tim Gorman
Reply to  Bellman
December 6, 2021 12:51 pm

As I’ve pointed out to you already, using different instruments won’t help if the uncertainty in each is higher than what you are trying to measure. The only answer is to calibrate one of them and then use it to measure. If you use two instruments you have 2 chances out of three that both will be either high or low and only 1 chance out of three that one will be high and the other low thus leading to a cancellation of error. Would you bet your rent money on a horse with only a 30% chance to win?

Bellman
Reply to  Tim Gorman
December 6, 2021 3:22 pm

Again, nobody specified in this question that there was a machine that guaranteed there would be zero uncertainty in the first measurement. If such a thing were possible and you could also rule out human error, then no, why would you ever need to discuss uncertainty? Everything would be perfect, and getting a second opinion from Mike, who also has a zero-uncertainty device, would not help, though it wouldn’t hurt either.

And again, you need to brush up on your probability if you think there is a one in three chance of the two cancelling out.

But, yet again, all this talk about the specifics of a workshop is just distraction. I’m not trying to set up a time and motion study, just answering the question: would having a second measurement improve the uncertainty?
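On the probability side-point: for two independent instruments with symmetric error distributions, the four sign combinations of the two errors (++, +−, −+, −−) are equally likely, so opposite signs occur about half the time, not one time in three. A quick check, assuming unit-normal errors for both instruments:

```python
import random

random.seed(7)
TRIALS = 100_000

# Draw two independent symmetric errors per trial; count how often their signs differ
opposite = sum(
    (random.gauss(0, 1) > 0) != (random.gauss(0, 1) > 0)
    for _ in range(TRIALS)
)
print(opposite / TRIALS)   # ≈ 0.5
```

The same 50/50 result holds for any error distribution that is symmetric about zero, since each error is independently positive or negative with probability one half.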

Carlo, Monte
Reply to  Bellman
December 5, 2021 12:03 pm

When you are stating uncertainty you should explain how it was established.

As I’ve tried to tell you before, this is what a formal uncertainty analysis does. Try applying for accreditation as a calibration lab and you’ll learn PDQ what I’m talking about.

Bellman
Reply to  Carlo, Monte
December 5, 2021 12:16 pm

Are you disagreeing with me or Tim here? He was implying you couldn’t know how much of the uncertainty was due to random error.

Carlo, Monte
Reply to  Bellman
December 5, 2021 12:19 pm

Did you actually read what I wrote? Apparently not – a formal uncertainty analysis is how you “explain how it was established”…

Tim has been trying to help you understand that which you do not understand.

Tim Gorman
Reply to  Bellman
December 6, 2021 3:27 am

“If you compare this with an independent measurement, say radiosondes or surface data, there would be the occasional discrepancy of at least one degree.”

In this case, you are measuring the same thing.

Really?

Which is why it’s a good thing you are using different instruments.”

Just how does that help you get to a more accurate answer by averaging their readings? You must *still* propagate the uncertainty – meaning the uncertainty will grow. It will not “average out”.

Why not.”

We’ve been over this multiple times. It’s because they do not represent a cluster around a true value. The measurements of the same thing are related by the thing being measured. Each one gives you an expectation of what the next measurement will be. Measurements of different things are not related by the things being measured. The current measurement does not give you any expectation of what the next measurement will be. Measurements of the same thing give you a cluster around the true value. Measurements of different things do not give you a cluster around a true value, the measurements may give you a mean but it is not a true value.

Bellman
Reply to  Tim Gorman
December 6, 2021 7:09 am

Yes, really. The scenario was “This is like measuring the run out on a shaft with your own caliper and then asking Mike to come over and use his.”

You must *still* propagate the uncertainty – meaning the uncertainty will grow. It will not “average out”.

Obviously nothing I can say will convince you that you are wrong on this point, not even quoting the many sources you point me to. You have a mind that is incapable of being changed, which is a problem in your case because the ideas you do have are generally wrong.

But I’m really puzzled why you cannot see the issue in this simple case. You’ve measured something, you have a measurement, you know there’s uncertainty in that measurement. That uncertainty means your measurement may not be correct. It may not be correct due to any number of factors, including random errors in the instrument, defects in the instrument, mistakes made by yourself or any number of other reasons. Why on earth would you consider it a bad idea to get someone else to double check your measurements? The second measurement will also have uncertainty, with all the same causes, but now you have a bit more confidence in your result, because you’ve either got two nearly identical results, or you have two different results. You either have more confidence that the first result was correct, or confidence that you have identified an error in at least one reading. Why would you prefer just to assume your result is correct and refuse to have an independent check? Remember how you keep quoting Feynman at me – you’re the easiest person to fool.

Whether you would actually just use an average of two as the final measurement, I couldn’t tell you. It’s going to depend on why you want the measurement in the first place, and how different the results are. But, on the balance of probabilities, and ignoring any priors, you can say that the average is your “best estimate” of the true value.

But what I still can’t fathom is how you think having two readings actually increases the uncertainty. Maybe you are not using uncertainty in the sense of measurement uncertainty: “I was certain I had the right value, but then someone got a different result and now I’m less certain.”

Carlo, Monte
Reply to  Bellman
December 6, 2021 7:14 am

“Unskilled, and Unaware”

Bellman
Reply to  Bellman
December 6, 2021 7:15 am

We’ve been over this multiple times. It’s because they do not represent a cluster around a true value.

And you still haven’t figured out that the mean of a population is a true value. And the individual members are dispersed around that true value.

Carlo, Monte
Reply to  Bellman
December 6, 2021 7:42 am

Why do you need to defend the shoddy and dishonest numbers associated with climastrology?

Hey! Kip Hansen just posted an article about SLR uncertainty, you better head over there and nitpick him, show him what’s what.

Tim Gorman
Reply to  Bellman
December 6, 2021 3:33 am

“Of course they do. The true value is the mean, each stated value is a distance from the mean.”

Really? You *still* think the mean value of the measurements of different things gives you a true value? The mean of the measurements of a 2′ board and an 8′ board will give you a “true value”?

You truly are just a troll, aren’t you?

You don’t know the error, you do know the uncertainty.”

You don’t seem to understand what you are writing.

“When you are stating uncertainty you should explain how it was established.”

The uncertainty interval can be established in many ways. Resolution limits, instrument limits, hysteresis impacts, response time, etc. Why don’t climate scientists explain their uncertainty intervals in detail – if they even mention them at all?

Carlo, Monte
Reply to  Tim Gorman
December 6, 2021 4:18 am

Before they showed up here and were told that uncertainty used by climatastrology was ignored or absurdly small, they had no idea the word even existed. And now they are the experts, believing that averaging reduces something they don’t understand. The NIST web site tells them what they want to hear.

Bellman
Reply to  Tim Gorman
December 6, 2021 7:41 am

Really? You *still* think the mean value of the measurements of different things gives you a true value?

Yes I do. Do you still not think they are?

The mean of the measurements of a 2′ board and an 8′ board will give you a “true value”?

Yes, they will give you the true value of the mean of those two boards. I suspect your problem is in assuming a true value has to represent a physical thing.

You truly are just a troll, aren’t you?

No.

You don’t seem to understand what you are writing.”

What bit of “You don’t know the error, you do know the uncertainty” do you disagree with? You don’t know what the error is because you don’t know the true value; you do know the uncertainty, or at least have a good estimate of it, or else there would be no point in all these books explaining how to analyze uncertainty.

Carlo, Monte
Reply to  Bellman
December 6, 2021 7:47 am

What bit of “You don’t know the error, you do know the uncertainty” do you disagree with? You don’t know what the error is because you don’t know the true value; you do know the uncertainty, or at least have a good estimate of it, or else there would be no point in all these books explaining how to analyze uncertainty.

This is some fine technobabble word salad here.

Jim Gorman
Reply to  Bellman
December 6, 2021 10:25 am

“Yes, they will give you the true value of the mean of those two boards. I suspect your problem is in assuming a true value has to represent a physical thing.”

STOP. You have reached the point where you are spouting gibberish.

Bellman
Reply to  Jim Gorman
December 6, 2021 1:36 pm

You should know by now that telling me to STOP, and saying I’m talking nonsense without telling me what you disagree with, isn’t going to make me stop. Do you, or do you not, think that an average of two boards is a true value – and if not, why not?

Carlo, Monte
Reply to  Bellman
December 6, 2021 1:51 pm

It’s a leprechaun value, duh.

Captain climate
Reply to  Bellman
December 3, 2021 7:41 am

That’s roughly what the 2 sigma global average temperature uncertainty for surface records is. So what’s your point???

Carlo, Monte
Reply to  Captain climate
December 3, 2021 8:27 am

My back-of-the-envelope estimate was quite generous, the reality could be much greater.

bdgwx
Reply to  Captain climate
December 3, 2021 8:53 am

Surface records are on the order of ±0.05 to 0.10 C (2σ) after WWII for monthly global mean temperatures. See Rohde et al. 2013 and Lenssen et al. 2019 for details. BEST and GISTEMP both publish their uncertainties for annual averages as well.

Carlo, Monte
Reply to  bdgwx
December 3, 2021 9:21 am

Bull-pucky.

Captain climate
Reply to  bdgwx
December 3, 2021 9:25 am

We’ve been over this bullshit. You can’t reduce the uncertainty in a global average temperature to ±0.05 with thermometers that have a representative lower limit of uncertainty of ±0.46 C. The fact dipshits got published alleging so shows the science is a joke.

bdgwx
Reply to  Captain climate
December 3, 2021 10:46 am

That 0.46 C figure comes from Frank 2010. It’s actually not the thermometer uncertainty. Instead it is the result of Frank’s propagation of uncertainty from the Folland 2001 and Hubbard 2002 thermometer uncertainties.

Folland 2001 is σ = 0.2, which is plugged into (1a).

Hubbard 2002 is σ = 0.25, though Frank computes the Gaussian distribution based on Hubbard’s research as 0.254, which is plugged into (2a).

Note that N is a large number in the calculations below.

(1a) sqrt(N * 0.2^2 / (N-1)) = 0.200

(1b) sqrt(0.200^2 + 0.200^2) = 0.283

(2a) sqrt(N * 0.254^2 / (N-1)) = 0.254

(2b) sqrt(0.254^2 + 0.254^2) = 0.359

(3) sqrt(0.283^2 + 0.359^2) = 0.46

Jump down to Bellman’s post below for links to commentary on why Frank’s analysis cannot be correct.
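For anyone who wants to check the arithmetic, the propagation steps (1a) through (3) above can be reproduced in a few lines (a sketch only; the σ values are the Folland/Hubbard figures quoted above, and the sqrt(N/(N−1)) factor is taken at its large-N limit of 1):

```python
import math

# Folland 2001 and Hubbard 2002 standard uncertainties, as quoted above
sigma_folland = 0.200
sigma_hubbard = 0.254

# (1a)/(2a): sqrt(N * sigma^2 / (N-1)) -> sigma for large N, so both pass through
# (1b)/(2b): each value combined with itself in quadrature
u1 = math.sqrt(sigma_folland**2 + sigma_folland**2)  # 0.283
u2 = math.sqrt(sigma_hubbard**2 + sigma_hubbard**2)  # 0.359
# (3): the two intermediate results combined in quadrature
u_total = math.sqrt(u1**2 + u2**2)                   # 0.46

print(round(u1, 3), round(u2, 3), round(u_total, 2))
```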

It is also important to mention that I tested Frank’s σ = 0.46 hypothesis by comparing HadCRUT, BEST, ERA, and GISTEMP to each other to see if the differences were consistent with an uncertainty that high. What I found was that the differences formed a normal distribution with σ = 0.053, implying an individual uncertainty of σ = 0.037. Out of the 3084 comparisons not a single one came even close to approaching 0.65, which should have been exceeded 32% of the time based on an individual uncertainty of 0.46. Pat’s hypothesis is inconsistent with the data by a very wide margin.
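The logic of that test can be sketched with a quick simulation (a hypothetical illustration with synthetic numbers, not the actual dataset comparison): if two independent series each carried an uncertainty of σ = 0.46, their differences should spread with σ ≈ 0.65 and exceed 0.65 roughly 32% of the time.

```python
import math
import random

random.seed(0)
sigma = 0.46  # hypothesized per-dataset uncertainty (the Frank 2010 figure)
n = 100_000

# Differences between two series whose errors are independent with sd = sigma
diffs = [random.gauss(0, sigma) - random.gauss(0, sigma) for _ in range(n)]

sigma_d = math.sqrt(sum(d * d for d in diffs) / n)            # ~ sqrt(2)*0.46 = 0.65
frac = sum(abs(d) > math.sqrt(2) * sigma for d in diffs) / n  # ~ 0.32, the 1-sigma tail

print(round(sigma_d, 2), round(frac, 2))
```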

Carlo, Monte
Reply to  bdgwx
December 3, 2021 12:21 pm

What you are doing is NOT testing uncertainty!

Gah!

Captain climate
Reply to  Carlo, Monte
December 3, 2021 1:19 pm

This has been explained to him ad nauseam. He’s incapable of understanding the difference between the standard deviation of a sample mean and the uncertainty of an average statistic, which propagates from the underlying measurements and which isn’t reduced with N.

bdgwx
Reply to  Captain climate
December 3, 2021 2:09 pm

You missed a lot of the conversation last month. Bellman and I showed that the GUM equation (10) and the NIST uncertainty calculator both confirm that the uncertainty of the mean is reduced to 1/√N of the uncertainty of the individual measurements that went into the mean.
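For reference, the 1/√N result being claimed here follows from propagating equal, independent input uncertainties through y = (x1 + … + xN)/N, where every sensitivity coefficient is 1/N (a minimal sketch of GUM equation (10) under those assumptions, with an illustrative u = 0.5):

```python
import math

def mean_uncertainty(u, N):
    # GUM eq. (10) with y = (x1 + ... + xN)/N and independent inputs:
    # u_c(y)^2 = sum over i of (dy/dxi)^2 * u^2, with dy/dxi = 1/N
    return math.sqrt(sum((1.0 / N) ** 2 * u ** 2 for _ in range(N)))

u = 0.5  # per-measurement standard uncertainty (made-up for illustration)
for N in (1, 4, 100):
    print(N, round(mean_uncertainty(u, N), 3))  # equals u / sqrt(N): 0.5, 0.25, 0.05
```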

Carlo, Monte
Reply to  bdgwx
December 3, 2021 2:39 pm

And as you’ve been told again and again, the GUM is not the end-all-be-all of uncertainty, it is just a guide.

There are many ways of combining variances that are not documented in the GUM.

But do go ahead and continue pounding your root-N square peg into the same round hole.

bdgwx
Reply to  Carlo, Monte
December 3, 2021 6:03 pm

You’re the one who introduced me to the GUM specifically concerning the uncertainty of the mean, and you’re the one who said “Without a formal uncertainty analysis that adheres to the language and methods in the GUM, the numbers are useless.” So if I don’t use the GUM, anything I present here is useless, but when I do use the GUM you dismiss it. And when I use the definition of “uncertainty” as stated in the GUM, like “a measure of the possible error in the estimated value of the measurand as provided by the result of a measurement”, your response is some variation of “uncertainty is not error”. And when I express uncertainty using a standard deviation, which is completely consistent with the language and methods in the GUM, you get bent out of shape. So which is it? Do you accept the GUM, including the language and methods contained within?

Carlo, Monte
Reply to  bdgwx
December 4, 2021 6:35 am

Just for the record, I wrote this without first doing the exercise myself. If I had known that blindly applying the GUM partial differentiation summation would lead to the root-N division that you and bellcurveman are so enamoured with, I would have done something else.

I then used reductio ad absurdum to show that if one simply increases the temperature sampling rate, the uncertainty using this approach becomes vanishingly small. This went straight over the heads of you and bellcurveman.

Neither of you could come up with a rational explanation of what the standard deviation of a month of temperature measurements from a single location means, yet the lot of you are self-proclaimed experts on statistics.

And BTW, keeping records, files, and quotes from months past of what other posters have written just shows the depth of your obsession.

Bellman
Reply to  Carlo, Monte
December 4, 2021 2:30 pm

If I had known that blindly applying the GUM partial differentiation summation would lead to the root-N division that you and bellcurveman are so enamoured with, I would have done something else.

That’s quite revealing.

Carlo, Monte
Reply to  Bellman
December 4, 2021 2:57 pm

Of what exactly?

Bellman
Reply to  Carlo, Monte
December 4, 2021 3:18 pm

I thought it was obvious. Of a mindset that refers to something as the guiding authority on a subject, expects everyone else to follow the rules set out by that guide, but then finds it doesn’t say what they want to believe, so immediately rejects it.

By all means claim you know more about uncertainty than the GUM, but don’t expect anyone else to accept what you say without evidence.

Carlo, Monte
Reply to  Bellman
December 4, 2021 6:19 pm

By all means claim you know more about uncertainty than the GUM

Yet another dishonest, deceptive assessment.

I never “rejected” the GUM. It was you who was claiming up, down, and sideways that the uncertainty of any average is sigma/root-N, and I called you out on this bullsh!t.

Backpedal away from this all you like, but this is what you were saying when I challenged the absurdly small “error bars” you attach to your Holy Trend charts that supposedly debunked the pause.

Later I demonstrated conclusively that the answer according to the GUM is in fact something entirely different, i.e. RSS[u_i(T)]/root-N. You ignorantly pooh-poohed this, obviously completely misunderstanding the difference between the two.

When I tried to explain that the GUM is not the end-all-be-all, and that there are many other ways to combine variances, this all went over your head and you ignored it.

That you think these tiny uncertainties are physically realisable is glaringly apparent evidence of your lack of any understanding of the realities of metrology.

No doubt you’ll post yet another deceptive backpedal.

Bellman
Reply to  Carlo, Monte
December 4, 2021 6:56 pm

It was you who was claiming up, down, and sideways that the uncertainty of any average is sigma/root-N

Show me where I said that applied to any average, and I’ll apologize for misleading you.

but this is what you were saying when I challenged the absurdly small “error bars” you attach to your Holy Trend charts that supposedly debunked the pause

I don’t think I’ve made any personal claims about the uncertainty of “trend charts”.

When I tried to explain that the GUM is not the end-all-be-all, and that there are many other ways to combine variances, this all went over your head and you ignored it.

No. I asked you to provide a reference for these “other ways”.

That you think these tiny uncertainties are physically realisable

I specifically said that I didn’t think they are realisable, assuming you are again talking about those monthly minute-by-minute averages. I don’t know how many times I’ve had to explain to you that I doubt the measurement uncertainties of that instrument will be independent, that the standard error of the mean is irrelevant to the actual monthly average as they are not random samples, and that obsessing over how much uncertainty there is in one station is pointless compared with the uncertainty caused by sampling across the globe.

No doubt you’ll post yet another deceptive backpedal.

No doubt you’ll claim I’m being deceptive, because you never read what I say, and if I try to search for my previous comments you’ll claim that means I’m obsessively keeping records.

Carlo, Monte
Reply to  Bellman
December 4, 2021 10:05 pm

No. I asked you to provide a reference for these “other ways”.

I can’t give you a reference because I’m not an expert on the subject; the one I know of was given to me by a mathematician friend when I was faced with this exact problem with averaging:

σ(<X>)^2 = Σ{ (X_i)^2 + w_i * σ(X_i)^2 } (1/N)

This is a weighted variance technique; as I’ve tried to tell you multiple times without success, uncertainty analysis is not a cut-and-dried effort. Not all the answers are found in a book or on a web page, or even in the GUM.

Bellman
Reply to  Carlo, Monte
December 5, 2021 12:57 pm

as I’ve tried to tell you multiple times without success, uncertainty analysis is not a cut-and-dried effort, not all the answers are found in a book or in a web page, or even in the GUM.

Maybe I missed you saying it multiple times, because of all the times you insisted I had to go through the GUM equations, and anything else would be unacceptable.

I’m really not sure I follow your equation. What problem were you trying to solve? Are the X_i individual elements, or are you pooling different samples? Should the first term be (X_i – <X>)?

Carlo, Monte
Reply to  Bellman
December 5, 2021 3:25 pm

The variance from averaging X_i values each with individual u(X_i)

Bellman
Reply to  Carlo, Monte
December 5, 2021 4:15 pm

So I take it the σ is the uncertainty. How are the weights determined? And again, is the first term meant to be (X_i – <X>) rather than X_i?

If so, I think all the equation is doing is combining the standard error of the mean from sampling with the standard error of the mean caused by measurement uncertainty, which is what I’ve suggested before. The result is that as long as the measurement uncertainties are relatively small compared with the standard deviation of the population, they will be largely irrelevant, due to the addition in quadrature.
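That interpretation can be sketched numerically (my own illustration, with made-up numbers, not anyone’s published method): combine the sampling term and the measurement term in quadrature, each scaled by 1/√N, and note how little the measurement term matters when it is small relative to the population spread.

```python
import math

def uncertainty_of_mean(pop_sd, meas_u, N):
    # Sampling contribution and (assumed independent) measurement contribution,
    # each scaled by 1/sqrt(N), combined in quadrature
    return math.sqrt((pop_sd / math.sqrt(N)) ** 2 + (meas_u / math.sqrt(N)) ** 2)

# A population SD of 5 dwarfs a measurement uncertainty of 0.5:
print(round(uncertainty_of_mean(5.0, 0.0, 100), 4))  # 0.5
print(round(uncertainty_of_mean(5.0, 0.5, 100), 4))  # 0.5025
```

The quadrature addition is why the measurement uncertainty barely moves the result: 0.5² is only 1% of 5².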

Carlo, Monte
Reply to  Bellman
December 5, 2021 6:01 pm

Use your NIST Ouija board it has all the answers, bwx says so.

Bellman
Reply to  Carlo, Monte
December 5, 2021 6:27 pm

So you didn’t understand the equation you were using and have to resort to childish insults again. You keep saying you want to “educate” me, yet won’t engage with any questions. You claim not to be an expert, yet won’t allow for the possibility you might be wrong about anything.

Carlo, Monte
Reply to  Bellman
December 5, 2021 8:26 pm

“Stop whining” — CMoB

Of course I understand it; you lot are the genius climastrologers who can divine the future, figure it out yourself.

Tim Gorman
Reply to  Carlo, Monte
December 6, 2021 7:41 am

The one big factor being missed here by the supposed statistics experts is that the GUM procedure basically addresses data collected from measurements of the same thing. Nor do they understand the concepts of random error and systematic error. Statistical methods can be used for minimizing random error (if certain conditions are met) but not for systematic error.

The following excerpts from iso.org are applicable.

——————————————————
From iso.org

B.2.17
experimental standard deviation

for a series of n measurements of the same measurand, the quantity s(qk) characterizing the dispersion of the results and given by the formula:
s(qk) = sqrt[ (1/(n−1)) Σj (qj − q̄)² ]

qk being the result of the kth measurement and q̄ being the arithmetic mean of the n results considered

NOTE 1   Considering the series of n values as a sample of a distribution, q̄ is an unbiased estimate of the mean μq, and s²(qk) is an unbiased estimate of the variance σ² of that distribution.

NOTE 2   The expression s(qk)/√n is an estimate of the standard deviation of the distribution of q̄ and is called the experimental standard deviation of the mean.

NOTE 3   “Experimental standard deviation of the mean” is sometimes incorrectly called standard error of the mean.
——————————————-

——————————————–
From iso.org:

6.2   Expanded uncertainty

6.2.1   The additional measure of uncertainty that meets the requirement of providing an interval of the kind indicated in 6.1.2 is termed expanded uncertainty and is denoted by U. The expanded uncertainty U is obtained by multiplying the combined standard uncertainty uc(y) by a coverage factor k:

U = k uc(y)     (18)

The result of a measurement is then conveniently expressed as Y = y ± U, which is interpreted to mean that the best estimate (italics and underline mine, tpg) of the value attributable to the measurand Y is y, and that y − U to y + U is an interval that may be expected to encompass a large fraction of the distribution of values that could reasonably be attributed to Y. (italics and underline mine, tpg) Such an interval is also expressed as y − U ≤ Y ≤ y + U.

6.2.2   The terms confidence interval (C.2.27, C.2.28) and confidence level (C.2.29) have specific definitions in statistics and are only applicable to the interval defined by U when certain conditions are met, including that all components of uncertainty that contribute to uc(y) be obtained from Type A evaluations. Thus, in this Guide, the word “confidence” is not used to modify the word “interval” when referring to the interval defined by U; and the term “confidence level” is not used in connection with that interval but rather the term “level of confidence”. More specifically, U is interpreted as defining an interval about the measurement result that encompasses a large fraction p of the probability distribution characterized by that result and its combined standard uncertainty, and p is the coverage probability or level of confidence of the interval.

6.2.3   Whenever practicable, the level of confidence p associated with the interval defined by U should be estimated and stated. It should be recognized that multiplying uc(y) by a constant provides no new information but presents the previously available information in a different form. However, it should also be recognized that in most cases the level of confidence p (especially for values of p near 1) is rather uncertain, (italics and underline mine, tpg) not only because of limited knowledge of the probability distribution characterized by y and uc(y) (particularly in the extreme portions), but also because of the uncertainty of uc(y) itself (see Note 2 to 2.3.5, 6.3.2, 6.3.3 and Annex G, especially G.6.6).

NOTE   For preferred ways of stating the result of a measurement when the measure of uncertainty is uc(y) and when it is U, see 7.2.2 and 7.2.4, respectively.

————————————————-

————————————————–
B.2.15
repeatability (of results of measurements)

closeness of the agreement between the results of successive measurements of the same measurand (bolding mine, tpg) carried out under the same conditions of measurement
NOTE 1   These conditions are called repeatability conditions.
NOTE 2   Repeatability conditions include:

  • the same measurement procedure
  • the same observer
  • the same measuring instrument, used under the same conditions (bolding mine, tpg)
  • the same location
  • repetition over a short period of time.

NOTE 3   Repeatability may be expressed quantitatively in terms of the dispersion characteristics of the results.
[VIM:1993, definition 3.6]

B.2.16
reproducibility (of results of measurements)

closeness of the agreement between the results of measurements of the same measurand carried out under changed conditions of measurement
NOTE 1   A valid statement of reproducibility requires specification of the conditions changed.
NOTE 2   The changed conditions may include:

  • principle of measurement
  • method of measurement
  • observer
  • measuring instrument (bolding mine, tpg)
  • reference standard
  • location
  • conditions of use
  • time.

NOTE 3   Reproducibility may be expressed quantitatively in terms of the dispersion characteristics of the results.
NOTE 4   Results are here usually understood to be corrected results.
[VIM:1993, definition 3.7]

bdgwx
Reply to  Tim Gorman
December 6, 2021 7:20 pm

TG said: “The one big factor being missed here by the supposed statistics experts is that the GUM procedure is basically addressing data collected from measurements of the same thing.”

Yet another patently false statement. The example for the combined uncertainty in section 5 literally combines measurements of completely different things.

Bellman
Reply to  Carlo, Monte
December 4, 2021 2:41 pm

I then used reductio ad absurdum to show that if one simply increases the temperature sampling rate, the uncertainty using this approach becomes vanishingly small. This went straight over the heads of you and bellcurveman.

That’s not a reductio ad absurdum. You are simply claiming that vanishingly small uncertainties are impossible, not showing that they are. What you have is an argument by personal incredulity, and also a strawman. It’s a strawman because whilst in a perfect world it may be possible to get zero uncertainty with infinite samples, that is simply an abstraction. No one actually thinks that uncertainties can be reduced to that level because other uncertainties will also be present.

Neither of you could come up with a rational explanation of what the standard deviation of a month of temperature measurements from a single location means, yet the lot of you are self-proclaimed experts on statistics.

A) I do not claim to be any sort of expert.

B) I’ve given you lots of explanations of what the standard deviation meant, you just refused to accept them, but never explained what sort of an answer you were after.

Also,

Given that others keep calling me bellhop and bellend, I think you could do better than bellcurveman.

Carlo, Monte
Reply to  Bellman
December 4, 2021 2:58 pm

No one actually thinks that uncertainties can be reduced to that level …

More deception.

Bellman
Reply to  Carlo, Monte
December 4, 2021 3:30 pm

Fair enough, it was a figure of speech, but if you want to be pedantic – I cannot speak for everyone, and I’m sure there exist some people on this planet who think it’s possible to measure temperatures with zero uncertainty. I’ll just change it to “I do not actually think that uncertainties can be reduced to that level…”.

Happy?

Jim Gorman
Reply to  Bellman
December 4, 2021 4:51 pm

Before you can even decide what to do with that N, you need to decide what the sample size is and what the population is.

Bellman
Reply to  Jim Gorman
December 4, 2021 5:25 pm

It’s carlo’s problem, you would have to ask him.

Carlo, Monte
Reply to  Bellman
December 4, 2021 6:21 pm

No, it is YOUR problem, YOU were the one claiming the uncertainty of ANY average is sigma/root-N.

Jim and Tim attempted many times to give an education, and you stupidly pooh-poohed everything.

Bellman
Reply to  Carlo, Monte
December 4, 2021 6:44 pm

I am not claiming, and I hope I have never given the impression, that the uncertainty of ANY average is sigma / root N. I’ve just pointed out that it is the general formula for the standard error of the mean, and hence how independent uncertainties propagate when taking a mean. This is in contrast to my would-be educators, who insist that they increase by sigma * root N.
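The contrast being drawn here, independent errors shrinking as σ/√N in a mean versus growing as σ·√N in a sum, is easy to check by simulation (a sketch under the assumption of independent Gaussian errors, with illustrative numbers):

```python
import math
import random

random.seed(1)
sigma, N, trials = 1.0, 100, 20_000

sum_errors, mean_errors = [], []
for _ in range(trials):
    errs = [random.gauss(0, sigma) for _ in range(N)]
    sum_errors.append(sum(errs))        # combined error of a SUM of N measurements
    mean_errors.append(sum(errs) / N)   # combined error of a MEAN of N measurements

spread = lambda xs: math.sqrt(sum(x * x for x in xs) / len(xs))
print(round(spread(sum_errors), 1))   # ~ sigma * sqrt(N) = 10
print(round(spread(mean_errors), 2))  # ~ sigma / sqrt(N) = 0.1
```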

Carlo, Monte
Reply to  Bellman
December 4, 2021 9:47 pm

I will admit that it is possible I have you confused with bwx, who most certainly believes this. That you support each other makes this confusion likely.

Tim Gorman
Reply to  Bellman
December 6, 2021 7:57 am

“I’ve just pointed out that it is the general formula for the standard error of the mean,”

Except σ/√N is not the standard error of the mean. It is the experimental standard deviation of the mean. See note 2 below. Actually understanding this difference might help you see what we have been trying to tell you.

B.2.17
experimental standard deviation
for a series of n measurements of the same measurand, the quantity s(qk) characterizing the dispersion of the results and given by the formula:
s(qk) = sqrt[ (1/(n−1)) Σj (qj − q̄)² ]

qk being the result of the kth measurement and q̄ being the arithmetic mean of the n results considered

NOTE 1   Considering the series of n values as a sample of a distribution, q̄ is an unbiased estimate of the mean μq, and s²(qk) is an unbiased estimate of the variance σ² of that distribution.

NOTE 2   The expression s(qk)/√n is an estimate of the standard deviation of the distribution of q̄ and is called the experimental standard deviation of the mean.

NOTE 3   “Experimental standard deviation of the mean” is sometimes incorrectly called standard error of the mean.

Bellman
Reply to  Tim Gorman
December 6, 2021 11:04 am

I think you mean note 3. I don’t know the GUM says things like that. I’m pretty sure they are not saying the standard deviation of the mean is a different thing to the standard error of the mean. They just don’t like that name. Really the two terms are just different names for the same thing, calculated in the same way.

The main advantage of calling it the error rather than the deviation is it makes clear it’s a different thing to the population standard deviation. GUM insisting you call it the deviation of the mean illustrates the problem, as you seem to keep getting the terms confused.

Carlo, Monte
Reply to  Bellman
December 6, 2021 11:53 am

I can assure you that where the confusion lies is obvious to all, and that it isn’t Tim.

Tim Gorman
Reply to  Bellman
December 6, 2021 12:24 pm

“I don’t know the GUM says things like that. I’m pretty sure they are not saying the standard deviation of the mean is a different thing to the standard error of the mean.”

Didn’t bother to go look it up, did you? It’s *exactly* what they are saying and it’s what we’ve been trying to explain.

“Really the two terms are just different names for the same thing, calculated in the same way.”

ROFL!! Maybe you should write the Joint Committee for Guides on Metrology and let them know they are wrong with their definitions and they should follow your definition instead!

“The main advantage of calling it the error rather than the deviation is it makes clear it’s a different thing to the population standard deviation. GUM insisting you call it the deviation of the mean illustrates the problem, as you seem to keep getting the terms confused.”

There isn’t any confusion except in *your* mind. The population mean itself doesn’t have a deviation; it *is* the population mean, ZERO deviation. If you want to call it the deviation of the sample means then that’s OK – it *is* what it is. And that’s what we’ve been trying to get across. But the deviation of the sample means is *NOT* the standard error of the mean, and it is not the uncertainty of the mean. The actual mean doesn’t even *have* to lie within the interval defined by the standard deviation of the sample means.

Bellman
Reply to  Tim Gorman
December 6, 2021 2:50 pm

Didn’t bother to go look it up, did you? It’s *exactly* what they are saying and it’s what we’ve been trying to explain.

I’ve searched all the way through the GUM and that is the only place they mention the standard error of the mean. If you can show me where they say standard deviation of the mean is different to standard error of the mean, let me know. Also explain why Taylor and Bevington both say they are the same thing.

But let’s assume you are correct and the experimental standard deviation of the mean is completely different to the standard error of the mean. That still does not mean that the experimental standard deviation of the mean is the same thing as the sample standard deviation, as Jim is claiming. As note 2 explains, it is the standard deviation divided by √N. (Which, by an extraordinary coincidence, is the same formula as the standard error of the mean.)

The population mean itself doesn’t have a deviation, it *is* the population mean, ZERO deviation.

Correct.

The actual mean doesn’t even *have* to lie within the interval defined by the standard deviation of the sample means

Correct.

Still not sure what point you are trying to make.

Bellman
Reply to  Bellman
December 6, 2021 2:33 pm

That should have been “I don’t know why the GUM says things like that.”.

Tim Gorman
Reply to  Bellman
December 6, 2021 7:49 am

“You are simply claiming that vanishingly small uncertainties are impossible, not showing that they are.”

Vanishingly small uncertainties based on statistical calculations *are* impossible. Uncertainty has two components, random and systematic. You can use statistical tools on random components but not on systematic components. You can develop *corrections* for systematic issues, but you still can’t decrease the uncertainty of the base measurement.

“I’ve given you lots of explanations of what the standard deviation meant, you just refused to accept them, but never explained what sort of an answer you were after.”

Yet you refuse to admit that the experimental standard deviation of the mean is not the same as the standard error of the mean.

Jim Gorman
Reply to  Tim Gorman
December 6, 2021 9:09 am

Experimental standard deviations are calculated from a few (even in the thousands) samples of the total population of what might be expected if the whole population was measured. The key word is “samples”.

Samples and their statistics are not statistical parameters of a population. The SEM (standard deviation of the sample means) can only show the interval surrounding the mean value of the sample means. The mean value of the sample means then becomes an estimate of the population mean with a width of the SEM.

The SEM is in no fashion THE estimate of the SD (standard deviation) of the population. It must be multiplied by the sqrt of the sample size to obtain an estimate of the population standard deviation.

An example. I run 5 experiments and get a mean value of 5 +/- 2. What do I know?

1) I have one sample of size 5.

2) The sample mean gives an estimate of the population mean which is 5 +/- 2.

3) The SEM (standard error of the mean) = 2.

4) the population SD estimate –> (2 * sqrt 5) ≈ 4.5

Bellman
Reply to  Tim Gorman
December 6, 2021 3:48 pm

yet you refuse to admit that experimental standard deviation of the mean is not the same as standard error of the mean.

Rather than go over all this nonsense again, could you state as clearly as possible what you think the definitions of “experimental standard deviation of the mean” and “standard error of the mean” are, and how they differ?

bdgwx
Reply to  Carlo, Monte
December 4, 2021 5:37 pm

CM said: “Just for the record, I wrote this without first doing the exercise myself. If I had known that blindly applying the GUM partial differentiation summation would lead to the root-N division that you and bellcurveman are so enamoured with, I would have done something else.”

I appreciate your honesty here. Does the fact that the NIST uncertainty machine arrives at the same result as the GUM satisfy your challenge against the claim that the uncertainty of the mean of multiple measurements is less than the uncertainty of the individual measurements that went into the mean?

CM said: “I then used reductio ad absurdum to show that if one simply increases the temperature sampling rate, the uncertainty using this approach becomes vanishingly small. This went straight over the heads of you and bellcurveman.”

Except it didn’t. As we’ve repeatedly said, the errors from the same instrument, especially when they are temporally close, would likely be correlated, providing a lower bound on the uncertainty of the mean even as N went to infinity. As you can clearly see regarding my concerns with the Frank 2010 analysis, I specifically mention the spatial correlation of errors in a grid mesh and why you cannot use the number of grid cells for N. That’s something Frank does not address.
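The “lower bound” point can be illustrated with the textbook expression for the variance of a mean of N equally correlated measurements, u² = (σ²/N)(1 + (N − 1)ρ), which tends to ρσ² rather than zero as N grows (an illustrative sketch with made-up σ and ρ, not Frank’s calculation):

```python
import math

def mean_uncertainty_correlated(sigma, N, rho):
    # Uncertainty of a mean of N measurements with pairwise correlation rho:
    # u^2 = (sigma^2 / N) * (1 + (N - 1) * rho)
    return sigma * math.sqrt((1 + (N - 1) * rho) / N)

sigma, rho = 0.5, 0.2  # made-up numbers for illustration
for N in (10, 1000, 1_000_000):
    print(N, round(mean_uncertainty_correlated(sigma, N, rho), 4))
# approaches sigma * sqrt(rho) ~ 0.2236 instead of shrinking toward zero
```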


Carlo, Monte
Reply to  bdgwx
December 4, 2021 6:32 pm

Does the fact that the NIST uncertainty machine arrives at the same result as the GUM satisfy your challenge against the claim that the uncertainty of the mean of multiple measurements is less than the uncertainty of the individual measurements that went into the mean?

Absolutely not; uncertainty analysis also requires a 10,000 ft view to see whether the numbers make sense, beyond blindly plugging numbers into formulae.

A vanishingly small uncertainty as N—>infinity does not make sense, regardless of what the GUM and NIST publications seem to be telling you.

You have to find another way to combine the variances.

As we’ve repeatedly said, the errors from the same instrument, especially when they are temporally close, would likely be correlated, providing a lower bound on the uncertainty of the mean even as N went to infinity.

Hypothetical blanket statements such as this cannot replace engineering judgement and a detailed uncertainty analysis needed for any given measurement procedure.

You can’t just assume things are correlated and wall-paper over the hard work. Correlation is a huge problem in the GUM that goes way beyond the partial differentiation method of variances, of which I have absolutely no experience or expertise. This is where uncertainty analysis requires the services of real statisticians/mathematicians.

Tim Gorman
Reply to  Carlo, Monte
December 5, 2021 9:50 am

“temporally close”

What is temporally close? That’s just one more hand-waving exercise. Being temporally close doesn’t guarantee correlation. Almost all instruments are subject to hysteresis, e.g. depending on whether the temp is going up or going down you can get two different readings for the same actual outside temperature. Or you can get the exact same reading for two different actual outside temps.

You are correct. You just can’t assume things are correlated.

bdgwx
Reply to  Tim Gorman
December 5, 2021 11:20 am

I think you meant to respond to me, since I’m the one who first used that phrase. Temporally close means the measurements were taken close together in the time dimension. I agree that measurements taken close together in time don’t guarantee correlation, but one of Frank’s arguments is that wind and radiation induce a systematic bias on the measurement. Measurements taken close together in time are likely to be subject to the same wind and radiation profile, thus having the same (or at least similar) error profiles as well. A similar effect is likely for measurements that are spatially close. In other words, measurements are going to have autocorrelated errors. I happen to agree with Frank on this particular point.

Tim Gorman
Reply to  bdgwx
December 6, 2021 8:24 am

“I agree that measurements taken close together in time don’t guarantee correlation, but one of Frank’s arguments is that wind and radiation induce a systematic bias on the measurement.”

You still don’t understand what Frank is saying. Wind and sun aren’t systematic *bias* for one instrument at one site. They are part of the measurement environment. For instance, an instrument at the top of a 1000′ hill will see a different wind environment than a station at the base of that same hill. That is not a *bias* in the wind readings at either site. It’s a true representation of the wind environment and its impact on temperature measurements at each site. You shouldn’t expect all stations to have the same wind and temperature. It’s one of the fallacies of trying to combine temperatures from stations with different environments into a global average temperature.

This even applies to the ground cover at a specific site. That ground cover can change from green to brown and back to green as the seasons change. This impacts the solar insolation hitting the measurement station over time. Yet long term baseline averages used to calculate temperature anomalies don’t take this into consideration.

Carlo, Monte
Reply to  Captain climate
December 3, 2021 2:35 pm

I’m convinced he is dedicated to supporting the IPCC party line, numbers be damned.

Carlo, Monte
Reply to  Captain climate
December 3, 2021 5:38 pm

He roots around through texts to find formulae that agree with his preconceived ideas.

bdgwx
Reply to  Carlo, Monte
December 3, 2021 2:02 pm

Let’s talk about that using a trivial scenario. Given two instruments A and B both with an uncertainty of ±0.5 consistent with the definition of “uncertainty” presented by the GUM and you take measurements of N different things M_1 through M_N with each instrument and log the difference D_Mn = A_Mn – B_Mn what would be your expectation of the distribution of D_Mn? How often would abs(D_Mn) < 0.5? How often would abs(D_Mn) > 0.7?
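The question has a definite answer only under an assumption the GUM reference leaves implicit: that each instrument’s error is independent and Gaussian with standard uncertainty 0.5 (exactly the modeling step contested elsewhere in this thread). Under that assumption, a minimal simulation sketch gives the distribution of the differences:

```python
import random
import statistics

random.seed(42)
SIGMA = 0.5   # standard uncertainty of each instrument (illustrative)
N = 100_000   # number of measurands simulated

# D_Mn = A_Mn - B_Mn: each reading carries its own independent Gaussian error,
# so the true-value term cancels and only the two errors remain.
diffs = [random.gauss(0, SIGMA) - random.gauss(0, SIGMA) for _ in range(N)]

sd = statistics.stdev(diffs)                       # ~0.5 * sqrt(2) = ~0.707
frac_lt_05 = sum(abs(d) < 0.5 for d in diffs) / N  # ~0.52
frac_gt_07 = sum(abs(d) > 0.7 for d in diffs) / N  # ~0.32

print(f"stdev of D   ≈ {sd:.3f}")
print(f"P(|D| < 0.5) ≈ {frac_lt_05:.2f}")
print(f"P(|D| > 0.7) ≈ {frac_gt_07:.2f}")
```

The differences come out wider than either instrument’s own ±0.5, because the two error terms add in quadrature; drop the independence or normality assumption and these fractions are no longer guaranteed.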

Carlo, Monte
Reply to  bdgwx
December 3, 2021 2:40 pm

Uncertainty is NOT a study of random error!

Carlo, Monte
Reply to  bdgwx
December 4, 2021 6:36 am

“what would be your expectation of the distribution of D_Mn?”

YOU DON’T KNOW, this is the entire point of uncertainty.

bdgwx
Reply to  Carlo, Monte
December 4, 2021 5:28 pm

I DO know. A lot of people know, or at least have the ability to know. The GUM told us how to do it. The expectation is that abs(D_Mn) will exceed 0.7 about 32% of the time.

Carlo, Monte
Reply to  bdgwx
December 4, 2021 6:35 pm

Then you are only fooling yourself.

The standard coverage factor for expanded uncertainty in the GUM does NOT imply any sort of expectations or distributions about measurement results.

bdgwx
Reply to  Carlo, Monte
December 4, 2021 7:32 pm

“Expanded uncertainty” does not change the answer to the question I asked. The GUM says you obtain the “expanded uncertainty” by multiplying the “combined standard uncertainty” by a coverage factor k. k is chosen on the basis of the level of confidence required for the uncertainty interval of the measurement. You select k = 2 for ~95% confidence and k = 3 for ~99% confidence. Note that I emphasized *standard* uncertainty here. When we say 2σ we are talking about “expanded uncertainty” with k = 2.

And the GUM literally says “expanded uncertainty” implies an expectation about the “probability distribution” of the measurement. In other words k = 2 implies 95% and k = 3 implies 99%. It says and I quote “More specifically, U is interpreted as defining an interval about the measurement result that encompasses a large fraction p of the probability distribution characterized by that result and its combined standard uncertainty, and p is the coverage probability or level of confidence of the interval.” Note that U is the “expanded uncertainty” and is defined as U = k*u_c(y) where u_c is the combined standard uncertainty.

Carlo, Monte
Reply to  bdgwx
December 4, 2021 9:44 pm

k = 2 is the standard coverage factor according to ISO 17025 for laboratory accreditation; it originated from Student’s t and the GUM, but the reality is that k = 2 cannot be used to imply a 95% confidence level, because the probability distributions are typically unknown.

Carlo, Monte
Reply to  bdgwx
December 4, 2021 9:51 pm

A Type B variance will not tell you this.

Tim Gorman
Reply to  bdgwx
December 5, 2021 10:35 am

Each measurement will consist of a stated value plus an uncertainty interval.

All your experiment does is use the stated values, i.e. you find the differences between the stated values. In essence, you assume that there is ZERO uncertainty associated with the stated value. That’s what most climate scientists do with their analysis of temperatures. They assume their baseline average temperature has ZERO uncertainty, and the same with the temperature measurement used to create an anomaly: ZERO uncertainty in the stated value of that measurement. Therefore the anomaly has ZERO uncertainty. They then go ahead and try to define the uncertainty…
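The point can be illustrated with the GUM’s root-sum-square propagation rule for uncorrelated inputs: if the baseline mean and the monthly measurement each carry a nonzero standard uncertainty, the anomaly’s uncertainty cannot be zero. The uncertainty values below are illustrative only, not official figures for any dataset:

```python
import math

u_baseline = 0.2  # illustrative standard uncertainty of the baseline mean (°C)
u_monthly  = 0.5  # illustrative standard uncertainty of a monthly value (°C)

# anomaly = measurement - baseline; for uncorrelated inputs the GUM combines
# standard uncertainties in quadrature, so the anomaly's uncertainty is
# never zero unless both inputs are themselves exact.
u_anomaly = math.sqrt(u_monthly**2 + u_baseline**2)

print(f"u(anomaly) ≈ {u_anomaly:.3f} °C")  # ≈ 0.539 °C, larger than either input alone
```

Differencing against a baseline never removes measurement uncertainty; under this rule it slightly enlarges it, which is the opposite of treating the anomaly as exact.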