The Pause Lengthens Again: No Global Warming for 7 Years 5 Months

By Christopher Monckton of Brenchley

The drop from 0.03 K to 0.00 K from January to February 2022 in the UAH satellite monthly global mean lower-troposphere dataset has proven enough to lengthen the New Pause to 7 years 5 months, not that you will see this interesting fact anywhere in the Marxstream media:

IPeCaC, in its 1990 First Assessment Report, had predicted medium-term global warming at a rate equivalent to 0.34 K decade⁻¹ up to 2030. The actual rate of warming from January 1990 to February 2022 was a mere two-fifths of what had been “confidently” predicted, at 0.14 K decade⁻¹:

The entire UAH record since December 1978 shows warming at 0.134 K decade⁻¹, near-identical to the 0.138 K decade⁻¹ since 1990, indicating very little of the acceleration that would occur if the ever-increasing global CO2 concentration and consequent anthropogenic forcing were exercising more than a small, harmless and net-beneficial effect:
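For readers who want to check the arithmetic, the two quantities used throughout this series – a least-squares linear trend expressed in K per decade, and the length of a zero-trend “Pause” ending at the present month – can be sketched in a few lines. This is an illustrative sketch on synthetic data, not the author’s code, and it assumes the Pause is defined as the longest period ending at the latest month whose least-squares trend is zero or negative:

```python
# Illustrative sketch (not the author's code): least-squares trend in
# K/decade and the length of a zero-trend "Pause" ending at the present,
# computed from a monthly anomaly series. The data below are synthetic.
import numpy as np

def trend_k_per_decade(anoms):
    """Least-squares linear trend of a monthly anomaly series, in K per decade."""
    months = np.arange(len(anoms))
    slope_per_month = np.polyfit(months, anoms, 1)[0]
    return slope_per_month * 120.0  # 120 months in a decade

def pause_length_months(anoms):
    """Longest period ending at the latest month whose trend is zero or negative."""
    for start in range(len(anoms) - 1):
        if trend_k_per_decade(anoms[start:]) <= 0.0:
            return len(anoms) - start
    return 0

# Synthetic series: 0.14 K/decade underlying warming plus monthly noise.
rng = np.random.default_rng(0)
n_months = 519  # Dec 1978 to Feb 2022 inclusive, for the sketch
anoms = (0.14 / 120.0) * np.arange(n_months) + rng.normal(0.0, 0.1, n_months)
print(round(trend_k_per_decade(anoms), 2))  # close to the underlying 0.14
```

Given the real UAH monthly dataset and the same definition, the two functions would in principle reproduce the 0.134 K-per-decade figure and the 7-year-5-month Pause quoted above.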

Note that all these charts are anomaly charts. They make the warming look much greater and more drastic than it is in reality. The 0.58 K warming trend since late 1978 represents an increase of just 0.2% in absolute global mean surface temperature – hardly a crisis, still less an emergency.
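The 0.2% figure is straightforward arithmetic: the 0.58 K trend divided by the absolute global mean surface temperature, taken here as roughly 288 K (an assumed round value; the post does not state which absolute figure it uses):

```python
# Trend expressed as a fraction of absolute temperature.
# 288 K is an assumed round figure for global mean surface temperature.
warming_k = 0.58          # UAH trend since late 1978, from the post
absolute_mean_k = 288.0
pct = 100.0 * warming_k / absolute_mean_k
print(round(pct, 1))  # 0.2
```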

Meanwhile, the brutal invasion of Ukraine by Mr Putin and his cronies is bringing about a growing realization, among those who have eyes to see and ears to hear, that the global-warming narrative so sedulously peddled by the climate-change industrial complex originated in the Desinformatsiya directorate of the KGB. For a detailed background to this story, visit americanthinker.com and click on the archive for March 2022. There, the kind editors have published a 5000-word piece by me giving some history that readers of WUWT will find fascinating. It is a tale much of which, for security reasons, has not been told until now.

It is worth adding a little more about the economic aspect of this sorry tale of Western feeblemindedness and craven silence in the face of the unpersoning – the relentless campaign of vicious reputational assault – to which all of us who have dared to question the Party Line have been subjected.

Do not believe a word of what either the Russian media or the Western media are saying about Mr Putin. He is not a geriatric who has lost his touch. The events now unfolding in Ukraine have been planned since long before Putin’s silent coup against Boris Yeltsin in 2000, after which, over the following five years, Putin put 6000 of his former KGB colleagues into positions of power throughout the central and regional governments of Russia. Some of those who were in post in 2004 are listed above. Many are still there.

The televised meeting of senior advisers at which Putin shouted at those of them who dithered when recommending that Ukraine should be invaded was a classic maskirovka, designed to convey to the West the impression of an unhinged and mercurial dictator who might reach for the nuclear button at any moment.

The chief purpose of the Ukraine invasion was to hike the price of oil and gas, and particularly of the Siberian gas delivered to Europe via many pipelines, some of which date back to the Soviet era.

It was Putin’s Kremlin, later joined by Xi Jinping in Peking, that founded or took over the various “environmental” lobby groups that have so successfully campaigned to shut down the coal-fired power stations, particularly in Europe, which is now abjectly dependent upon Russian gas to keep the lights on when the unreliables are unreliable.

That is why one should also disbelieve the stories to the effect that the sanctions inflicted on Russia by the West are having a significant impact. The truth is that they were fully foreseen, prepared for and costed. The thinking in the Kremlin is that in due course the increased revenue from Russian oil and gas will more than compensate for any temporary dislocations caused by Western attempts at sanctions, which look impressive but count for remarkably little.

But surely sending close to a quarter of a million troops to Ukraine is expensive? Not really. Putin keeps 1.4 million under arms anyway – about five times as many per head as the UK, which has 200 tanks to Putin’s 15,000. The marginal logistical cost of the invasion is surprisingly small, and Putin will gain Ukraine as his compensation. It is the world’s most fertile agricultural area, and it is big. Russia is already a substantial exporter of grain: once it controls Ukraine it will have as much of a stranglehold on world food prices as it now has on world oil and gas prices, and it will profit mightily by both.

Putin’s first decisive act of policy when he became Tsar of Some of the Russias was to slash the Russian national debt, which currently stands at less than a fifth of the nation’s annual GDP. That is the ninth-lowest debt-to-GDP ratio in the world. Once he has gained control of Ukraine and its formidable grain plain, he can add the profits from worldwide sales to his immense profits from the elevated oil and gas price. His plan is to pay off Russia’s national debt altogether by 2030.

In this respect, Putin’s Russia compares very favourably with Xi’s China, whose national, regional and sectoral debts are colossal. For instance, the entire revenue from ticket sales for the much-vaunted high-speed rail network is insufficient even to meet the interest payments on the debt with which it was built, let alone to meet the operating costs.

Once Putin has restored Kievan Rus and Byelorus to the Sovietosphere, he is planning to expand his nation’s currently smallish economy no less rapidly than did the oil-rich nations of the Middle East. Do not bet that he will fail.

It is galling that those of us who have been sounding warnings about the Communist origin of the global-warming narrative for decades have gone unheeded. The late Christopher Booker, who came to the subject after reading a piece by me in Britain’s Sunday Telegraph and devoted most of his weekly columns to the subject thereafter until his untimely death, wrote week after week saying that by doing away with coal we should put ourselves at the mercy of Russia and its Siberian gas.

However, our politicians, nearly all of whom lack any strategic sense or knowledge of foreign affairs, and who are less scientifically literate than at any time since the Dark Ages, paid no heed. Now some of them are waking up, but far too late.

On the far side of the world, in Australia, the land of droughts and flooding rains, the south-east has been getting some flooding rains. The ridiculous Tim Flannery had been saying to any climate-Communist journalist who would listen a decade ago that global warming would cause all the rivers in south-eastern Australia to run dry. Now, of course, the climate-Communist news media are saying that the floods are because global warming. Bah! Pshaw!

fretslider
March 4, 2022 2:22 am

“one should also disbelieve the stories to the effect that the sanctions inflicted on Russia by the West are having a significant impact.”

The ban from the SWIFT (text messaging – that’s what it is) system is Pythonesque; it merely means Russia will have to pick up the phone more.

As John Lennon once said, gimme some truth.

Scissor
Reply to  fretslider
March 4, 2022 5:22 am

I have reservations concerning the seizure of properties and bank accounts of Russian oligarchs and forced closure of independent Russian businesses here. I wonder whether we are observing a normalization of such theft of property and rights by governments, even though I don’t have a yacht, etc.

First they came for the Russian yachts…

jeffery p
Reply to  Scissor
March 4, 2022 5:41 am

This is not a new thing.

It is a rather blunt instrument and as you are arguing, there is no due process where both sides can present arguments before the assets are frozen.

Jay Willis
Reply to  Scissor
March 4, 2022 6:03 am

Yes, I’m also concerned about sanctions and seizures of assets without legal process. If you were intending to foment revolt and begin a process of regime change through funding a disaffected internal or exiled minority, that is exactly what you’d do. This is thus fueling a legitimate fear for Putin and his cronies. If you wanted peace you’d respect property.

stinkerp
Reply to  Jay Willis
March 5, 2022 5:59 am

Take your disinformation somewhere else you Slavophile troll. What kind of craziness are you spouting? Putin and his oligarchs are the ones with no respect for property and in the act of “regime change” in Ukraine, in case you’ve been living in a cave for the last decade. He has invaded and attacked a peaceful, democratic, sovereign nation with no provocation. He is destroying billions of dollars of private property, has killed thousands already, and has displaced millions of people. Putin is a violent, remorseless, petty little tyrant with Napoleonic ambitions. Now he is a war criminal. He has joined the ranks of Stalin, Mao, and Hitler in the history books. The oligarchs and military leaders who support him are just as culpable. Just to be clear.

Carlo, Monte
Reply to  Scissor
March 4, 2022 7:05 am

Should not be a surprise considering the gulag that Nasty Pelosi runs in Wash DC.

Reply to  Scissor
March 4, 2022 7:36 am

The RAF sank my yacht by mistake, but did not pay compensation

Mike Dubrasich
Reply to  Monckton of Brenchley
March 4, 2022 9:18 am

Dear Lord M.,

I always enjoy your essays, but this one and the Am Thinker piece are beyond excellent.

The warmunistas have finally achieved their world war. The bombs are falling. Mothers clutch their children in dank cellars listening to the rolling thunder above. The streets run red with blood. All this wished for, planned for, by design of monsters.

Putin is to blame, and his minions. Caesar never stands alone. But the seeds of destruction were laid by the lefty liberal hippie milli sheeple herders, the barkers of hate, the panic porners, the commissars of death and slavery, in their gilded mansions, mad with greed and lacking any semblance of conscience.

It is a tragic day for humanity. You and a few others saw all this coming years ago and tried to warn us. It is not your fault. Your efforts have been stellar. You could not have done any more. I know that is little consolation. I have no tonic for grief.

Gary Pearse
Reply to  Scissor
March 4, 2022 8:14 am

Our man in Canada is looking into freezing and seizing accounts of truckers who held a massive protest in front of Parliament Hill for about 5 weeks of the coldest winter in a long time. In this huge, long protest, not one person was physically hurt – probably a world record for a protest of this size and duration, the poster child of a peaceful protest. This, despite the usual paid small group with swastika placards to taint the truckers with white-supremacist BS and give the corrupted MSM a focus for fake news articles.

Clyde Spencer
Reply to  Gary Pearse
March 4, 2022 11:36 am

So much for freedom of speech and the right to protest peacefully in Canada.

AndyHce
Reply to  Scissor
March 4, 2022 11:20 am

It has been inching up for decades. Recall the widespread ‘taking’ of property in the US some while back from people accused of various crimes – no proceedings, certainly no convictions, necessary. More or less every state was getting on that gravy train. I think the courts eventually put some limitations on the practice; either that or the msm stopped reporting on it.

meiggs
Reply to  Scissor
March 4, 2022 5:25 pm

They came 4 me well be4 the yachts…kaint have any freeman loose

stinkerp
Reply to  Scissor
March 5, 2022 5:46 am

You have reservations about governments around the world making life uncomfortable for Russian oligarchs in bed with the war criminal Putin??!! Your utter disconnection from the reality of an unprovoked war on a peaceful, democratic country by a murderous dictator and his evil cronies and the devastating impact it is having on millions of innocent people is…there are no words. Wow. Just wow. Like the global warming nutjobs, you’re bleating about an imaginary problem while a monumental and real crisis happens before your eyes.

Scissor
Reply to  stinkerp
March 5, 2022 3:22 pm

At least you didn’t call me a racist, stink.

tonyb
Editor
Reply to  fretslider
March 4, 2022 9:38 am

(Da, da, da) Well,
The Ukraine girls really knock me out (… Wooh, ooh, ooh)
They leave the West behind (Da, da, da)
And Moscow girls make me sing and shout (… Wooh, ooh, ooh)
That Georgia’s always on
My, my, my, my, my, my, my, my, my mind
Oh, come on
Woo (Hey)
(Hoo) Hey
Woo hoo (Yeah)
Yeah, yeah
Hey, I’m back In the U.S.S.R.
You don’t know how lucky you are, boys
Back In the U.S.S.R.

Steve
Reply to  fretslider
March 5, 2022 10:14 am

I think this should be 5 years and 7 months, not 7 years and 5 months?

David Guy-Johnson
March 4, 2022 2:29 am

Thanks as ever. Russia, however, has 12,400 tanks, but many are obsolete compared to Western armour.

MARTIN BRUMBY
Reply to  David Guy-Johnson
March 4, 2022 2:56 am

Even obsolete tanks are quite effective against women and children.

Even obsolete tanks will withstand untrained men with machine guns.

And be in no doubt that the vast majority of the non-obsolete tanks will be a match for almost anything, especially tanks operated by the “snowflakes” that our army have gone out of their way to recruit.

Not that our army is likely to see much action, at least until Putin is enforcing his “corridor” through Lithuania to Kaliningrad. Adolf would have been proud of him.

fretslider
Reply to  MARTIN BRUMBY
March 4, 2022 3:24 am

“Adolf would have been proud of him.”

I don’t know, Adolf’s Wehrmacht defeated the French and British armies and then overran Northern France in two weeks.

MARTIN BRUMBY
Reply to  fretslider
March 4, 2022 4:10 am

Absolutely.
But the “Danzig Corridor” was a very successful pretext for invasion.
You think Vlad isn’t aware of that?

And, on the other hand, Zhukov did a great job against the Japanese tanks at Khalkin Gol in 1939 and went on to win against much more sophisticated German tanks using fairly basic but effective T-34s from Stalingrad to Berlin.

Meanwhile the tanks the British were provided with in early part of the war up to Dunkirk were, ahem, a bit embarrassing.

fretslider
Reply to  MARTIN BRUMBY
March 4, 2022 4:35 am

Zhukov – like most US military tacticians – relied on overwhelming numbers.

Nothing more.

You won’t find many British tanks at the bottom of the English Channel, and there are a lot of tanks down there.

jeffery p
Reply to  fretslider
March 4, 2022 5:48 am

Zhukov succeeded where other Soviet Generals foundered.

tonyb
Editor
Reply to  fretslider
March 4, 2022 9:34 am

There’s a US Sherman tank near us, recovered from the bottom of Start Bay, South Devon, after a training exercise for D-Day went tragically wrong: the troops and landing craft were ambushed by German U-boats.

Operation Tiger – an Amphibious D Day Rehearsal Disaster (combinedops.com)

rah
Reply to  tonyb
March 4, 2022 1:44 pm

E-boats, not U-boats. E-boats were kinda like MTBs (Motor Torpedo Boats, which many in the US would recognize as PT Boats) but larger, with torpedo tubes that fired from the bow of the hull. They were more sturdily built craft than PT boats. Top speeds of MTBs and E-boats were comparable.

Bob boder
Reply to  fretslider
March 4, 2022 11:50 am

The American tanks that sank were using a British flotation device.

rah
Reply to  Bob boder
March 4, 2022 1:52 pm

That is true but not fair. It was stupid to launch those DD tanks off Omaha in the existing sea conditions. And had the officer in charge on the spot been able to communicate with the half of his command that did launch, it would not have happened.

The British lost a few too, but their beaches were much more protected and the waters not so rough, and so a lot more of theirs made it.

The DD tanks were the only one of Hobart’s “Funnies” that the US decided to use. The several other specialized tank configurations that British officer came up with did great service with the British.

Ike was offered a limited number of LVTs (Alligators) such as the Marines were using in the Pacific, but refused them. That was a mistake.

Ben Vorlich
Reply to  MARTIN BRUMBY
March 4, 2022 4:50 am

American Sherman tanks were known as Tommy Cookers to the Wehrmacht.

Only when fitted out as a Sherman Firefly with a British 17lb anti-tank gun did it stand any kind of chance against German armour.

rah
Reply to  Ben Vorlich
March 4, 2022 6:31 am

Anyone who thinks that the invading Allies would have done better with the equivalent of a Tiger or Panther during the invasion and pursuit phases in Europe is mistaken, and does not understand the logistical realities of the time. Nor does he understand how unreliable the Tiger was, or how the Panther was limited in range by fuel and by the very limited service life of its all-metal tracks on hard-surface roads.

Admittedly the US should probably have started to introduce the M-26 Pershing earlier and in greater numbers than it did, but this failure is understandable considering that even during the Battle of the Ardennes most US front-line troops lacked proper winter clothing.

There is a reason why the Russians decided to use lend-lease M-4 Shermans and not T-34s to make the long trek over the mountains to reach Austria.

I get tired of the one sided way this argument is presented.

Carlo, Monte
Reply to  rah
March 4, 2022 7:11 am

There is a story about some German officer who noticed cans of Spam after they overran the Ardennes line and said something to the effect that Germany was finished if the US was able to supply the Army from across the Atlantic.

Reply to  Carlo, Monte
March 4, 2022 7:30 am

In 1942, maybe August… the German in charge of war production flew to Hitler’s Wolf’s Lair or whatever in Ukraine and informed him that Germany could not out-produce the Allies – too few people and resources – and advised him to make peace.

Drake
Reply to  Carlo, Monte
March 4, 2022 8:53 am

In the movie, Battle of the Bulge, there is a scene where a German general looks at captured packages from families to US soldiers. The general says much the same thing.

It is a movie, but based on a true story!!

Alan the Brit
Reply to  Ben Vorlich
March 4, 2022 7:43 am

They were known as Tommy-Cookers because the tanks tended to burn rather well when hit close to the fuel tanks, which I believe were un-armoured and so vulnerable to shell hits! The Sherman Firefly was indeed a good upgrade with its 76.2mm gun firing 17lb shells. However, as in so many cases, the British Army was slow to learn, until the Comet tank was developed with a well-powered engine, good well sloped armour, & a 76.2mm gun!!! The final success story, the Centurion tank, never got to see tank-on-tank action, merely mopping-up work towards the end. However, in the early years, British tanks had better armour than the German tanks, but poor firepower, with mere 3lb & 5lb guns, mere popguns compared to the 75mm & 88mm guns on the other side!!! However the Sherman tank must always be viewed as a workhorse throughout!!!

rah
Reply to  Alan the Brit
March 4, 2022 8:11 am

The problem with the early British tanks of WW II was not only that they were under-gunned but also their poor mechanical reliability.

A British enlisted tank driver gave one of the first M3 Stuarts delivered to N. Africa a test drive. He did his best to get it to throw a track. When asked by his officer how he liked it, he responded “It’s a Honey”.

And that right there in a nutshell expresses the primary reason the US tanks were favored. No tanks produced by any nation during the war came close to having the mechanical reliability of those that came off the production lines of the US.

Above I noted the “Tommy Cookers” phrase. And it was true until later versions of the M-4 came out with wet storage for the ammunition for their primary gun. Those later versions would still brew up due to the gasoline used for their fuel but they gave the crew time to bail out before their internally stored ammo started blowing up and thus improved survivability.

Ben Vorlich
Reply to  Alan the Brit
March 4, 2022 9:34 am

I think that the Israeli army used nearly 1000 Centurions in the 6 day war, mostly regunned

Bob boder
Reply to  Ben Vorlich
March 4, 2022 11:55 am

And Shermans upgraded with a 105.

Dean
Reply to  Alan the Brit
March 4, 2022 8:32 pm

Well sloped armour on the Comet???

A 90 degree upper glacis isn’t well sloped. Ok the turret was reasonable, but the hull was almost a box.

Drake
Reply to  Ben Vorlich
March 4, 2022 8:47 am

Sherman tanks were designed to be mass-produced in HUGE numbers. They were under-armed but, nonetheless, their NUMBERS overran the Wehrmacht.

rah
Reply to  Drake
March 4, 2022 9:23 am

Tanks were important but:

I suspect many have heard that in the US Army the infantry is known as “the Queen of Battles”. How many have wondered what “the King of Battles” is? Well the answer is the Artillery. King Louis XIV had  “Ultima Ratio Regum” (The Ultimate Argument of Kings) cast into all of his artillery pieces and he was correct.

It was the development of artillery that eventually ended the usefulness of castles as fortified redoubts and caused the evolution of siege warfare. It was artillery that turned the infantryman into a gopher where by a good trench or hole is an essential for survival in static warfare.

But despite the trenches and holes, about 75% of all KIA during WW I were from Artillery. During WW II Artillery accounted for about 64% of the total casualties in the war against Germany and Italy. In the war against Japan it was about 46%.

Bob boder
Reply to  Ben Vorlich
March 4, 2022 11:52 am

BS, the Sherman was dominant in North Africa and quite possibly the E8 version was the best tank of the war.

jeffery p
Reply to  MARTIN BRUMBY
March 4, 2022 5:46 am

Although it was before the war, I think Zhukov’s victory at Khalkin Gol was the decisive battle of WW2. Japan was soundly defeated and decided not to join the Axis war on the Soviet Union. Had Japan gone into Siberia rather than the Pacific (and not brought America into the war with the Pearl Harbor sneak attack), an Axis victory would have been certain.

Reply to  jeffery p
March 4, 2022 7:42 am

When “Blitzkrieg” failed at the outskirts of Moscow…it was “ovah”. It became a war of attrition…Hitler was not too good at arithmetic…believed in “will” over numbers…he was wrong.

Carlo, Monte
Reply to  Anti_griff
March 4, 2022 8:18 am

True this, the air force was completely designed around lightning war, with no strategic and heavy lift capabilities at all.

MarkW
Reply to  Carlo, Monte
March 4, 2022 8:41 am

I saw a special a few months back that claimed that during the battle for Russia, over half of German supplies were still being delivered in horse drawn carts.

rah
Reply to  MarkW
March 4, 2022 9:39 am

Meanwhile the Russians used US-produced trucks, and after the supply chain through Iran had opened, the supply of trucks was so prolific that when a truck’s spark plugs became fouled the crew parked it and grabbed another.

And that is another aspect of logistics that the amateurs ignore or are ignorant of. No combatant in WW II even came close to achieving the efficiency or scope of the US vehicle and recovery, repair, and maintenance efforts.

It came not only from a concerted effort of the Army to make it that way but from the fact that during the prewar years the US had a far larger population of men familiar with mechanics than any of the other combatants.

Clyde Spencer
Reply to  rah
March 4, 2022 11:57 am

It seems to be a talent that has been forgotten. My father related how during WWII, along with food rationing, one could not get repair parts for cars. When his low-compression 30s-vintage Ford started blowing oil, he shimmed the cylinder with a tin-can, and got many more miles out of it.

Reply to  Clyde Spencer
March 4, 2022 1:40 pm

Unfortunately, a talent that is pretty much useless these days (unless you have a very old car).

Even with that talent, a modern battle tank doesn’t work that way. Coaxial laser on the main gun, coupled with a targeting computer that calculates range, windage, tube wear, temperature, humidity. Frequency hopping encrypted communications. IVIS for battlefield awareness of every other tank and attached vehicles. Armor that is definitely not “patch and go.”

That said, tankers ARE trained to do everything that they CAN still do, like breaking track (a very physical job that, sorry, females just cannot do). There is less and less of that as technology advances.

MarkW
Reply to  Clyde Spencer
March 4, 2022 7:17 pm

The story is probably apocryphal, but I remember reading about the world’s only DC-2 1/2. The story is that during the early days of WW2 the Americans were evacuating ahead of the advancing Japanese. In a Japanese attack on an American airfield, a DC-3 had one of its wings destroyed. The mechanics weren’t able to find any undamaged DC-3 wings, but they did find a DC-2 wing. So they jury-rigged a way to connect the DC-2 wing onto the DC-3 body and used it to help evacuate the field.

Clyde Spencer
Reply to  MarkW
March 5, 2022 10:41 am

The story is probably apocryphal, …

I got it first hand. To be fair, my father was a machinist, amateur gunsmith, and knife maker in his younger days. He worked his way up to be a tool and die maker and jig and fixture builder at the end of his career.

As a teenager, I stripped first-gear in my ’49 Ford V8, drag racing with a friend. I know he had never repaired a transmission of that vintage. Yet, he unhesitatingly pulled the transmission, and repaired it, with the admonition, “Next time, you do it by yourself.” He was prescient!

Clyde Spencer
Reply to  MarkW
March 4, 2022 11:49 am

And, I have heard that in the early years of cars, many a farmer made money pulling tourists out of the mud holes in what passed for roads, using their plow team.

Carlo, Monte
Reply to  MarkW
March 4, 2022 2:32 pm

I certainly believe this—they had to regauge thousands of kms of Russian broad gauge railroads before being able to get rail supply east. Trucks ate fuel, plus they were needed for the motorized infantry.

Drake
Reply to  jeffery p
March 4, 2022 9:07 am

A new and interesting perspective for me; I haven’t studied the Japan/Russia war, which apparently was never settled after WWII due to mutual claims over 5 islands north of Japan. That Japan needed oil, and COULD have had it with much shorter supply lines from Russia, is something to think about. And without bringing the US into the war, there would have been nothing to obstruct the shipments to Japan.

Russia being attacked on two fronts could have engendered a completely different ending to WWII. Without US steel and trucks and jeeps, etc., the Russian war machine would probably have collapsed.

All of Europe, possibly less the UK, would be speaking German and most of Asia north of the Himalayas would be speaking Japanese.

Something for me to study in my retirement, thanks Jeffery P.

Bob boder
Reply to  Drake
March 4, 2022 11:59 am

Even more simple, Hitler doesn’t attack Russia. Builds the bomb and puts it on V2 rockets. War over.

MARTIN BRUMBY
Reply to  jeffery p
March 4, 2022 3:10 pm

Yes, jeffery,
And interesting that Stalin waited until the peace for the Khalkin Gol campaign was signed before following the Nazis into Poland, then swallowing the Baltic states.

And we all (hopefully) are aware how those occupations worked out.

And some of Putin’s latter-day admirers today are shocked, shocked I say, that the children of the poor sods who suffered then and continually until 1991 are lacking in enthusiasm for Putin and his chums being back in charge of their lives.

Hivemind
Reply to  David Guy-Johnson
March 4, 2022 3:38 am

The question isn’t Russian tanks against western tanks, because nobody in the west has the courage to stand up to Putin. Not the Stasi informant in Russia and certainly not the president suffering from dementia.

No, Ukraine will have to stand alone, and they don’t have enough tanks to do the job.

Derg
Reply to  Hivemind
March 4, 2022 4:38 am

The US should stay out.

n.n
Reply to  Derg
March 4, 2022 5:34 am

It’s too late to share responsibility. From 2014, this is the Slavic Spring in the Obama, Biden, Clinton, McCain, Biden Spring series.

rah
Reply to  Derg
March 4, 2022 8:20 am

This former SF soldier targeted for Europe during the cold war agrees. Weapons and advisors and that is it. No conventional forces deployed in Ukraine. I would bet though that guys from my former unit, the 10th SFG(A), are on the ground in country and advising and providing intel. That is exactly what they are trained to do and exactly the theater, including language, they have been trained to do it in.

Scissor
Reply to  Hivemind
March 4, 2022 5:27 am

I hear stories about the effectiveness of Javelins and our donation of hundreds of these to Ukraine. Perhaps these level the playing field to some extent, or at least place some concerns in the minds of Russian tank operators.

Cheshire Red
Reply to  Scissor
March 4, 2022 6:26 am

Also NLAWs (fire-and-forget guided missiles). Apparently they’re better for close combat, which will be needed in urban fighting, and also a lot cheaper. Ukraine needs thousands of those rockets.

What a mess it all is.

Drake
Reply to  Cheshire Red
March 4, 2022 9:28 am

My understanding of NLAWs is that they are great for lightly armored vehicles, which MOST of any convoy is. Javelins for tanks, NLAWS for the rest, Stingers for air cover, good mix.

Dave Fair
Reply to  Scissor
March 4, 2022 9:45 am

It’s hard to subjugate an armed populace when the majority of your troops are trying to protect that expensive armor. As I said on a previous thread, small hunter/killer teams supported by a local populace will devastate armored maneuver. Send in the Javelins! The Ukraine is, and will become even more of, a bloody mess.

Bob boder
Reply to  Hivemind
March 4, 2022 12:01 pm

Tanks are meaningless. If the Ukrainians can keep the Russians from total control of the air, they can win, as long as the West keeps sending supplies.

Editor
Reply to  David Guy-Johnson
March 4, 2022 6:11 am

They are NOT obsolete compared to Ukraine’s few tanks, while the newest T-14 tank is very powerful but only 20 of them have been built.

rah
Reply to  Sunsettommy
March 4, 2022 8:26 am

They cannot take Russia on head to head in armored warfare. What they can do is exact a very heavy price in urban combat. You gain nothing by knocking down buildings, because the rubble serves equally well for cover and concealment for the defenders and obstructs the passage of vehicles, including armor. The only reason to knock down the high buildings is to deny them as observation points. Other than that, it is self-defeating in an urban area that one wishes to take control of with troops on the ground.

MarkW
Reply to  rah
March 4, 2022 8:44 am

I’d be very surprised if there are any large scale tank on tank operations. The Ukrainians know they don’t have the equipment for such operations. They seem to be gearing up for hit and run and urban operations.

Drake
Reply to  rah
March 4, 2022 9:19 am

This is NOT a tank battle; if it were, Russian tanks would be spread out passing through the fields, not sitting in convoys.

I have posted this before: the Flying Tigers is the model. Get US pilot volunteers, place them on furlough, get them Ukrainian citizenship, repaint A-10s, F-15s, and old F-117As in Ukrainian colors, give them to Ukraine, and this war would be over in two weeks tops. Imagine what US stand-off weapons, F-117A targeting of Russian AA missile batteries and A-10s attacking the 40-mile-long convoy could do. Total game changer.

Either the Russians will withdraw, or they will use nuclear weapons, but either way, it would be over.

rah
Reply to  Drake
March 4, 2022 9:44 am

It is not a tank battle, obviously, and it isn't simply because it can't be. Your ploy will not work, though. You simply will not get the numbers of qualified volunteers even if this government would support such an effort, which it won't.

The country simply is not big enough either.

Clyde Spencer
Reply to  rah
March 4, 2022 12:03 pm

The only reason to knock down the high buildings is to deny them as observation points.

And long-range sniper nests.

diggs
March 4, 2022 2:43 am

For sure Putin has his strategy, but I cannot help but think he has underestimated the unified response of the West, especially as there are a few leaders who would see this distraction as a blessing in disguise, affording them the opportunity to hide their abysmal polling figures and low voter confidence by taking "decisive action" against an old foe now universally maligned. Time will tell.

Climate believer
Reply to  diggs
March 4, 2022 5:10 am

 “he has underestimated the unified response of the West”

The West’s response at the moment is unified hysteria #putinbad.

Most people giving their “expert” views on the situation would have been unable to point out Ukraine on a map two weeks ago. It was the same with covid, everybody became overnight virologists.

Ukraine has been hung out to dry by the West and NATO.

If you want to stop a bully you have to punch him, not tickle his roubles.

Tom Abbott
Reply to  Climate believer
March 4, 2022 6:54 am

“If you want to stop a bully you have to punch him”

That’s right. A bully understands a punch in the nose.

Dave Fair
Reply to  Tom Abbott
March 4, 2022 9:48 am

Mike Tyson: Everybody has a plan until he is punched in the face.

jeffery p
Reply to  diggs
March 4, 2022 5:55 am

I do think Putin underestimated the Ukrainian forces and overestimated the capabilities of the Russian army. I also agree he underestimated the Western response, but I believe that without an outright ban on Russian oil and gas, the sanctions won't have the desired effect.

Tom Abbott
Reply to  jeffery p
March 4, 2022 7:04 am

There seems to be an unofficial ban on Russian oil. Private companies are taking it upon themselves and are refusing to handle Russian oil.

I imagine Chicom brokers will be more than happy to fill in the gap and the Chicoms will be happy to buy Russian oil.

Putin can do a lot of damage before all the money runs out.

Muzchap
Reply to  jeffery p
March 5, 2022 2:43 pm

Lots of Ukrainian citizens have Russian families.

Putin could have levelled the place, but hasn’t.

As pointed out in the article, this is about resource gathering.

Russians pride themselves on 50-year plans; in the West we can't make plans for 5 minutes.

It’s sad but true…

Ireneusz Palmowski
March 4, 2022 2:47 am

Galactic radiation levels are still at levels comparable to the solar minimum of the 23rd solar cycle.
UV radiation levels are still low.
This indicates a weakening of the Sun's magnetic activity. Therefore, I predict another La Niña in November 2022.

Ireneusz Palmowski
March 4, 2022 3:03 am

Very little chance of El Niño.

Reply to  Ireneusz Palmowski
March 4, 2022 10:24 am

Most grateful to Ireneusz Palmowski for his interesting material predicting that the current weak La Niña may occur again this coming winter.

Mike
Reply to  Ireneusz Palmowski
March 4, 2022 4:29 pm

Well, that's just great. If that comes to pass I don't think I will be able to stand the "it's climate change" squawking when we have floods again next summer.

Muzchap
Reply to  Ireneusz Palmowski
March 5, 2022 2:44 pm

I had hoped for an El Nino as selfishly, I’m building a house and need the hot dry days…

March 4, 2022 3:51 am

I still think China has played a very large role, primarily in raising the price of energy in the West as part of its long-term plan to take our manufacturing.

We are now in a position where, if we want to go to war with China, we have to ask them to supply all the basic spare parts.

And, if you look at the way China has put money into Universities that then went woke and started hating Britain … you can see it wasn’t just an attack on British industry, but also an attack on the cohesiveness of British society.

If, Christopher, as you suggest, the Russians have been pouring money into the green groups, whilst China pours money into academia, turning it Western-hating and woke, then it more or less explains how we got to this appalling situation.

Ron Long
Reply to  Mike Haseler (aka Scottish Sceptic)
March 4, 2022 4:07 am

There's a good chance there is more to the China-Russia story, Mike (aka SS). China could roll through Russia anytime they want, and once Russia has been weakened militarily and economically they might go for it. Taiwan can wait. Remember the Tom Clancy book "The Bear and the Dragon"? The USA won't be coming to the Russian rescue in reality, partly because China has more on the Brandon Crime Family than does Russia. What a mess.

RobR
Reply to  Mike Haseler (aka Scottish Sceptic)
March 4, 2022 3:18 pm

China is facing several self-inflicted wounds. CMB mentioned the high-speed rail debacle that wastes millions per day. There are many others, to be sure:
1. Population demographic shift to more retirees than workers, thanks to the one-child policy.
2. Massive imbalance in the ratio of males to females. Millions of female babies were aborted or killed at birth.
3. Belt and Road initiative debt bomb.
4. A massive goat f**k of a housing-bubble crisis. Billions wasted by the likes of Evergrande et al. on failed non-real-estate ventures.
5. Billions wasted on creating ghost cities nobody will ever live in due to the declining population.
6. The loss of upwards of 80% of the population's personal wealth due to the housing market crash.
7. Drastic lockdown measures have largely failed, and only served to anger the population.
8. If the Chinese people discover the Wuhan virus was created in a lab, they will go ape s**t and there will be blood.
9. Massive debt incurred in road-building projects to nowhere to stimulate the economy.

China is in much worse shape than most people realize. It remains to be seen if they will do something stupid like make a grab for Taiwan.

Galileo9
March 4, 2022 4:46 am

Can someone tell me why this guy is saying this on twitter please?
Dr Simon Lee
@SimonLeeWx
· 2 Mar 2020
Even the UAH satellite-based estimate of global lower-tropospheric temperatures (which has the slowest warming trend of all major datasets) showed Earth had an exceptionally warm February – the 2nd warmest on record, behind only 2016, which was fuelled by a Super El Niño.

Derg
Reply to  Galileo9
March 4, 2022 4:56 am

I really wish it would warm up. More snow on the way 😡

Reply to  Galileo9
March 4, 2022 5:37 am

Two years ago?

Bellman
Reply to  Galileo9
March 4, 2022 5:43 am

Because that's about February 2020. February 2022 was only the 16th warmest.

Here’s the current February graph.

Derg
Reply to  Bellman
March 4, 2022 1:49 pm

And yet CO2 rises 😉

jeffery p
Reply to  Galileo9
March 4, 2022 5:57 am

Why don’t you ask him why he’s saying that?

Galileo9
Reply to  Galileo9
March 4, 2022 7:22 am

Sorry guys, my mistake. I could have sworn that when I first looked at that tweet it read March 2 2022.
I guess I’ll have to blame old eyes and a small screen.

bdgwx
Reply to  Galileo9
March 4, 2022 7:36 pm

Don't sweat it. I make a ton of mistakes myself. I'm a little embarrassed to admit this, but in one post I said multiple times that carbon had 12 protons. Carbon-12 has 12 protons and neutrons combined, only 6 of which are protons. Now that is embarrassing.

Reply to  Galileo9
March 4, 2022 10:26 am

The full monthly lower-troposphere anomaly dataset is reproduced in the head posting. The temperature has fallen quite a bit since February 2020.

And one should be careful not to deploy the device used by the climate Communists, of picking out a single anomalous value that suits the Party Line and then considering it in isolation.

As the head posting shows, the underlying rate of global warming is small, slow, harmless and net-beneficial.

Clyde Spencer
Reply to  Galileo9
March 4, 2022 12:15 pm

2020 is tied for first place with 2016, despite the largest downturn in anthropogenic CO2 emissions in history.

Matthew Schilling
Reply to  Clyde Spencer
March 5, 2022 5:45 am

And everyone knows the climate responds instantly to even the slightest change in CO2. Or something.

Clyde Spencer
Reply to  Matthew Schilling
March 5, 2022 10:46 am

It does so, without fail, within about a couple of weeks of the leaves coming out every spring. The Northern Hemisphere MLO CO2 peak is in May every year.

Bellman
Reply to  Clyde Spencer
March 5, 2022 10:59 am

I think you’re getting your causation backwards there. CO2 goes down because the leaves come out.

Clyde Spencer
Reply to  Bellman
March 5, 2022 7:51 pm

One of us is missing something. When anything reaches a peak, it has nowhere to go but down. And, I did mention the leaves coming out.

I do expect better from you.

Bellman
Reply to  Clyde Spencer
March 6, 2022 10:49 am

I’m sorry if I misunderstood your argument.

You were replying to Matthew Schilling suggesting that the climate does not respond instantaneously to changes in CO2, and so I assumed when you said “It does so, without fail, within about a couple weeks of the leaves coming out, every Spring.” you meant that the climate was reacting immediately to a change in CO2. If that’s not what you meant I apologize, but in that case I’m not sure how it’s relevant to Matthew’s comment.

TheFinalNail
March 4, 2022 4:50 am

And we have another new start date for ‘the pause’! Oct 2014 replaces Aug 2015, or whatever other start date provides the longest non-positive duration that can be wrangled from the UAH data.

Notwithstanding the constant changes to whenever this latest ‘pause’ was supposed to have started, we should ask ourselves the usual question:

‘Is a period of 7 years and 5 months (AKA 89 months) without a best estimate warming trend unusual in a data set spanning several decades that, overall, shows statistically significant warming?’

The answer, as usual, is ‘no’.

Rounding up to a neat 90 months, there are 321 such overlapping periods in the full UAH data set. Of these, 111 are periods of no warming or else cooling. More than one third of all consecutive 90-month periods in the UAH data set do not show a warming trend. Despite this, the data set shows an overall statistically significant warming trend.
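[The overlapping-window tally described above is easy to sketch. This is an illustration only: the series below is synthetic (a 0.13 K/decade trend plus white noise standing in for the UAH anomalies), so the counts it prints will differ from the 321/111 figures quoted.]

```python
import numpy as np

# Synthetic stand-in for the UAH monthly anomalies: a 0.13 K/decade
# warming trend plus Gaussian monthly noise (no ENSO autocorrelation).
rng = np.random.default_rng(0)
n_months = 519                       # Dec 1978 through Feb 2022, inclusive
t = np.arange(n_months) / 120.0      # time in decades
series = 0.13 * t + rng.normal(0.0, 0.15, n_months)

# Count overlapping 90-month windows whose OLS trend is non-positive.
window = 90
total = n_months - window + 1
flat_or_cooling = 0
for start in range(total):
    slope = np.polyfit(t[start:start + window],
                       series[start:start + window], 1)[0]  # K per decade
    if slope <= 0:
        flat_or_cooling += 1

print(f"{flat_or_cooling} of {total} overlapping {window}-month "
      f"windows show no warming")
```

[With realistic month-to-month autocorrelation from ENSO, flat windows become considerably more common than with white noise, which is the commenter's point.]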

Given that Lord M starts counting (from his various start points) at the peak of a big El Nino, and finishes counting at the trough of the recent double-dip La Nina, it is hardly surprising to find yet another ~90 month period of no warming. Suggesting that the underlying long term warming trend has stopped or reversed is a wish.

Ireneusz Palmowski
Reply to  TheFinalNail
March 4, 2022 5:29 am

There is no global warming, and as long as La Niña lasts there will not be. Many regions will be cooler.

Ireneusz Palmowski
Reply to  Ireneusz Palmowski
March 4, 2022 5:34 am

For example, Australia will have below average temperatures due to cloud cover.
http://tropic.ssec.wisc.edu/real-time/mtpw2/product.php?color_type=tpw_nrl_colors&prod=global2&timespan=24hrs&anim=html5

TheFinalNail
Reply to  Ireneusz Palmowski
March 4, 2022 5:46 am

And soon after the next round of El Niño conditions recommences, that map will look decidedly red. It doesn't matter what a single month looks like; it's just a snapshot. What matters is the underlying long-term trend, and that remains statistically significant warming.

BobM
Reply to  TheFinalNail
March 4, 2022 7:09 am

“What matters is the underlying long term trend; and that remains statistically significant warming.” – said all the climate experts in 1940.

Meab
Reply to  BobM
March 4, 2022 1:37 pm

Don’t confuse ToeFungalNail with a climate expert. He’s just a run-of-the-mill climate alarmist who makes “predikshuns” based on his belief that CO2 dominates all other factors that influence the climate.

ToeFungalNail doesn’t understand that CO2 is just one factor out of many. He doesn’t understand that, since climate models are unable to hindcast major changes in the climate when CO2 was stable, the models are essentially useless.

bdgwx
Reply to  Meab
March 4, 2022 2:16 pm

Meab said: “since climate models are unable to hindcast major changes in the climate when CO2 was stable, the models are essentially useless.”

Willeit et al. 2019 was able to hindcast major changes in the climate over the last 3 million years both with stable and unstable CO2 trends. Their model even explains the transition from 40k to 100k year glacial cycles around 800,000 YBP. It hindcasts both the CO2 and T profiles with pretty reasonable skill. As always, if you know of a model that has better skill in replicating both CO2 and T simultaneously I’d love to review it.

Graemethecat
Reply to  bdgwx
March 4, 2022 4:33 pm

Yet another worthless bit of computer modelling, completely devoid of any physical basis.

bdgwx
Reply to  Graemethecat
March 4, 2022 4:52 pm

Would you mind posting a link to the model you had in mind that exhibits better skill and also explains the mid Pleistocene transition and which has a firm physical basis?

Graemethecat
Reply to  bdgwx
March 5, 2022 7:41 am

I have no idea, but I do know that a model with as many adjustable knobs and dials as climate models can be “adjusted” to fit any desired scenario.

bdgwx
Reply to  Graemethecat
March 5, 2022 1:14 pm

The Standard Model of particle physics has a lot of knobs and dials that can be adjusted. Do you hold the same prejudice against it as you do for the climate models?

Carlo, Monte
Reply to  bdgwx
March 5, 2022 2:03 pm

This is nonsense.

Graemethecat
Reply to  bdgwx
March 5, 2022 11:11 pm

Wrong. The Standard Model is based on the fundamental constants, describes just three forces, six leptons and six quarks, and makes physically verifiable predictions.

Mike
Reply to  bdgwx
March 4, 2022 4:40 pm

“Willeit et al. 2019 was able to hindcast major changes in the climate over the last 3 million years”

God spare me. You actually believe we have the slightest clue about the details of the climate 3 million years ago?
You need help. And I don’t mean regular help but a team of specialists working round the clock. 🙂

bdgwx
Reply to  Mike
March 4, 2022 6:51 pm

Mike said: “You actually believe we have the slightest clue about the details of the climate 3 million years ago?”

Yes. I’ve not seen a convincing reason to doubt the abundance of evidence which says that glacial cycles were common.

RLH
Reply to  TheFinalNail
March 4, 2022 8:56 am

When do you expect the next El Nino to happen and how big will it be?

Dave Fair
Reply to  TheFinalNail
March 4, 2022 10:02 am

And that warming is the result of what, TFN? What is the significance of that minor warming?

ResourceGuy
Reply to  TheFinalNail
March 4, 2022 11:37 am

The AMO (down) cycle will prove you wrong Nail.

bdgwx
Reply to  ResourceGuy
March 4, 2022 2:11 pm

ResourceGuy said: “The AMO (down) cycle will prove you wrong Nail.”

The AMO was negative from 1965 to 1998 and positive through at least 2019. Based on Berkeley Earth’s data, the temperature increased about 0.50 C during the cool phase (34 years) and 0.45 C during the warm phase (21 years). Those are warming rates of +0.15 C/decade and +0.21 C/decade. Based on that alone you could reasonably hypothesize that the future trend would be lower, but since the trend was still significantly positive even during the cool phase I don’t think it is reasonable to hypothesize that it would be negative.

ResourceGuy
Reply to  bdgwx
March 4, 2022 8:54 pm

I’m not suggesting a univariate model but I guess you took it that way.

Clyde Spencer
Reply to  TheFinalNail
March 4, 2022 12:21 pm

… and that remains statistically significant warming.

Except for the last 7 years and 5 months. That is about 1/4 of the 30-year baseline. Not quite a “snap shot.”

MarkW
Reply to  TheFinalNail
March 4, 2022 7:27 pm

As every good climate scientist knows, once a trend starts, it never, ever, ends.

MarkW
Reply to  TheFinalNail
March 4, 2022 5:46 am

I don’t know if you are this math deficient, or just being your usual duplicitous self.

The calculation of the pause starts at the present and works backwards in time.
According to the leading lights of the AGW panic, such long pauses aren’t possible.

Bellman
Reply to  MarkW
March 4, 2022 5:56 am

For once I’d like someone who insists Monckton is working backwards to explain exactly what they think he does, and why it makes a difference.

This is about how you determine which month will be the start of the pause. You look at one month after another until you have found the correct start month, i.e. the earliest month that gives you a non-positive trend from that month to the most recent month. It makes no sense to perform your search backwards as you won’t know you have found the earliest such month until you have gone back to the start of the data set. It’s easier to start at the beginning look at each potential start month and stop as soon as you find the first one that gives you a non-positive trend. But it makes no difference which direction you look in, you will get the same result.

Carlo, Monte
Reply to  Bellman
March 4, 2022 6:54 am

NEE!

Bellman
Reply to  Carlo, Monte
March 4, 2022 7:11 am

It.

Mark BLR
Reply to  Bellman
March 4, 2022 8:10 am

For once I’d like someone who insists Monckton is working backwards, to explain exactly what they think he does

“The cure for boredom is curiosity. There is no cure for curiosity.” — Dorothy Parker

1) For a given dataset, fix the end-point as “the last available monthly anomaly value”.

2) Working backwards, find the earliest month that results in a (just) negative trend.

3) One month later, when a new “last available monthly anomaly value” becomes available, go to step 1.

The latest results, for the main surface (GMST) and satellite (lower troposphere) datasets, are shown below.

… and why it make[s] a difference

It doesn’t, it’s merely an “interesting” phenomenon.

To classically trained detractors it can (legitimately …) be called “intellectual onanism”.

For ignorant peasants (such as myself, who need to look up what the term “sermo vulgaris” means instead of simply recalling it from memory) the term usually employed is “math-turbation”.

NB : The results still count as “interesting” though …

Bellman
Reply to  Mark BLR
March 4, 2022 8:24 am

Thanks. Yes that is how the start date can be determined. The question still remains, why work backwards in step 2? The only way to be sure a given month is the earliest month is to go all the way to the earliest date e.g. December 1978. You could just as easily start at the earliest date and work forwards until you found the first negative trend.

Mark BLR
Reply to  Bellman
March 4, 2022 9:06 am

The question still remains, why work backwards in step 2?

You could just as easily start at the earliest date and work forwards until you found the first negative trend.

You have been VERY badly misinformed …

Bellman
Reply to  Mark BLR
March 4, 2022 9:39 am

That’s not what I’m describing. What I’m saying is you have to look at every start date (or end date if you prefer) calculate the trend from that date to the present, and choose from all possible dates the one that gives you the longest pause.

The forwards method

What’s the trend from December 1978 to February 2022? It’s positive, so reject that as a start date.

What’s the trend from January 1979 to February 2022? That’s positive, so reject that too.

Repeat for each month until you get to:

What’s the trend from October 2014 to February 2022? It’s negative. Hooray! We have the start date for the longest possible pause. We can stop now.

The backwards method

What’s the trend from January 2022 to February 2022? It’s negative; it’s the best candidate for a pause, but we have to keep going.

What’s the trend from December 2021 to February 2022? It’s negative; it’s still the best candidate for a pause, but we have to keep going.

And so on until:

What’s the trend from March 2018 to February 2022? It’s positive, so not a candidate for a pause. April 2018 remains our best candidate, but the trend could easily turn negative again, so we keep going.

Until:

What’s the trend from October 2017 to February 2022? It’s negative. Hooray, we can use that as a start date for the pause, but we have to keep going.

So on through more negative months until:

What’s the trend from September 2014 to February 2022? It’s positive, so not a candidate for a pause. October 2014 is now our best start date. But we can’t know that it won’t turn negative again, so keep going.

Finally:

What’s the trend from December 1978 to February 2022? It’s still positive. We’ve come to the end of our data, so go back to the earliest pause start we found, October 2014, and that’s the start of the pause.
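[Both searches can be written down in a few lines. Here is a minimal Python sketch of the forwards method over a synthetic series (warming that flattens near the end, standing in for the real UAH record; not actual data). A backwards scan over the same series returns the identical index — it just cannot stop early.]

```python
import numpy as np

# Synthetic anomaly series: a 0.14 K/decade rise that flattens over the
# final few years, plus monthly noise. Not actual UAH values.
rng = np.random.default_rng(1)
months = 519                                  # Dec 1978 .. Feb 2022
t = np.arange(months) / 120.0                 # time in decades
anoms = np.minimum(0.14 * t, 0.14 * 3.6) + rng.normal(0.0, 0.1, months)

def pause_start(series, times):
    """Forwards method: return the index of the earliest start month whose
    OLS trend to the final month is non-positive, or None if none exists."""
    for start in range(len(series) - 2):      # need at least 3 points
        slope = np.polyfit(times[start:], series[start:], 1)[0]
        if slope <= 0:
            return start                      # first hit = longest pause
    return None

idx = pause_start(anoms, t)
print("longest pause starts at month index", idx)
```

[The first index found going forwards is, by construction, the earliest one, so both directions agree; the forwards scan merely does fewer regressions.]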

Mark BLR
Reply to  Bellman
March 4, 2022 10:13 am

What I’m saying is you have to look at every start date (or end date if you prefer) calculate the trend from that date to the present, and choose from all possible dates the one that gives you the longest pause.

NB : I don’t “have to” do that, it’s one option amongst many.

On the other hand, “been there, done that, got the T-shirt” …

Bob boder
Reply to  Bellman
March 4, 2022 12:10 pm

Start 8000 years ago

Clyde Spencer
Reply to  Bellman
March 4, 2022 12:29 pm

Actually, you can save yourself some time by looking at a graph and seeing what historical dates can be eliminated as being impossible. Not that you will probably notice the difference unless you are still using an old 8-bit computer with a 1MHz clock speed.

Bellman
Reply to  Clyde Spencer
March 4, 2022 12:54 pm

True, and I thought about mentioning something to that effect, but it’s getting complicated enough. You can generally tell when the trend isn’t going to go negative, and either stop there if going backwards or start there if going forwards.

In all fairness, I don’t use an algorithm to determine the start of the pause, I just generate a time series of every month and eyeball it to see the earliest start date, and this also allows me to see where to cherry pick periods with fast warming rates.

The issue still is why people think using any process to find the exact start month to give you the longest pause is not cherry-picking as long as it’s calculated and done backwards. To me, the very fact you are doing the calculation for every month is what makes it a cherry-pick.

bdgwx
Reply to  Mark BLR
March 4, 2022 9:47 am

CMoB is doing the equivalent of your “Trends to last data point” line. And Bellman is right. You can start in either direction, but computationally, starting at the beginning requires fewer calculations, since you get to stop the moment a non-positive trend is observed. Starting at the end forces you to walk all the way back to the beginning.

Derg
Reply to  bdgwx
March 4, 2022 1:51 pm

And yet CO2 keeps rising 🙂

bdgwx
Reply to  Derg
March 4, 2022 2:29 pm

Derg said: “And yet CO2 keeps rising.”

ENSO keeps happening too.

Derg
Reply to  bdgwx
March 4, 2022 3:54 pm

A sphincter says what?

bdgwx
Reply to  Derg
March 5, 2022 8:28 am

ENSO is the El Nino Southern Oscillation. It has been shown to drive the UAH TLT temperature up during the warm phase (El Nino) and drive it down during the cool phase (La Nina).

Tim Gorman
Reply to  Derg
March 4, 2022 3:36 pm

You hit the nail right on the head!

The fact that CO2 keeps rising *should* mean there will be no pause. That’s what the climate models all show – NO PAUSE.

The longer the pause the more questionable the tie-in between CO2 and temperature becomes.

When the climate models get good enough to predict the pauses then they might become useful for predicting the future. Don’t hold your breath.

Clyde Spencer
Reply to  Tim Gorman
March 5, 2022 10:52 am

I have submitted two articles supporting your position, but Charles has not been willing to publish them. Would you be interested in seeing them?

Tim Gorman
Reply to  Clyde Spencer
March 5, 2022 2:14 pm

Yep. Send ’em along! Do you still have my email?

bdgwx
Reply to  Tim Gorman
March 5, 2022 1:11 pm

TG said: “The fact that CO2 keeps rising *should* mean there will be no pause.”

That would only be true if CO2 were the only thing modulating atmospheric temperatures.

TG said: “That’s what the climate models all show – NO PAUSE.”

If all climate models show NO PAUSE then why is it that I see a lot of pauses in the CMIP5 members available on the KNMI Explorer?



Tim Gorman
Reply to  bdgwx
March 5, 2022 2:27 pm

Again, here is the graph of the models.

Where are the pauses?

bdgwx
Reply to  Tim Gorman
March 5, 2022 6:18 pm

TG said: “Where are the pauses?”

Download the tabular data for each member from the KNMI Explorer. Load the data into Excel. Do a =@LINEST(X1:X89) over each rolling 89-month window of values. Look for occurrences where LINEST returns a slope less than or equal to zero. If Excel is not your thing you can use R or your favorite programming language.

Tim Gorman
Reply to  bdgwx
March 7, 2022 4:33 am

“Download the tabular data for each member from the KNMI Explorer. Load the data into Excel. Do a =@LINEST(X1:X89) on each monthly value. Look for occurrences where LINEST is less or equal to zero. If Excel is not your thing you can use R or your favorite programming language.”

I don’t need to do so. I’ve already posted the data in the form of a graph of the model outputs included in CMIP5.

bdgwx
Reply to  Tim Gorman
March 7, 2022 6:55 am

TG said: “I don’t need to do so. I’ve already posted the data in the form of a graph of the model outputs included in CMIP5.”

How did you apply the Monckton method to the graph? With so many lines on that graph how did you make sure you weren’t confusing members especially when the lines crossed?

Tim Gorman
Reply to  bdgwx
March 7, 2022 12:10 pm

I can only assume you are color blind. You can separate out the model runs via their color.

bdgwx
Reply to  Tim Gorman
March 7, 2022 12:54 pm

TG said: “I can only assume you are color blind. You can separate out the model runs via their color.”

I zoomed in on the graph you posted and put the pixel grid on it. It looks to me that a lot of them have the same color. I also noticed that when multiple members land on the same pixel the color seems to be a blend of all of them. And the mass in the center looks to be blended together so thoroughly that I can’t tell where an individual member even starts. Maybe my eyes are failing me. Maybe you can help out. Would you mind separating out just a single member here so that I can see how you are doing it?


Clyde Spencer
Reply to  Bellman
March 4, 2022 12:25 pm

What’s the big deal? With a computer, going back to 1978 and doing all the calculations won’t even give you enough time to go get a cup of coffee.

Bellman
Reply to  Clyde Spencer
March 4, 2022 12:44 pm

It’s not a deal at all, big or small. I just don’t understand why people say it must be done in the slightly more complicated way, and more importantly why they think doing it this way means you are being more honest than if you did it the other way.

MarkW
Reply to  Bellman
March 4, 2022 8:47 am

If you work forward, you have to take each month in turn and then run the calculations from that month to the current month. Sure you get the same results in the end, but it takes a lot more time to find the last month.
If you start from the current month, you find the answer in one pass.

The claim has been that he cherry picks the start month, which he has never done.

Bellman
Reply to  MarkW
March 4, 2022 9:21 am

It doesn’t take more time; that’s my point. Start in December 1978 and work forward: you stop when you reach October 2014, as that’s the first negative trend. Start in January 2021 and you have to go all the way back to December 1978 before you can be certain you’ve found the earliest start date.

The claim has been that he cherry picks the start month, which he has never done.

My claim is that looking at every possible start date in order to find the result you want is cherry-picking. Again I’ll ask: if I check every possible start date to find the earliest date where the trend is greater than 0.34°C/decade (the rate Monckton claims the 1990 IPCC report predicted), would you consider that to be a cherry-pick or just a carefully calculated period?

The start date for that one is October 2010. Would you object if I wrote an article claiming that for the last 11 years and 5 months the earth has been warming faster than the IPCC predicted, or would you ask why I chose that particular start date?

Reply to  Bellman
March 4, 2022 10:34 am

Poor, hapless, mathematically-challenged Bellman! I do not cherry-pick. I simply calculate. There has been no global warming for 7 years 5 months. One realizes that an inconvenient truth such as this is inconsistent with the Party Line to which Bellman so profitably subscribes, but there it is. The data are the data. And, thanks to the hilarious attempts by Bellman and other climate Communists frenetically to explain it away, people are beginning to notice, just as they did with the previous Pause.

Bellman
Reply to  Monckton of Brenchley
March 4, 2022 11:08 am

There has been warming at over 0.34°C/decade for the past 11 years and 5 months. I did not cherry-pick this date; I calculated it[*]. This is obviously inconvenient to anyone claiming warming is not as fast as the imagined prediction from the 1990 IPCC report, so I can understand why that genius mathematician Lord Monckton chooses to ignore this carefully calculated trend. It doesn’t fit with his “the IPCC are a bunch of communists who make stuff up” party spiel. But the data are the data. And no amount of his usual libelous ad hominems will distract from his inability to explain why this accelerated warming is happening despite the pause.

[*] Of course it is a cherry pick.

Bob boder
Reply to  Bellman
March 4, 2022 12:12 pm

Pick a start date of 8,000 years ago.

Bellman
Reply to  Bob boder
March 4, 2022 12:56 pm

Tricky using satellite data.

Mike
Reply to  Bellman
March 4, 2022 4:47 pm

Then start in 1958. We have good balloon data, which agrees with the satellite data from 1979. There has been no global warming for 64 years... at least!

Bellman
Reply to  Mike
March 4, 2022 5:18 pm

What’s the point? We know how this will go: I’ll point to all the data showing a significant warming trend since 1958, and you’ll say that doesn’t count because you don’t like the data. But for the record, trends since 1958:

GISTEMP: +0.165 ± 0.021 °C / decade
NOAA: +0.150 ± 0.021 °C / decade
HadCRUT: +0.139 ± 0.020 °C / decade
BEST: +0.175 ± 0.018 °C / decade

Above uncertainties taken from Skeptical Science Trend Calculator.

RATPAC-A
Surface: 0.166 ± 0.024 °C / decade
850: 0.184 ± 0.022 °C / decade
700: 0.165 ± 0.022 °C / decade
500: 0.197 ± 0.027 °C / decade

My own calculations from annual global data. Uncertainties are not corrected for auto-correlation.

All uncertainties are 2-sigma.
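[For readers wondering where a “± 2-sigma” trend figure like those above comes from: it is twice the standard error of the OLS slope. A minimal sketch follows; the annual series is synthetic (roughly 0.16 °C/decade plus noise), not GISTEMP or RATPAC, and, as noted in the comment, no autocorrelation correction is applied.]

```python
import numpy as np

# Synthetic annual anomalies, 1958-2021: ~0.16 C/decade trend plus noise.
rng = np.random.default_rng(42)
years = np.arange(1958, 2022)
anoms = 0.016 * (years - 1958) + rng.normal(0.0, 0.1, years.size)

# OLS slope and its standard error (no autocorrelation correction).
n = years.size
x = years - years.mean()                      # centered predictor
slope = np.sum(x * anoms) / np.sum(x ** 2)    # C per year
resid = anoms - (anoms.mean() + slope * x)    # residuals about the fit
s2 = np.sum(resid ** 2) / (n - 2)             # residual variance
se = np.sqrt(s2 / np.sum(x ** 2))             # standard error of the slope

print(f"trend: {slope * 10:+.3f} ± {2 * se * 10:.3f} °C/decade (2-sigma)")
```

[Autocorrelated residuals inflate the true uncertainty beyond this naive formula, which is why the Skeptical Science calculator applies a correction.]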

Tim Gorman
Reply to  Bellman
March 4, 2022 5:44 pm

As usual, you can’t explain the difference between precision and accuracy. How do you get a total uncertainty of 0.02 C from measurement equipment with a 0.5 C uncertainty?

That’s no different than saying that if you make enough measurements of the speed of light with a stopwatch having an uncertainty of 1 second, you can get your uncertainty for the speed of light down to the microsecond!

The fact is that your “trend” gets subsumed into the uncertainty intervals. You can’t tell if the trend is up or down!

Carlo, Monte
Reply to  Tim Gorman
March 4, 2022 5:58 pm

He makes the same blunders, over and over and over, and he still doesn’t understand the word.

Bellman
Reply to  Tim Gorman
March 4, 2022 6:25 pm

It’s the uncertainty in the trend, i.e. the confidence interval. You know the thing Monckton never mentions in any of his pauses, and you Lords of Uncertainty never call him out on.

The fact is that your “trend” gets subsumed into the uncertainty intervals. You can’t tell if the trend is up or down!

Which would mean the pause is meaningless, and calculating an exact start month doubly so.

Carlo, Monte
Reply to  Bellman
March 4, 2022 8:28 pm

The same old propaganda, repeated endlessly

Tim Gorman
Reply to  Bellman
March 5, 2022 5:51 am

The uncertainty of the trend depends on the uncertainty of the underlying data. The uncertainty of the trend simply cannot be less than the uncertainty of the data itself.

Uncertainty and confidence interval are basically the same thing. The uncertainty interval of a single physical measurement is typically the 95% confidence interval. That means when you plot the first data point the true value will lie somewhere in the uncertainty interval. When you plot the next point on a graph it also can be anywhere in the confidence interval. Therefore the slope of the connecting line can be from the bottom of the uncertainty interval of the first point to the top of the uncertainty interval for the next point, or vice versa – from the top to the bottom. That means the actual trend line most of the time can be positive or negative, you simply can’t tell. Only if the bottom/top of the uncertainty interval for the second point is above/below the uncertainty interval of the first point can you be assured the trend line is up/down.

Again, the trend line has no uncertainty interval of its own. The uncertainty of a trend line is based on the uncertainty of the data being used to try and establish a trend line.

“You know the thing Monckton never mentions in any of his pauses, and you Lords of Uncertainty never call him out on.”

Can you whine a little louder, I can’t hear you.

“Which would mean the pause is meaningless, and calculating an exact start month doubly so.”

I have never said anything else. UAH is a metric, not a measurement. It is similar to the GAT in that the total uncertainty of the mean is greater than the differences trying to be measured. You can try and calculate the means of the two data sets as precisely as you want, i.e. the standard deviation of the sample means, but that doesn’t lessen the total uncertainty (i.e. how accurate the mean is) of each mean.

Are you finally starting to get the whole picture? So much of what passes for climate science today just totally ignores the uncertainty of the measurements they use. The stated measurement value is assumed to be 100% accurate and the uncertainty interval is just thrown in the trash bin.

Agricultural scientists studying the effect of changing LSF/FFF dates, changing GDD, and changing growing season length have recognized that climate needs to be studied on a county by county basis. National averages are not truly indicative of climate change. And if national averages are not a good metric then how can global averages be any better?

If the climate models were produced on a regional or local basis I would put more faith in them. They could be more easily verified by observational data. Since the weather models have a hard time with accuracy, I can’t imagine that the climate scientists could do any better!

Bellman
Reply to  Tim Gorman
March 5, 2022 3:34 pm

The uncertainty of the trend depends on the uncertainty of the underlying data.

No it doesn’t, at least not usually. The data could be perfect and you will still have uncertainty in the trend. But there’s little point going through all this again, given your inability to accept even the simplest statistical argument.

Therefore the slope of the connecting line can be from the bottom of the uncertainty interval of the first point to the top of the uncertainty interval for the next point, or vice versa – from the top to the bottom.

Which is why you want to have more than two data points.

Only if the bottom/top of the uncertainty interval for the second point is above/below the uncertainty interval of the first point can you be assured the trend line is up/down.

Again, not true, but also again, I see no attempt to extend this logic to anything Monckton says. If, say, there’s a ±0.2°C uncertainty in the monthly UAH data, then by this logic the pause period could have warmed or cooled by 0.4°C, a rate of over 0.5°C / decade. And if you take the Carlo, Monte analysis the change over the pause could have been over 7°C, or ±9°C / decade.

As it happens the uncertainty over that short period is closer to the first figure, around ±0.6°C / decade, using the Skeptical Science Trend Calculator which applies a strong correction for auto-correlation. But that uncertainty is not based on any measurement uncertainty; it’s based on the variability of the data combined with the short time scale.
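That point is easy to demonstrate: feed a trend calculation data with zero measurement error and zero true trend, and a short record still produces a wide 2-sigma band purely from the scatter. A minimal sketch with made-up numbers (white noise, so it understates the band a strong auto-correlation correction would give):

```python
import numpy as np

rng = np.random.default_rng(1)
months = np.arange(89.0)                      # roughly 7 years 5 months of monthly data
series = rng.normal(0.0, 0.15, months.size)   # zero true trend, "perfect" measurements

x = months - months.mean()
slope = (x * (series - series.mean())).sum() / (x ** 2).sum()
resid = (series - series.mean()) - slope * x
se = np.sqrt((resid ** 2).sum() / (months.size - 2) / (x ** 2).sum())

# Convert from per-month to per-decade (120 months)
print(f"{slope * 120:+.3f} ± {2 * se * 120:.3f} °C / decade (2-sigma, white noise)")
```

The uncertainty printed comes entirely from the variability and the short record length, since the series has no measurement error at all.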

I have never said anything else

Fair enough if you think the pause is meaningless, but I see many here who claim it proves there is no correlation between CO2 levels. It’s difficult to see how it can do that if you accept the large uncertainties.

Carlo, Monte
Reply to  Bellman
March 5, 2022 4:46 pm

But there’s little point going through all this again, given your inability to accept even the simplest statistical argument.

Projection time.

Bellman
Reply to  Carlo, Monte
March 6, 2022 10:59 am

Do you want me to enumerate all the times he’s ignored all explanations for why he’s wrong? Insisting that uncertainty in an average increases with sample size, refusing to accept that scaling down a measurement will also scale down the uncertainty, insisting that you can accurately calculate growing degree days knowing only the maximum temperature for the day. To name but three off the top of my head.

Tim Gorman
Reply to  Bellman
March 6, 2022 2:53 pm

“Insisting that uncertainty in an average increases with sample size, refusing to accept that scaling down a measurement will also scale down the uncertainty, insisting that you can accurately calculate growing degree days knowing only the maximum temperature for the day. To name but three off the top of my head.”

You are wrong on each of these. The standard deviation of the sample means is *NOT* the uncertainty of the average value. You can’t even state this properly. The standard deviation of the sample means only tells you how precisely you have calculated the mean of the sample means. It does *NOT* tell you anything about the uncertainty of that average. Precision is *NOT* accuracy. For some reason you just can’t seem to get that right!

There is no scaling down of uncertainty. You refuse to accept that the uncertainty in a stack of pieces of paper is the sum of the uncertainty associated with each piece of paper. If the uncertainty of 200 pages is x then the uncertainty of each piece of paper is x/200, i.e. u1 + u2 + … + u200 = x. This is true even if the pages do *not* have the same uncertainty. If the stack of paper consists of a mixture of 20lb paper and 30lb paper then the uncertainty associated with each piece of paper is *NOT* x/200. x/200 is just an average uncertainty. You can’t just arbitrarily spread that average value across all data elements.

Growing degree-days calculated using modern methods *IS* done by integrating the temperature profile above a set point and below a set point. If the temperature profile is a sinusoid then knowing the maximum temperature defines the entire profile and can be used to integrate. If it is not a sinusoid then you can still numerically integrate the curve. For some reason you insist on using outdated methods based on mid-range temperatures – just like the climate scientists do. If the climate scientists would get into the 21st century they would also move to the modern method of integrating the temperature curve to get degree-days instead of staying with the old method of using mid-range temperatures. HVAC engineers abandoned the old method at least 30 years ago!

Bellman
Reply to  Tim Gorman
March 6, 2022 3:55 pm

Thanks for illustrating my point.

Nothing in your points about the distinction between precision and accuracy explains how increasing sample size can make the average less certain. Your original example was having 100 thermometers each with an uncertainty of ±0.5 °C, making, you claimed, the uncertainty of the average ±5 °C. If your argument is that this uncertainty was about accuracy not precision, i.e. caused by systematic rather than random error, it still would not mean the uncertainty of the average could possibly be ±5 °C. At worst it would be ±0.5 °C.

What do you think x/200 means if not scaling down? You know the size of a stack of paper, you know the uncertainty of that measurement, you divide the measurement by 200 to get the thickness of a single sheet, and you divide the uncertainty by 200 to get the uncertainty of that thickness.
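To put hypothetical numbers on that (the 20 mm stack and ±0.5 mm figures are invented for illustration):

```python
# Hypothetical numbers: a 200-sheet stack measured once with a ruler.
stack_mm = 20.0      # measured height of the stack
u_stack_mm = 0.5     # uncertainty of that single measurement

# Dividing by the exact count (200 has no uncertainty of its own) scales
# both the measurement and its uncertainty by the same factor:
sheet_mm = stack_mm / 200
u_sheet_mm = u_stack_mm / 200

print(f"one sheet: {sheet_mm:.3f} ± {u_sheet_mm:.4f} mm")
```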

The outdated method requiring average temperatures in GDD was exactly the one you insisted was the correct formula here. It shows the need to have both the maximum and minimum temperatures and to get the mean temperature from them along with the range. I suggested you try it out by keeping the maximum temperature fixed and seeing what happened with different minimum temperatures. I take it you haven’t done that. Here’s the formula you wrote down, with some emphasis added by me.

(New Total GDD) = (Yesterday’s Total GDD) + (1/π) * ( (DayAvg – κ) * ( ( π/2 ) – arcsine( θ ) ) + ( α * Cos( arcsine( θ ) ) ) )

DayAvg = (DayHigh + DayLow)/2

κ = 50 (the base temp.)

α = (DayHigh – DayLow)/2

θ = ((κ – DayAvg)/α)
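As a sanity check, here is a minimal Python sketch of that quoted formula (my own illustrative implementation, with the obvious clamps for days entirely above or below the base so the arcsine stays defined). Holding DayHigh fixed and changing DayLow changes the answer, which is the point about needing both temperatures.

```python
import math

def be_gdd(day_high, day_low, base=50.0):
    """One day's growing degree-days from the single-sine formula quoted above."""
    if day_high <= base:           # whole day below the base: no accumulation
        return 0.0
    if day_low >= base:            # whole day above the base: reduces to avg - base
        return (day_high + day_low) / 2 - base
    day_avg = (day_high + day_low) / 2
    alpha = (day_high - day_low) / 2
    theta = (base - day_avg) / alpha
    return (1 / math.pi) * ((day_avg - base) * (math.pi / 2 - math.asin(theta))
                            + alpha * math.cos(math.asin(theta)))

print(be_gdd(70, 50))  # low at the base: agrees with (avg - base) = 10.0
print(be_gdd(70, 30))  # same maximum, colder night: a different answer
```

With the same maximum (70) but a minimum of 30, the formula returns roughly 6.4 GDD instead of 10, even though the simple mid-range method would give 0 for that second day.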

Carlo, Monte
Reply to  Bellman
March 6, 2022 6:33 pm

Who let prof bellcurveman have a dry marker again?

Tim Gorman
Reply to  Bellman
March 7, 2022 8:49 am

“Nothing in your points about the distinction between precision and accuracy explains how increasing sample size can make the average less certain.”

Increasing the sample size only increases the PRECISION of the mean. As the standard deviation of the sample means gets smaller, you are getting more and more precise with the calculated value. THAT IS *NOT* THE UNCERTAINTY OF THE MEAN, which is how accurate it is.

If you only use the stated value of the mean for each sample, ignoring the uncertainty propagated into that mean from the individual members of the sample, and then use that mean of the stated values to determine the mean of the population, you have determined NOTHING about how accurate that mean is.

Take 10 measurements with stated values of x_1, x_2, …, x_10 each with an uncertainty of +/- 0.1. Then let q = Σ (x_1, …, x_10). The uncertainty of q, ẟq, is somewhere between ẟx_1 + ẟx_2 + … + ẟx_10 and sqrt(ẟx_1^2 + ẟx_2^2 + … + ẟx_10^2)

Now, let’s say you want to calculate the uncertainty of the average. q_avg = Σ (x_1, …, x_10) / 10.

The uncertainty of q_avg is somewhere between ẟx_1 + ẟx_2 + … + ẟx_10 + ẟ10 (where ẟ10 = 0) and sqrt(ẟx_1^2 + ẟx_2^2 + … + ẟx_10^2 + ẟ10^2) (where ẟ10 = 0)

Taylor’s Rule 3.18 doesn’t apply here because n = 10 is not a measurement.

from Taylor: “If several quantities x, …, w are measured with small uncertainties ẟx, …, ẟw, and the measured values are used to compute (bolding mine, tg)

q = (x × … × z) / (u × … × w)

If the uncertainties in x, …, w are independent and random, then the fractional uncertainty in q is the sum in quadrature of the original fractional uncertainties.

ẟq/q = sqrt[ (ẟx/x)^2 + … + (ẟz/z)^2 + (ẟu/u)^2 + … + (ẟw/w)^2 ]

In any case, it is never larger than their ordinary sum

ẟq/q = ẟx/x + … + ẟz/z + ẟu/u + … + ẟw/w

Even if you assume that u, for instance, is 10, the ẟ10 just gets added in.

Thus the uncertainty of the mean of each sample is somewhere between the direct addition of the uncertainties in each element and the quadrature addition of the uncertainties in each element. Since the number of elements is a constant (with no uncertainty) the uncertainty of the constant neither adds to, subtracts from, or divides the uncertainty of the sum of the uncertainties from each element.
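Putting the two bounds from the worked example above into code (ten readings at ±0.1, as in the comment). The second print applies the division by the exact count that the standard propagation rules (e.g. Taylor ch. 3) would use for the average, which is exactly the step this thread disputes:

```python
import math

u = [0.1] * 10   # ten measurements, each with uncertainty ±0.1

direct = sum(u)                                  # worst case: errors all one way
quadrature = math.sqrt(sum(x ** 2 for x in u))   # independent, random errors

print(f"sum:     between ±{quadrature:.3f} and ±{direct:.2f}")
# Under the usual rules, dividing the sum by the exact count 10
# divides both bounds by 10 as well:
print(f"average: between ±{quadrature / 10:.4f} and ±{direct / 10:.2f}")
```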

When you use the means of several samples to calculate the mean of the population by finding the average of the means, the uncertainty associated with each mean must be propagated into the average.

Average-of-the-sample-means = (m_1 + m_2 + … + m_n) / n,

where m_1, …, m_n each have an uncertainty of ẟm_1, ẟm_2, …, ẟm_n

The fractional uncertainty of Average_of_the_sample_means is between

ẟAverage_of_the_sample_means / Average_of_the_sample_means =

ẟm_1/m_1 + ẟm_2/m_2 + …. + ẟm_n/m_n

and

sqrt[ (ẟm_1/m_1)^2 + ( ẟm_2/m_2)^2 + … + (ẟm_n/m_n)^2 ]

The standard deviation of m_1, …, m_n is

sqrt[ Σ(m_i – m_avg)^2 / n ], where i runs from 1 to n

This is *NOT* the uncertainty of the mean. Totally different equation.

I simply do not expect you to even follow this let alone understand it. My only purpose is to point out to those who *can* follow it and understand it that the standard deviation of the sample means is *NOT* the same thing as the uncertainty of the mean of the sample means.

Only a mathematician that thinks all stated values are 100% accurate would ignore the uncertainties associated with measurements and depend only on the stated values of the measurements.

Bellman
Reply to  Tim Gorman
March 7, 2022 2:01 pm

Thanks for reminding me and any lurkers here of the futility of arguing these points with you. You’ve been asserting the same claims for what seems like years, provided no evidence but the strength of your convictions, and simply refuse to accept the possibility that you might have misunderstood something.

Aside from the failure to provide any justification, this claim is self-evidently false. You are saying that if you have 100 temperature readings made with 100 different thermometers, each with an uncertainty of ±0.5 °C, then if this uncertainty is caused by systematic error (e.g. every thermometer might be reading 0.5 °C too cold), then the uncertainty in the average of those 100 thermometers will be ±50 °C. And that’s just the uncertainty caused by the measurements, nothing to do with the sampling.

In other words, say all the readings are between 10 and 20 °C, and the average is 15 °C. Somehow the fact that the actual temperature around any thermometer might have been as much as 20.5 °C, means that the actual average temperature might be 65 °C. How is that possible?

Taylor’s Rule 3.18 doesn’t apply here because n = 10 is not a measurement.

Of course 10 is a measurement. It’s a measure of the size of n, and it has no uncertainty.

Carlo, Monte
Reply to  Bellman
March 7, 2022 2:25 pm

Uncertainty is not error.

Bellman
Reply to  Carlo, Monte
March 7, 2022 6:52 pm

And the relevance of this mantra is?

The problem is the same regardless of how you define uncertainty. How can 100 thermometers, each with an uncertainty of ±0.5 °C, result in an average with a measurement uncertainty between ±5 and ±50 °C?

Tim Gorman
Reply to  Bellman
March 9, 2022 12:44 pm

“The problem is the same regardless of how you define uncertainty. How can 100 thermometers, each with an uncertainty of ±0.5 °C, result in an average with a measurement uncertainty between ±5 and ±50 °C?”

tg: “How is it possible for the uncertainty to be 65C? What that indicates is that your average is so uncertain that it is useless. Once the uncertainty exceeds the range of possible values you can stop adding to your data set. At that point you have no idea of where the true value might lie.”

I’m not surprised you can’t figure this one out!

Bellman
Reply to  Tim Gorman
March 9, 2022 1:29 pm

Oh I figured it out a long time ago. I’m just seeing how far you can continue with this idiocy.

Could you point to a single text book that explains that adding additional samples to a data set will make the average worse?

Tim Gorman
Reply to  Bellman
March 9, 2022 5:37 pm

Taylor’s and Bevington’s tomes. I already gave you the excerpts from their textbooks that state that statistical analysis of experimental data (i.e. temperatures) with systematic errors is not possible.

“adding additional samples to a data set will make the average worse?”

Very imprecise. It makes the UNCERTAINTY of the average greater. You are still confusing precision and accuracy. You can add uncertain data and still calculate the mean very precisely. What you *can’t* do is ignore the uncertainty and claim that the precision of your calculation of the mean is also how uncertain that mean is.

Tim Gorman
Reply to  Bellman
March 8, 2022 1:37 pm

“provided no evidence but the strength of your convictions, and simply refuse to accept the possibility that you might have misunderstood something.”

And all you do is keep claiming Taylor and Bevington are idiots and their books are wrong. ROFL!!

“Aside from the failure to provide any justification, this claim is self-evidently false. You are saying that if you have 100 temperature readings made with 100 different thermometers, each with an uncertainty of ±0.5 °C, then if this uncertainty is caused by systematic error (e.g. every thermometer might be reading 0.5 °C too cold), then the uncertainty in the average of those 100 thermometers will be ±50 °C. And that’s just the uncertainty caused by the measurements, nothing to do with the sampling.”

I’m sorry that’s an inconvenient truth for you but it *IS* the truth! The climate scientists combine multiple measurements of different things using different measurement devices all together to get a global average temperature. What do you expect that process to give you? In order to get things to come out the way they want they have to ignore the uncertainties of all those measurement devices and of those measurements and assume the stated values are 100% accurate. They ignore the fact that, in forming the anomalies, the uncertainty in the baseline (i.e. an average of a large number of temperature measurements, each contributing to uncertainty) and the uncertainty in the current measurement ADD even if they are doing a subtraction! They just assume that if all the stated values are 100% accurate then the anomaly must be 100% accurate!

They just ignore the fact that they are creating a data set with a HUGE variance – cold temps in the NH with hot temps in the SH in part of the year and then vice versa in another part of the year. Wide variances mean high uncertainty. But then they try to hide the variance inside the data set by using anomalies – while ignoring the uncertainty propagated into the anomalies.

At least with UAH you are taking all measurements with the same measuring device. It would be like taking one single thermometer to 1000 or more surface locations to do the surface measurements. That would allow you to get at least an estimate of the systematic error in that one device in order to provide some kind of corrective factor. It might not totally eliminate all systematic error but it would at least help. That’s what you get with UAH.

BTW, 100 different measurement devices and measurements would give you at least some random cancellation of errors. Thus you should add the uncertainties using root-sum-square -> sqrt( 100 * 0.5^2 ) = 10 * 0.5 = 5C. Your uncertainty would be +/- 5C. That would *still* be far larger than the hundredths of a degree the climate scientists are trying to identify. If you took 10 samples of size 10 then the mean of each sample would have an uncertainty of about 1.5C ≈ sqrt( 10 * .5^2 ) ≈ 3 * .5. Find the uncertainty of the average of those means and the uncertainty of that average of the sample means would be ≈ sqrt( 10 * 1.5^2 ) ≈ 4.5C. (the population uncertainty and the uncertainty of the sample means would probably be equal except for my rounding). That’s probably going to be *far* higher than the standard deviation of the sample means! That’s what happens when you assume all the stated values in the data set are 100% accurate. You then equate the uncertainty in the mean with the standard deviation of the sample means. It’s like Berkeley Earth assuming the uncertainty in a measuring device is equal to its precision instead of its uncertainty.
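For anyone checking the arithmetic, here is that root-sum-square calculation in Python exactly as laid out above; note that with unrounded intermediates the two-stage figure comes out at 5.0 rather than 4.5, confirming the rounding remark:

```python
import math

u = 0.5  # per-thermometer uncertainty, °C

u_100 = math.sqrt(100 * u ** 2)             # RSS of 100 readings -> 5.0
u_sample = math.sqrt(10 * u ** 2)           # RSS within a sample of 10 -> ~1.58
u_of_means = math.sqrt(10 * u_sample ** 2)  # RSS across the 10 sample means

print(u_100, round(u_sample, 2), round(u_of_means, 2))
```

This reproduces the comment’s figures as stated, i.e. root-sum-square applied to sums of readings, without the subsequent division by n that the standard average-propagation rule would include.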

“In other words, say all the readings are between 10 and 20 °C, and the average is 15 °C. Somehow the fact that the actual temperature around any thermometer might have been as much as 20.5 °C, means that the actual average temperature might be 65 °C. How is that possible?”

No, the uncertainty of the mean would be +/- 5C. Thus the true value of the mean would be from 10C to 20C. Why would that be a surprise?

Remember with global temperatures, however, you have a range of something like -20C to +40C. Huge variance. So a huge standard deviation. And anomalies, even monthly anomalies, will have a corresponding uncertainty.

How is it possible for the uncertainty to be 65C? What that indicates is that your average is so uncertain that it is useless. Once the uncertainty exceeds the range of possible values you can stop adding to your data set. At that point you have no idea of where the true value might lie. In fact, with different measurements of different things using different devices there is *NO* true value anyway. The average gives you absolutely no expectation of what the next measurement will be. It’s like collecting boards at random out of the ditch or trash piles, etc. You can measure all those boards and get an average. But that average will give you no hint as to what the length of the next board collected will be. It might be shorter than all your other boards, it might be longer than all the other boards, or it may be anywhere in the range of the already collected boards – YOU SIMPLY WILL HAVE NO HINT AS TO WHAT IT WILL BE. The average will simply not provide you any clue!

With multiple measurements of the same thing using the same device the average *will* give you an expectation of what the next measurement will be. If all the other measurements range from 0.9 to 1.1 with an uncertainty of 0.01 then your expectation for the next measurement is that it would be around 1.0 +/- 0.01. That’s because with a gaussian distribution the average will be the most common value – thus giving you an expectation for the next measurement.

Bellman
Reply to  Tim Gorman
March 8, 2022 3:19 pm

And all you do is keep claiming Taylor and Bevington are idiots and their books are wrong. ROFL!!

I claim nothing of the sort. I keep explaining to you that they disagree with everything you say, which makes them the opposite of idiots.

Tim Gorman
Reply to  Bellman
March 9, 2022 1:21 pm

“I keep explaining to you that they disagree with everything you say, which makes them the opposite of idiots.”

They don’t disagree with everything I say. The problem is that you simply don’t understand what they are saying and you refuse to learn.

  1. Multiple measurements of different things using different measurement devices
  2. Multiple measurements of the same thing using the same device.

These are two entirely different things. Different methods apply to uncertainty in each case.

In scenario 1 you do not get a gaussian distribution of random error, even if there is no systematic error. In this case there is *NO* true value for the distribution. You can calculate an average but that average is not a true value. As you add values to the data set the variance of the data set grows with each addition as does the total uncertainty.

In scenario 2 you do get a gaussian distribution of random errors which tend to cancel but any systematic error still remains. You can *assume* there is no systematic error but you need to be able to justify that assumption – which you, as a mathematician and not a physical scientist or engineer, never do. You just assume the real world is like your math books where all stated values are 100% accurate.

As Taylor says in his introduction to Chapter 4:

“As noted before, not all types of experimental uncertainty can be assessed by statistical analysis based on repeated measurements. For this reason, uncertainties are classified into two groups: the random uncertainties, which can be treated statistically, and the systematic uncertainties, which cannot. This distinction is described in Section 4.1. Most of the remainder of this chapter is devoted to random uncertainties.” (italics are in original text, tg)

You either refuse to understand this or are unable to understand this. You want to apply statistical analysis to all situations whether it is warranted or not. Same with bdgwx.

Bevington says the very same thing: “The accuracy of an experiment, as we have defined it, is generally dependent on how well we can control or compensate for systematic errors, errors that will make our results different from the “true” values with reproducible discrepancies. Errors of this type are not easy to detect and not easily studied by statistical analysis.”

Temperature measurements are, by definition, multiple measurements of different things using different measurement devices. Thus they are riddled with systematic error which do not lend themselves to statistical analysis. There is simply no way to separate out random error and systematic error. A data set containing this information can be anything from multi-modal to highly skewed to having an absolutely huge variance. Your typical statistical parameters such as mean and standard deviation simply do not describe such a data set well at all.

It’s even worse when you want to ignore the uncertainties associated with each data point in order to make statistical analysis results “look” better. And this is what you, bdgwx, and most climate scientists do. “Make it look like the data sets in the math book” – no uncertainty in the stated values.

Bellman
Reply to  Tim Gorman
March 9, 2022 1:54 pm

You want to apply statistical analysis to all situations whether it is warranted or not.

You mean situations like taking the average of a sample or calculating a linear regression?

Tim Gorman
Reply to  Bellman
March 9, 2022 5:44 pm

“You mean situations like taking the average of a sample or calculating a linear regression?”

Taking the average of a sample while ignoring the uncertainties of the components of the sample is *NOT* statistically correct. It requires an unjustified assumption.

A linear regression of uncertain data without considering the uncertainty interval of the data leads to an unreliable trend line. You’ve been given the proof of this via two different pictures showing why that is true. The fact that you refuse to even accept what those pictures prove only shows that you are continuing to try and defend your religious beliefs.

I’ll repeat what Taylor and Bevington said:

============================

As Taylor says in his introduction to Chapter 4:
“As noted before, not all types of experimental uncertainty can be assessed by statistical analysis based on repeated measurements. For this reason, uncertainties are classified into two groups: the random uncertainties, which can be treated statistically, and the systematic uncertainties, which cannot. This distinction is described in Section 4.1. Most of the remainder of this chapter is devoted to random uncertainties.” (italics are in original text, tg)

=======================================

=================================
Bevington says the very same thing: “The accuracy of an experiment, as we have defined it, is generally dependent on how well we can control or compensate for systematic errors, errors that will make our results different from the “true” values with reproducible discrepancies. Errors of this type are not easy to detect and not easily studied by statistical analysis.”
=========================================

You want to ignore these experts and believe that data with systematic error *CAN* be treated statistically.

Bellman
Reply to  Tim Gorman
March 9, 2022 2:02 pm

No problem with either of your quotes, they are just saying there are random and systematic errors, both of which lead to uncertainty but in different ways. You keep jumping back and forth as to whether you are talking about random or systematic errors, and then start shouting “ERROR IS NOT UNCERTAINTY” whenever I mention it.

Temperature measurements are, by definition, multiple measurements of different things using different measurement devices. Thus they are riddled with systematic error which do not lend themselves to statistical analysis.

Wut? Why would multiple measurements using different devices have more systematic error than taking measurements with a single device?

A data set containing this information can be anything from multi-modal to highly skewed to having an absolutely huge variance.

What has that got to do with systematic error?

It’s even worse when you want to ignore the uncertainties associated with each data point in order to make statistical analysis results “look” better. And this is what you, bdgwx, and most climate scientists do.

We’ve been arguing about measurement uncertainties for months or years, so why do you think we are ignoring them? Uncertainty estimates include the uncertainty in measurements.

Tim Gorman
Reply to  Bellman
March 9, 2022 6:12 pm

“No problem with either of your quotes, they are just saying there are random and systematic errors, both of which lead to uncertainty but in different ways. You keep jumping back and forth as to whether you are talking about random or systematic errors, and then start shouting “ERROR IS NOT UNCERTAINTY” whenever I mention it.”

There is no “jumping around”. All real world measurements have some of each, some random error and some systematic error.

Error itself is *NOT* uncertainty. The fact that you don’t *KNOW* how much of each exists in the measurement is what defines the uncertainty. If you *know* what the sign and magnitude of each type of error is in a measurement then you could reach 100% accuracy for the measurement. The fact that you do *NOT* know the sign or magnitude of either means that you also don’t know the true value of the measurement.

Why is this so hard to understand after having it explained time after time after time after time?

“Wut? Why would multiple measurements using different devices have more systematic error than taking measurements with a single device?”

OMG! Did you *actually* read this before you posted it? Do you think all thermometers have the same systematic error?

“What has that got to do with systematic error?”

Again, did you actually read this before you posted it? If you are using different measurement devices you can easily find that you get all kinds of different distributions. You really have *no* experience in the real world doing actual measurements, do you? Suppose you and your buddy are measuring the bores in a V8 engine. You are doing one side using one device and he is doing the other side. If the systematic error for each device is not the same you will likely get a bi-modal distribution for the measurements. And that isn’t even considering the fact that you can find that the bores haven’t worn the same giving you a very skewed distribution!

“We’ve been arguing about measurement uncertainties for months or years, why do you think we are ignoring them. Uncertainty estimates include the uncertainty in measurements.”

Really? Then why does Berkeley Earth use the precision of the measuring device as their uncertainty estimate? Do you think the Berkeley Earth data is truly representing the uncertainty of the temperature data? When you use the standard deviation of the sample means as the uncertainty of the mean, instead of treating it as just the precision of the mean, you are ignoring the uncertainty of the actual data. You just assume the stated values are 100% accurate!

Bellman
Reply to  Tim Gorman
March 8, 2022 3:21 pm

I’m sorry that’s an inconvenient truth for you but it *IS* the truth!

Continually asserting that something is the truth doesn’t make it so. (Whatever the Bellman says.)

Tim Gorman
Reply to  Bellman
March 9, 2022 1:21 pm

I just gave you two excerpts from Taylor and Bevington that show why you are wrong. My guess is that you will ignore both of them.

Bellman
Reply to  Tim Gorman
March 9, 2022 2:04 pm

As I said, I agree with both of them.

Tim Gorman
Reply to  Bellman
March 9, 2022 6:17 pm

As I said, I agree with both of them.”

No, you don’t. If you agreed with them you would calculate the uncertainty of the mean the way you do for data from measuring different things using different devices. You wouldn’t be amazed that uncertainty can grow past the range of the stated values when you have a data set from measuring different things using different devices. You wouldn’t ignore the existence of systematic uncertainty!

Bellman
Reply to  Tim Gorman
March 10, 2022 12:40 pm

Just point to me the places where either say that measurement uncertainties of an average can be bigger than any individual measurement uncertainty.

And stop claiming I’m ignoring things I keep mentioning.

Jim Gorman
Reply to  Bellman
March 10, 2022 7:18 pm

You’ve never had a job where you carry a measuring device have you? Why do you think a staircase going up 10 ft with 6″ risers might come up 3/4″ short? I’ve renewed a lot of engines. Should I order main bearings based on the “average” wear on the journals? Which main bearings have the most wear? Do you have any experience with certified labs?

Bellman
Reply to  Jim Gorman
March 11, 2022 7:03 am

You’ve never had a job where you carry a measuring device have you?

I have but it’s not relevant to the point.

Why do you think a staircase going up 10 ft with 6″ risers might come up 3/4″ short?

I’d need more context to know, but I’d guess it’s because you’re summing measurements. If you have 20 risers and a small measurement error in the measurement of each riser, then the uncertainty in the height of the staircase will involve the propagation of errors in the sum. That tells you nothing about the uncertainty in the average riser.

If the total uncertainty of the staircase was 20mm, it’s difficult to see how the uncertainty in the average riser was also 20mm.

Tim Gorman
Reply to  Bellman
March 10, 2022 8:18 pm

Taylor and Bevington.

Especially Taylor in chapter 3 which is about total uncertainty, where you have both systematic and random error.

If y = (x1 +/- u1) + (x2 +/- u2) + … + (xn +/- un)

then the average is [ (x1 +/- u1) + (x2 +/- u2) + … + (xn +/- un) ] / n

the uncertainty of the average is u1 + u2 + …. + un + δn as an upper bound or

sqrt[ u1^2 + u2^2 + … + un^2 + δn^2 ] as a lower bound.

since n is a constant, δn = 0

So the uncertainty of the average is greater than any individual uncertainty.

I’m not at my library or I would provide you a quote (for the umpteenth time) from Taylor.

it’s the entirety of what Taylor’s chapter 3 is about!
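The two bounds being argued over can be checked with a quick numeric sketch. The uncertainty values here are made up purely for illustration; the sketch computes the linear sum (the worst-case bound, errors all in the same direction) and the quadrature sum (appropriate when errors are independent and random) for a sum of measurements.

```python
import math

# Hypothetical per-measurement uncertainties, all in the same units
u = [0.5, 0.5, 0.3, 0.4, 0.5]

upper = sum(u)                             # linear sum: worst-case bound
lower = math.sqrt(sum(ui**2 for ui in u))  # quadrature: independent random errors

print(f"uncertainty of the sum: between {lower:.2f} and {upper:.2f}")

# The point in dispute in this thread is what happens on division by the
# exact constant n: whether the average inherits the sum's uncertainty,
# or the sum's uncertainty divided by n.
n = len(u)
print(f"sum bounds divided by n={n}: between {lower/n:.2f} and {upper/n:.2f}")
```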

Bellman
Reply to  Tim Gorman
March 11, 2022 6:38 am

then the average is [ (x1 +/- u1) + (x2 +/- u2) + … + (xn +/- un) ] / n

the uncertainty of the average is u1 + u2 + …. + un + δn as an upper bound or

sqrt[ u1^2 + u2^2 + … + un^2 + δn^2 ] as a lower bound.

since n is a constant, δn = 0

You are just making this up. Taylor does not say anything of the sort in Chapter 3 or anywhere else.

As always you are trying to extrapolate the result you want whilst ignoring the central problem – when you divide by a constant n, you can divide the uncertainty by n.

E.g,

the uncertainty of the average is u1 + u2 + …. + un + δn as an upper bound …”

You are mixing up the rules for propagating uncertainties for adding and subtracting with the rules for propagating uncertainties for multiplying and dividing. Your sum of the uncertainties for the sum is correct, but you cannot just add the uncertainty of the divisor n. When you divide you have to add the fractional uncertainties, and it’s a mystery why you cannot see the parts in Taylor Chapter 3 where he explains that. (It’s not really a mystery, it’s just a case of seeing what you want to see).

Call your sum S, with uncertainty uS. And call the mean M with uncertainty uM. Then the uncertainty of S has, as you say, an upper bound of u1 + u2 + …. + un. But then when you divide by n you have

uM / M = uS / S + δn / n = uS / S + 0

and as M = S / n

uM / (S / n) = uS / S

which means

uM = (uS / S)(S / n) = uS / n

Carlo, Monte
Reply to  Bellman
March 11, 2022 7:00 am

U —> zero as N —> infinity?

Again?

Bellman
Reply to  Carlo, Monte
March 11, 2022 7:47 am

Nope. Read what I say, read what I’m replying to, before getting triggered.

The uncertainty of the sum in that extract is being propagated as the sum of the uncertainties. That assumes every error is as large as its uncertainty interval and in the same direction. It’s an upper bound if you cannot assume the errors are random and independent. So

U —> U as N —> infinity?

Tim Gorman
Reply to  Bellman
March 11, 2022 3:56 pm

How does uS / n not go to zero as n –> infinity?

Bellman
Reply to  Tim Gorman
March 11, 2022 4:43 pm

I’m using this formula from Tim, “the uncertainty of the average is u1 + u2 + …. + un + δn as an upper bound …”

The uncertainty of the sum is u1 + u2 + …. + un, which if all the uncertainties are equal is N * u, and the correct uncertainty of the mean is equal to N * u / N = u.

This is the uncertainty if all errors are systematic, or as Tim puts it the upper bound of the uncertainty of the mean. If all errors are completely random and independent etc, then just as the GUM says the uncertainty will tend to zero as N tends to infinity. But that obviously isn’t going to happen in the real world.
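The random-versus-systematic distinction here can be illustrated with a short simulation. The true value, bias, and scatter below are made-up numbers: the random scatter in the mean shrinks as the sample grows, but a bias shared by every reading survives averaging unchanged.

```python
import random
import statistics

random.seed(0)

TRUE = 15.0    # hypothetical true value
BIAS = 0.3     # systematic error shared by every reading
SIGMA = 0.5    # random error (standard deviation) per reading

def mean_error(n):
    """Error of the mean of n readings relative to the true value."""
    readings = [TRUE + BIAS + random.gauss(0, SIGMA) for _ in range(n)]
    return statistics.mean(readings) - TRUE

# The random component of the error shrinks roughly as 1/sqrt(n);
# the shared bias of 0.3 does not shrink at all.
for n in (10, 1000, 100000):
    print(n, round(mean_error(n), 3))
```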

Tim Gorman
Reply to  Carlo, Monte
March 11, 2022 3:54 pm

Yep, again! I really tire of trying to explain how this all works. I do it over and over and over and he keeps coming back to

U –> zero as N –> infinity.

Bellman
Reply to  Tim Gorman
March 11, 2022 4:36 pm

I explained why that isn’t the case here, becasue we were talking about systematic errors, the formula where you were just adding the uncertainties. The uncertainty of the sum is therefore N * U, so the uncertainty of the mean is U regardless of the sample size.

Carlo, Monte
Reply to  Bellman
March 11, 2022 5:46 pm

What a farce! Uncertainty covers both random and bias!

Do you still not understand this??!?

Bellman
Reply to  Carlo, Monte
March 12, 2022 5:18 pm

You’re the one who keeps insisting that I’m saying uncertainty would go to zero with infinite sampling. I’m saying that would only happen if there was no bias, and would never happen in reality.

You and Tim meanwhile insist that

U -> infinity as N -> infinity

Tim Gorman
Reply to  Bellman
March 11, 2022 3:52 pm

You are just making this up. Taylor does not say anything of the sort in Chapter 3 or anywhere else.”

The only one making stuff up is you! Just look up Taylor’s Rule 3.16 and 3.17!

===========================================

“Suppose that x, …., w are measured with uncertainties ẟx, …, ẟw and the measured values are used to compute

q = x + … + z – (u + … + w).

If the uncertainties in x, …, w are known to be independent and random, then the uncertainty in q is the quadratic sum

ẟq = sqrt[ ẟx^2 + … + ẟz^2 + ẟu^2 + … + ẟw^2 ]

of the original uncertainties. In any case, ẟq is never larger than their ordinary sum

ẟq ≤ ẟx + … + ẟz + ẟu + … + ẟw.
===========================================

You *really* should take the time some day to actually read through Taylor’s Chapter 3 and actually work out all of the examples and chapter problems. Stop just making unfounded assertions that you base on a cursory reading.

As always you are trying to extrapolate the result you want whilst ignoring the central problem – when you divide by a constant n, you can divide the uncertainty by n.”

q is not an AVERAGE. It is a sum. The uncertainty of a constant is 0.

Read the start of Section 3.4 twenty times till you get it:

====================================

Suppose we measure a quantity x and then use the measured value to calculate the product q = Bx, where the number B has no uncertainty.

……….

According to Rule 3.8, the fractional uncertainty in q = Bx is the sum of the fractional uncertainties in B and x. Because ẟB = 0 this implies that

ẟq/q = ẟx/x.

==========================================

You always want to ignore this. I don’t know why.

You always jump to the conclusion that if q equals multiple x’s then you can divide the uncertainty in x by B. If q is the sum of the x’s then you can’t do that. x = q/B, not the other way around. Somehow you miss that! The uncertainty of a sum, and Bx *is* a sum, simply can’t be less than the uncertainty of an individual component.

If q is associated with a stack of B sheets of paper then the uncertainty in q simply can’t be less than the uncertainty in each individual sheet of paper – which is what you keep trying to assert!

The relationship is ẟq/B = ẟx, not the other way around!

The same applies for fractional uncertainties. The fractional uncertainty in q simply cannot be smaller than the fractional uncertainty in x. That’s why ẟq/q = ẟx/x!

As I keep saying, you have *not* studied Taylor and worked out any of his examples, quick checks, or worked out *any* of his chapter questions.

Quick Check 3.3: The diameter of a circle (d) is

d = 5.0 +/- 0.1 cm

what is the circumference and uncertainty?

c = πd = 3.14 * 5.0 = 15.7

ẟc/c = ẟd/d

ẟc = (ẟd/d) * c = (0.1/5) * 15.7 = 0.3

If you will check Taylor’s answer to QC 3.3, you will find that it is 15.7 +/- 0.3 cm.

This means that π is equivalent to B in Example 3.9. π does not show up in the calculation for the uncertainty of πd.

If you knew the uncertainty in c beforehand then you could find the uncertainty in d by dividing ẟc by π.

Just like if you know the uncertainty in q, the whole stack of sheets, you can find the uncertainty in each sheet by dividing ẟq by B.
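Quick Check 3.3 and the stack-of-paper case can both be worked in a few lines. The stack-height figures are invented for illustration; the Quick Check numbers are the ones quoted above. In both cases the exact constant (π or B) drops out of the fractional uncertainty.

```python
import math

# Quick Check 3.3: d = 5.0 +/- 0.1 cm, c = pi * d
d, ud = 5.0, 0.1
c = math.pi * d
uc = (ud / d) * c          # the exact factor pi cancels: uc = pi * ud
print(f"c = {c:.1f} +/- {uc:.1f} cm")

# Stack of B sheets: measure the stack height h +/- uh, then one sheet
# is h/B with uncertainty uh/B (hypothetical stack numbers).
B = 200
h, uh = 2.54, 0.04         # stack height and its uncertainty, cm
t, ut = h / B, uh / B
print(f"one sheet: {t:.4f} +/- {ut:.4f} cm")
```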

As for your calculations you have set the problem up wrong to begin with. Go to Equation 3.18 in Taylor!

if q = x/w then ẟq/q = sqrt[ (ẟx/x)^2 + (ẟw/w)^2 ]

If w is a constant then ẟw = 0 and the uncertainty equation becomes

ẟq/q = ẟx/x

It is absolutely disgusting to me that you can’t work out any of the problems in Taylor and then try to find out where you keep going wrong!

Go work out Question 3.25. If you don’t have a full copy of his book let me know and I’ll provide it here. My guess is that you won’t bother.

Bellman
Reply to  Tim Gorman
March 11, 2022 4:33 pm

The uncertainty of a sum, and Bx *is* a sum, simply can’t be less than the uncertainty of an individual component.”

And right there is your problem. Bx is not a sum, it’s a product. Maybe you dropped out of school after they taught how to treat multiplication as repeated adding, and missed out on fractions. But Taylor, who you never seem to read for meaning, only says that B is an exact quantity. No requirement for it to be an integer, no requirement for it to be greater than 1, no requirement for it to be positive. And if you ever looked at how the equation is derived that would be obvious. And if you don’t want to accept that, you only have to look at the examples Taylor uses where B is equal to pi, or to 1/200.

If q is associated with a stack of B sheets of paper then the uncertainty in q simply can’t be less than the uncertainty in each individual sheet of paper

I begin to feel quite sorry for you sometimes. You are so convinced that uncertainties cannot be reduced that this simple example has to be continually rewritten in your mind so it doesn’t upset your beliefs. I’m sure you’ve got an aphorism to describe this.

The example of the stack of paper is that you can derive the thickness of a single sheet of paper by dividing the height of the stack by 200, and that this means the uncertainty in the thickness of a single sheet of paper is 1/200th of the uncertainty of the measured height of the stack. Nobody is saying the uncertainty of the stack is less than the uncertainty of an individual sheet of paper; it’s the other way round.

Exercise 3.25: The argument is fallacious because 3.18 requires the uncertainties to be independent, which they won’t be if multiplying x by itself.

Carlo, Monte
Reply to  Bellman
March 11, 2022 5:49 pm

I begin to feel quite sorry for you sometimes.

Now you are reduced to just a clown show, one that no one buys tickets to get in.

Bellman
Reply to  Tim Gorman
March 8, 2022 3:37 pm

BTW, 100 different measurement devices and measurements would give you at least some random cancellation of errors. Thus you should add the uncertainties using root-sum-square -> sqrt( 100 * 0.5^2) = 10 * 0.5 = 5C. Your uncertainty would be +/- 5C.

Careful, you’ll invoke the “UNCERTAINTY IS NOT ERROR” inquisition.

You are the one, just now, who was insisting that you were not talking about precision but accuracy. You were implying, and I was going along with the idea, that these were systematic errors. You said that the uncertainty of the sum could be at most equal to the sample size times the uncertainty.

Your uncertainty would be +/- 5C. That would *still* be far larger than the hundredths of a degree the climate scientists are trying to identify.”

Of course it is, because it’s nonsense.

That’s probably going to be *far* higher than the standard deviation of the sample means! That’s what happens when you assume all the stated values in the data set are 100% accurate.

Yes, because your calculations are gibberish.

No, the uncertainty of the mean would be +/- 5C. Thus the true value of the mean would be from 10C to 20C. Why would that be a surprise?

Again, make your mind up. 2 comments ago you were saying “Thus the uncertainty of the mean of each sample is somewhere between the direct addition of the uncertainties in each element and the quadrature addition of the uncertainties in each element.” You were assuming the uncertainties might be due to systematic error and the upper bound of the uncertainty is a direct sum – i.e. ±50 °C.

How is it possible for the uncertainty to be 65C? What that indicates is that your average is so uncertain that it is useless.

Or that you are wrong – which is more likely: that it’s impossible to do what every textbook and statistician has done for over 100 years and take an average, or that you don’t know how to calculate the uncertainty of an average?

Tim Gorman
Reply to  Bellman
March 9, 2022 1:45 pm

Careful, you’ll invoke the “UNCERTAINTY IS NOT ERROR” inquisition”

YOU *STILL* DON’T UNDERSTAND THE DIFFERENCE!

You are the one, just now who was insisting that you were not talking about precision but accuracy. You were implying and I was going along with the idea that these were systematic errors. You said that the uncertainty of the sum could be at most equal to the sample size times the uncertainty.”

Precision is not accuracy and accuracy is not precision. Every measurement you take in the real world has both random error and systematic error. Those together determine the uncertainty in your stated value. Nor did *I* ever say anything about the uncertainty of the sum being equal to the sample size times the uncertainty. I have always said, in a scenario where you have multiple measurements of different things using different measuring devices the upper bound of uncertainty for the sum of the stated values is a direct addition of the component uncertainties and the lower bound is the root-sum-square addition of the component uncertainties.

You need to get it through your head that even in the case of repeated measurements of the same thing using the same device the average of your readings may not give you the “true” value if systematic error exists. Averaging the measurements can only determine the “true” value if the errors are all random and not systematic. If you use a yardstick that is too short by an inch to measure the same 2″x4″ board multiple times, the average of your measurements will *not* give you the “true” value for the length of the board; the average of your measurements will cluster around a value that is off by an inch! All the statistical parameters you calculate from your measurements won’t help you identify that.

Yes, because your calculations are gibberish.”

They are only gibberish to someone that has no idea of how metrology in the real world actually works.

Again, make your mind up. 2 comments ago you were saying “Thus the uncertainty of the mean of each sample is somewhere between the direct addition of the uncertainties in each element and the quadrature addition of the uncertainties in each element.” You were assuming the uncertainties might be due to systematic error and the upper bound of the uncertainty is a direct sum – i.e. ±50 °C.”

As usual you don’t read anything for meaning, do you?

I said: “BTW, 100 different measurement devices and measurements would give you at least some random cancellation of errors. Thus you should add the uncertainties using root-sum-square -> sqrt( 100 * 0.5^2) = 10 * 0.5 = 5C.”

You apparently are wanting to consider all of the uncertainty to be due to systematic error. Do you have a justification for that assumption?

“Or that you are wrong – why would anyone think it’s more likely that it’s impossible to do what every text book and statistician has done for over 100 years and take an average, or that you don’t know who to calculate the uncertainty of an average.”

Sorry, I’m not wrong. Again, you didn’t even bother to think about what I posted. That’s usually the response of someone who knows they are wrong and are trying to defend an indefensible position – just quote articles of faith!

As I keep telling you, most statisticians and statistics textbooks just ignore uncertainty. I gave you a whole slew of examples from my textbooks. Not one single example of data sets where the individual components had uncertainty. All of the data was assumed to be 100% accurate. And that appears to be where you are coming from – stated values are all 100% accurate and all real world measurements can be assumed to be totally random with no systematic uncertainty meaning the results can always be analyzed using statistical parameters.

Open your mind to the real world, eh?

Bellman
Reply to  Tim Gorman
March 8, 2022 3:45 pm

The average gives you absolutely no expectation of what the next measurement will be.

That isn’t the main purpose of the average here, I’m interested in how the global temperature is changing, not trying to predict what a random measurement will be. But you are still wrong. You can use an average to give you an expectation of the next measurement. The very fact that you know the average gives you some (i.e. not absolutely no) expectation. If you know the average you can make a better prediction than if you have no information. If you also know the standard deviation you can have a reasonable expectation of the likely range as well.

It might be shorter than all your other boards, it might be longer than all the other boards, or it may be anywhere in the range of the already collected boards – YOU SIMPLY WILL HAVE NO HINT AS TO WHAT IT WILL BE. The average will simply not provide you any clue!

Your obsession with picking boards out of the ditch is getting a little disturbing. But again, you are wrong for all the reasons you are wrong about temperature. Of course knowing the average length of the boards I’ve found in the ditch is going to give me a clue about what other boards might be like. It might not always be correct, but it is a clue.

Tim Gorman
Reply to  Bellman
March 9, 2022 1:57 pm

That isn’t the main purpose of the average here, I’m interested in how the global temperature is changing, not trying to predict what a random measurement will be.”

How do you tell how it’s changing if your uncertainty allows the trend line to have a negative, positive, or no slope depending on where you pick to go through the uncertainty intervals? I gave you examples of this just a day or so ago!

 But you are still wrong. You can use an average to give you an expectation of the net measurement.”

And now you are back to assuming that all measurements have totally random error. You didn’t even bother to read my example of picking up boards out of ditches and trash piles. You can calculate the average of your boards but it won’t give you any kind of hint as to how long the next board you spy in the ditch will be!

If you know the average you can make a better prediction than if you have no information.”

No, you can’t. The average gives you no data about the variance of the data in your data set. If it’s a bi-modal distribution the average will not tell you which of the modes the next board is likely to be from. At best you just flip a coin! And it gets worse if it’s a multi-modal distribution! The standard deviation won’t help you at all!

Your obsession with picking boards out of the ditch is getting a little disturbing.”

I suspect that is so because they are so accurate at pointing out the problems with your assumptions of metrology in the real world.

It might not always be correct, but it is a clue.”

Wow! Good thing you aren’t an engineer designing a bridge the public will use!

Bellman
Reply to  Tim Gorman
March 9, 2022 5:45 pm

How do you tell how it’s changing if your uncertainty allows the trend line to have a negative, positive, or no slope depending on where you pick to go through the uncertainty intervals?

Read Taylor again. He explains how to calculate an OLS linear regression, and how to calculate its uncertainty. You do not do it by picking a line through the uncertainty intervals. And, I’ll ask again: if you want to do it that way, why are you so certain there’s been no warming over the last 7 and a half years? If you cannot be sure there’s no warming, how can you claim there’s zero correlation with CO2 over that period?

Tim Gorman
Reply to  Bellman
March 9, 2022 7:05 pm

Read Taylor again. He explains how to calculate an OLS linear regression, and how to calculate its uncertainty. You do not do it by picking a line through the uncertainty intervals. And, I’ll ask again: if you want to do it that way, why are you so certain there’s been no warming over the last 7 and a half years? If you cannot be sure there’s no warming, how can you claim there’s zero correlation with CO2 over that period?”

As usual you are skimming Taylor hoping to find something you can throw at the wall in the faint hope it will stick to the wall.

Go look at figure 8.1(b). It shows *exactly* what you’ve already been shown. Taylor defines the trend line as y = A + Bx. He then goes on to calculate the “uncertainty” of A and B and uses that to determine a σy which is used to determine the best fit of the line to the stated values of the data.

From Taylor: “The results (8.10) and (8.11) give the best estimates for the constants A and B of the straight line y = A + Bx, based on the N measured points (x_1, y_1), …, (x_N, y_N). The resulting line is called the least-squares fit to the data, or the line of regression of y on x”
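The least-squares fit discussed here can be sketched directly from Taylor’s equations (8.10) and (8.11), with σy estimated from the scatter of the residuals about the line. The data points below are invented for illustration only:

```python
import math

# Hypothetical measured points (x_i, y_i)
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]

N = len(xs)
Sx, Sy = sum(xs), sum(ys)
Sxx = sum(x * x for x in xs)
Sxy = sum(x * y for x, y in zip(xs, ys))
delta = N * Sxx - Sx * Sx

A = (Sxx * Sy - Sx * Sxy) / delta   # intercept, Taylor eq (8.10)
B = (N * Sxy - Sx * Sy) / delta     # slope, Taylor eq (8.11)

# sigma_y estimated from the residuals about the line (N - 2 degrees
# of freedom, since A and B were fitted from the same data)
residuals = [y - A - B * x for x, y in zip(xs, ys)]
sigma_y = math.sqrt(sum(r * r for r in residuals) / (N - 2))

print(f"y = {A:.2f} + {B:.2f} x,  sigma_y = {sigma_y:.2f}")
```

Note that σy here is estimated from the residuals of the stated values; it says nothing about any measurement uncertainty attached to each y_i, which is the point being argued in this thread.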

And, I’ll ask again. If you want to do it that way, why are you so certain there’s been no warming over the last 7 and a half years? If you cannot be sure there’s no warming how can you claim there’s zero correlation with CO2 over that period?”

You can’t tell *ANYTHING* from global temperatures. I keep telling you that. The global average is trash statistics, the baselines calculated from the annual global average are trash, and the anomalies calculated from the two are trash. The uncertainties associated with each of these are wider (far wider) than the differences climate science is trying to determine!

Bellman
Reply to  Tim Gorman
March 10, 2022 7:07 am

As usual you are skimming Taylor hoping to find something you can throw at the wall in the faint hope it will stick to the wall.

No. I’m throwing Taylor at you, because he’s the one author who I think you might listen to. His equations for calculating the trend and confidence interval for the trend are exactly the same as every other text on the subject – and they are not at all what you describe. You think that the uncertainty of a trend comes from trying to draw a line through all the uncertainty intervals in individual measurements, and that’s just not correct.

Tim Gorman
Reply to  Bellman
March 10, 2022 9:34 am

and they are not at all what you describe”

Of course they are! I gave you the quote from Taylor on the subject!

“You think that the uncertainty of a trend comes from trying to draw a line through all the uncertainty intervals in individual measurements, and that’s just not correct.”

In other words you just want to ignore Taylor’s Figure 8.1. Typical.

The residuals are *NOT* a measure of uncertainty. They are a measure of the “best-fit”.

A trend line is not a measurement!

The best-fit trend line is based solely off of the stated values and ignore the uncertainty intervals of each individual data point. Just like you *always* do. The “true” value of each data point can be anywhere in the uncertainty interval, not just at the stated value. Trend lines based off the stated values are just one guess at the trend line. Picking other values in the uncertainty interval to form the trend line is perfectly correct.

From Taylor:

===================================

Nevertheless, we can easily estimate the uncertainty σy in the numbers y_1, …, y_N. The measurement of each y_i is (we are assuming) normally distributed about its true value A + Bx_i, with width parameter σy. Thus the deviations y_i – A – Bx_i are normally distributed, all with the same central value zero and the same width σy.

=======================================

y_i – A – Bx_i is the residual between the data point and the trend line. In other words you are calculating the best fit, not an uncertainty, even if Taylor calls it such. The assumption that the residuals are normally distributed is a *very* restrictive assumption. For many data sets you will *not* find a normal distribution of the residuals.

Carlo, Monte
Reply to  Tim Gorman
March 10, 2022 12:12 pm

Which is why looking at the residuals histogram is so useful.

Bellman
Reply to  Tim Gorman
March 9, 2022 5:59 pm

And now you are back to assuming that all measurements have totally random error. You didn’t even bother to read my example of picking up boards out of ditches and trash piles.

Sure I did. This is your spiel:

It’s like collecting boards at random out of the ditch or trash piles, etc. You can measure all those boards and get an average. But that average will give you no hint as to what the length of the next board collected will be. It might be shorter than all your other boards, it might be longer than all the other boards, or it may be anywhere in the range of the already collected boards – YOU SIMPLY WILL HAVE NO HINT AS TO WHAT IT WILL BE. The average will simply not provide you any clue!

Nothing there about measurement uncertainty of any sort, just about averaging.

Of course if there’s a systematic error in all your measurements, that error will also be in the estimate of what the next board will be. But that error will also be in all the single measurements you’ve made of all your boards. By this logic you shouldn’t ever bother measuring anything because it might have a systematic error.

No, you can’t. The average gives you no data about the variance of the data in your data set.”

As an engineer I’m in the habit of plucking out random boards from the trash. Say there’s a big pile of trash with thousands of boards all waiting to be measured. I pull 20 out at random and measure them and find the average length was 1.5m. For some reason I forgot to write down the individual measurements so I’ve no idea what the standard deviation was. But I can still make an estimate that the next board I pull out will be 1.5m. It probably won’t be exactly that, but 1.5m is the value that minimizes the error. And I say it is better to base my estimate on the average length of the 20 boards I’ve already seen than on nothing. If you come along and we have a bet as to who can guess the closest to the next board pulled out, and I guess 1.5m and you guess 10m, who’s most likely to be correct?

Tim Gorman
Reply to  Bellman
March 9, 2022 7:13 pm

Sure I did. This is your spiel:”

Obviously you didn’t!

“Nothing there about measurement uncertainty of any sort, just about averaging.”

The issue was not uncertainty, it was whether the average could provide you an expectation about the next measurement. Nice try at changing the issue – but it’s nothing more than a red herring.

“Of course if there’s a systematic error in all your measurements, that error will also be in the estimate of what the net board will be. But that error will also be in all the single measurements you’ve made of all your boards. By this logic you shouldn’t ever bother measuring anything because it might have a systematic error”

More red herring. Here is what I said: “YOU SIMPLY WILL HAVE NO HINT AS TO WHAT IT WILL BE. The average will simply not provide you any clue!

Of course you couldn’t address *that*, could you?

“As an engineer I’m in the habit of plucking out random boards from the trash. Say there’s a big pile of trash with thousands of boards all waiting to be measured. I pull 20 out at random and measure them and find the average length was 1.5m. For some reason I forgot to write down the individual measurements so I’ve no idea what the standard deviation was. But I can still make an estimate that the next board I pull out will be 1.5m.”

No, you can’t assume that. What makes you think you can? You are still assuming that the average is the most common value (i.e. a gaussian distribution) but you have no way of knowing that from just an average value. Again, if you have a multi-modal distribution with an equal number of components you have *NO* chance of getting a board anywhere near the average. There won’t be any boards that are of the length of the average.
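The multi-modal objection can be made concrete with a toy data set. The board lengths below are invented: half the boards are about 1 m and half about 2 m, so the mean is 1.5 m even though no board is anywhere near that length.

```python
import statistics

# Hypothetical bimodal board lengths in metres: half ~1.0 m, half ~2.0 m
boards = [0.98, 1.01, 1.00, 0.99, 2.01, 1.99, 2.02, 2.00]

mean = statistics.mean(boards)
print("mean length:", round(mean, 2), "m")

# No board is close to the mean: the nearest one is almost 0.5 m away.
closest = min(abs(b - mean) for b in boards)
print("closest board to the mean:", round(closest, 2), "m away")
```

Whether the mean is still the error-minimizing guess for the next board, or a value that will never be observed, is exactly the disagreement between the two commenters here.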

Bellman
Reply to  Tim Gorman
March 10, 2022 5:37 am

The issue was not uncertainty, it was whether the average could provide you an expectation about the next measurement.

That was my point. I don’t know if you realize you keep doing this, shifting the argument from one thing to another and then accusing me of doing the same. Let’s go over this again. You asked a question about what an average of a random sample could tell you about the next value. When I replied:

That isn’t the main purpose of the average here, I’m interested in how the global temperature is changing, not trying to predict what a random measurement will be. But you are still wrong. You can use an average to give you an expectation of the next measurement.

You hit back with:

And now you are back to assuming that all measurements have totally random error. You didn’t even bother to read my example of picking up boards out of ditches and trash piles.”

And when I explain I wasn’t and was responding to your comment about averaging boards, you respond by completely agreeing with me and saying your example was not about measurement but about averaging, and then say I obviously hadn’t read your original questions, and accuse me of trying to change the subject.

Tim Gorman
Reply to  Bellman
March 10, 2022 8:19 am

That was my point. I don’t know if you realize you keep doing this, shifting the argument from one thing to another and then accusing me of doing the same. Let’s go over this again. You asked a question about what an average of a random sample could tell you about the next value. When I replied:”

The only one changing the subject here is *YOU*!

You are the one that said: “For some reason I forgot to write down the individual measurements so I’ve no idea what the standard deviation was. But I can still make an estimate that the next board I pull out will be 1.5m”

“And when I explain I wasn’t and was responding to your comment about averaging boards, you respond by completely agreeing with me and saying your example was not about measurement but about averaging, and then say I obviously hadn’t read your original questions, and accuse me of trying to change the subject.”

Do you have dyslexia? I didn’t agree with you on anything except that you can calculate an average, and then I tell you that the average is meaningless if you don’t know the distribution.

I’m not even sure you understand that using a trend line is a *very bad* way to predict the future. A linear regression trend line will have residuals between the actual data and the trend line. When you project past the last data point you ASSUME FUTURE RESIDUALS WILL BE ZERO! That all future data will lie on the trend line. In other words you are right back to the same old thing – assuming all data is 100% accurate.

Will you never stop using that idiotic assumption?
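The residual point above can be sketched numerically (all data invented): a least-squares fit leaves nonzero residuals, so a point forecast taken straight off the trend line carries at least that residual scatter, which the bare extrapolation ignores.

```python
# Hypothetical data: roughly y = x plus noise. A least-squares fit still
# leaves residuals, so extrapolating the trend line does not make future
# residuals zero; a point forecast carries at least the residual scatter.
xs = [0, 1, 2, 3, 4, 5]
ys = [0.1, 1.3, 1.8, 3.2, 3.9, 5.1]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

residuals = [y - (intercept + slope * x) for x, y in zip(xs, ys)]
resid_sd = (sum(r * r for r in residuals) / (n - 2)) ** 0.5  # residual std error

forecast = intercept + slope * 6  # point prediction one step past the data
print(round(forecast, 2), round(resid_sd, 2))
```

The residual standard error is nonzero even on the fitted range, so treating the extrapolated point as exact understates the uncertainty of any projection.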

Bellman
Reply to  Tim Gorman
March 10, 2022 5:45 am

And again:

More red herring. Here is what I said: “YOU SIMPLY WILL HAVE NO HINT AS TO WHAT IT WILL BE. The average will simply not provide you any clue!”
Of course you couldn’t address *that*, could you?

You were the one who brought up the idea that there were non-random errors in the measurement.

And I did address the question. I said you were wrong. Your argument is nonsense because you always talk in absolutes, not probabilities.

You insist that if it’s not possible to predict exactly what the next board will be, then that means you have no clue what it will be. I maintain that past experience can be a clue to the future, that it is possible to learn from the evidence, that if you know what something is on average you have more of an idea what it is than if you know nothing. Taking a random sample of things from a bag is the essence of statistical analysis and if you think it tells you nothing about the rest of the objects in the bag, then you don’t understand probability.

I maintain that if someone has thrown a load of boards in a trash bin, that if I take a random sample from that bin, the average is going to tell me more than nothing about the rest of the boards. Just as I know that on the basis that most of your arguments are nonsense, the next thing you say is more likely than not to be nonsense.
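The sampling claim above can be illustrated with a short sketch (the "bag" contents are invented): even for a bimodal pile of boards, the mean of a random sample tracks the population mean, so the sample is not information-free about the rest of the bin.

```python
# Invented "bag": a bimodal pile of 1 m and 9 m boards. The mean of a
# random sample lands near the population mean, so the sample does tell
# you something about the boards left in the bin.
import random

random.seed(1)  # fixed seed so the sketch is repeatable
population = [1.0] * 500 + [9.0] * 500
sample = random.sample(population, 100)

pop_mean = sum(population) / len(population)
sample_mean = sum(sample) / len(sample)
print(pop_mean, sample_mean)  # sample mean lands near 5.0
```

Whether that average is *useful* without the distribution is the point in dispute; the sketch only shows that a random sample carries information about the population mean.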

Carlo, Monte
Reply to  Bellman
March 10, 2022 6:35 am

Just as I know that on the basis that most of your arguments are nonsense, the next thing you say is more likely than not to be nonsense.

Your mind is closed, don’t confuse you with any facts.

Bellman
Reply to  Carlo, Monte
March 10, 2022 7:25 am

“…don’t confuse you with any facts.”

You certainly don’t.

Tim Gorman
Reply to  Carlo, Monte
March 10, 2022 9:06 am

Yep. He simply believes that Taylor and Bevington are wrong and assuming gaussian distributions for everything plus assuming all stated values are 100% accurate is perfectly legitimate when it comes to real world measurements.

Bellman
Reply to  Tim Gorman
March 10, 2022 12:57 pm

Carlo, Monte has said both books were wrong because they use Error rather than the new definition of uncertainty. I’ve never said that either was wrong.

Carlo, Monte
Reply to  Bellman
March 10, 2022 1:58 pm

Wrong—the GUM standardized the EXPRESSION of uncertainty, read the title (AGAIN).

Tim Gorman
Reply to  Bellman
March 10, 2022 5:26 pm

I explained this to you at least twice in this thread and you just refuse to learn the lesson.

Uncertainty is made up of random error and systematic error. The issue is that you do not know either the sign or the magnitude of either error. The way this is handled is to define an uncertainty interval which defines where the true value might lie.

I’ve never said that either were wrong.

You have repeatedly said both books were wrong. You believe that you can do statistical analysis of measurements that have systematic error as part of their uncertainty. Even when you have been shown that both Taylor and Bevington state that you cannot!

Bellman
Reply to  Tim Gorman
March 11, 2022 9:27 am

You have repeatedly said both books were wrong.

Citation required.

Pointing out that you don’t understand the books is not the same as saying they are wrong.

Tim Gorman
Reply to  Bellman
March 10, 2022 8:50 am

“You were the one who brought up the idea that there were non random errors in the measurement.”

Of course I brought that up! All measurements have both random error and systematic error. You do your best to eliminate systematic error but usually the best you can do is reduce it to a level where it is far less than the tolerance required for what you are doing. That does *NOT* mean that you can just ignore it.

Neither can you just assume that all non-systematic error is gaussian – which is a necessary assumption to assume total cancellation.

You keep wanting to fall back on the assumption that all error is gaussian and systematic error can be ignored. Stated values are 100% accurate and statistical analysis is a perfect tool to use in all situations – even though both Taylor and Bevington specifically state that isn’t true.

“And I did address the question. I said you were wrong. Your argument is nonsense because you always talk in absolutes, not probabilities.”

Uncertainty does not have a probability distribution, not even uniform. The true value has a 100% probability and all other values in the uncertainty interval have a 0% probability. The problem is that you don’t know the true value! It could be anywhere in the uncertainty interval.

The only one talking in absolutes here is you. Absolute 1 – all uncertainty cancels. Absolute 2 – uncertainty that doesn’t cancel can be ignored. Absolute 3 – stated values are always 100% accurate.

Then you depend on these Absolutes to justify using statistical analysis on *everything* – totally ignoring what Taylor and Bevington say.

“I maintain that if someone has thrown a load of boards in a trash bin, that if I take a random sample from that bin, the average is going to tell me more than nothing about the rest of the boards. “

And you are wrong. You HAVE to know the distribution in order to calculate the standard deviation at a minimum. Even the standard deviation won’t tell you much if you don’t know the distribution. All the average by itself can tell you is what the average is. Nothing else.

====================================
From the textbook “The Active Practice of Statistics”:

“Mean, median, and midrange provide different measures of the center of a distribution. A measure of center alone can be misleading. Two nations with the same median family income are very different if one has extremes of wealth and poverty and the other has little variation among families” (bolding mine, tg)

“The five-number summary of a data set consists of the smallest observation, the lower quartile, the median, the upper quartile, and the largest observation, written in order from smallest to largest.”

“The five-number summary is not the most common numerical description of a distribution. That distinction belongs to the combination of the mean to measure center with the standard deviation as a measure of spread.”

“The five-number summary is usually better than the mean and standard deviation for describing a skewed distribution or a distribution with strong outliers. Use y-bar and s only for reasonably symmetric distributions that are free of outliers.”

=====================================

In other words you *have* to know the distribution. The average alone tells you nothing. All you ever do is assume that all distributions are gaussian and all stated values are 100% accurate.
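The textbook's income example can be sketched numerically (the incomes are invented, and the percentile rule is a crude nearest-rank choice purely for illustration): two groups with identical means but very different five-number summaries.

```python
# Invented incomes: two groups with the same mean but very different
# spreads, which the five-number summary exposes and the mean alone hides.
def five_number(data):
    s = sorted(data)
    n = len(s)
    def pct(p):  # crude nearest-rank percentile, purely for illustration
        return s[min(n - 1, int(p * n))]
    return (s[0], pct(0.25), pct(0.5), pct(0.75), s[-1])

equal = [50] * 7                        # little variation among families
extreme = [5, 10, 20, 50, 80, 85, 100]  # extremes of wealth and poverty

print(sum(equal) / len(equal), sum(extreme) / len(extreme))  # both 50.0
print(five_number(equal))    # (50, 50, 50, 50, 50)
print(five_number(extreme))  # (5, 10, 50, 85, 100)
```

Same center, radically different spread: exactly the situation where a measure of center alone is misleading.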

Carlo, Monte
Reply to  Tim Gorman
March 10, 2022 12:23 pm

Uncertainty does not have a probability distribution, not even uniform. The true value has a 100% probability and all other values in the uncertainty interval have a 0% probability. The problem is that you don’t know the true value! It could be anywhere in the uncertainty interval.

This is absolutely correct—ISO 17025 requires a UA according to the GUM as part of a laboratory’s accreditation, and that expanded uncertainties be reported as the combined uncertainty times a coverage factor of k=2; this originated from Student’s t for 95%, and many times U=k*u is referred to as “U-95”. But because the actual distribution for a given measurement is rarely known, calling it U-95 is misleading. k=2 is just a standard coverage factor and shouldn’t be used to imply that 95% of measurement values will be within an interval of the true value.
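The bookkeeping described above can be sketched as follows (the component values are invented; the quadrature rule is the standard GUM combination for independent components):

```python
# Invented component values. Standard uncertainty components are combined
# in quadrature, then multiplied by the coverage factor k = 2 to give the
# expanded uncertainty U that a lab would report.
u_random = 0.10      # e.g. a Type A (repeatability) standard uncertainty
u_systematic = 0.20  # e.g. a Type B (calibration) standard uncertainty

u_combined = (u_random ** 2 + u_systematic ** 2) ** 0.5
k = 2                # standard coverage factor, not a guaranteed 95% level
U = k * u_combined
print(round(u_combined, 4), round(U, 4))  # 0.2236 0.4472
```

Note that multiplying by k=2 says nothing about the actual shape of the measurement distribution, which is the point made above about the "U-95" label.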

Bellman
Reply to  Tim Gorman
March 10, 2022 6:34 am

No, you can’t assume that. What makes you think you can? You are still assuming that the average is the most common value (i.e. a gaussian distribution) but you have no way of knowing that from just an average value.

I’m not saying the next board is most likely to be 1.5m, I’m saying it’s the best estimate of what the next board will be. This does depend on how you are scoring “best”.

Say the trash is filled equally with boards that are either 1m long or 9m long. Average is 5m. If the objective is to have the best probability of getting the correct size, the best strategy is to randomly guess 1 or 9 and have a 50% chance of being right, and in that case guessing 5m gives you a 100% chance of being wrong.

But if the objective is to minimize the error, 5 is as good a guess as 1 or 9, and either is a better guess than 10 or more.

If you are trying to avoid large error, e.g. scoring it by the square of the error, then 5 is the best guess. It’s a guaranteed score of 16, versus a 50/50 chance between 0 and 64, or 32 on average.

However, my point was that however you score it, knowing the average of a random sample of the boards tells you more than knowing nothing. The question you are posing is not what the best guess is knowing the distribution; it’s whether it is better to make an educated guess than to guess blind. If you haven’t looked at a single board then worrying about how normal the distribution is, is irrelevant. Your guess could be anything, 1 cm or 100 m; you don’t know because you have no idea what any of the boards are like.
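The squared-error scoring described above checks out in a short sketch of the 1 m / 9 m example:

```python
# The 1 m / 9 m trash-bin example, scored by expected squared error:
# guessing the 5 m average is never exactly right, but it minimizes
# that particular score.
boards = [1, 9]  # equally likely lengths

def expected_sq_error(guess):
    return sum((b - guess) ** 2 for b in boards) / len(boards)

print(expected_sq_error(5))  # guaranteed 16
print(expected_sq_error(1))  # 50/50 between 0 and 64 -> 32 on average
print(expected_sq_error(9))  # same as guessing 1, by symmetry
```

Which guess is "best" thus depends entirely on the scoring rule, which is the crux of this exchange: 5 never matches a board, yet it minimizes expected squared error.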

Carlo, Monte
Reply to  Bellman
March 10, 2022 6:48 am

I’m not saying the next board is most likely to be 1.5m, I’m saying it’s the best estimate of what the next board will be.

Extrapolating from regression results is asking for disaster.

Good luck, you’ll need it.

Bellman
Reply to  Carlo, Monte
March 10, 2022 7:25 am

Try to keep up. We aren’t talking about regression, just the average.

Carlo, Monte
Reply to  Bellman
March 10, 2022 7:46 am

Evidently you don’t know what the word means.

Bellman
Reply to  Carlo, Monte
March 10, 2022 10:42 am

What word? “regression”? As I say I’m not an expert on any of this, so maybe regression can mean take an average, but I can’t find any reference to that. Regression is always defined in terms of relating a dependent variable to one or more independent variables.

Carlo, Monte
Reply to  Bellman
March 10, 2022 12:25 pm

Duh, regression is a form of averaging, and assuming a unit will be near an average of a group of other units is extrapolation.

Jim Gorman
Reply to  Bellman
March 10, 2022 8:29 am

“I’m not saying the next board is most likely to be 1.5m, I’m saying it’s the best estimate of what the next board will be. This does depend on how you are scoring “best”.”

What do you think standard deviations are for? The mean is not the best estimate. It may not even be one of the measured values.

Bellman
Reply to  Jim Gorman
March 10, 2022 10:38 am

Tim ruled out the idea of using the standard deviation. The only thing you know is the average:

“No, you can’t. The average gives you no data about the variance of the data in your data set.”

Tim Gorman
Reply to  Bellman
March 10, 2022 4:08 pm

No, you said the only thing you knew was the average! If that’s all you know then you don’t have the standard deviation!

Bellman
Reply to  Tim Gorman
March 10, 2022 4:39 pm

If you didn’t get so hysterical you could remember what you are asking, or at least make the parameters of your thought experiments clearer.

You said:

The average gives you absolutely no expectation of what the next measurement will be. It’s like collecting boards at random out of the ditch or trash piles, etc. You can measure all those boards and get an average. But that average will give you no hint as to what the length of the next board collected will be. It might be shorter than all your other boards, it might be longer than all the other boards, or it may be anywhere in the range of the already collected boards – YOU SIMPLY WILL HAVE NO HINT AS TO WHAT IT WILL BE. The average will simply not provide you any clue!

I replied:

If you know the average you can make a better prediction than if you have no information. If you also know the standard deviation you can have a reasonable expectation of the likely range as well.

Then you said:

No, you can’t. The average gives you no data about the variance of the data in your data set. If its a bi-modal distribution the average will not tell you which of the modes the next board is likely to be from. At best you just flip a coin! And it gets worse if its a multi-modal distribution! The standard deviation won’t help you at all!

Tim Gorman
Reply to  Bellman
March 10, 2022 8:03 pm

“If you didn’t get so hysterical you could remember what you are asking, or at least make the parameters of your thought experiments clearer.”

Hysterical? ROFL!!! I write long, detailed answers trying to explain the basics to you and you just ignore them and stick with your religious dogma!

If you know the average you can make a better prediction than if you have no information. If you also know the standard deviation you can have a reasonable expectation of the likely range as well.

I give you the same example *YOU* provided. You have a group of boards of length 1 and length 9. The average is 5.

You obviously have a bi-modal distribution, i.e. a skewed distribution. The average tells you nothing about the modes. Neither does the standard deviation. You have an average that can’t give you an expectation for the next board. And the standard deviation tells you that you have a large spread of values but not what the modal distributions are like.

You even admitted that picking 5 for the next board would be wrong 100% of the time. Proof that the average gives you no expectation for the next board. If you can calculate the standard deviation then you already know what the range of values for the distribution is. But that doesn’t give you any expectation of what the next board will be. Just like a coin flip. Pick one side as a winner and see what happens. That isn’t an *expectation*, it’s gambling, and you have no leverage to control the outcome.

I’m not exactly sure what you are getting at with this post. But you sure haven’t shown that you have learned anything!

Bellman
Reply to  Tim Gorman
March 11, 2022 7:42 am

Hysterical? ROFL!!!

Hysterical, says someone rolling about on the floor laughing.

Bellman
Reply to  Tim Gorman
March 11, 2022 9:24 am

You obviously have a bi-modal distribution, i.e. a skewed distribution.

It’s bi-modal, that was the point, it’s not skewed though, the assumption is there’s an even distribution between the two sizes of board.

The average tells you nothing about the modes.

It’s the mid point between the two modes.

Neither does the standard deviation.

The standard deviation tells you the distance between the mid point and each mode. If you know this is a perfect bi-modal distribution you’ve got all the information you need with those two values.

You even admitted that picking 5 for the next board would be wrong 100% of the time.

The point is you have to define “wrong”. If you want to predict the size of the next board, then 5 won’t be it. If you are trying to minimize the square of the error then 5 is the best option.

That isn’t an *expectation*

That is how expectation is defined in statistics and probability theory. The expected roll of a six sided die is 3.5. It doesn’t mean you will ever roll a 3.5, but it is the expected value.
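The statistical sense of "expectation" used above is just the probability-weighted mean, which need not be an attainable outcome; both examples from the exchange can be checked directly:

```python
# Expectation as the probability-weighted mean: it need not be a value
# that can actually occur. Both examples come from the discussion above.
die = [1, 2, 3, 4, 5, 6]
e_die = sum(die) / len(die)          # 3.5, though no face shows 3.5

boards = [1, 9]                      # the equal bi-modal board example
e_board = sum(boards) / len(boards)  # 5, though no board is 5 m long
sd_board = (sum((b - e_board) ** 2 for b in boards) / len(boards)) ** 0.5

print(e_die, e_board, sd_board)      # sd = 4, the distance from 5 to each mode
```

For the perfect bimodal case the standard deviation really is the distance from the mean to each mode, as claimed above.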

Tim Gorman
Reply to  Bellman
March 10, 2022 9:04 am

“I’m not saying the next board is most likely to be 1.5m, I’m saying it’s the best estimate of what the next board will be. This does depend on how you are scoring “best”.”

I’ll bet the casinos go crazy when they see you coming! If the next board is not most likely to be 1.5m then why are you picking that value? That’s like playing blackjack and drawing cards till the dealer breaks, hoping he breaks first no matter what cards you get.

“guessing 5m gives you a 100% chance of being wrong.”

But 5m is the EXACT value you said above is the best estimate of what the next board would be!

“But if the objective is to minimize the error, 5 is as good a guess as 1 or 9, and either is a better guess than 10 or more.”

Minimize what error? If you are 100% wrong each time you guess the length of the next board what error have you minimized?

Or are you now saying the average is *NOT* the best estimate of the length of the next board?

“If you are trying to avoid large error, e.g. scoring it by the square of the error, then 5 is the best guess. It’s a guaranteed score of 16, versus a 50/50 chance between 0 and 64, or 32 on average.”

Once again you aren’t living in the real world. Guessing the average every time is *NOT* the best. 5 has a zero chance of being correct. 1 and 9 each have at least a 50% chance of being correct.

If you pick two boards and nail them end to end their length can be 2, 10, or 18. Since you will never find a 5 board, the chance that the combination is made of two average-length boards is always zero!

Bellman
Reply to  Tim Gorman
March 9, 2022 6:01 pm

Wow! Good thing you aren’t an engineer designing a bridge the public will use!

If i was building a bridge I wouldn’t do it by picking random boards out of ditches.

Tim Gorman
Reply to  Bellman
March 9, 2022 7:15 pm

but you might get loads of I-beams from different suppliers, each with a different distribution of lengths and uncertainties.

Tim Gorman
Reply to  Bellman
March 7, 2022 8:54 am

“x/200 means if not scaling down”

  1. If you have a group of 2″x4″ boards stacked up together and you measure the height of the stack then what is the height of each board?
  2. If you know the total uncertainty of the total stack of boards then what is the uncertainty of each of the boards?

If you can’t answer these simple questions then you are just being willfully ignorant.
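The two stack questions can be sketched with invented numbers. Dividing the totals back out only recovers averages, and the per-board uncertainty step additionally assumes independent, identical errors; neither step recovers any individual board's height or uncertainty, which is the point of the questions.

```python
# Invented numbers for the stack questions. Dividing totals only yields
# averages; the per-board uncertainty line further assumes independent,
# identical errors, and neither says anything about any individual board.
n_boards = 100
total_height = 160.0  # cm, one measurement of the whole stack
u_total = 0.5         # cm, uncertainty of that stack measurement

avg_height = total_height / n_boards     # 1.6 cm average, not each board's height
u_per_board = u_total / n_boards ** 0.5  # 0.05 cm, only IF errors are independent
print(avg_height, u_per_board)
```

If the boards vary, or the errors are correlated or systematic, these divided-out numbers describe no actual board at all.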

Tim Gorman
Reply to  Bellman
March 7, 2022 9:08 am

“It shows the need to have both the maximum and minimum temperatures and to get the mean temperature from them along with the range.”

  1. the mid-range of a sine wave is *NOT* the average value of the sine wave.
  2. Using Tmax and Tmin and finding a mid-range temp is the OLD way of doing degree-days.

from degreedays.net
—————————————————–
There are three main types of degree days: heating degree days (HDD), cooling degree days (CDD), and growing degree days (GDD). I’ve focused most of this article on explaining heating degree days. Once you understand how heating degree days work, it will be very easy for you to understand the others. ………
Simple example: calculating Celsius-based heating degree days from hourly data
First, we’ll give a simple example of calculating Celsius-based heating degree days for a spring day (2019-03-29) at station CXTO in Toronto, Canada. We’re using a base temperature of 14°C but you should choose the heating base temperature that makes most sense for your building.
This example is simple because:

  • The temperature data reported by the weather station on the day is exactly hourly. (As explained further above, although weather stations typically record the temperature once or more per hour, it’s surprisingly rare for them to record exactly on the hour every hour. Data that is exactly hourly has almost always been interpolated into that format.)
  • At no point did the temperature cross our chosen base temperature. This makes it easier to calculate the area between the temperature and the base temperature (which is effectively what we are doing when we calculate degree days using the Integration Method).
  • [Chart] Simple example of calculating Celsius-based heating degree days with a base temperature of 14°C for a day with perfectly hourly temperature data

    To get the data and chart above, we:

  • Assemble all the temperature readings for the day in question, in the time zone of the station that they came from (weather stations typically report in UTC time so we have to convert the times to the local time zone).
  • Remove any temperature readings that look likely to be erroneous. (All were fine in this case.)
  • Assume a linear pattern of temperature change between each recorded temperature (effectively drawing a straight line between each point on the chart).

Then, for each consecutive pair of temperature readings, we:

  1. Calculate the time (in days) over which the temperature was below the base temperature. In this simple example this is always an hour (1/24 days).
  2. Calculate the average number of degrees by which the temperature was below the base temperature over the calculated time period (1). In this simple example this is always the base temperature minus the average of the two recorded temperatures.
  3. Multiply the time (1) and the temperature difference (2) to get the heating degree days for the period between the two temperature readings (an hour in this case).

Finally we sum all the figures (3) above to get the total heating degree days for the day.

———————————————–

This is called the INTEGRATION METHOD. It is the most modern method of calculating all of the aforementioned degree-day types, heating, cooling, and growing.

This may be a confusing thing for you to understand and an inconvenient truth for you to acknowledge but it is the truth nonetheless. You do *NOT* need to know the mid-range value in any way, shape, or form. It is based solely on the area between the temperature profile and the set point. It doesn’t really matter where on the x-y axis you put the temp profile and the set point as long as the area between the two curves remains the same you’ll get the same value for the degree-day.
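The quoted steps can be sketched in a few lines (the temperatures are invented, readings are two-hourly rather than hourly, linear interpolation between readings is assumed, and the whole day stays below the 14°C base, i.e. the simple case from the quote):

```python
# A sketch of the quoted Integration Method for heating degree days:
# invented two-hourly temperatures, a 14 C base, linear interpolation
# between readings, and a day that stays entirely below the base.
BASE = 14.0
temps = [8.0, 7.5, 7.0, 8.0, 9.5, 11.0, 12.0,
         11.5, 10.0, 9.0, 8.5, 8.0, 8.0]  # 13 readings = 12 two-hour intervals
dt_days = 2 / 24  # each interval is two hours, i.e. 1/12 of a day

hdd = 0.0
for t1, t2 in zip(temps, temps[1:]):
    deficit = BASE - (t1 + t2) / 2  # avg degrees below base over this interval
    hdd += deficit * dt_days        # time x temperature difference, summed

print(round(hdd, 3))  # total heating degree days for the day
```

Note that nothing here uses Tmax, Tmin, or a mid-range value; the result depends only on the area between the temperature profile and the base, as stated above.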

Bellman
Reply to  Tim Gorman
March 7, 2022 2:16 pm

Here we go again. I keep pointing out the various ways degree days are calculated, and you just keep insisting I don’t understand what they are.

We were talking about approximating GDD with a sine wave. You spent a long time in previous threads insisting that that was the best way to do it, pointing out to me how good an approximation a sine wave was to a daily temperature cycle, and crucially insisting that you only needed the maximum temperature. I’ve posted earlier in this thread your final comment where you insisted that the Ohio University had the correct formula using a sine wave.

Now, when it’s obvious that answer doesn’t work, you’ve finally gone back to what we can all agree is the most accurate way of calculating it, using multiple measurements taken throughout the day. (But of course, this is no use if you want to do what your “real agricultural scientists” do when they only have max and min values.)

But guess what. Taking readings throughout the day still means you are not basing it on just the maximum value. If all of the day is above the base line the GDD will be the mean temperature minus the base (if you are using multiple readings it will be a more accurate mean, but a mean nonetheless).

This may be a confusing thing for you to understand and an inconvenient truth for you to acknowledge but it is the truth nonetheless.

These discussions would be much more pleasant if instead of finding more ways to patronize me, you actually tried to listen to what I’m saying.

You do *NOT* need to know the mid-range value in any way, shape, or form.

You don’t need to know it because the mean temperature is implicit in the multiple readings. What you do need is more than the single maximum value.

It doesn’t really matter where on the x-y axis you put the temp profile and the set point as long as the area between the two curves r