Guest Opinion by Kip Hansen – 26 July 2022

I beg your forbearance if I step on your ideological or scientific-viewpoint’s toes with my essay today. But I hope to attract your attention long enough to make a point so important that it affects almost all empirical knowledge in our modern world. The point is so simple, yet so scientifically profound, that it might even sound silly to *casual readers*:

**Numbers are just Numbers**

~~ ~

The most famous and illustrative example (albeit a fictional one) is Douglas Adams's "Answer to the Ultimate Question of Life, the Universe, and Everything", as calculated by an enormous supercomputer named Deep Thought: "42".

This is precisely my whole point here today; with a rather long afterword on why this is important enough to mention here at a science blog. Readers who already understand why “Numbers are Just Numbers” is so profoundly true and those who already understand the significance of this for modern science can move on and read about (boring) climate change topics.

[ Warning: This is not an easy essay – it is a short dissertation on the scientific philosophy of numbers and their use in modern science, with some cautions, and will extend to at least two parts. ]

~ ~ ~

From the book “The Science of Measurement: Historical Survey” by Herbert Klein we get the following quotes:

“…the tools and techniques of measurement provide the most useful bridge between the everyday worlds of the layman and of the specialists in science.”

“Non-scientists may be similarly impressed to discover that units of measurement – for length, area, volume, time duration, weight, and all the rest – are essentials of science.”

“[This work] …should prove serviceable to professionals in science, but its main purpose is to make outsiders realize that in their daily lives and concerns they too are involved in the activities and ideas classified as metrology*, *the science of measurement – a subdivision of science that underlies and assists all others.”

And what is **metrology**? “**Metrology** is the scientific study of measurement.”

And what is measurement? “**Measurement** is the quantification of attributes of an object or event, which can be used to compare with other objects or events.”

And what is quantification? “…**quantification** is the act of counting and measuring that maps human sense observations and experiences into quantities. Quantification in this sense is fundamental to the scientific method.”

And what is counting? “**Counting** is the process of determining the number of elements of a finite set of objects” such as its physical attributes.

~ ~ ~

*All measurement* is, at its most basic, *simply counting* – the number of beans, coins, stars, inches, lightyears… (and **all other units of measurement**). The result of counting is a number – the *number* of "elements" – of the things counted.

And what is a number? “A **number** is a mathematical object used to count, measure, and label. The original examples are the natural numbers 1, 2, 3, 4, and so forth.”

~ ~ ~

So, the basic activity of Science is counting or measuring — a specific type of counting against pre-established, internationally agreed-upon units of some quality or property such as temperature, weight, length, foot-pounds, and many, many more. There are many different methods of measuring different things, with vastly different tools, at a wide variety of scales. Nonetheless, they are all really just types of counting.

Alas, when we (or you or they) *count*, the result is a number – which is nothing more than a mathematical object – "A mathematical object is an abstract concept arising in mathematics". The counted number alone, of course, is not a *thing* at all – only an abstract concept — until it is clearly stated as a "number of *whatevers*" – number of peaches, number of inches of 2×4 board, number of monarch butterflies at any given moment, number of any of the SI units under the International System of Units of various physical properties of something.
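The point that a counted number only becomes meaningful once tied to its "whatevers" can be sketched in a few lines of code. This `Quantity` class is purely a toy of my own devising (real unit libraries such as `pint` do this far more thoroughly): it tags a bare number with its unit and refuses to combine mismatched units.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Quantity:
    """A toy unit-tagged number: a value plus the name of what was counted."""
    value: float
    unit: str  # e.g. "m", "kg", "peaches"

    def __add__(self, other: "Quantity") -> "Quantity":
        # Adding metres to kilograms is meaningless, so refuse outright.
        if self.unit != other.unit:
            raise ValueError(f"cannot add {self.unit} to {other.unit}")
        return Quantity(self.value + other.value, self.unit)

length = Quantity(3.0, "m") + Quantity(2.0, "m")   # fine: same unit
try:
    Quantity(3.0, "m") + Quantity(2.0, "kg")       # meaningless: raises
except ValueError as e:
    print(e)
```

A bare `5.0` carries none of this protection – it is just a mathematical object.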

**Numbers can be tricky**….just because they are *numbers* – like 1, 2, 85, 400 million, 3.432 — some think we can just willy-nilly apply all types of mathematical processes to them: add them up, subtract them, multiply them, add them up and divide them into averages and/or average them spatially with various methods of distance weighting and kriging – all this is *meant* to produce *physically meaningful results*.
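A concrete illustration of why blind averaging need not yield a physically meaningful result (a toy with assumed figures, not anyone's real data): mix 1 kg of water at 10 °C with 9 kg of water at 80 °C. Simply averaging the two temperature *numbers* gives one answer; the physically meaningful equilibrium temperature is quite another.

```python
# Mix 1 kg of water at 10 degC with 9 kg of water at 80 degC.
m1, t1 = 1.0, 10.0   # mass (kg), temperature (degC)
m2, t2 = 9.0, 80.0

# Arithmetic on the bare numbers: a valid number, but not a temperature
# anything in the bucket will ever have.
naive_average = (t1 + t2) / 2                    # 45.0

# The mass-weighted mixing temperature (equal heat capacities assumed)
# is what a thermometer in the bucket would actually read.
equilibrium = (m1 * t1 + m2 * t2) / (m1 + m2)    # 73.0

print(naive_average, equilibrium)
```

Both results are "numbers"; only one of them corresponds to a measurable physical state.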

To make matters worse, statisticians often think that they can then take those numbers churned out by all the above processes and wring out even more meaningful results not otherwise visible to the human mind.

But does all that *Mathematica*-ing produce *physically meaningful results?*

While some interesting things can be done with numbers and statistics, many fields of modern science have often gone far down the slippery slope of *reification* of numbers – often creating whole fields' worth of non-physical data – like *Global Average Surface Temperature*, an entirely imaginary, nonphysical number. Similarly, modern oceanic scientists have created the imaginary concept of Eustatic Sea Level – a "would have been" not-really-a-physical-level level.

SIDE BAR: "*Reification* is when you think of or treat something **abstract** as a physical thing." Remember, these numbers are *mathematical abstractions*.

In a two-decade-old BMJ article, it is acknowledged that:

“Many people only respect evidence about clinical practice [think also: biology, climate science, geology, psychology, ad infinitum ] that is couched in the highly abstract language of graphs and statistical tables, which are themselves visualisations of abstract relations pertaining among types of numbers, themselves again abstractions about ordinary phenomena.” [ source ]

Thus, we find ourselves reading articles and essays and journal papers filled with __abstractions about abstractions about ordinary phenomena__.

In practice, we call these abstractions *numbers* or *data sets* or even *"the data"*, and then we (or somebody) make them into various visual presentations – charts and graphs and pretty pictures — *intended to sell their favorite hypothesis* or *to refute your favorite hypothesis*.

~ ~ ~

Preview of Part 2:

In Part 2, I will consider why it is that

**“One cannot average temperatures.”**

Really.

**# # # # #**

__Author’s Comment:__

I have been accused in the past of not liking numbers, of hating numbers, of not understanding mathematics, of not understanding statistics, and of being a general math-o-phobe. This is not true – not only do I love the beauty and certainty of mathematics, but I am also a true pragmatist:

*“one who judges by consequences rather than by antecedents.”*

I am not a real fan of blind trust in so-called experts – experts, in my opinion, have to be able to show their work *in the real world*.

I am well aware of the many problems of modern scientific research, including that all fields of science are returning a lot of questionable results – even closely monitored fields like medicine. It is my belief that much of this is caused by "too much math, not enough thinking", or the reification of mathematical and statistical results.

“I could be wrong now, but I don’t think so.”

(h/t Randy Newman)

Free-for-all on this topic in comments.

Thanks for reading.

**# # # # #**

I wonder how many people like Bell’s palsy.

Scissor ==> Now, THAT is an unexpected comment.

A lot of things are. Unexpected. Cheers

Scissor ==> Frighteningly, I love your sense of humor…..

I like your articles, not in a Bell's palsy sense, but in the way they prompt one to think about things from a different perspective. They're very enjoyable to read.

Karl Popper listed surprise as a good indicator of truth in his *Logic of Scientific Discovery*.

Doug ==> In my maturity, I find myself surprised every time I learn something new — and any day I learn something new is a Good Day!

I get that after too much of this:

Statistics can be made to say whatever the person presenting wants them to say. Computer models say precisely what they are programmed to say by those who program them. Beginning to see a pattern in climate catastrophics?

2hotel9 ==> I bamboozled a lot of them at Wm Briggs site with: https://www.wmbriggs.com/post/36178/

Kip, did you get my email on that particular topic?

Old Cocky ==> Have to check….I’ll look up your real email and check mine … if some time ago, yes.

Old Cocky ==> I don’t see any emails under your yahoo.com.au account. Want to re-send — my first name at i4.net

Third time lucky?

Cocky ==> YES! Got it….will have a look at it — but not til tomorrow….

Cool. I’m looking forward to your thoughts on it.

Mark Twain wrote that there are “lies, damned lies, and statistics”. He passed away in 1910, long before anyone worried about global warming, but what would he have said about today’s number manipulators trying to scare people away from keeping warm?

I’ve read that it was really Dr. Roger Revelle who took the established (established but not really proven) theory of greenhouse gas warming, and turned it into something alarming, starting in the 1950’s:

http://ruby.fgcu.edu/courses/twimberley/EnviroPhilo/Coleman.pdf

Before Revelle’s work, hardly anyone would have thought that a bit of warming was of any big concern, it seems. Of course, getting alarmed must have been an idea whose time had come, being seized upon eventually then by everyone from government officials to NASA administrators, scientist’s unions or associations, etc.

As to the general abuse of number concepts, it really is a bit mysterious how empiricism — the ability to actually handle measurements properly and verify either the usefulness or the uselessness of theories thereby — has fallen by the wayside in so many areas of publicly consequential science these days.

Interesting. A few years back, I read D.S. Halacy's "Fuel Cells: Power for Tomorrow", written in 1964 I think, where he mentioned CO2 and global warming, and I always wondered when the scare tactics actually began. https://amzn.to/3zdJUtd

Except for true AI algorithms, which can go off and do whatever they want.

Retired ==> and apparently do so….

Which do not exist – true AI algorithms, that is. Today’s “AI” does exactly what it is programmed to do. Takes input numbers, manipulates them in some manner, and outputs numbers.

Whatever a computer does can be replicated by a human with paper and pencil. “AI” is only quantitatively different; i.e., that hypothetical human cannot perform the task in anywhere close to the time that the computer can. (In the case of supercomputer runs, beyond the theoretical heat death of the universe.)

Today's 'AI', despite its name, is little more than a Simulated Intelligence program. With preset parameters and data sets, they are sophisticated programs, but a universe away from what a true AI would be.

Al Gore rhythms have been out of sync for years.

How about AlGoreithms?

You spoke my mind. I was going to make the same suggestion, word by word.

Including assaulting/attacking humans

From the robo-surgeon that killed its patient to the driverless car that ran over a pedestrian: Worst robotic accidents in history – after chess robot breaks seven-year-old boy’s finger in Russia

That depends on the intellectual honesty of the programmer!

Looking at the “product” being produced their intellectual honesty is zero.

Or the size of the taxpayer funded research grants for the “right” type of statistics that governments agree with

The answer is 2.

Works for 1000 and 1001 and I think I’m quitting now while I can still do it in my head.

Good algoreithm. Unfortunately, one cannot make money from it…

Complex mathematical / algebraic formula where: 2 = a+2-a

Too much math, no thinking at all.

“I made the formula say what I desire, and objective reality has no bearing.” – Modern Expert

roaddog ==> Well, real scientists do a lot of thinking but then they get those marvelous seductive data sets and can’t resist banging away at them with Mathematica and Statistica and all — they sort of lose their minds I think.

When you suggested in a previous article that modelers should get together in a room, I thought of Monty Python's argument sketch; my model is better than yours…no it isn't; yes it is…no it isn't…

Same for statisticians..similar room more arguments about which numbers are correct.

Mac ==> I think they do that skit in reality at AGU Conferences.

But the only “good” model is the Russian one, and the Russians are probably not allowed in the room.

The Russian model is “bad” — too close to being accurate, and not scary enough. Climate computer not wanted. Or we would already have them.

If they ever had them. Popularity is endemic among the liberal insanitii.

This is climate science; you should always use alternative maths.

22,000 upvotes!

I think the correct figure is $20,002,000

Pretty good. Now do genders. (Correct answer is 3, even in English).

No, there's way more than three, whatever maths you use, in the UK anyway

3. They keep relabelling the same 3 over and over again in slightly different ways until you end up with hundreds that all bear a remarkable similarity to the 3 they started with.

Douglas Murray in his book, "The War on the West", has a section where he describes the attempt to 'deconstruct' mathematics by proponents of critical race theory. One aspect of this was to question whether 2 + 2 = 4. "Others claimed that it was obvious that 2 + 2 cannot equal 4 and gave a variety of reasons. These included, but were not limited to, claims that 2 + 2 = 4 is part of a 'hegemonic narrative', that the people who make such narratives should not get to decide what is true, that 2 + 2 should equal whatever people want it to equal, and that making such a definitive statement excludes other ways of knowing. One PhD candidate took to social media to declare that 'the idea of 2 + 2 equalling 4 is cultural and because of western imperialism/colonialism, we think of it as the only way of knowing.'" (Page 198)

If I recall, one of the examples to prove that 2 + 2 does not equal 4 is:

Imagine a factory that made 2 complete items and 1/2 of another. They recorded 2 completed items. They did the same the next day. Traditional math would conclude they completed 4 items, but when you combine the 2 half items you find there are actually 5 completed items! (I kid you not. I believe this example came from California.)

But surely that would then be 2.5 + 2.5 = 5, not 2 + 2 = 4, wouldn't it?
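Whatever its pedagogical merits, the factory story is really about *when* you discard the fractions. A short sketch (hypothetical figures) showing that truncating each day's count before summing loses the half-finished items, while summing the true quantities keeps them:

```python
import math

daily_output = [2.5, 2.5]   # two days: 2 complete items plus a half each

# Truncate first, then sum: the halves vanish from the books.
recorded = sum(math.floor(d) for d in daily_output)   # 2 + 2 = 4

# Sum the true quantities: the two halves combine into a fifth item.
actual = sum(daily_output)                            # 2.5 + 2.5 = 5.0

print(recorded, actual)
```

Nothing mysterious about 2 + 2 here – only about rounding before you add.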

Math is hard.

The perils of mis-using statistics in an argument were highlighted by Winston Churchill when he said of an opponent during a Parliamentary debate, "The honourable gentleman uses statistics the way a drunk uses a lamp-post: more for support than for illumination."

The best definition of confirmation bias I’ve ever come across…

Kip. Typo worth correcting: reification

James ==> I love detailed readers who can catch my typos… there must be one spot in this essay where I have it misspelled, but danged if I can find it. The proper spelling is R E I F I C A T I O N, used 3 times in the essay — help me out here.

Sorry Kip. I thought that you meant to type Deification. Probably appropriate in the context of the use by alarmists of such constructions as Global Average Surface Temperature.

Great topic Kip!

Numbers of Zeros in a Million, Billion, Trillion, and More
https://www.thoughtco.com/zeros-in-million-billion-trillion-2312346

Numbers Bigger Than a Trillion: The digit zero plays an important role as you count very large numbers. It helps track these multiples of 10 because the larger the number is, the more zeroes are needed.

Millions, Billions, and Trillions: How Can We Think About Really Large Numbers?
https://www.thoughtco.com/millions-billions-and-trillions-3126163

++++++++++++++++++++++++

How do you express large numbers?
https://www.geeksforgeeks.org/how-do-you-express-large-numbers/

+++++++++++++++++++++++++++

So this number, 240,000,000,000, which is 240 billion, could be written as 2.4 × 10 to the 11th power because there are 11 digits after the leading 2.

Kip, your post part one on numbers reminded me of a lesson learned long ago in college about economics. It was the main takeaway from a one-semester course taught by John Kenneth Galbraith. I took it not so much for the subject matter (artificial demand creation via advertising, based on his book 'The Affluent Society') but because he was a truly funny professor. Example: he was debating Paul Samuelson of MIT, and claimed something that Samuelson disagreed with. So Samuelson asked Galbraith how he knew this was so. Galbraith responded, 'Because I am the greatest living expert on it.' Which brought the house down. But to the lesson.
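That conversion is easy to check mechanically – a quick sketch (my illustration only) using Python's built-in scientific-notation formatting:

```python
# 240 billion in scientific notation: move the decimal point 11 places,
# giving 2.4 x 10^11.
n = 240_000_000_000
print(f"{n:.1e}")   # -> 2.4e+11
```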

Galbraith ran the Office of Price Administration (price fixing during WW2 shortages) for FDR. He learned a mental trick for summing a long column of numbers to within about 10% (answer, lots of rounding) in order to more quickly reach OPA decisions. A student (not me) asked him whether that was good enough for such important decisions. Galbraith replied, ‘If within ten percent isn’t good enough to answer the question, then you have the wrong question.’

Good enough for government work—and much else in life. Just get into the ballpark. It suffices for most things.
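Galbraith's trick, as I understand the story, amounts to rounding each entry before adding. A sketch (my own reconstruction with made-up figures, not his actual method): round every entry to one significant figure, sum, and check that the result lands within roughly ten percent of the exact total.

```python
from math import floor, log10

def round_1sf(x: float) -> float:
    """Round a positive number to one significant figure."""
    exp = floor(log10(abs(x)))   # order of magnitude of x
    return round(x, -exp)

column = [437, 1220, 96, 785, 2040]   # made-up OPA-style column of figures

exact = sum(column)                          # 4578
quick = sum(round_1sf(x) for x in column)    # 400+1000+100+800+2000 = 4300

error = abs(quick - exact) / exact
print(quick, exact, f"{error:.1%}")
```

On this made-up column the quick sum is off by about six percent – good enough for government work, as the saying goes.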

It was once said that if you asked a panel of 5 economists a question you would get 6 answers because JK Galbraith always changed his mind.

Nah. As I learned it, if you asked 3 economists you would get six answers… on the one hand, on the other hand. 3×2=6. Same basic joke.

Apparently you can just average all of the answers to get the correct one, as is done with climate model runs

Are these the same economists who predicted 18 of the last 3 recessions?

Stock investors predict too many recessions

Economists, as a group, never predict recessions.

This is why we need economists with only one hand

As a group, US economists have never predicted a recession.

Not once.

That's not a joke — it's true.

They are about to do it again.

But, if all the economists in the world were laid end to end, they still wouldn't reach a conclusion.

Auto.

Which president was it that wanted the one-armed economist? Because economists always say "on the other hand…"

Truman?

Before sophisticated computer systems were widely in use by businesses (1960s), my early career work was in accounting / auditing production materials usage manually.

So we used a "practiced eye" to assess "first-pass fit" of manually compiled production report numbers –

know approximately what reported total numbers should be,

add up whole tens or hundreds or thousands of entries in your head,

compare to expected total,

if in the ballpark, call it “first-pass fit”,

then move on to next reports.

I’ve been using the “first-pass fit” technique for many & varied tasks all my life now.

Trying to teach the grandkids to use it now.

Mr. ==> Not an easy thing to teach, so Good Luck. Harder if they’ve been in school too long.

My lord have they complicated elementary math lessons.

It’s a major exercise now to arrive at 6 x 8 = 48.

I feel your pain; I'm also a grandchild maths tutor. One was struggling with multiplication of large decimal numbers, 123.456 × 543.987 type calculations. In the end I said I don't care what you've been taught, we're doing it the way I learnt. The only problem was getting him to write neatly.

It’s important to be able to do mental calculations of approximate answers so you can say that doesn’t look right when using a calculator.

I went to a school in a little village. In the Co-op there was an assistant who was the most impressive I’ve ever met at mentally adding up £sd. People used to leave their shopping lists and collect the messages (shopping) later. He could do the calculation by running his pencil up the column of numbers in just a few seconds, this involved halves, 12s and 20s, correct every time. I’m still impressed 65 years later.

Ben ==> My older brother, who is/was almost entirely socially inept, practiced mental math tricks endlessly until he could do the same with long lists of numbers… I never understood the method.

Rud ==> Great story! I admit to having my own eccentric mental methods of checking maths and stats – some of which I have shared in various essays here.

But the #1 Lesson is to realize that those Numbers are Just Numbers — they are not the finding, not the meaning, and often not anything real at all!

Absolutely. The pursuit of precision, in order to corrupt it, is a pandemic all its own.

Is there a vaccine? Maybe we need another Warp Speed effort.

The First Law of Economics: For every economist, there exists an equal and opposite economist.

The numbers I think are most important are the actual amount of CO2 in the atmosphere – something less than 4 one-hundredths of one percent – and the amount of that which is naturally occurring – between 95-97%, which we can't do anything about. So the amount of CO2 in the atmosphere from man's industrial and transportation activity is minuscule, and you would have to abandon all common sense to believe that such a tiny amount of CO2 has any effect on the earth's temperature or climate.

tiny! tiny! And even the amount of projected warming is tiny (1-2 degrees C in 100 years). Even the tiny, were it not so tiny, would have no meaningfully negative effect on the planet or people. On top of all that, the term "climate change" has no meaning!

CO2 does have an effect, just a very small one that decreases logarithmically. After exceeding pre-industrial levels, say beyond 290 ppm, CO2's ability to trap additional heat (longwave IR) drops dramatically. Think of adding layers of clothing on a cold day: the first couple of layers do a lot, the third helps a little more, and beyond that each does little to add additional warmth, but eventually makes you look like the Michelin Man. The whopper of GHGs is good old H2O, which itself varies massively with seasonal changes and very slight blips in global average temperatures (as best we can measure that)…
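The logarithmic fall-off described in that comment is often written with the commonly cited simplified expression ΔF = 5.35 · ln(C/C₀) W/m² (Myhre et al., 1998). A sketch, not a climate model – the point is only that each doubling adds the same fixed increment, so each added ppm matters less than the last:

```python
from math import log

def forcing_wm2(c_ppm: float, c0_ppm: float = 280.0) -> float:
    """Simplified CO2 radiative forcing relative to c0_ppm, in W/m^2."""
    return 5.35 * log(c_ppm / c0_ppm)

first_doubling = forcing_wm2(560)                        # 280 -> 560 ppm
second_doubling = forcing_wm2(1120) - forcing_wm2(560)   # 560 -> 1120 ppm

# Both doublings add the same ~3.7 W/m^2 increment.
print(round(first_doubling, 2), round(second_doubling, 2))
```

Whether that increment matters is exactly what the thread goes on to argue about; the formula itself only encodes the logarithmic shape.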

Similarly, heat water. Once it reaches 100C (at sea level) it will not get hotter no matter how much more heat you apply.

Irrelevant comparison

CO2 does not add heat

It impedes heat from escaping into the infinite heat sink of space

I dunno. Pressure cookers work perfectly well at sea level…

So do steam engines and steam turbines – if, only if, you have any fuel for their boilers….

Yes. And heat an ocean. Once it reaches 30C (at sea level obviously) it will not get hotter no matter how long you apply the heat.

A good thing really as it means, with the oceans being some 72% of the Earth’s surface that we can never get runaway global warming.

Alasdair ==> Explain for readers, please.

Complete nonsense.

About one third of the current 420 ppm of CO2 is manmade.

CO2 did not increase about +50% from 1850 naturally.

nature is still ABSORBING CO2, not a net CO2 emitter.

You are spouting nonsense.

You are the one spouting nonsense Richard. You seem to soak the propaganda like blotting paper.

But is it a problem? Surely the problems begin when we stop adding CO2 and nature is used to extracting what we're adding. The decline will be more rapid than the increase.

I should ask how you know that for sure. Are you absolutely sure that all sinks and all sources have been found and accurately accounted for? Have all cycles that affect the sinks and sources been found and accounted for?

If there is even a 10% error in the amount sinks take up or in the amount of non-human caused CO2, your assertion would be wrong.

William,

Indeed.

So, to the nearest one tenth of one percent, there is Zero CO2 in the atmosphere.

Yet our economy, and our society, is being destroyed with that as an excuse.

Auto.

Scientist and inventor Nikola Tesla was fascinated by number 3

I believe that the number 3 had religious connotations for Tesla; his father was an Orthodox Christian priest, and Orthodox Christians (as I am) cross themselves with 3 fingers (Father, Son & Holy Spirit), and many churches and graves feature a triple cross.

Tesla turned 'Father, Son & Holy Spirit' into 'energy, frequency & vibration', the basis of his understanding of the universe's existence.

He invented three-phase alternating current generators and motors, still in worldwide use.

It is sad he would only stay in a hotel room whose number contained a 3, walked 3 times around the block, always used 3 napkins with his meals, and all sorts of other nonsense.

Vuk ==> Tesla….wish we really knew what he was onto…..

Just an observation. Tesla has been a bit mythologized. He did understand AC versus DC. He did understand the implications of higher-frequency AC (but not its negative implications). And some of his high-frequency AC ideas proved later (with better mathematics) probably not very practical energetically. Always remember, lightning is a DC Helmholtz-layer TStorm effect having NOTHING to do with Tesla.

Rud, my man, I never associated Tesla with lightning, but now the idea will not leave my mind!

Kudos for the most obscure joke ever. Or is it? I have no cookin' clue.

Nobama identifying as 42!

Of course, the significance of the "number" 42 has nothing to do with math. Decimal 42 is the code for the symbol * in ASCII, also known as the wildcard character. That can stand for anything and everything. As in cd c:\ then del *.* – bend over and kiss your hard drive goodbye.
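The ASCII claim checks out, for the curious – verifiable in a line:

```python
# Decimal 42 really is the ASCII code for the wildcard character '*'.
print(chr(42), ord('*'))   # -> * 42
```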

Don’t even disclose or tell teenagers about DOS commands.

You know what havoc could be inflicted on PCs all over the world.

As time went on, less & less of the "really good stuff" became accessible to the end user (EU). This made life as a phone tech much easier, as the most common fix was: Replace EU!

It’s OK – they all use AppleDOS, or Unix.

Assuming that they can find the command prompt. It’s buried so far down when you use just mouse clicks that it’s effectively not there. (Why I stick with the “classic” UI on Windows. The wife’s machine, you can’t just right click the Start button and select Run. I fumble around every time.)

I think Douglas Adams was the first one to use 42 as a joke.

Objective Reality says:

42 x 0 = 0.

Because he’s a sociopathic narcissist, His Zeroness still identifies as 42!

Numbers are tricky & can take an unexpected turn!

Old Man ==> Really had to blow that image up to get the joke….thanks!

Lol

Always a problem as you take any lemma close to its defined zero, no matter how defined. Why more students should learn more math.

If the student took this to be a visual problem rather than a mathematical one, then he/she/it/pumpkin produced a correct answer.

While it looked like sheer genius treating numbers as pure symbols, it was more than likely an act of desperation where she got lucky. Been there, done that!

I always liked this one – it's the very useful '3-4-5' triangle.

Carpenters use this to make a 90-degree angled cut, or measure.

Laying out a house foundation, 3/4/5 gets it square.
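The carpenter's check works because 3-4-5 satisfies the Pythagorean relation, as does any scaled-up multiple of it. A quick sketch:

```python
def corner_is_square(a: float, b: float, diagonal: float) -> bool:
    """True when two sides and the diagonal satisfy a^2 + b^2 = d^2,
    i.e. the corner between sides a and b is a right angle."""
    return abs(a**2 + b**2 - diagonal**2) < 1e-9

print(corner_is_square(3, 4, 5))     # the classic 3-4-5
print(corner_is_square(6, 8, 10))    # any multiple works too
```

Measure 3 units along one wall, 4 along the other; if the diagonal between the marks is 5, the corner is square.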

See the Alternative Maths video that LdB links to though.

Here’s another one: 16/64 –> cancel the 6’s you get 1/4. New math!!
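The 16/64 trick is the classic case of "anomalous cancellation": striking out the shared digit happens, by accident, to give the right value. A brute-force sketch (my illustration) finds every proper two-digit fraction with this pattern:

```python
from fractions import Fraction

hits = []
for num in range(10, 100):
    for den in range(num + 1, 100):       # proper fractions only
        a, b = divmod(num, 10)            # num = 10a + b
        c, d = divmod(den, 10)            # den = 10c + d
        # "Cancel" the shared middle digit: does (10a+b)/(10c+d) == a/d?
        if b == c and d != 0 and Fraction(num, den) == Fraction(a, d):
            hits.append((num, den))

print(hits)   # -> [(16, 64), (19, 95), (26, 65), (49, 98)]
```

Four lucky fractions in all – which is exactly why the "new math" works on 16/64 and fails on nearly everything else.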

So perfectly describes why Climate Scientists get away with calling climate a 'non-linear system'. Such is their maths – childlike scribblings given meaning.

A true non-linear system contains a singularity – the answer can be anything you want it to be = classically 42, or the 'wild card'.

Thus enter ‘reification’ = a lovely word to describe Magical Thinking = the process where (chronically chemically) depressed people brain-wash themselves.

iow: They apply the MS-DOS "del *.*" command to their own minds. It's very easily done – so easy, in fact, that creatures with only one brain cell do it.

They do it because they are desperately hungry. In that starvation state they start eating whatever they can, and reification kicks in – they then truly believe that 'the wrong thing' is actually good for them. Hopeless (Oh, I can handle it) alcoholics being the perfect example – convinced that booze is keeping them alive. (In really advanced states it actually is – suddenly stopping will kill.)

2 nice examples being swarming locusts and John Kerry (Thank Fug there's only one of him – or is there?)

Biden is not in the race = his braincell count is the divisor in the above equation

edit to PS

You do see the significant fail of Climate Science – as the author here states.

Climate Science gives reality to something that is not real.

i.e. Temperature.

Temperature is a dimensionless quantity – it has no tangible or palpable reality.

It can represent reality, but you have to carefully describe the object that you are taking the temperature of – you have to specify some real tangible things (metres, kilograms, seconds) about the thing you are recording the temp of.

Thus

Climate Science is one humongous lie – a lie by omission, in that it never defines the 'dimensions' of what it's recording the temp of. The most basic omission is that the water content is never mentioned.

But to do so requires an admission that water controls climate

So simple, even a child could understand.

(Now do we see how deep the doo-doo we’re now in)

Good one.

We had a "science teacher" in high school who asked everyone in the class to bring in some ice cubes wrapped in a towel. She needed dry ice for an experiment. True story. A little later my father went to the headmaster and had her sacked after another episode.

Well Kip, is Dr Rosling wrong when he claims he used 120,000 data points to display the countries of the world from 1810 to 2010?

Of course he has used UN data, and this optimistic 5-minute video wrecks all the alarmists' arguments about a climate EMERGENCY or CRISIS or even Biden's so-called EXISTENTIAL threat.

And Willis Eschenbach’s ” where’s the Emergency” article also requires a lot of numbers to test all of their alarmist claims. And he does this point by point.

And Dr Christy also tested their data point by point and came to a similar conclusion.

AGAIN here’s Dr Rosling’s video. Any comments, anyone?

Spectacular use of graphics by Dr Rosling to prove an important fact. It's too bad my teachers didn't have something like this when I was younger.

Neville ==> Rosling doesn’t really use any numbers in his presentation — he uses visual representation of relationships. He doesn’t misinterpret the numbers as being real things.

The relationships between wealth and lifespan, by nation and population, through time is what he is showing.

To do that, he needs numerical data, but not reified numerical data.

I have written here about Rosling

Kip, separate comment based on your observation ‘too much math, not enough thinking.’

When I was learning math modeling and econometrics at my University, there were no Mathematica packages, no PCs with Excel and R. The University's Aiken Computer Center housed an IBM 340 with ferrite-core memory, so the max allowable program RAM was 250 kilobytes. (So I got an A for my discrete-step Harvard Square traffic jam simulation model NOT because I could run it – I needed more than 250kb to simulate the 7 streets feeding the Square at rush hour, with each byte a vehicle in time and space – but rather because the professor was impressed with the hundreds of lines of Fortran code modeling the problem.) Much of what I had to learn was based on thinking a lot, then doing a little, but hard, back-up math to make sure you were in the ballpark.

These days, with all the available PC programs, numbers produced by ‘math’ are ‘easy’, while thinking is not only still ‘hard’, but ‘unproductive’ since it takes time. Plus, not that many Uni educated types these days can think at all, let alone critically. I offer AOC as exhibit A. Which IMO results in a lot of the ‘publish or perish’ irreproducible junk statistical science in fields as diverse as medicine and climate that you and others allude to.

Take their computers away, and they would be absolutely lost.

I suspect a large amount of these junk papers are generated by chucking numbers into random stats packages until they get something that looks interesting, and then trying to work backwards to justify it.

Rud ==> Interestingly, when I was still a teen, I worked a whole summer in a plant that manufactured those ferrite core memories, on night shift of course.

And I agree, Thinking is Hard. Not the trivial kind of thinking that passes for science today, but real deep thinking, out of the box thinking, true critical thinking, thinking bounded by the rules of logic and reality.

Kip, totally agree. Have often wondered, what would ‘science’ now be if we had to go back to hand threaded three wire ferrite cores. Of course, an idle speculation. I would miss many modern miracles like this iPad. But we do not adequately recognize the ‘easy but wrong’ side effects.

Rud ==> I gotta love all my way-too-many digital computing devices — several tablets, more than enough Lenovo laptops, I even buy too many for my wife — who does only email and an occasional letter.

You really want us to believe ‘for my wife’ is the reason you buy them?

John ==> Come on…gimme a break here….

Kip,

In 1976 I flew (economy) with colleague Albert Berkavicius from Sydney to Los Angeles. His primary purpose was to remove and replace an identified faulty ferrite core in the board he carried, which from memory had 256 of them. We were allowed to watch the procedure, the taking out and putting back of the 4 thin wires through the loops. It worked. This story helps to show the value then placed in the emerging beast called the computer.

Geoff S

Geoff ==> Yep, those are what our factory, Electronic Memories Inc, produced. 256 bit memory boards.

Rud: We must be about the same age. When I learned to program in FORTRAN the hard part was done with a pencil and paper – creating a detailed flow chart of the steps required to solve the problem. Once you had figured that out, writing the code to execute the steps was easy. It was then just a matter of compiling and running to obtain the error codes that would lead to discovery of typos and logic errors – aka “debugging”. Of course, each run meant submitting your deck of punch cards to the computing center and waiting sometimes hours to get the printout of results, which was often something like “syntax error line 455”. Played a lot of Bridge with other CS students in the break room while waiting.

My thesis card punch decks always used a big magic marker top X. Because they almost always never ran first time due to IBM CTL goofs, and almost always came back return sequence goofed. Ah, memories.

Early versions of that period, I think:

FORTRAN, or FORTRAN II D, or FORTRAN IV

Rud,

Ten out of ten. Your comments resonate strongly with my experiences. My intro to computing was to write a perpetual calendar in machine language. Others in the class did much better, so I made a decision right then to leave programming to those with a bent for it, to free up my time for thinking about solutions, including those responding to computing. Thanks. Geoff S

Rud, you are numerically challenged. IBM 340’s never had ferrite core memory.


https://www.ibm.com/common/ssi/ShowDoc.wss?docURL=/common/ssi/rep_sm/0/897/ENUS7012-340/index.html&breadCrum=DET001PT288&url=buttonpressed=DET001PT008&page=0&user+type=EXT&lang=en_US

Likely meant a 360, which was the machine of the 1960s. It definitely had core memory.

I look forward to part 2. As I was reading part 1, before reaching the end, I thought about the claim I’ve often seen here that a global average temperature is a useless measure. Is it that the number itself is really useless? (What other way would we have to know if we are in a “global” warming or cooling trend and whether it is likely to be beneficial or catastrophic?) Or is it that the means of arriving at said number are dubious/suspect/insufficient? I hope to find some answers in part 2.

Think about it this way, reduce the question down to two different locations: what does it mean to average all the temperatures from Cut Bank, Montana with Rio de Janeiro?

A partial answer, IMO. GAST is meaningless. But the GAST anomaly is not (e.g. UAH anomaly). The problem with anomalies is that they hide the very large absolute divergence in IPCC climate modeled temperatures. See essay ‘Models all the way down’ in ebook Blowing Smoke for an illustrated and referenced example of this problem.

Climate is the entire temperature profile at a location. Every time you take an average you lose part of the data needed to evaluate the climate. In calculating the daily mid-range value you lose data about the temperature profile, since multiple different minimum/maximum temps can give the same mid-range value; you have lost what happened in reality. When you then find a monthly average using those averages you lose more data: different daily mid-range values can result in the same monthly average, so how do you tell what is happening in reality? Then when you average monthly averages to get an annual average you lose even more data; you don’t know what happened in reality, since different monthly averages can result in the same annual average value. Now average all those averages one more time on a global basis, and while you come up with a number, what does it actually mean in reality? There is no place on the globe where you can find and measure that value. What does it actually tell you?
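The first step of that information loss is easy to see with a toy example (hypothetical values): three very different days collapse to the identical mid-range number, and nothing in the average lets you recover which day was which.

```python
# Three hypothetical days with very different temperature profiles,
# given as (daily minimum, daily maximum) pairs in degrees C.
days = [(10.0, 20.0), (5.0, 25.0), (14.0, 16.0)]

# The daily "mid-range" value used in many temperature records:
mid_ranges = [(lo + hi) / 2 for lo, hi in days]

# All three days produce exactly 15.0 -- the mild day and the wildly
# swinging day are indistinguishable once the average is taken.
print(mid_ranges)
```

The same collapse repeats at each further stage (monthly, annual, global), which is the commenter's point about losing the profile.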

If you can’t measure it then does it really exist?

This doesn’t even get into propagating the uncertainties of the initial measurements through each average calculation.

The issue is that temperature is not a good indicator of heat. Enthalpy (energy content – SI unit Joules per kilogram) is the correct measure.

By illustration, 40C and 20% humidity has an enthalpy of about 64 kJ/kg, while 32C and 80% humidity is about 94 kJ/kg, so is actually much “hotter”.
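As a rough check of that hot-dry versus warm-humid comparison, here is a sketch using the standard psychrometric approximation; the Magnus saturation-pressure constants and sea-level pressure are assumptions, so treat the figures as ballpark only.

```python
import math

def saturation_vapor_pressure_kpa(t_c: float) -> float:
    """Magnus approximation for saturation vapor pressure over water (kPa)."""
    return 0.6112 * math.exp(17.62 * t_c / (243.12 + t_c))

def moist_air_enthalpy_kj_per_kg(t_c: float, rh_percent: float,
                                 pressure_kpa: float = 101.325) -> float:
    """Specific enthalpy of moist air in kJ per kg of dry air.

    Standard psychrometric form: sensible heat of the dry air plus the
    latent and sensible heat carried by the water vapor.
    """
    p_w = saturation_vapor_pressure_kpa(t_c) * rh_percent / 100.0
    w = 0.622 * p_w / (pressure_kpa - p_w)          # humidity ratio, kg water / kg dry air
    return 1.006 * t_c + w * (2501.0 + 1.86 * t_c)  # sensible + latent terms

hot_dry = moist_air_enthalpy_kj_per_kg(40.0, 20.0)     # roughly 64 kJ/kg
warm_humid = moist_air_enthalpy_kj_per_kg(32.0, 80.0)  # roughly 94 kJ/kg
print(hot_dry, warm_humid)
```

The cooler but more humid air carries substantially more energy per kilogram, which is the commenter's point that temperature alone is a poor proxy for heat content.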

Maybe Kip will address this in part 2

This is my biggest problem with atmospheric temperature measurements of any kind.

What we really need is a calculation of the total heat energy in the entire global climate system, ie the oceans and the atmosphere. To be absolutely correct, we should also include a certain amount of the ground too, since that is affected by the climate.

If we can get all that, accurate to 100th of a joule, on at least an hourly, and preferably per minute, basis, for the entire planet, I’ll start taking the numbers seriously.

gdt ==> You been reading my drafts?

You are quite correct: “temperature is not a good indicator of heat. Enthalpy (energy content – SI unit Joules per kilogram) is the correct measure.”

Pressure also figures into this. So temperatures taken at different elevations create different enthalpies as well. You very seldom see temperatures go up when a cold front passes through, while pressure changes significantly.

I think a global average temperature is about as useful as the average of all the house numbers in my street. However I would be happy to be corrected.

I have no formal science or mathematical qualifications.

Excellent, Kip!

“the data” is an oxymoron because the word is plural! WUWT is a blog, not a specialized scientific journal:

“In modern non-scientific use, however, it is generally not treated as a plural. Instead, it is treated as a mass noun, similar to a word like information, which takes a singular verb. Sentences such as data was collected over a number of years are now widely accepted in standard English” (Oxford).

Chris ==> Love a good definition and love definitions that move with the times.

But it is probably not a wise move in that one has sacrificed a word (datum) and ended up with less precision by having to use one word to describe two sets — a set with one entry and sets with many entries. What advantage is provided by collapsing two similar words into a single word, other than not having to worry about verb-noun agreement? However, one can always decide not to worry about noun-verb agreement — if they don’t mind people viewing them as illiterate.

Clyde ==> Well, I would never write “a data”…..I might write “a data point….”

The trouble is that definitions change out of ignorance and misuse.

Still looks wrong from here, even if the Oxford eggheads hath decreed it not!

It is interesting to me how many ‘irregular’ (in the sense of based on Greek or Latin) singulars are disappearing from the language. You never see bacterium, larva, phenomenon any more, it is always bacteria, larvae, phenomena. I used to struggle against this trend, sad pedant that I am, but I wonder if it isn’t just the normal evolution of language, rather than a lowering of standards.

Why do you think “the” can’t be used with plurals?

Even if you did object to data being used as a singular, it wouldn’t make it an oxymoron, just bad grammar.

Does anyone understand the numbers that prompted the Biden donkey to declare that we have an EXISTENTIAL threat?

He even claims we can “feel the threat in our bones”, whatever that means?

Just watch this silly nonsense and fear for our future and ask yourself how we’ve fallen for this lunacy?

It’s the Twilight Zone.

Monte ==> “… you just crossed over into The Twilight Zone”

“Help me, Mr. Wizard!”

I’ve always had a problem of putting meaning behind temperature because without knowing the constituents of the temperature (what is the relative or absolute humidity, pressure and other particles), the number is not close to precisely anything of value! Further averaging this poorly defined number goes down the rabbit hole of “you lost me already”

Mario ==> Temperature is a measure of one thing and heat content or heat energy is a measure of not only a different thing but a different type of property.

Yes, that is correct. Delta temperature is used by many to “falsely” prove the earth is trapping energy. That is a sleight of hand argument.

So I posit that measuring temperature without knowing the constituents of the air being measured is meaningless precisely because the amount of energy is not known simply by a temperature measurement without considering the other components I mentioned.

And I need to explain, when I wrote in my previous post “you lost me already” I was not referring to the author of this nice post. I was speaking generally to those others who don’t know the difference between heat and temperature.

Mario ==> We are now getting into the topic of Part 2…..

I figured that engineering units assigned to numbers would be a natural segue 🙂

With you so far, looking forward to part 2.

You do come across as a bit of a statistics-a-phobe.

“All Models Are Wrong, Some are Useful” (George Box, 1976). Fair (agenda-free) statistical analyses, including summary statistics, empirical model-based summarisations, and theoretical model validations, are an essential part of science. These all should consider the appropriateness of the data, its measurement, measurement errors, and adequacy for the research question being addressed, and all subsequent uses of the data should be closely scrutinised, peer reviewed (unfortunately a very fallible system), and subject to re-analysis (requiring all data and code to be freely available). Otherwise what is your alternative? The biggest hindrance is narrative-biased peer review. Fully transparent peer review would help.

Steven ==> I am a firm opponent of the mis-use of statistics to do data dredges to try and find something (anything) that might be claimed to be a publishable result. And other nonsensical practices — which are legion. Not really the topic here today.

I used to judge High School science fairs in Florida. All the teachers and advisors insisted every result include an ANOVA analysis — so every project did. NONE of the students, when I asked, had any idea what ANOVA meant, what an ANOVA analysis did, etc. They just popped their results in a spreadsheet and pushed the ANOVA button……

Kip that’s fine but “we must not throw the baby out with the bathwater” as the saying goes. As a PhD-level applied statistician with over 40 years experience in statistical modelling/analysis mostly in forestry/fisheries/ecology I have seen many inappropriate to flat-out wrong analyses. A thing I get a lot of satisfaction out of is in helping collaborating subject-area researchers clarify and convert their research questions into valid and best-practice statistical models and inferences. cheers Steve

Steven ==> Don’t get me wrong, I am not an anti-statistics person.

“A thing I get a lot of satisfaction out of is in helping collaborating subject-area researchers clarify and convert their research questions into valid and best-practice statistical models and inferences.”

That is exactly what we need more of.

Mathematical analyses can be useful in supporting research work and formatting data into easily visible summaries. However, what we are seeing currently is that the mathematical analysis IS the research work and actual data is becoming more and more irrelevant. It’s a disturbing trend.

Somebody (Roman M, I think) pointed out at Climate Audit ages ago that there actually is a statistically correct way to handle significance in such data mining analyses.

Re: the students pushing the ANOVA button. Easily accessible powerful statistical analysis programs* allow people with no conceptual grasp of the area to throw numbers into the machine and produce something. They then think that particular something is important because it’s statistics and the computer said so 🙁

[*] it’s rather a stretch to include ANOVA here, but it’s the concept, not the detail…

Kip,

Looking forward to your next part, which might be numbered 2.

We are all shaped by a past life of selecting what we like and we’re taught and sometimes disagreeing with others.

I was taught that in Earth Sciences at least, a derived number is incomplete unless it has another number expressing its uncertainty. Not all agree. I have been trying for 6 years to get our BOM to disclose the uncertainty that they place on routine daily temperature measurements. Still not there, have some of their estimates for instrumental but not yet including the setting, like screen errors.

Without proper uncertainties, one cannot claim a temperature as a record because it might be just random noise. This goes straight to your comments about numbers measuring something but not being that something.

Let’s push for more uncertainty estimates, done by the book. Geoff S

Geoff, don’t you realize that climatologists have to take the equivalent of the Hippocratic Oath — “First, do no harm.” It is called the Hypocrite Oath and is — “First, never admit uncertainty.”

Geoff ==> I think you know that I have been demanding uncertainty, particularly Original Measurement Uncertainty, be accounted for and shown — and an end to the idiocy that Original Measurement Uncertainty can be “averaged away”.

I agree. Plus the use of SEM as uncertainty is totally wrong. SEM is statistical error not measurement uncertainty. In other words it tells you how closely the sample mean estimates the population mean, it tells you nothing of measurement uncertainty.

Most metrology teaches that if you have a small number of samples, like 10 experiments, the uncertainty is best expressed by Standard Deviation of the 10 experiments. This is actually the Standard Deviation of sample Means (SEM). You can’t divide the SEM by the √N because you get a worthless number.

And for temperature, you get exactly one chance for the measurement, then it is gone forever. √N = 1.

“You can’t divide the SEM by the √N because you get a worthless number.”

Why would you want to divide SEM by √N? You divide the standard deviation by √N to get the SEM.

“Most metrology teaches that if you have a small number of samples, like 10 experiments, the uncertainty is best expressed by Standard Deviation of the 10 experiments.”

What do you mean by “experiments” in this case?

Experiments can be anything. Samples of a production run, a chemical reaction, you name it.

How many folks on here have said that to find the uncertainty of the Global Average Temperature, you divide the Standard Deviation by the √N? Besides that not being uncertainty, you are dealing with samples. Every daily average is of samples. Every monthly average is of samples. Every annual average is of samples. That is all there is — samples. You can’t divide by √10,000 stations and get a number that means anything. Kip Hansen has started a series on numbers. I suggest you go there and begin to learn what you don’t know about metrology.

But there’s a difference between an experiment that is taking just one value, and an experiment that results in a sample of values.

“How many folks on here have said that to find the uncertainty of the Global Average Temperature, you divide the Standard Deviation by the √N?”

Nobody as far as I’m aware. I’ve always said that the uncertainty of a global average is a complicated process. All I’ve said is that that is the standard way to calculate the SEM, and the SEM is a measure of the uncertainty of the mean, with a lot of caveats.

“Kip Hansen has started a series on numbers.”

Thanks, but based on his comments I’m not sure he understands averaging better than you.

Wake up. Read studies and see what they do.

The NIH has even recognized the problem.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2959222/#

“However, many authors incorrectly use SEM as a descriptive statistic to summarize the variability in their data because it is less than the SD, implying incorrectly that their measurements are more precise.”

You keep linking this.

Yes, SEM is not a descriptive statistic; it’s inferential. If you use the SEM and imply it’s the SD you are wrong. If you use the SD when you mean the SEM you are wrong. And none of this means you estimate the uncertainty of a global anomaly average simply by dividing the SD by √N.

Yes, Kip,

I know that you have pressed for better uncertainty analysis from the beginning. I did not mean to imply that you were lax, I simply accepted that both you and I and most readers knew it.

Geoff S

You can use temperature to estimate the energy level of a known substance when the volume & pressure remains the same (see its “Specific heat capacity”). BUT …

So averaging temperatures is not a measurement of energy when volume, pressure and/or composition varies per locations & over time as occurs in meteorology (then aggregated over time as climatology).

trgrus ==> Absolutely — “temperature is not a measurement of energy” thus cannot be added and averaged…..

We can distill the dissertation.

“Lies, Damn Lies and statistics”

Excellent post.

“like Global Average Surface Temperature, an entirely imaginary nonphysical number.”

In principle, a Global Average Surface Temperature based on a census of every defined surface unit and time interval (to be theoretical in the extreme, let’s say every m^2 and every second), then averaged over all areal units in space and over a contiguous set of time units (say a year), is a valid statistic of a valid measurement (say instrumental) that approximates a physical reality. The problem is not in the theory but in the historical sampling realisations of the population of census values. Dealing with highly unbalanced sampling over space and time, time trend analysis of means, the uncertainty of the time trend, attempting to adjust for temporal and/or spatial confounders: this is my effort in modelling an analogous set of very unbalanced data for long-term trend in the mean and its uncertainty (https://journalarrb.com/index.php/ARRB/article/view/30460)

Steven ==> Your hypothetical produces a number, but still a non-physical abstraction, something with no meaning in the sense of physics. I hope to explain why in Part 2.

Steven ==> It is a problem of “What Exactly Are They Really Counting” — they are counting local degrees of temperature — and they should be counting HEAT content. Heat is an extensive property of physical matter, while temperature is an intensive property. That is the physics of it…..

Kip, so here in southern Tasmania, when the maximum daily air temperature can be about 10degC this time of year, if I were a poikilotherm (like those Tasmanians to be wary of – the tiger snake) I would curl up on a rock in the sun and not do much else. So air temperature and internal body temperature have a strong relationship, with body heat content obviously functionally dependent on air temperature (with some insolation warming of the rock I am curled up on), not the other way around. I modelled coleopteran development in my PhD (a poikilothermic order) and a thermal sum with a lower temperature threshold is very well established as an excellent predictor of insect development through immature stages (larval instars). “They should be counting HEAT content”? Good luck with that. I wouldn’t have been able to do my PhD if I had to put tiny-weeny temperature probes in all those thousands of gum leaf beetles and weigh each one individually to estimate their heat content. Have you ever done any field based research?

I forgot to mention the insolation warming of the black tiger snake itself. One contemporary PhD I communicated with did glue small temperature probes on the underside of the abdomen of a few gum leaf beetles to estimate the additional effect of insolation, but a probe inside the beetle would have been ideal. Then again, they, like us, would have found it difficult to go about their business happily with a metal spike up their rear end! :-}

Steven ==> I was a herps guy when I was a kid. Not quite getting your point — your snake curls up on a warm rock in the Sun to warm up….the air temperature being too low to allow him to do that — so he absorbs the radiant heat (and secondarily, the radiant heat absorbed by the hot rock) to accomplish that.

Quite clearly, you used “thermometer temperature” (sensible heat) as a quick-and-easy measure of the heat contained in and provided by these objects (snake body and rock).

Yes?

“scientific philosophy of numbers “..

No such thing, Kip. Posting a word salad is dumb.

And, I don’t think much of your Caesar salad — even with bacon.

THE NUMBERS ARE MEANINGLESS, BUT THE TRENDS ARE IMPORTANT.

— LARRY BURGGRAF

https://twitter.com/gaussianquotes/status/778646351022260224

Paul ==> And what would the importance of the trend of a series of meaningless numbers actually be? What could it be?

Now that’s what you might call a facepalm moment. The quote given by Paul Redfern may be quite accurate but it still makes no sense as it stands. Try inserting the word “alone” between “numbers” and the first “are”; makes slightly more sense then.

I’ve used this example before and I’ll make it short. I make shafts that are supposed to be 6″ ±0.01″. My cutter is broken and it makes shafts between 5″ and 7″. I make 1000 of them and by golly the mean is 6″ and the Standard Deviation / √N ≈ 0.02. Will my customer be happy? Does the average have any meaning? How about the SEM, is it meaningful?
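The broken-cutter scenario is easy to simulate; a sketch with a fixed seed and hypothetical tolerances, showing the gap between the SD (which the customer cares about) and the SEM (which looks reassuringly tiny):

```python
import random
import statistics

random.seed(0)

# Hypothetical broken cutter: shafts come out uniformly between 5" and 7",
# though the spec is 6" +/- 0.01".
shafts = [random.uniform(5.0, 7.0) for _ in range(1000)]

mean = statistics.fmean(shafts)
sd = statistics.stdev(shafts)       # spread of individual shafts, ~0.58"
sem = sd / len(shafts) ** 0.5       # "precision" of the mean, ~0.018"

# How many shafts actually meet the +/- 0.01" tolerance? Almost none.
out_of_tolerance = sum(abs(x - 6.0) > 0.01 for x in shafts)
print(mean, sd, sem, out_of_tolerance)
```

The mean is almost exactly 6" and the SEM is tiny, yet nearly every shaft is scrap: the SEM describes how well the mean is pinned down, not how good any individual shaft is.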

And as always your examples are ones where the SEM is not useful, and then conclude it can never be useful.

If all your shafts have to be within a certain tolerance then you need to look at the standard deviation, not the SEM. If on the other hand it didn’t matter what the exact size of your shafts was, but it was important that the average was close to 6″, then you might want to take samples of a sufficient size that they could alert you to a change in the average. Then you need to know the SEM.

The main reason for wanting to know the SEM is in hypothesis testing, not in building things all of the same size. You want to know if two samples are from the same population, if one treatment is better than another, if a chemical is increasing the chance of getting ill. You can’t do that by assuming everything is the same; you have to take samples, and you have to know what the expected error of the mean will be.

As usual you are out standing in the cold. Here are some references.

https://www.investopedia.com/ask/answers/042415/what-difference-between-standard-error-means-and-standard-deviation.asp

Read this carefully. The SEM (Standard Error of the sample Mean) is the SD (Standard Deviation) of the sample means distribution.

Did you get that? A test of a small number of times is a SAMPLE! The mean of that sample is the average value. The SD of that sample is the SEM.

Look on YouTube for standard error of the mean. There are literally hundreds of videos that explain this.

Every time I try to explain why you might want to know the SEM, you give me some link that explains the difference between standard error of the mean and the standard deviation of a sample. Thanks, but I’m fully aware of the difference.

“Did you get that?”

Yes. But you still don’t, as your next statement makes clear.

“A test of a small number of times is a SAMPLE! The mean of that sample is the average value. The SD of that sample is the SEM.”

The SD of a sample is not the SEM. As so often, you seem to get hung up on the odd phrasing. “Sampling distribution” does not mean the distribution of a sample. It means the distribution of all possible samples.

https://en.wikipedia.org/wiki/Sampling_distribution

See the words “arbitrarily large number of samples”, not one sample.
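That “arbitrarily large number of samples” idea is easy to demonstrate numerically; a sketch with made-up parameters, drawing many samples from a known population and checking that the SD of the sample means matches population SD divided by √n:

```python
import random
import statistics

random.seed(1)

population_sd = 10.0
n = 25  # size of each individual sample

# Draw many independent samples and record each sample's mean.
# The distribution of these means is the "sampling distribution".
sample_means = [
    statistics.fmean(random.gauss(0.0, population_sd) for _ in range(n))
    for _ in range(20000)
]

# Empirically this is close to population_sd / sqrt(n) = 10 / 5 = 2.0,
# which is exactly what the SEM formula estimates from a single sample.
sd_of_means = statistics.stdev(sample_means)
print(sd_of_means)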

“SD is the dispersion of data in a normal distribution.”

Wrong: the SD is a sample statistic which, when squared, along with the sample mean, are for a normal distribution the joint sufficient statistics for the population variance and mean, respectively. The maximum likelihood estimate of the population variance is slightly biased because it divides the sum of squares by N and not N-1.

“SD indicates how accurately the mean represents sample data”

Wrong: the SEM estimates the accuracy of the mean as an estimate of the population mean. Correct if you change it to “SEM is the SD of the theoretical distribution of the sample means (the sampling distribution)”, or “SEM when squared is an estimate of the variance of …”

“A test of a small number of times is a SAMPLE”

What a mangled sentence; what is it trying to say?

“The SD of that sample is the SEM.”

What sample? The SEM is the SD of the sample divided by √N, or alternatively you could use bootstrap resampling to draw a size-M sample of means and estimate the SEM from that sample of means as their SD, but DO NOT divide this SD by the square root of M. Don’t go to https://www.investopedia.com/ask/answers/ whatever; go to statistics textbooks or talk to a bona fide statistician.

These standard results assume equal-probability random sampling (or simple random sampling) so that all units in the population have equal chance of being selected and selection is random. There are other unequal-probability random sampling schemes I have used such as list sampling and response-biased sampling which require these sampling probabilities to be incorporated in estimation using for example Horvitz-Thompson estimation of the mean.

“Wrong the SD is a sample statistic which when squared along with sample mean are for a normal distribution the joint sufficient statistics for the population variance and mean, respectively.”

How does this apply to temperature measurements which are *NOT* a normal distribution? E.g. northern hemisphere temps mixed with southern hemisphere temps – at least a bimodal distribution. Or summer temps mixed with winter temps when each have a different variance (e.g. SH vs NH temps)

Since temperatures are multiple measurements of different things using different devices there is no kind of a guarantee you will get a normal distribution. In a bi-modal distribution what does mean and standard deviation really tell you?
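A toy illustration of that bimodal point, with hypothetical same-month readings (winter in one hemisphere, summer in the other): the mean lands in the gap between the two modes, at a value no station actually recorded.

```python
import statistics

# Hypothetical same-month temperatures, degrees C:
sh_winter = [2.0, 4.0, 6.0, 8.0]     # southern hemisphere, mid-winter
nh_summer = [22.0, 24.0, 26.0, 28.0]  # northern hemisphere, mid-summer
combined = sh_winter + nh_summer

# The combined mean sits squarely between the two clusters.
mean = statistics.fmean(combined)
print(mean)
```

Every actual reading is at least several degrees away from the mean, so for a bimodal mixture like this the mean and SD describe neither cluster well.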

One can resort to the Central Limit Theorem: asymptotically, the sampling distribution of the mean approaches a normal distribution as the sample size N increases (https://en.wikipedia.org/wiki/Central_limit_theorem). However, I would suggest in modelling the long term trend in average global surface temperature that one combines fixed effects (e.g. latitude, altitude, land, ocean, sinusoidal function of days since winter solstice, albedo etc) and random effects (e.g. weather station, or grid square for oceans, days etc) in a linear mixed model and calculates averages across levels of both fixed (except year) and random effects (converting continuous fixed effects to ordered categorical variables). You could assume a normal distribution for temperature since, given the above fixed effects, it is the residual distribution about the local mean that we need to model. The uncertainty could be modelled using Markov Chain Monte Carlo estimation/sampling (see my paper for something like this approach: https://journalarrb.com/index.php/ARRB/article/view/30460). The fixed effects take out the bimodality you mention since these modes are modelled by the fixed effects. I would also build in different error variances given the type of measurement (eg weather station versus remote sensing). The problem becomes a more complex modelling problem when weather stations come and go and there are confounding effects like urban heat island effects on the trend for a station, to name a couple of issues.

“One can resort to the Central Limit Theorem that asymptotically the sampling distribution of the mean approaches a normal distribution as the sample size N increases”

All that does is decrease the interval in which the calculated mean might lie. It does not represent the uncertainty of that mean in any way. Each measured data element (i.e. temperature) should be given as “stated value +/- measurement uncertainty”. You *must* include the propagated measurement uncertainty along with each sample mean. Those sample mean uncertainties must be propagated onto the average of the sample means if you truly want the uncertainty of the mean.

” You could assume a normal distribution for temperature since given the above fixed effects it is the residual distribution about the local mean that we need to model. The uncertainty could be modelled using Markov Chain Monte Carlo estimation/sampling (see my paper for something like this approach:”

You simply cannot assume a normal distribution for temperature. That would only occur if you are measuring the same thing multiple times. Temperatures are multiple measurements of different things using different devices. There is simply no way you can assume a normal distribution of the values you gather. This also means that Monte Carlo estimation will not work, that method is also based on generating random, normal distributions of values. Since you are only taking single measurements of an object there isn’t a random, normal distribution from which to evaluate uncertainty for each measurement.

For a global average temperature you must combine northern hemisphere temps with southern hemisphere temps. For the same month one will be measuring summer temps in one hemisphere and winter temps in the other hemisphere. This actually results in at least a bi-modal distribution. Not only that winter temperatures usually have a higher variance than summer temps. Combining independent data sets with different variances is not the simple exercise of just jamming the data together and finding an average.

“The fixed effects take out the bimodality you mention since these modes are modelled by the fixed effects.”

How do you do this? If temps in the SH range from 0C to 10C and in the NH from 10C to 20C how do fixed effects remove such a bimodality?

Climate is the entire temperature profile. Every time you take an average you lose data; you no longer have the entire temperature profile. A mid-range daily temperature can be 15C. Can *YOU* tell what minimum and maximum temperatures generated that value? I can’t. When you then average 30 daily mid-range values to get a monthly average of 12C, can you tell what daily mid-range values generated that average? I can’t. When you then average 12 monthly averages to get an annual average of 8C, can you tell what values generated that average? I can’t.

If you can’t follow the temperature profile throughout all the averaging, then what do you really know about the global climate? Using anomalies doesn’t help; they just further mask the actual temperature profiles!

“How do you do this?”

Read my paper and, if you can understand the statistical methods, come back with some informed criticism; otherwise we are arguing from the base level of your misconceptions, due to your lack of practical experience in such modelling, as is evident from your postings. I haven’t got time for this; I have paid statistical consulting work to get on with, and another applied statistical methods paper just submitted took up the last month of my semi-retirement time.

“How do you do this? If temps in the SH range from 0C to 10C and in the NH from 10C to 20C how do fixed effects remove such a bimodality?” This does it for me; I’ve got better things to do. Your language is so imprecise and your arguments so counter-intuitive. I live in the SH, in the southernmost capital city in Australia, and I can assure you the “temp” (do you mean temperature as displayed on the BOM site?) ranges well above 10C and sometimes below 0C. Today, mid-winter, the forecast maximum is 13C. What on earth (literally) do you mean by “temps in the SH range from 0C to 10C”? Do you mean “average annual mean daily temperature (AAMDT)”? If so, then somewhere between Macquarie Island and Davis station in coastal Antarctica (which I have visited on a research cruise with the AAD in the 2009/10 Austral summer) would fit with an AAMDT of 0C! BTW, you get a daily mean temperature by integrating the 24 hr diurnal temperature profile.

And now we devolve into nitpicking, an argumentative fallacy. The temps I used were meant to signify the difference between seasons in the hemispheres, something you nitpicked in order to create a red herring to argue with.

Two issues you still need to address:

I don’t really expect you to answer, because climate alarmists ignore things like this.

But please stop using argumentative fallacies to avoid answering. If you don’t wish to answer or can’t answer, then just say so.

is a fictitious quantity that cannot represent climate.

It’s not even a temperature measurement let alone a climate measurement!

Your definition in no way proves mine wrong. In fact, I can give multiple references if you so wish. However, basically, SD tells you the dispersion of data in a normal distribution. 10 ± 5 describes a much different normal distribution than 10 ± 25. It doesn’t even matter how many data points you have. You know that ~68% is within one SD, etc.

You are mistaken in this statement.

The Standard Error of the sample Means (SEM) IS the standard deviation of the distribution made from the averages of all samples of the population.

Here you are totally wrong. You make the same mistake that many scientists and mathematicians make.

The SEM is the standard deviation of a sample distribution. Now you may take many “samples” of a population and find the mean of each of those samples to create a distribution with many points, and then calculate the SD of that sample-means distribution. However, at that point you have an “estimated mean” and a standard deviation of the sample means (SEM). The SEM tells you an interval surrounding the estimated mean within which the population mean may lie.

The relation between the SEM and the population SD is:

SEM = SD / √N

Your other mistake is that the N under the square root is NOT the number of samples; it is the size of each sample.
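A quick simulation (with an assumed normal population, purely for illustration) confirms the relation and makes clear that the N under the root is the size of each sample, not the number of samples drawn:

```python
import numpy as np

rng = np.random.default_rng(0)

pop_sd = 10.0
n = 25                 # size of EACH sample -> this is the N in SEM = SD / sqrt(N)
num_samples = 50_000   # how many samples we draw; does NOT appear in the formula

# Draw many samples of size n from an assumed normal population, keep each mean.
means = rng.normal(0.0, pop_sd, size=(num_samples, n)).mean(axis=1)

sem_empirical = means.std(ddof=1)   # SD of the distribution of sample means
sem_formula = pop_sd / np.sqrt(n)   # 10 / 5 = 2

print(sem_empirical, sem_formula)   # both close to 2.0
```

Doubling `num_samples` changes nothing except how precisely the 2.0 is pinned down; only changing `n` moves the SEM itself.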

Your statement:

is an insult to the information on this site. You need to provide references, or letters you have sent to their site describing their errors. Personally, I would not slander a site like this that has been around for a long time and whose information is similar to many other references.

You need to provide a “textbook” reference to support your assertion that their information is wrong.

Here are two references from the National Institutes of Health that show bona fide statisticians do not understand the issues involved.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1255808/

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2959222/#

Inappropriate use of standard error of the mean when reporting variability of study samples: a critical evaluation of four selected journals of obstetrics and gynecology – PubMed (nih.gov)

Check this site out. Be sure to multiply the sampled-distribution standard deviation (SEM) by the square root of the sample size and see if you don’t get the population SD. That will verify the above equation. I’ve included an image of one example I ran. This verifies the Central Limit Theorem, by the way.

Sampling Distributions (onlinestatbook.com)

Since you wanted a textbook reference, here is one.

Introduction to Statistics, Online Edition

Primary author and editor: David M. Lane (Rice University)

Other authors: David Scott, Mikki Hebl, Rudy Guerra, Dan Osherson (Rice University), and Heidi Zimmer (University of Houston, Downtown Campus)

Pages 306-307

Here is another online textbook.

Statistical Thinking for the 21st Century, copyright 2019 Russell A. Poldrack, Chapter 7.3

The important part of these is that the standard deviation of the sample means IS NOT divided again by √N to obtain a better number.

In fact, if the temperature data is not of the entire population, and it is not, it must be considered a sample.

I would give you a reference from my Engineering Statistics book but it disappeared many moons ago during one move or another.

If these don’t suffice let me know and I’ll get more.

You keep giving texts that explain what people are trying to tell you. The problem is you keep giving the impression that you don’t understand the difference between the standard deviation of a sampling distribution and the standard deviation of a sample. Maybe you do understand it, but are not expressing yourself very well; but I’ve had similar arguments with you over the months and you keep making the same statements.

You said:

Maybe you meant the SD of the sampling distribution. But that’s not how it reads, and when anyone tries to correct you, you say they don’t know what they’re talking about.

Then you claim that many mathematicians and scientists don’t understand it either. Before making such a bold statement you really need to tell us exactly what you mean, because it might just be that the mathematicians understand it better than you.

None of you so-called statisticians seem to understand this at ALL! The standard deviation of the sample means is *NOT* the measurement uncertainty of that calculated mean. The uncertainty of that mean calculated from the sample means is the measurement uncertainty associated with each individual element in each sample.

Population: “mean of the stated values” +/- propagated measurement uncertainty

Sample 1: “mean of the stated values” +/- propagated measurement uncertainty

Sample 2: “mean of the stated values” +/- propagated measurement uncertainty

Sample 3: “mean of the stated values” +/- propagated measurement uncertainty

The standard deviation of the sample means only gives you an interval in which the mean of the population stated values might lie.

The actual measurement uncertainty of the mean calculated from the sample means is the propagated measurement uncertainty of the sample means.

The SEM is *NOT* the accuracy of the mean of the sample means. It is *NOT* the accuracy of the population mean.

For some reason it seems that most statisticians want to use the SEM as a metric of measurement accuracy.

IT IS NOT! I don’t care how many samples you take or how large the samples are. The accuracy of any mean calculated from individual elements is the propagated measurement uncertainty of the individual elements.

You can’t lessen measurement uncertainty using averages or anomalies when you are combining individual, random, independent measurements of different things using different devices. You can’t even do it if you have multiple measurements of the same thing using the same device unless you can *prove* that no systematic uncertainty exists in the measuring device.
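A toy sketch of that last point, with assumed bias and noise values: the random component of the error shrinks under averaging, but a systematic offset does not.

```python
import numpy as np

rng = np.random.default_rng(1)

true_value = 20.0
bias = 0.5       # assumed systematic offset in the instrument
noise_sd = 0.3   # assumed random per-reading error

# Many repeated readings of the same quantity with the same (biased) device.
readings = true_value + bias + rng.normal(0.0, noise_sd, size=100_000)

error_of_mean = readings.mean() - true_value
print(error_of_mean)  # stays near +0.5: averaging removed the noise, not the bias
```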

Even if every square inch of the Earth was covered in thermometers, the mean calculated from those thermometers would still have a measurement uncertainty factor associated with it. I don’t care how many samples of how many elements you would pull from that population, the standard deviation of the means of those samples would *still* not be a metric of the accuracy of the mean thus calculated.

The ONLY metric of the accuracy would be the measurement uncertainty propagated from those individual thermometers. I can’t find a single set of temperature data that includes proper measurement uncertainty with each element. I can’t find a single analysis of any of the measurement data that actually propagates and shows the measurement uncertainty from combining all those pieces of temperature measurements.

THEY ALL USE THE STANDARD DEVIATION OF THE SAMPLE MEANS AS THE ACCURACY OF THE MEAN! It means each and every one of them either just ignores measurement uncertainty or assumes it all just cancels. Whether they do so out of ignorance or intention, I can’t judge. I suspect ignorance is a major problem, because so few statistics textbooks cover how to handle measurement uncertainty. All the textbooks just assume that all stated values are 100% accurate! In the real world that simply does not cut it!

“None of you so-called statisticians seem to understand this at ALL!”

I’m not a statistician, “so-called” or otherwise. But your hubris keeps growing. You really need to consider that statisticians understand statistics better than you.

“The standard deviation of the sample means is *NOT* the measurement uncertainty of that calculated mean.”

I’m not saying it’s the “measurement uncertainty”; I’m saying it’s a type of uncertainty, and generally more important than that from measurement uncertainty.

“Population: ‘mean of the stated values’ +/- propagated measurement uncertainty”

I’m not sure how a population can have measurement uncertainty, unless you mean the population of all possible measurements. In that case I’d expect this to converge to the actual mean plus any systematic measurement error.

“Sample 1: ‘mean of the stated values’ +/- propagated measurement uncertainty
Sample 2: ‘mean of the stated values’ +/- propagated measurement uncertainty
Sample 3: ‘mean of the stated values’ +/- propagated measurement uncertainty”

As with Jim, I don’t know why you keep wanting to take multiple samples.

“The standard deviation of the sample means only gives you an interval in which the mean of the population stated values might lie.”

Which is what you want if you are talking about the uncertainty of your sample mean.

“The actual measurement uncertainty of the mean calculated from the sample means is the propagated measurement uncertainty of the sample means.”

If you want, but as I keep saying, this will generally be small compared with the actual uncertainty caused by the random sampling. (Of course, you will disagree, because you don’t understand how to propagate those measurement uncertainties in a mean properly.)

“The SEM is *NOT* the accuracy of the mean of the sample means. It is *NOT* the accuracy of the population mean.”

You need to read some of the articles Jim keeps posting. They disagree.

https://www.investopedia.com/ask/answers/042415/what-difference-between-standard-error-means-and-standard-deviation.asp

“I can’t find a single set of temperature data that includes proper measurement uncertainty with each element. I can’t find a single analysis of any of the measurement data that actually propagates and shows the measurement uncertainty from combining all those pieces of temperature measurements. THEY ALL USE THE STANDARD DEVIATION OF THE SAMPLE MEANS AS THE ACCURACY OF THE MEAN!”

Could you provide a link to one of these? Just using the SEM to estimate the uncertainty in a global anomaly calculation would be wrong. It’s not a random sample.

Here’s the description for HadCRUT5:

https://www.metoffice.gov.uk/hadobs/hadcrut5/

“It means each and every one of them either just ignores measurement uncertainty or assumes it all just cancels.”

It is considered, when it is of consequence, by bona fide statisticians doing the modelling. It is often minimal when averaging across a set of predictions, because measurement error occurs at the lowest sampling level and is additive to the sampling error at that level. Also, measurement error variance is most often a small component of the sum of the variances of these two independent errors, and if it’s not, find a better measuring instrument.

“However, correctly incorporating all of (i) binomial sampling errors, (ii) biological errors (i.e., overdispersion), and (iii) errors in variables is not possible using linear regression.”

Candy, SG 2002. Empirical Binomial Sampling Plans: Model Calibration and Testing Using Williams’ Method III for Generalized Linear Models With Overdispersion. Journal of Agricultural, Biological, and Environmental Statistics, Volume 7, Number 3, Pages 373–388. DOI: 10.1198/108571102302

See also: Carroll, R. J., Ruppert, D., and Stefanski, L. A. (1995), Measurement Error in Nonlinear Models, London: Chapman and Hall.

“Also, measurement error variance is most often a small component of the sum of the variances of these two independent errors and if its not find a better measuring instrument.”

But the measurement uncertainty is NOT a small component of the differences you are attempting to identify. Therefore you can’t just ignore the measurement uncertainty.

You are just throwing up word salad as an excuse.

If you do not explicitly incorporate measurement error in the error model for the response variable (eg temperature, see “Ln_Den_st” below) it is subsumed into the “units”-level error term. If you explicitly include measurement error variance as a known quantity by including it as a prior with almost zero variance then you get the same result in terms of parameter estimates with just more detail on units-level variance components. This is how I did it in R in the paper I quoted where an extra prior could be included for measurement error variance if this variance or an estimate were known a priori in the same way as “G3 = list(V =1.0, fix=1)” below (see Supplementary Material in https://www.researchgate.net/publication/357063946_Long-term_Trend_in_Mean_Density_of_Antarctic_Krill_Euphausia_superba_Uncertain)

prior1 <- list(G = list(G1 = list(V = diag(2), nu = 0.002),
                        G2 = list(V = 1, nu = 0.002),
                        G3 = list(V = 1.0, fix = 1)),
               R = list(V = diag(2), nu = 0.002))

m5d.1 <- MCMCglmm(Ln_Den_st ~ North60_f + t_SEASON.CENT + North60_f:t_SEASON.CENT,
                  random = ~ us(1 + I(t_SEASON.CENT)):t_cell_f + t_SEASON_f + idh(SE):units,
                  rcov = ~ idh(North60_f):units, data = dataSxC,
                  nitt = 130000, thin = 100, burnin = 30000,
                  prior = prior1, family = "gaussian", pr = TRUE, verbose = FALSE)

summary(m5d.1)

So ignoring measurement error in the response variable is not a big deal in terms of modelling, unlike measurement error in predictor variables, where it can introduce bias in regression parameters (see the references I gave earlier, e.g. my paper: DOI: 10.1198/108571102302). However, a relatively high level of measurement error in the response variable does not lead to good predictive accuracy for the underlying physical quantity that is implicitly being modelled.

You can label the above as a “word salad” if you like to retain your delusion that you know enough about statistical theory to be able to understand actual research-level applications.

I showed you two links from the National Institutes of Health (NIH) that discuss how scientists and the mathematicians that assist them DO NOT use appropriate statistics to describe their studies. Climate scientists and their mathematicians are no different.

Here they are again.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1255808/

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2959222/#

Each of these contain other references that discuss the statistical errors made in interpreting sample data.

That is exactly what I meant. If you understood the statistical error definition you would know that:

SEM = Standard Error of the sample Means = standard deviation of the sample-means distribution.

Samples versus populations is a pretty simple distinction. Temperature stations provide temperatures at distinct points and not the entire population of every point on the earth. Consequently, they are samples, not the entire population.

Worse, most historical temperatures are samples of a continuous function which don’t meet the Nyquist requirement for sampling.
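As a sketch of why a sparsely sampled daily record can mislead, here is an assumed, synthetic diurnal profile (the shape is made up for illustration) comparing the integrated daily mean with the (Tmin + Tmax)/2 convention:

```python
import numpy as np

# An assumed, skewed diurnal profile built from two harmonics (for illustration).
t = np.linspace(0.0, 24.0, 24 * 60, endpoint=False)   # one reading per minute
profile = (15.0
           + 5.0 * np.sin(2 * np.pi * (t - 9.0) / 24.0)
           + 2.0 * np.sin(4 * np.pi * (t - 6.0) / 24.0))

true_mean = profile.mean()                        # integral of the full 24 h profile
midrange = (profile.min() + profile.max()) / 2.0  # the (Tmin + Tmax) / 2 convention

print(true_mean, midrange)  # the midrange misses the integrated mean by over 1 C
```

For a pure sinusoid the two agree; any asymmetry in the profile opens a gap that two samples per day can never reveal.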

Here is an example. I perform an experiment 5 times using the same solution, the same pipet, the same beaker, the same heater, etc. I get 5 values.

2.0, 2.8, 3.1, 2.3, 2.5

The mean is 2.5 and the sample standard deviation is 0.4.

Now let’s discuss. Do you divide by (N-1) or by (N) to find the standard deviation? You need to decide: is this a population (N) or is it a sample (N-1)? Obviously, it can’t be the entire population, since many, many more experiments would need to be run to obtain anything like an entire population.

What is the mean? Is it the population mean or a sample mean? Since the data is a sample, the mean is of a sample. Therefore, the standard deviation of a sample mean is the SEM. You don’t divide by the √N since you don’t have the population standard deviation you have the sample mean standard deviation.

This is what the NIH documents and other references are trying to get across. When you have a sample mean, be it from one sample or many, the standard deviation of that sample IS THE SEM. You don’t divide by the number of samples to decrease the SEM even further.

Now we haven’t even addressed uncertainty. Each of the numbers in the sample should be:

2.0 ± 0.2, 2.8 ± 0.2, 3.1 ± 0.2, 2.3 ± 0.2, 2.5 ± 0.2

How do you think that affects the uncertainty of the mean? How does it affect the standard deviation?

“I showed you two links from the Nations Institute of Health (NIH) that discusses how scientists and the mathematicians that assist them DO NOT use appropriate statistics to describe their studies. Climate scientists and their mathematicians are no different.”

You keep spamming the same links, which don’t say what you think they do. The claim is that some papers use SEM when they should be using SD, or fail to indicate which they are using. I can’t comment on the accuracy of these claims because they don’t provide any examples. But it does not mean SEM is not a meaningful statistic.

From one of your articles:

Note that SEM is here identified with the uncertainty of the estimated mean – something you and Tim keep saying it isn’t.

“That is exactly what I meant.”

Good, then maybe we are making progress. But you keep writing in a confused style, and keep implying you mean the opposite, as you do later in your comment.

“Here is an example. I perform an experiment 5 times using the same solution, the same pipet, the same beaker, the same heater, etc. I get 5 values. 2.0, 2.8, 3.1, 2.3, 2.5”

You seem to be describing a sample of individual values here. You’ve repeated the experiment 5 times and got a distinct value each time.

“The mean is 2.5 and the sample standard deviation is 0.4.”

The mean is 2.54 and the sample standard deviation is 0.43, but if you want to prematurely truncate them, go ahead.

In answer to your other trivial questions:

You divide by (N – 1) = 4.

It’s a sample not a population.

The mean is 2.54.

It’s a sample mean.

“Therefore, the standard deviation of a sample mean is the SEM.”

Oh dear. You keep wanting to lecture everyone, but then you keep saying something like this.

“You don’t divide by the √N since you don’t have the population standard deviation you have the sample mean standard deviation.”

You still divide the sample standard deviation by √N to get the SEM estimate. The sample standard deviation is the best estimate of the population standard deviation. And what do you mean by “the sample mean standard deviation”? You don’t have a sample of means; you have a sample of values.

“This is what the NIH documents and other references are trying to get across.”

None of them say anything of the sort, and if they do, could you actually provide a quote? You keep failing to understand any of these documents.

“When you have a sample mean, be it from one sample or many, the standard deviation of that sample IS THE SEM.”

And this is where you are getting completely confused. There are two possibilities.

1) You have 1 sample, of a certain size. You know the mean of that sample, which is the best estimate of the population mean, and you know the sample standard deviation, which is an estimate of the population standard deviation. You estimate the SEM by dividing this SD by √N.

2) You have an infinite set of samples, all of the same size, and you calculate the standard deviation of the means of these samples, and we call this standard deviation the SEM. This is what’s meant by a sampling distribution. But in reality this is a concept, not something you would do except in simulations.

You seem to think that you can take a small sample of samples and work out the SEM from them. That’s possible, but makes little practical sense as you could just combine all the samples into one bigger sample and get a more accurate estimate of the mean.
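A short sketch of that comparison, with assumed sample sizes: the spread of a handful of sample means is itself a noisy quantity, while pooling all the values gives one larger sample and a directly smaller standard error.

```python
import numpy as np

rng = np.random.default_rng(7)

pop_sd, n, k = 10.0, 25, 4   # assumed: k small samples of size n each
samples = rng.normal(0.0, pop_sd, size=(k, n))

# Route 1: standard deviation of the k sample means
# (a noisy estimate of the SEM for samples of size n)
sd_of_means = samples.mean(axis=1).std(ddof=1)

# Route 2: pool everything into one sample of size k*n and compute its SEM
pooled = samples.ravel()
sem_pooled = pooled.std(ddof=1) / np.sqrt(k * n)

# The pooled mean uses all k*n values at once, so its standard error is about
# sqrt(k) times smaller than the size-n figure Route 1 is estimating.
print(sd_of_means, sem_pooled)
```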

But now you seem to think you can do that with just one sample, and somehow use the standard deviation of that one sample as if it were a sampling distribution. As I say, I think you are getting very confused. You should really read the articles you link to.

And note, here SD is the sample standard deviation.

“Each of the numbers in the sample should be: 2.0 ± 0.2, 2.8 ± 0.2, 3.1 ± 0.2, 2.3 ± 0.2, 2.5 ± 0.2”

I don’t think there’s any “should be”, but let’s try it your way.

“How do you think that affects the uncertainty of the mean? How does it affect the standard deviation?”

You don’t specify what your coverage factor is for the measurement uncertainty. I’ll assume it’s 2, so the standard uncertainty is 0.1.

Assuming independence, the measurement uncertainty propagated to the mean is 0.1 / √5, around 0.045.

The standard error of the mean for the stated values is 0.43 / √5, approximately 0.192.

Combining these two values we get √(0.192^2 + 0.045^2), about 0.197.
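Those numbers can be checked directly (the coverage factor of 2 is the assumption stated above; small differences arise because the thread rounded the SD to 0.43 before dividing):

```python
import math

values = [2.0, 2.8, 3.1, 2.3, 2.5]   # the five stated values from the example
n = len(values)

mean = sum(values) / n                                # 2.54
var = sum((v - mean) ** 2 for v in values) / (n - 1)  # sample variance (divide by N-1)
sd = math.sqrt(var)                                   # ~0.428

sem = sd / math.sqrt(n)                               # ~0.191 (0.192 above used the rounded SD)

u_single = 0.2 / 2                 # ±0.2 with coverage factor 2 -> standard uncertainty 0.1
u_meas = u_single / math.sqrt(n)   # ~0.045, assuming independent measurement errors

u_combined = math.sqrt(sem ** 2 + u_meas ** 2)        # ~0.196, versus the 0.197 above

print(mean, sd, sem, u_meas, u_combined)
```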

Part of the problem here is the lack of mathematical notation, which causes confusion between sample estimates and the population quantities being estimated. I was using SD to mean, as is standard, the usual sample estimate of the square root of the population variance σ^2.

“we divide the estimated standard deviation by the square root of the sample size: SEM = σ / √N” (equation typed by me)

The σ should have a “^” over it to show that it is an estimate. This is the problem with duelling statistical “theory” in this type of forum. BTW, I did not slander the “answers” section of your “investor” web page; I simply recommended textbooks or other validated references. I use Wikipedia myself and check descriptions there, which I have found to be accurate.

While numbers are interesting, since numbers represent data, numbers are not that important for leftist politics and climate change scaremongering: numbers are not required. Data are not required. (There are no data for the future climate, just beliefs.) All that is required is effective propaganda, coming from government bureaucrat “scientists” hired to make scary climate forecasts. Many Americans are victims of the Appeal to Authority logical fallacy: almost every leftist, and a few Republicans too. As a result, a majority of Americans believe in a coming climate crisis, as predicted by government bureaucrat “scientists” for the past 50 years. Climate scaremongering could have been done without numbers. Unpleasant weather events can be demonized without any numbers. … My “climate rap” continues here: Honest global warming chart Blog: Climate Rap: Does the belief in a coming climate crisis require any numbers? (elonionbloggle.blogspot.com)

Kip, the question that imposes itself after all this is whether democratic governments are really suited for governance in our modern world. The fact is that the United States, generally assumed to represent the cream of democracy, is proving completely unable to comprehend the truly scientific importance of your philosophy. It bases its policies on what it perceives as science, but which is actually only meaningless strings of numbers.

Looking back in history, old-fashioned governments never used to govern, to legislate, following “scientific viewpoints”. They always based their decisions, their enacted legislation, only on what they thought represented what was best and most beneficial for the citizens who had elected them to govern – science be damned.

Where did you ever get the idea that historical governments were so benevolent?

Please note, David Long, that I was referring exclusively to democratic governments, the whole idea of which is to deliver to their citizens what they need, crave, and are happiest with.

In world history, of course, democracy is an atypical form of government – but also the much more common authoritarian rulers (or ruling families) mostly tried to please their peoples. If they didn’t, people would soon rebel and somehow find another strongman to follow.

My question is whether democracies will survive the tyranny they are exerting on citizens at present. Will there soon be “a Caesar crossing the Rubicon” to give us the sort of governance we want?

“actually only meaningless strings of numbers.”

There are no real numbers for the future climate. No data. Just unproven theories that include a few wild-guessed “numbers” that few people even know. But government bureaucrat scientists “say so” (about a coming climate crisis), and that’s good enough for them.

The claim that the average temperature increased +2 degrees C since the late 1880s does not support the predictions of climate doom. The predictions are for a much faster rate of warming that has never happened, in spite of 50 years of such predictions.

Andy E ==> I think that is a question of sociology or PoliSci. Personally, I am a firm believer in “Government of the People, by the People, and for the People.”

I think you need to separate governments controlled by those that truly represent their constituents from those of career politicians who seem to represent only themselves and some desperate need to cling to power. Politician should never have been a career path; there should be a requirement for experience in other fields away from politics first.

Kip, neither you nor anybody in the comments mentioned the IPCC’s Global Warming Potential numbers. You know, CH4 is 86 times more powerful than CO2 at trapping heat, N2O is 265 times more powerful, and CClF3 is 13,900 times more powerful than CO2 at trapping heat.

And nowhere will you find an answer to how much those compounds will actually run up global temperatures over the next 100 years.

Now the scary part is if someone comes up with a compound so bad that a single escaped molecule would raise the whole planet’s temperature by about 25 degrees C or so!

(Probably it would flip the Earth’s magnetic field, triple the hurricanes, and cause all the volcanoes to go off as well).

You clearly have not seen the most recent AR6 (WG-I) report.

N2O has been bumped up to a GWP-20 of 273.

“Fossil” methane molecules (e.g. leaking from a pipeline) now have a GWP of 82.5.

“Non-fossil” CH4 (e.g. from melting permafrost), however, now has a GWP of 79.7.

Ain’t “(Climate) Science” wonderful ?

How does the atmosphere, or rather the up-welling IR, know the difference between “fossil” and “non-fossil methane”?

The only thing the IPCC says about that particular “minor detail” is a bald assertion (on page 1017). I would not presume to advance a conjecture of my own in this specific domain.

Steve ==> Yet another imaginary number: “Global Warming Potential”.

I think that part of the explanation may lie in the MSM promoting the reduction of such emissions to give the impression of actually doing something. Even methane, which COP26 made a priority, will have a negligible impact for realistic reductions since most is natural. The last thing that they want is for the public to realize that new laws and tax dollars will have almost no measurable impact, even with the high-end estimates of warming potential and possible reductions.

In theory (wishful thinking?), averaging temperatures should give you something meaningful, like averaging concentrations of a desired mineral in samples of an ore body. But you only get a good estimate, from the average, of the amount of that mineral you will retrieve after digging up the whole ore body if you take so many samples that you might as well keep going and dig up the rest.

Bespoke fitting of polynomials to far fewer ore samples can give you something worth averaging, which is what the mining industry does, successfully enough to get a useful estimate of the amount of mineral it will obtain. So when I criticized HadSST for being ridiculously dodgy, I was informed that they use kriging, as the mining industry has done successfully for decades, and was asked if I don’t believe in kriging. My reply was that I don’t believe in salami science: “Is kriging. Is good!”

“Concentration” is the name of a number of different intensive properties that are the quotient of an extensive property of one component divided by an extensive property of the whole sample. In the case of mining, it is the mass of the mineral over the mass of the sample of ore. Those who have done chemistry will be familiar with amount, n (the proper name of the extensive property measured in moles), of a solute over volume of the solution, as well as a number of other intensive properties under the banner of “concentration”.

It’s assumed that the same can be done with temperature, treating it as a simple intensive property, the quotient of the heat energy put into the sample divided by the heat capacity of the sample, except that the basic definition of heat capacity, C = ΔQ/ΔT (in the limit of small ΔT), comes with a caveat (https://en.wikipedia.org/wiki/Heat_capacity#Basic_definition):

“The value of this parameter usually varies considerably depending on the starting temperature T of the object and the pressure P applied to it. In particular, it typically varies dramatically with phase transitions such as melting or vaporization (see enthalpy of fusion and enthalpy of vaporization).”

And it’s a property of the probe at a particular time, influenced by its immediate surroundings. It’s not a property of a 10 km × 10 km × 10 km volume of the atmosphere. The average of the minimum and maximum temperatures is definitely not an intensive property like concentration. Each measurement is influenced by a different sample of the atmosphere as the weather system goes through: convection, heat going in and out, and changes in heat capacity as water evaporates or condenses.

To cut to the chase, kriging is far from a straightforward method for making a collection of measurements mean something when you average the values derived from them, and even then those measurements are simple intensive properties from carefully planned sampling, something the temperature record isn’t.

Kriging might give an average that is within 1% of the tonnage of the mineral retrieved for each tonne of ore dug up, and the miners would be stoked with the analysis. One percent of 300 K is 3°C, or three times the warming in the past century. Even 1% of 30°C, the spread of sea-surface temperatures, is 0.3°C. This is about how much of the warming since 1950 is consistent with extra warming above the natural variation that started before 1950, according to the GTA that I’m claiming is worthless.

AND then you need to consider how much the temperature record is not suitable for such analysis.

It’s mind boggling that governments accept it as evidence to destroy their constituents’ standard of living.

Robert ==> Good for mining engineers…not good for temperature. Temperature, as measured and used in CliSci, is NOT a measure of heat energy — that’s the rub.

From:

http://geofaculty.uwyo.edu/yzhang/files/Geosta1.pdf

“As stressed by Journel, “that there are no accepted universal algorithm for determining a variogram/covariance model, that cross-validation is no guarantee that an estimation procedure will produce good estimates at unsampled locations, that kriging needs not be the most appropriate estimation method, and that the most consequential decisions of any geostatistical study are made early in the exploratory data analysis”.”

Kriging, although complicated, is no guarantee that there are “good estimates at unsampled locations”.
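Journel’s caveat, that the variogram is a modeling choice rather than a given, shows up even in a toy 1-D ordinary-kriging sketch. Everything below (the sample points, the two exponential variogram models, their ranges) is invented purely for illustration, not taken from any real temperature analysis:

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def krige(xs, zs, x0, gamma):
    """Ordinary kriging estimate at x0 from 1-D samples (xs, zs)."""
    n = len(xs)
    # Kriging system: variogram matrix bordered by the unbiasedness constraint.
    A = [[gamma(abs(xs[i] - xs[j])) for j in range(n)] + [1.0] for i in range(n)]
    A.append([1.0] * n + [0.0])
    b = [gamma(abs(x - x0)) for x in xs] + [1.0]
    w = solve(A, b)
    return sum(w[i] * zs[i] for i in range(n))

# Invented sample data: four measurements along a line.
xs = [0.0, 1.0, 3.0, 4.0]
zs = [1.0, 2.0, 4.0, 3.0]

# Two plausible exponential variogram models, differing only in range:
short_range = lambda h: 1.0 - math.exp(-h / 1.0)
long_range = lambda h: 1.0 - math.exp(-h / 3.0)

# Same data, same location, different model -> different "best" estimate.
est_short = krige(xs, zs, 1.5, short_range)
est_long = krige(xs, zs, 1.5, long_range)
print(est_short, est_long)
```

Both answers are internally consistent; the data alone cannot tell you which variogram model, and hence which estimate, is right. That is the “most consequential decisions are made early” problem in miniature.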

Quote

“I am well aware of the many problems of scientific modern research, including that all fields of science are returning a lot of questionable results – even in fields closely monitored like medicine. It is my belief that much of this is functionalized by “too much math, not enough thinking” or the reification of mathematical and statistical results”.

I think you are being too generous there, Kip. My view is increasingly coming round to the idea that too many scientific studies depend on returning the answer the funding body needs them to return, the inducement being that there will be more funding if you do the ‘right’ thing.

I may be becoming cynical as well as sceptical in my old age….. 🙂

Rod ==> Read Judith Curry on the biases in science — funding is one of those, but not the only, and I don’t believe it is the major, biasing factor.

Medicine is not a falsifiable science but an art and a technology, depending on statistical verification for validation. Falsifiability is the demarcation boundary of science from nonsense. Beware the Black Swan. Eschew ad-hockery. Medicine is a corrupt mountain of hockery. Healers mine this mountain for nuggets of advice.

False claims of knowledge – lies – as subjective naive priors cannot reduce the entropic space to truth.

Kip, you are striking on some critical points. Great Part 1!

My former graduate advisor and scientific mentor is a brilliant, world-class scientist and mathematician. He is also eminently practical and pragmatic. A former farmer, a soil physicist, and a micro-meteorologist, even at age 80+ he still runs circles around corporate defense attorneys as an expert witness in environmental litigation. His passions are many, from getting drunk drivers off of the road to refurbishing old housing with his own hands to benefit elders in need.

He taught me early on that you have to check your fine-tuned calculations and statistics against your own “back of the envelope” scaling and your own reasoning. Do your results make sense in the real world?

We can all err, but so-called climate “experts” cannot dismissively blow off skeptics as not scientists, or as scientists/engineers of the wrong kind. Many lay people, including many who frequent WUWT, have enough sense of the world and its processes to call “foul” when they read or hear the nonsense that often comes out of academia, and even more so from politically motivated government officials and activist NGOs.

Pflash ==> “….you have to check your fine-tuned calculations and statistics against your own “back of the envelope” scaling and your own reasoning. Do your results make sense in the real world?”

That is The Pragmatist Approach — and was the foundational principle of practical scientists for generations.

Like your slant here! I grew up in a town with a couple of universities. One had a highly accredited Engineering section, and we were often pitted against the “Pure Math” section. Of course there were Humanities too, but both real sections considered the Humanities full of fruit cups and nut bars.

The Pure Maths people derided the Engineering folks as alcoholics and jocks, and we derided them simply as “Matholes”….

I stand to this day as considering those who view maths as God, being “matholes”, and devoid of a proper sense of reality.

Yes, math is a tool. Yes, it is essential and useful for navigating reality. But we must never lose sight of the fact that it is a symbolic representation of reality, not actual reality!

And engineering has consequences, so you can’t lie and get away with it for long. But math people can lie with numbers, and their pontifications are often devoid of consequences.

D Boss ==> Engineers tend to be pragmatists — where theory gets tested against the real world.

And engineers may lose their license to practice if a public engineering project fails. I don’t know of any similar motivation hanging over the head of mathematicians.

Reminds me of a joke:

An engineer, a physicist, and a mathematician are locked up in a jail. The guard comes around and gives them lunch in the form of an unopened can of soup; to eat, they have to figure out how to get it open with nothing but their wits.

The engineer bashes the crap out of the can and it opens almost immediately.

The physicist takes a bit longer to make some calculations but gets his can open too.

The mathematician is sitting in the corner with the closed can in front of him saying over and over “Assume the can is open. Assume the can is open….”

^100

I always believed the adage that “if you interrogate the numbers for long enough, you can get them to admit to anything.”

Odds ==> That’s what my instructors in Intelligence training taught, too, about people.

Thank you! Thank you! For at least two decades I have been claiming that the “numbers” representing “Global Average Surface Temperature” have the same utility as the “number” representing the “average phone number” in a phone book.
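The phone-book analogy is worth spelling out, because the arithmetic itself never complains (the directory entries below are made up):

```python
# Made-up directory entries: the average is computable but meaningless.
phone_numbers = [5550142, 5550178, 5550199]
avg = sum(phone_numbers) / len(phone_numbers)
print(avg)  # -> 5550173.0, a valid-looking number that connects you to no one
```

The computation is flawless; it is the question “what is the average phone number?” that has no meaning, which is exactly the charge leveled at a “Global Average Surface Temperature”.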

Here are some numbers: the UK’s highest temperature record, set only 3 years before, was beaten in 43 places in one day, with 5 of those places recording a temperature above 40°C.

That’s weather for you. It is brilliant at generating records: record hot, record cold, record highs, and record lows. Life would be monotonous without weather, that’s for certain…

So? I fail to see the significance of a hot building, runway, or other structure on climatic trends. Enlighten me.

Earth has been warming for 12,000 years. A reasonable extrapolation would be that it will continue to warm. Be worried, very worried, if it starts to cool rapidly.