Guest Essay by Kip Hansen – 30 July 2022
One cannot average temperatures.
Let’s start with this simple statement. It is true, but it comes with some common-sense caveats.
Important Note: The purpose of this essay is not to refute the basic underlying concepts of “global warming” or “climate change”. Those concepts and their supporting data are an entirely different topic. This essay is about a scientific point: One cannot average temperature. This fact may affect your understanding of some of the supporting points of Climate Science.

Let’s say you run a web site for corporations interested in having conventions in Topeka, Kansas in August and you’d like to inform attendees what kind of weather, in terms of temperature, they should expect, so that they can pack clothes suitable for the trip. A chart like this is perfectly appropriate. It shows the average of historical high and low temps for each day of the month and appropriately shows this as a range and not just a number. It provides a common-sense answer to the corporate question: “What’s the weather like in August in Topeka?” Answer: Hot days and pleasant warm nights. So, speeches and presentations inside the air-conditioned auditorium during the day and in the evening, the Tiki Bar Luau around the hotel pool is definitely on!
In this case, they have not really attempted to “average temperatures” — they have simply averaged the recorded numbers to find an expected range of historical highs and lows. They don’t claim this is a real temperature that could be measured; they acknowledge that it is a rather vague but useful range of expectable daily highs and lows.
This acceptable and reasonable approach is far different from taking the high temperatures of San Diego, Los Angeles, Mojave and Palm Springs, adding them up, dividing by four, and pronouncing that you have produced the temperature average of the SW California Desert. You may have an absolutely correct — precise to many decimal places — mathematical mean of the numbers used, but you will not have produced anything like a numerical temperature or a physically meaningful result. Whatever numerical mean you have found will not represent the physical reality of “temperature” anywhere, much less that of the region of interest.
“But, but, but, but” ….. no buts!
One cannot average temperature
Why not? Temperature is just another number, isn’t it?
Temperature is not just another number – a temperature is a count, a measurement, expressed in one of the various arbitrary units of temperature.
temperature, measure of hotness or coldness expressed in terms of any of several arbitrary scales and indicating the direction in which heat energy will spontaneously flow—i.e., from a hotter body (one at a higher temperature) to a colder body (one at a lower temperature). Temperature is not the equivalent of the energy of a thermodynamic system. [ source ]
So, we can say that objects with temperatures with higher numbers, regardless of which scale one is using (°F, °C, K), are “more hot” and objects with temperatures with lower numbers (using the same scale) are “less hot” or “more cold”….and we can expect that heat energy will flow from the “hotter” to the “colder”.
Multiplying temperatures as numbers can be done, but gives nonsensical results, partially because temperatures are in arbitrary units of different sizes but most importantly because temperatures do not represent the heat energy of the object measured but rather relative “hotness” and “coldness”. “Twice as hot” in Fahrenheit, say twice as hot as 32°F (the freezing temperature of water), is 64°F – obviously warmer/hotter but only nonsensically “twice as hot”. In Celsius degrees, we’d have to say 1°C (we can’t double zero) and we’d have 2°C, or 35.6°F (far different from the 64°F above). Yes, that is because the unit sizes themselves are different. However, if we wanted to know how much “heat” we are talking about, neither degrees Fahrenheit nor degrees Celsius would tell us….temperature is not a measure of heat content or of heat energy.
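The scale-dependence of “twice as hot” is easy to verify for yourself. Here is a minimal Python sketch; the conversion formulas are the standard textbook ones, and the values are purely illustrative:

```python
# Sketch: "doubling" a temperature depends entirely on the scale used.
# Illustrative values only -- the point is that the ratios disagree.

def f_to_c(f):
    """Fahrenheit to Celsius."""
    return (f - 32.0) * 5.0 / 9.0

def c_to_f(c):
    """Celsius to Fahrenheit."""
    return c * 9.0 / 5.0 + 32.0

doubled_f = 2 * 32.0        # "twice as hot" as 32 deg F -> 64 deg F
doubled_c = 2 * 1.0         # "twice as hot" as 1 deg C  -> 2 deg C
print(c_to_f(doubled_c))    # 35.6 deg F, nothing like 64 deg F

# Only an absolute scale (kelvin) makes a ratio physically comparable:
k = f_to_c(32.0) + 273.15   # 32 deg F = 273.15 K
print(2 * k)                # 546.3 K, roughly 524 deg F -- "twice" is scale-bound
```

Doubling the same starting temperature gives three entirely different answers on three scales, which is the whole point.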
A cubic meter of air at normalized sea level air pressure (about 1,013.25 millibars) and 60% humidity at a measured temperature of 70°F contains far less heat energy than a cubic meter of sea water at the same temperature and altitude. A one cubic meter block of stainless steel at 70°F contains even more heat energy. The relative hotness or coldness of a body of matter can be expressed as its temperature, but the amount of heat energy in that body of matter is not expressed by giving its temperature.
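The disparity between air and seawater can be made concrete with volumetric heat capacity. The densities and specific heats below are rough handbook values I have supplied for illustration; they are not figures from the essay:

```python
# Rough sketch of volumetric heat capacity (J per cubic metre per kelvin),
# i.e. how much heat energy it takes to warm one cubic metre by 1 K.
# Densities and specific heats are approximate handbook values.

rho_air      = 1.20      # kg/m^3, near sea level
c_air        = 1005.0    # J/(kg K), dry air at constant pressure
rho_seawater = 1025.0    # kg/m^3
c_seawater   = 3990.0    # J/(kg K)

q_air = rho_air * c_air              # ~1.2e3 J/K per cubic metre
q_sea = rho_seawater * c_seawater    # ~4.1e6 J/K per cubic metre

print(q_sea / q_air)  # seawater holds roughly 3,000x the heat per kelvin
```

Same temperature, same volume, vastly different heat content: the thermometer reading alone tells you none of this.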

How is heat expressed, or quantified, in science? The units of heat energy are calories, joules, and BTUs. [ source ] We see that none of the units of heat are units of temperature (°F, °C, K). (Note: If thermodynamics were easy, I wouldn’t have had to write this essay.)
Temperature is a property of matter – and temperature is specifically an Intensive Property.

Extensive properties can be added together – Volume: Adding 1 cubic meter of topsoil to one new cubic meter of topsoil equals two cubic meters of topsoil and fills twice the volume of the raised-bed garden in your yard. Length: Adding one mile of roadway to one mile of existing roadway gives two miles of roadway.
But for Intensive Properties, this does not work. Hardness is an Intensive Property. One cannot add the numerical Mohs scale hardness of apatite, which has a value of 5, to the numerical Mohs scale hardness of diamond, which has a value of 10, and get any meaningful answer at all – certainly not 15 and likewise, not “5 plus 10 divided by 2 equals 7.5”.
Color is an Intensive Property. Color has two measures, wavelength/frequency and intensity. Most of us can easily discern the color of matter – our eyes tell our brains the generalized wavelength of the light reflecting off or emanating from an object, which we translate to a color name. Scientifically, the wavelength (or mixed wavelengths) of the reflected or emanated light can be measured as frequencies (in terahertz, 10¹² Hz) and wavelengths (in nanometers). Colors cannot be added as numbers. In colored light, adding the three primary colors evenly results in “white” light. In pigments, adding the three primary colors results in “black”, and other combinations, such as magenta and yellow, give surprising results.

Similarly, temperature, an Intensive Property, cannot be added.
“Intensive variables, by contrast, are independent of system size and represent a quality of the system: temperature, pressure, chemical potential, etc. In this case, combining two systems will not yield an overall intensive quantity equal to the sum of its components. For example, two identical subsystems do not have a total temperature or pressure twice those of its components. A sum over intensive variables carries no physical meaning. Dividing meaningless totals by the number of components cannot reverse this outcome. In special circumstances averaging might approximate the equilibrium temperature after mixing, but this is irrelevant to the analysis of an out-of-equilibrium case like the Earth’s climate.” [ source: Does a Global Temperature Exist? By Christopher Essex, Ross McKitrick and Bjarne Andresen ( .pdf ) ]
That is a wonderful, but dense, explanation. Let’s look at the salient points individually:
1. Temperature, an intensive property, is independent of system size and represents a quality of the system.
2. Combining two systems (such as the temperatures of two different cubic meters of atmosphere surrounding two Stevenson Screens or two MMTS units) will not yield an overall intensive quantity equal to the sum of its components.
3. A sum over intensive variables carries no physical meaning – adding the numerical values of two intensive variables, such as temperature, has no physical meaning, it is nonsensical.
4. Dividing meaningless totals by the number of components – in other words, averaging or finding the mean — cannot reverse this outcome, the average or mean is still meaningless.
5. Surface Air Temperatures (2-meters above the surface) are all spot temperature measurements inside a mass of air that is not at equilibrium regarding temperature, pressure, humidity, or heat content with its surroundings at all scales.
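Point 2 above can be illustrated numerically. A minimal sketch, assuming two idealized bodies with textbook specific heats for water and steel (values I supply for illustration): even in the special case where two bodies actually equilibrate, the final temperature is the heat-capacity-weighted mean, not the simple average of the two readings.

```python
# Minimal sketch (illustrative values): when two bodies with different heat
# capacities reach thermal equilibrium, the final temperature is the
# heat-capacity-weighted mean -- not the simple average of the two readings.

def equilibrium_temp(m1, c1, t1, m2, c2, t2):
    """Equilibrium temperature of two bodies exchanging heat, no losses."""
    return (m1 * c1 * t1 + m2 * c2 * t2) / (m1 * c1 + m2 * c2)

# 1 kg of water at 10 C meets 1 kg of steel at 50 C
t_eq  = equilibrium_temp(1.0, 4186.0, 10.0, 1.0, 500.0, 50.0)
naive = (10.0 + 50.0) / 2.0

print(round(t_eq, 1))   # ~14.3 C, nowhere near the
print(naive)            # 30.0 C "average" of the two temperatures
```

And this is the favorable case; spot readings in a turbulent, out-of-equilibrium atmosphere never equilibrate with each other at all.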

We can see that even at very small scales – the few meters surrounding the MMTS sensor at the Glenns Ferry weather station in Idaho – the air temperature system is far from equilibrium: air over a hot transformer, frozen bare grasses, snow patches and brush, each absorbing heat energy from the sun and each with differing heat content. All these smaller sub-systems are actively shedding heat to, or absorbing heat energy from, the unequal systems around them. In a practical sense, if you were standing next to the sensor, you would know it was “cold” there, the air at the sensor being well below freezing – but in a pinch, you might be able to cuddle up to the transformer and feel warmer sharing its heat. It is not, however, scientifically possible to “average” the air temperatures even inside the two-meters-on-a-side cube of air around the sensor.
One cannot average temperature.
# # # # #
Author’s Comment:
I am under no illusion that this essay will be widely accepted by all who read here. It is, however, scientifically and physically correct, and it might shatter a lot of firmly held beliefs.
I will be writing a follow-up, Part 3, covering the excuses used in CliSci for pretending that they can validly average temperatures – including the lame excuses: “We don’t average temperatures, we average anomalies”; “We don’t just find means, we find weighted means”; “We don’t average, we krig”; “We don’t make data up, we ‘use numbers from the nearest available stations, as long as they are within 1,200 kilometers’ [750 miles].” (Note: This is the approximate distance from Philadelphia to Chicago or London to Marseille, which, as we all know, do not share common climates, much less air temperatures); and many more. In all cases, temperatures are inappropriately averaged, resulting in meaningless numbers.
One can, however, average and work with heat content, which is an extensive property of matter. It is the heat content of the “coupled non-linear chaotic system” that is Earth’s climate that Climate Science is concerned with when it insists that increasing atmospheric CO2 concentrations are trapping more heat in the Earth system. But CliSci does not measure the heat content of the system; instead it insists on substituting the meaningless numbers various groups label as Global Average Surface Temperature.
Please feel free to state your opinions in the comments – I will not be arguing the point – it is just too basic and true to bother arguing about. I will try to clarify if you ask specific questions. If speaking to me, start your comment with something like “Kip, I wonder….”
Thanks for reading.
# # # # #
Sorry Kip, I could not even read this. The phrase “not even wrong” came to mind.
Tom.1 ==> Your loss. Even if you violently disagree with another’s viewpoint or understanding, it is important to read widely. You might even read the paper referenced near the end of the essay:
Does a Global Temperature Exist? (degruyter.com)
Not even wrong?
There’s nothing wrong except the subject matter is not very important and should not have needed a three-part article. The link Hansen mentioned (below) is excellent. I read it about 15 years ago.
An adjunct of not adding temperatures is that you cannot add powers (W/m2).
After slamming Nick-the-Stroker repeatedly in the Weather Station Siting article, for blindly defending all things “government”, I find myself disappointed today. And I will risk a new thumbs down personal vote record by saying Nick’s comments here make more sense than the article, probably for the first time at this website. The purpose of the article remains puzzling.
Hansen, the author of many very good articles here, and Stroker, the defender of all things “government”, should have been a predictable battle. But Hansen decided to write about a nothingburger, and Stroker provided reasonable comments. This is like some bizarro world this morning — I’m going back to sleep !
There are plenty of issues with a global average temperature
— Accuracy — is it +/- 0.5 degrees C or +/- 0.1 degree C, or do we even know the likely margin of error?
— No one lives in a global average temperature
— One average hides temperature changes by latitude, land versus ocean, month of the year, and day versus night. That’s a lot of hidden detail.
— No one knows what a normal average is. And it’s debatable if a +1 or +2 degree C. increase from today would be good news, or bad news. Or would that imagined increase of +1 or +2 degrees C. be better, or worse, than a -1 or -2 degree decline from today.
The global warming that affected SE Michigan where I have lived since 1977 has been wonderful and we hope for a lot more warming. Our winters are not as cold as they were in the 1970s. Much less snow in 2022 than in the prior 44 winters. Please give us more of that. I didn’t need a global average to tell me that. Nor a scientist or a computer game. Living in the same home since 1987, and four miles south for ten years before that, helped a lot in our detection of climate change — this was not a big change.
Richard ==> If only I had been writing about Global Average Temperature…..but as I made clear from the very start, I was writing about the impossibility of averaging the numerical values of Intensive Properties like temperature.
Scientific mass-turbation.
Global average temperature is a statistic, not a measurement.
We already knew that.
We have a decent UAH temperature statistic since 1979
Has that been useful in any way?
I say yes
Apparently, you say no.
That makes you wrong
Richard ==> Opinions vary….
Are you running for political office?
Richard, tracking the GAT has no predictive value about future climate or weather. The proof is that all of the predictions for the last 30 years or more have been wrong.
It is like coming up with a program that tracks winning lottery numbers.
In what way has this effort been useful? Be specific please.
In many cases trends only tell you where we have been but have no value in predicting where we are going.
UAH has another graph that shows this very well. When you look at it, you can tell that temps go up and temps go down. Flip a coin to tell where we are going in the future.
Without averaging, climate science would have nothing.
Not true
Always wrong wild guess predictions of a coming climate crisis do not require any averages of past temperatures. The predicted future warming rates are not even based on past warming rates — they are 2x to 3x higher than the cherrypicked 1975 to 2022 period. Global average temperatures and climate computer games sound “scientific” but scary climate predictions could be made without them.
The models are merely fancy linear projections of atmospheric CO2 content, entirely the assumptions of the operators.
That are required to keep their jobs !
Averaging is most useful in small, local areas. The larger you go, the less useful it is. If you were to take the averaged temperature of every planet and moon in the solar system, then average them together for a planetary-system average temperature, you can see how meaningless it becomes. Climate by definition is local.
Yep.
If you assume the temperature profile at least approximates a sine wave then the temperature at two different stations is sin(a) and sin(a + phi). The correlation between the two is cos(phi).
phi itself is a function: phi(distance, elevation, pressure, humidity, wind, precipitation, terrain, geography, etc)
As distance increases the correlation goes down. As elevation differences grow the correlation goes down. As humidity differences grow the correlation goes down. And on and on and on.
Trying to combine temperatures whose correlation is very low into an average is a losing proposition from the word go. Not even trying to weight each temp to a normalized value is going to help much because of the time dependence of each of the factors – humidity changes minute to minute, pressure changes at least hour to hour, wind changes minute to minute, etc.
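The cos(phi) claim above is straightforward to check numerically, under the stated idealization of pure sinusoids sampled over a full period (the sketch assumes nothing beyond that):

```python
# Numeric check of the comment's claim, assuming ideal sinusoids sampled
# evenly over a full period: corr(sin(a), sin(a + phi)) equals cos(phi).
import math

def correlation(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

n, phi = 10000, math.pi / 3            # 60-degree phase lag
a = [2 * math.pi * i / n for i in range(n)]
x = [math.sin(t) for t in a]
y = [math.sin(t + phi) for t in a]

print(correlation(x, y))   # ~0.5, i.e. cos(pi/3)
```

Real station records are of course not pure sinusoids, which only strengthens the point: the actual correlation decays even faster with the factors listed above.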
The cosmic microwave background is about as big a thing as I can think of. Except for a few contrarians on WUWT, most people have found profound meaning and utility in its average temperature of 2.7 K.
Once again, you are mentioning an intensive property derived from an extensive property – electromagnetic radiation and known physical constants.
And that radiation is supposed to be emanating from the smallest local area you can think of at the beginning of the Big Bang!
Temperature is an intensive property. 2.7 K is the average temperature of the CMB. Because that 2.7 K is an average of an intensive property then per the thesis of this article it is meaningless. Do you think the CMB temperature is meaningless?
Kip, thank you for this post. This is important and thought-provoking. Here is one of those thoughts.
How does one infer temperature from space? Detection of IR emission and computation from empirically established mathematical relationships. One such known relationship is that radiated energy is proportional to absolute temperature to the 4th power.
From space, outgoing visible and IR radiation is detected by sensors aboard the geostationary satellites. Every pixel in the resulting visualizations – in relatively high resolution – is a symbolic representation of the radiance value detected from that direction.
For the GOES East satellite, here is a link to the Band 16 – the “CO2” band centered at a wavelength of 13.3 microns – animation of 12 images over a two-hour period. The “brightness temperature” color scale used for these visualizations is such that the radiance at 50C (red) is 13 times the radiance at -90C (white).
https://www.star.nesdis.noaa.gov/GOES/fulldisk_band.php?sat=G16&band=16&length=12
So here is my point: Just as you state, “One cannot average temperatures”, it can similarly be stated that one cannot infer an overall average heat-trapping effect from GHGs when it is obviously not a “trap” but a huge array of highly variable emitter/reflector elements.
For the lower troposphere (LT) satellite measurements, the microwave sounding units (MSU) record a convolution of the temperature-dependent O2 microwave radiation with the exponentially decreasing air temperature up to an altitude of about 10km. This number is then transformed into a single “temperature”, but it isn’t anything close to an average of the LT temperature.
David ==> It would be possible to detect a change in the heat content of the Earth’s climate system. It just is not and cannot be done using any of the GAST models.
Since there are many things between the sensor and the IR radiating media that can affect the amount of IR reaching the sensor there will always be some uncertainty associated with the value. Why is that uncertainty never factored into the results?
I should clarify that the point of my comment was not about the uncertainty of the reported radiance values, nor of the calculated brightness temperatures used for the visualizations. It was to point out from space-based observed data that the operation of the atmosphere as an infrared “trap” due to GHGs is an incomplete and misleading description. The high resolution visualizations show that the motion changes everything about where to expect the energy to end up. Therefore, in my view, it is just as meaningless physically to talk about an average “heat-trapping” effect of GHGs (i.e. a “forcing”) as it is to average an intensive property like temperature attempting to characterize the energy state of the planet.
David ==> If you’d like to write an essay or opinion piece for publication here, let me know. My email is my first name at i4.net
Nice to see someone else understands that you can’t average intensive entities.
Now, to tackle the difference between counts and measurements. Counts are definite numbers, and can be averaged properly. Measurements are not precise, and averaging them must take into consideration the LEAST precise measurement in the group.
Thus, if you average these measured temperatures (32, 55.1, 78.25 and 100.001) the average can ONLY be 66. Not 66.33775. This would be true of any measurements – volts, amps, wind strength, water flow, etc. You will CONSTANTLY see climate reports where they identify two-decimal place averages. Not possible, unless every temperature in the series is also two decimal places. (It never is.)
John ==> “Counting” of course is the huge umbrella term under which measurement falls.
CliSci not only gives numerical values to far greater precision than they were measured (usually as a result of the division involved in averaging) but they average properties (such as temperature) that may not be averaged to a meaningful result.
John Shotsky said: “Measurements are not precise, and averaging them must take into consideration the LEAST precise measurement in the group.
Thus, if you average these measured temperatures (32, 55.1, 78.25 and 100.001) the average can ONLY be 66. Not 66.33775.”
That’s not technically correct. Refer to the Guide to the Expression of Uncertainty in Measurement, An Introduction to Error Analysis by Taylor, and Data Reduction and Error Analysis by Bevington.
The correct answer using GUM equation 10 is:
sqrt[(1/4)^2 * ((0.5/√3)^2 + (0.05/√3)^2 + (0.005/√3)^2 + (0.0005/√3)^2)]
thus…
avg(32, 55.1, 78.25 and 100.001) = 66.34 ± 0.07
…using the significant figures rules described in Taylor above.
You can confirm this with the NIST Uncertainty Machine using (x0+x1+x2+x3)/4 where x0…x3 are rectangular distributions with left and right endpoints defined by the specific precision. For example, 32 would be 31.5 to 32.5.
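For readers who want to reproduce the arithmetic, here is a short sketch of the calculation described above — each reading’s last digit taken as the half-width of a rectangular distribution, combined through the mean per GUM equation 10. It reproduces the stated 66.34 ± 0.07; it does not adjudicate the significant-figures dispute:

```python
# Reproducing the comment's arithmetic: each reading is a rectangular
# distribution whose half-width is half a unit of its last digit; the
# standard uncertainty of each is a/sqrt(3); combine through the mean.
import math

readings    = [32.0, 55.1, 78.25, 100.001]
half_widths = [0.5, 0.05, 0.005, 0.0005]   # half of one unit in the last digit

n = len(readings)
mean = sum(readings) / n

# each reading enters the mean with sensitivity coefficient 1/n
u = math.sqrt(sum((a / math.sqrt(3)) ** 2 for a in half_widths)) / n

print(round(mean, 2), "+/-", round(u, 2))   # 66.34 +/- 0.07
```

Note how the combined uncertainty is dominated by the coarsest reading (the ±0.5 on “32”), exactly as intuition suggests.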
How many times have you been told that precision is not accuracy. Accuracy is defined by uncertainty not by resolution. A very precise measurement can be very inaccurate and therefore have a large uncertainty.
If you would actually study Taylor’s tome instead of trying to cherry pick things you would find on Page 15, Section 2.2, Rule 2.5:
“Experimental uncertainties should almost always be rounded to one significant figure.”
Rule 2.9: “The last significant figure in any stated answer should usually be of the same order of magnitude (in the same decimal position) as the uncertainty”
Since no uncertainties are included with the measurements it is necessary to fall back to the significant digit rules:
In our case the number of significant digits to the right of the decimal point in 32 is ZERO! The answer should, therefore have no digits to the right of the decimal point. Thus John is correct – the answer is 66! No uncertainty can be appended since no uncertainty is included with the measurement!
PLEASE WRITE THIS AS MANY TIMES AS NEEDED TO GET INTO YOUR LONG TERM MEMORY: “Precision is not accuracy!”
It’s why the standard deviation of sample means is *NOT* an uncertainty for the mean. The uncertainty of the individual elements must be propagated onto the mean in order to determine its uncertainty. You cannot calculate uncertainty from the stated values, you can only propagate given uncertainty.
He will never grok this.
Not this nonsense, again.
Technically correct, but in application wrong. Since the composition of air is relatively constant, temperature is proportional to heat. Sure, humidity changes, but it is relatively constant, e.g. it is dry in winter. Over a timespan of decades, if you get a rising temperature trend, say the 1.1C/century rise in UAH, it is meaningful.
James ==> This essay is about the error of averaging temperatures which are Intensive Properties.
The density of air is not constant under changing air pressure. The heat capacity of air changes with humidity — which ranges from 50 to 95%, even across relatively small distances.
A substantial increase in the heat content of the climate system would be meaningful, but it cannot be proved by averaging temperatures from disparate locations and times and conditions. Just can’t.
Why do you say the composition of air is relatively constant and then say the humidity changes? When humidity changes the composition of the air has changed which changes its buoyancy and thus its convection, conduction properties – all of which affect the temperature that mass of air you are measuring will exhibit.
Unless the humidity is also changing.
Kip==>
I agree with you Kip, that averaging temperatures may not provide meaningful results, but not exactly for your stated reason.* I made the point some 30 years ago in the first public talk I ever delivered about what at that time was called “global warming”. My reasoning is that the measurement of temperature produces a conditional quantity — it depends on a huge number of other factors almost none of which are ever stated, measured or quantified. In fact, in engineering classes I try to stress the point that a measured number is not useful unless one provides at the same time an estimate of its uncertainty. Almost never does a person stating a temperature do this. In fact, in a metrology class I taught at WSU I found that the engineering students would often not even determine if an instrument was calibrated properly before using it.
With regard to Briggs’s statement about time series I also point out in that same talk that time series are very difficult to deal with because measurements of temperature are not a sufficient sample to quantify even the mean (smoothed value) of a time series. They only become so when conditioned with the additional assumption that the time series is stationary. It is possible I think that Earth temperature is not stationary and does not conform to the concept of a central limit.
Well, that is my view.
*-In thermodynamics we can use intensive quantities such as enthalpy per unit mass, and adding them (or even multiplying them) can produce a perfectly valid result. I.e. in order to analyze output of a steam turbine we can use a state diagram based on per mass entities and then multiply by some factor to find a mass flow rate needed to produce a design power capability.
Kevin ==> It is, in fact, possible to discover the value of an Extensive Property from formulas which involve Intensive Properties.
“The ratio between two extensive properties is an intensive property. For example, mass and volume are extensive properties, but their ratio (density) is an intensive property of matter.”
Obviously, the formula works in several ways. Energy or heat content (enthalpy) is an extensive property, mass is an extensive property. “enthalpy per unit mass” is a ratio.
Kip, excellent post. TY.
BTW, I learned this stuff in high school. Pity most apparently didn’t.
A quibble. An average of temperature anomalies from some anomaly baseline has a legitimate meaning. It is what Roy Spencer of UAH posts. It just isn’t anything to do with global atmospheric heat content or the underlying ‘global warming’.
Rud ==> “It just isn’t anything to do with global atmospheric heat content or the underlying ‘global warming’.”
Yes, Spencer’s UAH record isn’t “nothing.” But it is “fruit of the poisoned tree.”
“An average of temperature anomalies from some anomaly baseline has a legitimate meaning.”
How so? If you have a 1C anomaly in Fairbanks, AK and a 1C anomaly in Miami, FL on the same day does that tell you anything about the climate at each location? Does it tell one anything about the climate somewhere in-between the two locations? Does it actually tell you anything?
BTW, the uncertainty of an anomaly is inherited from the uncertainties of the components. Using anomalies doesn’t lessen uncertainty. If your component daily absolute temps have a +/- 0.5C uncertainty the average will have +/- 0.7C uncertainty. Start averaging all those and your uncertainty will grow. The anomaly generated from that very uncertain average will inherit the same uncertainty. Pretty soon the uncertainty overwhelms the differences you are trying to identify.
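The ±0.7 figure follows from the standard root-sum-square rule for combining independent uncertainties; a minimal sketch (independence of the two components is the assumption):

```python
# Sketch of root-sum-square propagation for independent uncertainties:
# an anomaly is a difference (reading minus baseline), and differences
# add uncertainties in quadrature -- they never shrink them.
import math

def u_combine(*components):
    """Standard uncertainty of a sum or difference of independent terms."""
    return math.sqrt(sum(u ** 2 for u in components))

u_reading  = 0.5   # +/- C on the absolute temperature
u_baseline = 0.5   # +/- C on the baseline subtracted from it

print(round(u_combine(u_reading, u_baseline), 2))   # 0.71 C on the anomaly
```

The same rule applies at every subsequent stage of combination, which is why the uncertainty can only grow as values are aggregated.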
Tim ==> The anomaly is formed from averaged temperatures, which cannot legitimately physically be averaged, which do not, cannot, inform as to heat content or heat energy, not even momentarily.
After all that “breaks the rules of physics”, we get to uncertainty, original measurement error/uncertainty, etc etc.
But by that point Rud, doesn’t it become little more than an interesting abstract statistic? I agree that you could, theoretically, do anything you like with numbers; but trying to then relate the result to anything in the real world becomes completely meaningless.
Richard ==> Refer to the title of this series….
Using this logic one might make the argument that the radiant heat transfer equation Q = εσA(Th^4 – Tc^4) yields a meaningless result too since it subtracts the temperature, raised to the 4th power no less, of one body from another. Note that the subtraction here has similar semantics to a sum. The 4th power of the temperature and what it means is a whole other issue that I’ll pass on for now.
Similarly the hypsometric equation Tv_avg = (g/R)·(z2 − z1)/ln(p1/p2) must be meaningless as well since it explicitly computes the average virtual temperature of a layer between heights z1 and z2. BTW…the hypsometric equation can be derived from the ideal gas law PV=nRT and the hydrostatic equation ∂p/∂z = −ρg and involves an integration of T which as you know has similar semantics to a sum.
Unfortunately we’re going to have to indict the QG height tendency equation [σ∇^2 + f0^2(∂^2/∂p^2)]∂Φ/∂t = -f0σ[Vg•∇(ζ + f)] + f0^2 ∂/∂p[Vg•∇(R·Tv/p)] as well. Those familiar with QG theory may notice that the second term on the RHS of that equation is the differential temperature advection term which is based on the divergence of the gradient of the virtual temperature field. And remember from your vector calculus that divergence is the sum of the partial derivatives of the spatial components of the vector field. In other words, temperatures (or at least their partial derivatives) are being summed here. But, of course, all of that is meaningless per the argument put forth in this article as well.
Do you see the problem?
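For concreteness, the first formula quoted above can be evaluated directly; the emissivity, area, and temperatures below are illustrative values assumed for this sketch, not numbers from the discussion:

```python
# Evaluating the quoted radiant-transfer formula Q = eps*sigma*A*(Th^4 - Tc^4)
# with illustrative values (emissivity, area and temperatures assumed here).
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)

def net_radiant_power(eps, area, t_hot, t_cold):
    """Net radiative exchange in watts; temperatures must be in kelvin."""
    return eps * SIGMA * area * (t_hot ** 4 - t_cold ** 4)

# 1 m^2 grey body at 300 K facing surroundings at 280 K
q = net_radiant_power(0.9, 1.0, 300.0, 280.0)
print(round(q, 1))   # ~99.7 W
```

Note that the fourth powers of temperature only yield a physically meaningful result because they enter through a physical law with the absolute (kelvin) scale and the factor εσA, which converts them into an energy flow.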
bdgwx ==> You make a spurious point.
“The ratio between two extensive properties is an intensive property. For example, mass and volume are extensive properties, but their ratio (density) is an intensive property of matter.” This goes the other way as well. In your thermodynamic equations, T is in K, of course.
When formulas are used to find Extensive Values…..etc.
Read the paper referenced.
https://www.fys.ku.dk/~andresen/BAhome/ownpapers/globalTexist.pdf
I’m addressing the statements:
One cannot average temperatures.
and
3. A sum over intensive variables carries no physical meaning – adding the numerical values of two intensive variables, such as temperature, has no physical meaning, it is nonsensical.
and
4. Dividing meaningless totals by the number of components – in other words, averaging or finding the mean — cannot reverse this outcome, the average or mean is still meaningless.
I’m providing real world cases where temperatures are summed and an average temperature is computed. Doing so is useful, actionable, and meaningful.
And it goes way beyond trivial sum or average concepts. Scientists create temperature fields. They turn them into vector fields with the gradient operation. They turn them back into scalar fields with the divergence operator. Scientists do all kinds of things to temperatures and temperature fields; the least of which is summing and averaging them.
It might also be interesting to note that UAH sums temperatures as well. In fact, the formula for the TLT temperature at any specific spot (not an average) is as follows:
Tlt = 1.538 * Tmt – 0.548 * Ttp + 0.010*Tls
The global average TLT is then…
Tlt_global = Σ[Tlt_x, 1, N] / N
…where Tlt_x is the lower troposphere temperature at location x and N is the number of locations in the sample.
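A minimal sketch of the arithmetic just described. The channel weights are the ones quoted in this comment; the real UAH processing also area-weights grid cells by latitude, which this toy version omits, and the channel temperatures used below are made up for illustration.

```python
# Sketch of the TLT linear combination and the plain global mean described
# above. Weights are the ones quoted in the comment; inputs are hypothetical.
def tlt(tmt, ttp, tls):
    """Lower-troposphere temperature (K) at one location from the MT, TP,
    and LS channel brightness temperatures."""
    return 1.538 * tmt - 0.548 * ttp + 0.010 * tls

def tlt_global(samples):
    """Plain mean of TLT over a list of (tmt, ttp, tls) tuples."""
    values = [tlt(*s) for s in samples]
    return sum(values) / len(values)

# Example with made-up channel temperatures (kelvin):
print(tlt_global([(260.0, 220.0, 210.0), (262.0, 221.0, 211.0)]))
```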
I forgot to mention that there are a lot of skew-T calculations that involve summing temperatures as well. For example, the Convective Available Potential Energy (CAPE) metric. The formula is CAPE = g ∫[Zlfc → Zeq] ((Tparcel – Tenv)/Tenv) dZ. You can replace Tparcel with Tparcel_avg; this is often referred to as MLCAPE (mixed-layer CAPE). CAPE is a useful, actionable, and meaningful quantity. Again, I’m just pointing out that temperatures are summed all of the time.
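A numerical sketch of that CAPE integral, using the trapezoid rule. The profile arrays here are hypothetical round numbers; real values come from radiosonde soundings between the level of free convection and the equilibrium level.

```python
# Trapezoid-rule approximation of CAPE = g * integral of
# (Tparcel - Tenv)/Tenv dZ between the LFC and the equilibrium level.
# The example profile below is hypothetical.
G = 9.81  # gravitational acceleration, m/s^2

def cape(z, t_parcel, t_env):
    """z in metres (LFC to EL), temperatures in kelvin; returns J/kg."""
    total = 0.0
    for i in range(len(z) - 1):
        b0 = (t_parcel[i] - t_env[i]) / t_env[i]
        b1 = (t_parcel[i + 1] - t_env[i + 1]) / t_env[i + 1]
        total += 0.5 * (b0 + b1) * (z[i + 1] - z[i])
    return G * total

# Constant 1% buoyancy over a 10 km deep layer:
print(cape([0.0, 5000.0, 10000.0], [252.5] * 3, [250.0] * 3))  # ~981 J/kg
```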
“Do you see the problem?”
The problem is that you are confusing intensive and extensive properties!
Weight = g × mass. Mass = density × volume. Weight (a force) is an extensive property. Mass is an extensive property. Volume is an extensive property. Density is an intensive property.
You can add/subtract weight. You can add/subtract mass. You can add/subtract volume. You can’t add/subtract density.
You can calculate extensive properties from intensive properties. You can calculate intensive properties from extensive properties. You can average extensive properties because you can add/subtract them. You can’t average intensive properties because you can’t add/subtract them.
TG said: “The problem is that you are confusing intensive and extensive properties!”
That’s not the problem. And your post has little if anything to do with my post. If you want to address something I actually said then I’d be more happy to engage with your point as long as it is relevant. But I’m not going to engage with yet another one of your strawmen.
“ Q = εσA(Th^4 – Tc^4)”
Nice job of dodging! Tell us in this formula what the extensive properties and intensive properties are!
Note: “εσA” converts the intensive property of T into an extensive property of Q.
What does (Th^4 – Tc^4) represent by itself? It doesn’t equal Q! It doesn’t actually equate to *anything*!
I’m not surprised you refuse to answer. You don’t understand the subject at all!
Is Th^4 – Tc^4 a sum of an intensive property or not?
“Is Th^4 – Tc^4 a sum of an intensive property or not?”
NO!
A simple, straightforward thought experiment for you.
I push you out of the air lock of the starship Enterprise into the vacuum of space between Mercury and the Sun.
I then simultaneously transport a cube (T1) of some futuristic substance from the surface of Mercury and a cube (T2) of the same stuff from the corona of the sun to right in front of you, both with temperature measuring devices attached.
Questions:
You think T^4 is extensive?
Answer my questions to you first!
You’re deflecting and diverting. I’ll ask again. Do you think T^4 is an extensive property?
You first! I asked *YOU* first.
I think T^4 is intensive. I also think you can subtract one T^4 value from another T^4 value. I also think T^4 has meaning. Specifically, we can say that for bodies A and B with Ta^4 > Tb^4, body A is warmer than body B, will have a higher radiant exitance, and heat will flow from A to B in proportion to Th^4 – Tc^4. The bigger the difference in Th^4 and Tc^4 (or, more simply, in Th and Tc), the more heat will flow. Almost everyone will agree that is meaningful, useful, and actionable.
Actually T is intensive. Even if you use an exponent, it remains intensive. The fact that it used to calculate a flux doesn’t change its property. Even if you split a mass the temperature of both will still be “T”.
You are trying to decide if the radiant flux between two bodies is intensive or extensive. It is neither. It is not a property of either body. If you look at the basic S-B equation, it is described for a single black body at a given temperature.
The wiki I’ve referenced before will give you an idea about conjugate properties and transfers of those properties.
Yes. I know T is intensive. Radiant exitance is intensive as well. The net flux between two bodies is neither. Yet σTh^4 – σTc^4 is still a meaningful and useful metric.
BTW…don’t think the irony of how you promote CDD, HDD, and/or GDD is lost on me.
You forgot FDD, JDD, and SDD.
bdgwx really doesn’t understand physical science at all. He can’t distinguish between an integral of a curve, i.e. the area under the curve, and the mid-range value between max and min. He’s a statistician with no physical science training, no engineering training, and apparently no calculus training. Anyone that thinks all uncertainty cancels in a non-normal distribution and that the standard deviation of sample means is the uncertainty of the population mean is hopelessly lost in statistics textbooks that never address uncertainty.
Somehow, without any real metrology experience, he is now the world’s foremost expert on the subject.
It is obvious that he didn’t read the Essex paper, including that an ISO committee tried and failed to come up with a standard averaging document.
And in lieu of opening his mind went on another formula hunt for anything he thinks disproves Kip’s subject.
(And I had forgotten that DD is degree-days!)
If he read it he didn’t understand it. It is pretty intense and takes some pondering to actually understand.
Indeed yes.
“BTW…don’t think the irony of how you promote CDD, HDD, and/or GDD is lost on me.”
Those are *NOT* averages. Those are integrals. The area under the temperature curve! There is no dividing to come up with an average. The area under the curve *is* an extensive property!
As we’ve discussed in the past, the equation (Tmax + Tmin)/2 doesn’t give you an average! It gives you a mid-range value. The climate scientists even mis-name this value. The more closely the daily temperature curve approaches a sine wave, the closer the *average* daytime value approaches (2/π ≈ 0.64) × Tmax, and likewise for the “average” nighttime value and Tmin!
You really don’t understand physical science at all. You just cherry pick equations you think might stick to the wall when you throw it against the wall.
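The mid-range vs. average distinction can be made concrete with a toy calculation over an idealized half-sine “daytime” curve (illustrative only; real diurnal curves are messier). The mean of a half sine from 0 up to a peak A is (2/π)·A, about 0.64·A, while the mid-range of that same curve is 0.5·A:

```python
import math

# Mean vs mid-range of an idealized half-sine "daytime" temperature curve.
# Purely illustrative; no real station data involved.
def mean_half_sine(amplitude, n=100000):
    """Numerical mean of amplitude*sin(x) over [0, pi] (midpoint rule)."""
    xs = (math.pi * (i + 0.5) / n for i in range(n))
    return sum(amplitude * math.sin(x) for x in xs) / n

A = 10.0
print(mean_half_sine(A))   # ~6.366, i.e. A * 2/pi
print((0.0 + A) / 2.0)     # 5.0, the mid-range: a different number
```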
Is Tmax + Tmin a sum of an intensive property or not?
Go read the paper.
“Is Tmax + Tmin a sum of an intensive property or not?”
Tmax is an intensive property. Tmin is an intensive property.
(Tmax + Tmin) is meaningless. It’s nothing. It’s neither intensive nor extensive.
You *can* calculate an extensive property from an intensive one using a functional relationship. You can then add or subtract those extensive properties.
*YOU* are letting algebra confuse you. The functional relationship is Q = εσATh^4 – εσATc^4.
Algebraically you can factor out the εσA piece. That doesn’t mean you are adding intensive properties. You are still adding extensive properties. You don’t have to do the factoring!
Tmax + Tmin is not even useful for calculating the average of a sine wave (daytime) or exponential decay curve (nighttime).
By that logic CDD, HDD, and GDD are also meaningless then.
“By that logic CDD, HDD, and GDD are also meaningless then.”
Why are they meaningless? They are areas under a curve. An area is an extensive property. Why are extensive properties meaningless?
TG said: “*YOU* are letting algebra confuse you. The functional relationship is Q = εσATh^4 – εσATc^4. Algebraically you can factor out the εσA piece. That doesn’t mean you are adding intensive properties. You are still adding extensive properties. You don’t have to do the factoring!”
I forgot to address this earlier. I’ll do so now.
You think εσAT^4 is extensive?
“You think εσAT^4 is extensive?”
εσAT^4 is the power radiated from a body. Power is measured in joules/sec, i.e. in kg·m^2/s^3.
Is a kilogram an extensive property?
Is a meter an extensive property?
Is a sec an extensive property?
Does multiplying or dividing extensive properties make them intensive?
I know what εσAT^4 is. I’m asking if you think it is an extensive property?
If I have a cube radiating power toward another cube and I cut the radiating cube in half what happens?
The power radiated is cut in half but its temperature stays the same.
So, is power an extensive property? You bet. It depends on the mass of the object. The temperature does not.
You should have been able to figure that out from the dimensional analysis I provided you.
For the record, I think an argument can be made that power is extensive because, at least in this case, if you change the amount of matter you change the power. P is dependent on the size of A. Multiplying intensive and extensive properties with other constants can make an extensive property. Is power always extensive though?
What if we drop the A and think about the canonical SB form εσT^4? That is W/m2 which everyone would agree is intensive since radiant exitance is independent of the size of the body. We can then do εσ(Th^4 – Tc^4) or εσTh^4 – εσTc^4 to get the net flux across the boundaries of the hot and cold body. Here we have added (technically subtracted) two intensive properties to produce something useful and meaningful.
The point is this. You can add (or subtract) temperatures of different bodies to create useful and meaningful intensive quantities. You can multiply or divide temperature with something else to create useful and meaningful extensive properties. You can even raise temperature to the 4th power. You can do all kinds of things with temperature. Claiming you can’t do this or that with temperatures can be easily challenged with real world counter examples.
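The εσ(Th^4 – Tc^4) arithmetic above can be sketched in a few lines, for the simple case of two large facing black/grey surfaces (geometry and view factors are ignored here, and the temperatures are made up):

```python
# Net radiant flux density between a hot and a cold surface:
# eps * sigma * (Th^4 - Tc^4). Simplified: no view factors or geometry.
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def net_flux(t_hot, t_cold, emissivity=1.0):
    """Net flux in W/m^2; positive means heat flows hot -> cold."""
    return emissivity * SIGMA * (t_hot ** 4 - t_cold ** 4)

print(net_flux(300.0, 280.0))  # ~110.8 W/m^2 from the warmer surface
print(net_flux(300.0, 300.0))  # 0.0: equal temperatures, no net flux
```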
“What if we drop the A”
In other words let’s just create a phantom world and play like we actually live there.
Yeah! That’s the ticket! We can prove anything in Bizarro world!
I assure you and the rest of the WUWT audience that the Stefan-Boltzmann Law describes the world around us. It may be bizarre, but it is very real. I know…you challenge the SB law. I have neither the time nor the motivation right now to defend it. I’m just telling you how it is.
Whoa! bgwxyz hath spake, so mote it be!
The SB law predates me by a considerable amount of time. It was the pioneers of thermodynamics, Stefan, Boltzmann, Wien, Planck, etc., that hath spake. It’s also not a law just because they said so. It’s a law because it has yet to be falsified.
I am of course referring to your bizarre version.
Again…εσT^4 isn’t mine to claim; version or otherwise. And bizarre as it may be it still describes the real world around us.
That is *NOT* the only term at play in the real world. There is also a conduction term and convection term at play.
It’s a law when the restriction of equilibrium is at play. Read Planck again, this time for meaning.
Correct. If they are not at equilibrium, then a gradient term must be introduced, (ΔT/t), and ΔT doesn’t have to be linear either. S-B as normally seen is for an infinitely small point in time, i.e., equilibrium.
It’s a law because it is confirmed by experiment, and the experiments confirm/validate the mathematical predictions.
I’m glad to hear that is your position. Maybe you can convince Tim and Carlo Monte of that. I haven’t had much luck so far and I’ve lost the motivation to continue trying.
You need to study some thermodynamics. There are three physical processes at work in the transport of heat. They are radiation (S-B equation), and conduction, and convection. They are all in effect in the atmosphere.
It is one reason the “radiation diagrams” you see fail in so many ways. They use averages too and don’t take everything into account: gradients of heat flow, for one, both in the surface and in the atmosphere.
Absolutely right! They are simplified beyond the point of utility.
From Wikipedia: “In physics, Planck’s law describes the spectral density of electromagnetic radiation emitted by a black body in thermal equilibrium at a given temperature T, when there is no net flow of matter or energy between the body and its environment.” (bolding mine, tg)
I know you are just going to say this is wrong with no actual refutation to actually prove it is wrong. It’s a religious dogma issue with you.
Planck himself, in his Theory of Heat Radiation, wrote: “In a region where the temperature of the medium is the same at all points there is no trace of heat conduction.”
In other words, equilibrium!
And the sun is constantly moving so no equilibrium except at vanishingly small points in time.
They only confirm it for objects in equilibrium, no convection and no conduction.
As has been pointed out to you any number of times, S-B assumes an object in equilibrium, no convection, no conduction.
S-B has no conduction and convection term – meaning the only heat transfer process that is at play is radiation.
The world around us is *NOT* in thermal equilibrium. That may seem to be bizarre to you but it is *very* real.
You are trying to gaslight everyone. You failed.
Everyone read up on the kinetic theory of gases. It will help.
http://hyperphysics.phy-astr.gsu.edu/hbase/Kinetic/imgkin/kintem3.gif
What a surprise, Stokes is the first commenter! His comment is not blatantly wrong, just misleading as usual. What he said about more energy lost to Space is true. What he did not say is how CO2 controls how much energy is lost to Space, as no one knows, and the rate cannot be calculated from First Principles.
Actually, the comment is just a quote from Roy Spencer, in WUWT as linked.
Nick ==> Yes, I even recognized it. But why did you quote Spencer, with whom you more generally disagree?
Spencer is quite good at debunking the most foolish ideas that circulate. His list of ten top arguments that don’t hold water is worth re-reading. At the time (2014) scientific standards at WUWT were not so low, and most of them did not prevail. Some, as from Sky Dragons, were even banned. But they keep coming back now, at higher volume.
Actually a range of probability could be developed. The problem is that conduction and convection is involved making it very complicated.
It is more complicated than that. How does ppm CO2 vary with altitude? At what altitude does the atmosphere become translucent to 15 micron IR? What exactly is water vapor doing up there? Wow…
If you have a set of figures you can average them – in several different ways. ANY set of figures.
So you can certainly average a set of temperature readings. The question is, what does that figure actually represent? I suspect that the figure does not actually represent what most climate scientists think it does….
dodgy ==> Right you are — that’s why this is a series about NUMBERS…you can mathematically average any set of numbers.
You CAN calculate a mean value of a distribution. But even using just numbers, you have a meaningless number without also knowing the other descriptors of the distribution the mean represents.
50 is a mean. What values contributed to it?
That’s the reason I always say global temperature anomaly index. As an index it’s somewhat meaningful, but no strict physical meaning.
Edim ==> Indexes are an interesting concept, but often their physical, real-world meanings range from obscure to entirely unknown.
What would you say “global temperature anomaly index” means? (Just curious…)
Hard to explain. Index indicates (same etymology). I simply think that an uncorrupted GTA index is a somewhat useful measure of the Earth’s surface/atmosphere thermal energy.
Edim ==> I’m quite sure it is an indicator of something. Maybe a useful indicator of ranges needed in thermometer design?
The problem I am writing about is that there is no rational choice for GTA (GAST) and no physically correct way to determine such a numerical result.
“I simply think that an uncorrupted GTA index is a somewhat useful measure of the Earth’s surface/atmosphere thermal energy.”
So you must also think that humidity, pressure, wind, etc are all the same everywhere on the earth?
If they aren’t all the same then what is the “index” telling you?
Atmospheric temperatures are smooth, no discontinuities, so any path you take between locations that have different temps will necessarily pass through a location that has the average temperature of the starting and ending points when measured at each location in the same manner.
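Granting the continuity assumption in that comment, the claim is just the intermediate value theorem. A bisection sketch over a made-up smooth temperature profile T(s) along a path from s = 0 to s = 1 finds such a point (the profile function here is hypothetical):

```python
import math

# Intermediate-value sketch: along a continuous temperature profile T(s),
# some point attains the average of the endpoint temperatures.
def T(s):
    return 10.0 + 15.0 * s + 3.0 * math.sin(6.0 * s)  # degC, hypothetical

target = (T(0.0) + T(1.0)) / 2.0  # average of the endpoint temperatures
lo, hi = 0.0, 1.0
for _ in range(60):  # bisect on the sign of T(s) - target
    mid = 0.5 * (lo + hi)
    if (T(lo) - target) * (T(mid) - target) <= 0.0:
        hi = mid
    else:
        lo = mid
print(T(lo), target)  # the two values agree to bisection tolerance
```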
“when measured at each location in the same manner.”
Does a passing cloud create a discontinuity?
Have you ever driven into a stationary weather front?
Jim ==> I have sailed into one…..
The transition zone may be limited but no discontinuity.
You’ve obviously never experienced a gust front where temps can change drastically.
That might be true if you could measure the temperature *continuously*. What measuring station does that?
I have thought this for a long time. We live by, actually on, the sea, and the temperature readings for our city are derived from a station at the airport; there is generally a 2 to 5 degree difference. If averaging were an actual or accurate indicator, then the temperature halfway between the two should be 1 to 2.5 degrees above the sea-level temp, which it isn’t. Nick’s examples below are asinine. A tub of bath water is a single thing, and room temperature varies depending on where you are sitting; you cannot find an average in that sort of environment.
Kip, I enjoyed this post. But truth be told I have enjoyed them all.
mkelly ==> Well, thank you! (I hope, of course, that you are referring to my posts….)
I’M NOT IGNORING YOU!
But I am out for the remainder of the day and will not be back online for about 20 hours.
Please feel free to continue the discussion among yourselves — though this post is buried pretty deep already.
Note: The Danielle Dixson story has far-reaching implications for science — CliSci, medicine, biology, etc. Pielke Jr. wrote about it today as well.
Also, about this statement-
“A one cubic meter block of stainless steel at 70°F contains even more heat energy. “
Specific gravity of stainless is 7.9 so about 7,900 kg per cubic meter vs water at 1,000 kg.
Stainless has a specific heat capacity of 468 J/(kg·K) while water has a specific heat capacity of 4,181 J/(kg·K) at 70 F (294 K), so the heat capacity of the cubic meter of stainless is about 3.7 MJ/K and of the water about 4.2 MJ/K.
The cubic meter of water actually stores more heat.
The extraordinarily high heat capacity of water effectively moderates temperature changes in the natural environment.
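A quick check of those figures, using the same rounded densities and specific heats quoted above:

```python
# Volumetric heat capacity comparison: one cubic metre of stainless steel
# vs one cubic metre of water (rounded figures from the comment above).
rho_steel, c_steel = 7900.0, 468.0    # kg/m^3, J/(kg K)
rho_water, c_water = 1000.0, 4181.0   # kg/m^3, J/(kg K)

cap_steel = rho_steel * c_steel  # J/K per cubic metre, ~3.70 MJ/K
cap_water = rho_water * c_water  # ~4.18 MJ/K

# Despite holding ~8x the mass, the steel stores less heat per kelvin:
print(cap_steel / 1e6, cap_water / 1e6)
```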
Kip, I wonder if you have considered that the gas laws support what you say here, in that they say that volume is directly proportional to temperature (at constant pressure)?
This becomes all too apparent when one looks at the thermodynamics of the water Evaporation Process, particularly at the surface of the Oceans; for here Solar Radiation gets absorbed and the enthalpy involved is converted to Volume rather than Temperature, this being expressed as the “Latent Heat” intrinsic to the Vapor which has been generated. Meanwhile the Oceans never get much above 30 C in spite of millions of years of this relentless solar radiation. Why not, one may well ask? But that is another story.
IMO the whole Mindset around statistical analysis relating to Climate needs a good shake-up; but that is unlikely to happen, as too many so-called scientists appear to have sold their souls to the intense political pressures we witness today.
Good luck with your efforts.
Regards
Alasdair
Be careful what you are looking at. Pressure is also a factor. Are you describing a constant pressure experiment?
Alasdair ==> This “IMO the whole Mindset around statistical analysis relating to Climate needs a good shake up.” is certainly true.
I think the real question is: not enough money in the government’s climate budget to move the Glenns Ferry structure to a proper place? From 2009, I see-maybe it’s been done.
Just an editorial comment: The chart is impossibly small to actually read. It’s too small in the body of the article, and there is no allowance to click on it to enlarge it. Sorry to say, but for me, that makes it a useless visual. I’d like to actually study the detail of it.
Right click on the graphic and open it in a new tab. Then use the zoom feature of your browser to expand it so it becomes readable in that new tab.
Robert ==> My apologies, here is the link to the original page:
https://weatherspark.com/m/9454/8/Average-Weather-in-August-in-Topeka-Kansas-United-States
Scientists all over the world use the global average temperature statistic.
Skeptical scientists use the statistic to show that climate models are overpredicting the rate of global warming.
Some scientists argue that the statistic is not accurate enough to be useful until recent decades.
I have never read any scientist claim the statistic was worthless.
But that’s what Mr. Hansen is implying.
And if he is not implying that, he needs better communication skills.
But what do all those scientists know?
They are just scientists.
Mr. Kip Hansen thinks he knows better.
The scientists must be fools to waste any time with a global average temperature statistic ?
Hansen says “One cannot average temperatures”.
Meanwhile, there are many averages of temperature measurements.
The global average temperature statistic is worthless, implies Mr. Hansen. And to convince readers he is smarter than nearly all the scientists in the world, Mr. Hansen states “Please feel free to state your opinions in the comments – I will not be arguing the point – it is just too basic and true to bother arguing about.”
In plain English, Mr. Hansen trashed the global average temperature statistic, and he refuses to debate it. And if you disagree with him, he implies that you are a fool. This resembles a leftist “debate” style.
Richard ==> You are demonstrating that you have not read the referenced paper, Essex et al. which was published in the Journal of Non-Equilibrium Thermodynamics.
Once you have read that paper, and studied up on the differences between an Extensive and Intensive properties, please check back in with your newly informed opinion.
“I have never read any scientist claim the statistic was worthless.”
So what? Science is not consensus. Apparently you haven’t read any of Pat Franks papers.
“In plain English, Mr. Hansen trashed the global average temperature statistic, and he refuses to debate it.”
What is there to debate? If you add 32 F at Pikes Peak to 50 F in Denver, what do you have? It’s not a length that you can add and subtract. It’s not a mass that you can add and subtract. If you add the two together and divide by two you certainly don’t have anything like an “average climate” or “average temperature” for the region.
In fact, if you think about it what do you have if you have a 6′ board and an 8′ board averaged together? What does the average tell you? Can you build a stud wall as high as the average value? How would you do it? Cut one board off and try to scab the remainder onto the shorter board? How would you join them? If you overlap the 6′ board and the 1′ piece you still won’t reach the average height of 7′.
If you have 1000 men whose average height is 5′ 9″ tall can you order 1000 T-shirts to fit the average height and expect them to fit everyone?
You have to be *very* careful using averages. In the real world they don’t always tell you what you think they do – that only happens in math world.
So let me get this straight:
“3. A sum over intensive variables carries no physical meaning – adding the numerical values of two intensive variables, such as temperature, has no physical meaning, it is nonsensical.”
So assuming the converse that “a sum over extensive variables always carries a physical meaning” except, that is, when a sum over an “extensive” variable carries no physical meaning as in the example of a sample of fish lengths in the comments;
Steven ==> Length is an Extensive Property and can be added, averaged, etc.
steven candy | Reply to Kip Hansen | August 10, 2022 9:32 pm
Lengths of a sample of fish can be added for what purpose? To represent one mega-fish OR the fish all lined up head to tail. As ridiculous as it sounds.
“4. Dividing meaningless totals by the number of components – in other words, averaging or finding the mean — cannot reverse this outcome, the average or mean is still meaningless.”
With this simple dichotomy in properties of variables (intensive vs extensive) you conflate your spurious dichotomy of which of these two types of variables can be validly averaged. To which I give a counter-example to your general theory;
Counter-example: For a normal distribution the expectation of the sample median equals the expectation of the sample mean; since the sample median is not based on the sum of sample values but on their rank order, it is interpretable as a measure of central tendency, and therefore so is the sample mean, irrespective of the interpretability of the sum of sample values.
So already a couple of inconsistencies in your theory.
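The normal-distribution counter-example above can be checked numerically. A Monte Carlo sketch (sample size and trial count are arbitrary choices):

```python
import random
import statistics

# Monte Carlo check: for normal samples, the sample median and the sample
# mean both estimate the population mean. mu, sigma, n, trials arbitrary.
random.seed(0)
mu, sigma, n, trials = 5.0, 2.0, 11, 20000
means, medians = [], []
for _ in range(trials):
    x = [random.gauss(mu, sigma) for _ in range(n)]
    means.append(statistics.mean(x))
    medians.append(statistics.median(x))
print(statistics.mean(means), statistics.mean(medians))  # both near 5.0
```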
I did a google scholar search on the keyword sentence “extensive versus intensive variables” with only two references by the same author coming up with one open access;
https://escholarship.org/uc/item/5mp6r34r.
No mention of “mean”, “average” or “sample” in that reference, and interestingly in Section 2 the author states
“One of the more primitive mistakes rests on the belief that the classification extensive-intensive is basic for the development of thermodynamics. It is not. The square root of the volume clearly is neither extensive nor intensive; yet it is a well-defined property and all thermodynamic knowledge could be expressed if we replace the volume by the new variable. It would be awkward, cumbersome and inefficient. But science could live with it. It is obviously wrong to say that only extensive and intensive variables exist.”
When does air temperature data ever have a normal distribution?
Averaging time-series data from multiple locations is not sampling a normal distribution!
They will never understand. Statistics textbooks simply do not give any knowledge of real-world usage of statistics. All data values are 100% accurate and they all form a perfect distribution that can be analyzed using standard deviation and mean.
Exactly. There are a lot of metrics in widespread use that are both meaningful and actionable yet lack an obvious physical description. You and I could spend all day going back and forth on who can come up with the most mind-bending metrics out there. But more to the point not only can temperatures be summed (I provided several real world examples above), but science does all kinds of weird and mind bending things with temperatures that turn out to be quite useful. This is true for many metrics regardless of whether they are extensive, intensive, or something else. Just because we struggle to provide physical descriptions to these metrics does not make them any less useful. Their meaning is still objectively defined by how they are calculated.
Did you read the Essex paper yet?
Didn’t think so.
Metrics are only useful if they describe the real world. The average temperature reached from using the temperatures at Pikes Peak and Denver in July doesn’t really give a metric that is useful in the real world. Do you wear a coat and gloves based on the average?
But useful for whom?
Someone expounding on the impact of a 0.002 C increase in some derived averaged reported temperature over 10 years all over the planet is never going to have a scintilla of effect in any creatures’ lives.
“Bullshit Man” would only spend 2 seconds on a fly-in / fly-out visit for this.
Mr. said: “But useful for whom?”
The radiation heat transfer equation is useful to those analyzing the heat transferred via radiation between two bodies.
The QG height tendency equation is useful to those analyzing how geopotential heights change.
The hypsometric equation is useful to those analyzing the thickness of atmosphere layers.
The convective available potential energy equation is useful to those analyzing thermodynamic buoyancy above the LFC.
Different metrics are useful to different people. Just because someone doesn’t understand them or can find no use for them does not mean that they aren’t understood and useful to someone else.
How about telling us the equation that relates CO2 to GAT, and not just for the last 30 years, but for the last 150 years.
For the last 40-50 years at least, climate science has been doing nothing more than curve fitting to a time series. Where are the equations going to come from that allow accurate enthalpy predictions that can be verified? I can’t find a paper, not one, that tries to find a relation between enthalpy and CO2, or between enthalpy and humidity, on a global basis. Why not? Are these not of interest in determining what the globe is doing?
“The radiation heat transfer equation is useful to those analyzing the heat transferred via radiation between two bodies.”
That’s not an intensive property, it’s an extensive one calculated using an intensive property.
Why do you *always* insist on trying to prove you can add and subtract intensive properties by using extensive properties as proof?
note: CAPE is an *extensive* property calculated using an intensive property. Its unit is joules/kg, both extensive properties.
Tim Gorman said: “That’s not an intensive property, it’s an extensive one calculated using an intensive property.”
Kip Hansen says “A sum over intensive variables carries no physical meaning – adding the numerical values of two intensive variables, such as temperature, has no physical meaning, it is nonsensical.”
BTW…You think W/m2 is extensive?
Tim Gorman said: “Why do you *always* insist on trying to prove you can add and subtract intensive properties by using extensive properties as proof?”
I’m just pointing out real world counter-examples to Kip’s argument.
Tim Gorman said: “note: CAPE is an *extensive* property calculated using an intensive property. It’s unit is joules/kg, both extensive properties.”
You think j/kg is an extensive property?
“You think W/m2 is extensive?”
If I have one joule flowing to the left and two joules flowing to the right, what is my net joule flow?
If I have two times, t1 = 2 sec and t2 = 4 sec, what is the interval between them?
If I have a dining room table with an area of 12 sqft and I add a leaf in the middle of size 3 sqft, what does the area of the dining room table become?
“I’m just pointing out real world counter-examples to Kip’s argument.”
No, you aren’t. You are averaging extensive properties and then calculating an intensive property from the average extensive property. That is *not* the same thing as averaging intensive properties.
“You think j/kg is an extensive property?”
Again, if I have one joule flowing left and two joules flowing right, what is my net joule flow?
If I have a 1kg chunk of iron in a bucket and I add a 5kg chunk of gold to the bucket, how many kg do I have in the bucket?
You’re not answering the questions.
Do you think W/m2 is an extensive property?
Do you think j/kg is an extensive property?
I’m not asking whether you think joules, seconds, square meters, or kilograms are extensive. I’m asking if you think watts per square meter and joules per kilogram are extensive.
I’m asking you this because I want you to really think about it.
“You’re not answering the questions.”
Of course I am. You don’t know enough physics to understand.
“Do you think W/m2 is an extensive property?”
I answered you. Go look up what a watt is!
“I’m not asking whether you think joules, seconds, square meters, or kilograms are extensive.”
You don’t know what the definition of watt is, do you?
If I give you a metal bar at T1 and then help you cut it in half does the total temperature become 2*T1 for the bars? Does the temperature of each bar become T1/2?
Does the mass of each separate bar equal M1/2? Does the total mass equal M1? Does the total mass for both together remain M1?
Extensive properties add/subtract, be they joules, kg, seconds, or meters.
Intensive properties don’t. The temperature of each piece of bar remains T1, not 2*T1 or T1/2. If I cut the bar in 50 different pieces, the temperature of each will still be T1. If I put them all back together the temperature of the re-constituted bar will remain T1.
If you are in space and I give you two bars that are in infinitely insulated containers, one at T1 and one at T2, one in your left hand and one in your right hand, is there someplace between your two hands that will be at temperature (T1+T2)/2? Will the total mass you are holding be M1 + M2, i.e. will you need more force to accelerate yourself to a certain value than if you were holding only one bar? Or no bar?
Everything you have listed is an extensive property, joules, meters, kg, and seconds. When you cut a bar in half the mass (kg) goes down by a factor of two. The temperature remains the same. One is an extensive property and the other isn’t.
Adding the temperature of the first half-bar to the temperature of the second half-bar makes no physical sense. Adding the mass of the first half-bar to the mass of the second half-bar *does* make sense. Calculating the average of the temperature of the first half-bar and the second half-bar makes no physical sense, it is nothing more than mental masturbation because you are just going to wind up back at T1, the temperature of each half-bar.
It’s the same with density. How do you add densities to create an average? What physical sense does the average make? If I have a gold piece of density D1 and a silver piece of density D2, what does the sum of the two mean in a physical sense? What does their average density mean in a physical sense?
You can do all kinds of mathematical manipulations of numbers on a number line. That doesn’t mean those manipulations make sense in the physical, real world we live in. If it makes no physical sense to add the temperatures of two different objects then what kind of physical sense does their average make?
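To make the density point concrete, here is a sketch with rough handbook densities for gold and silver and made-up piece volumes; the density of the combined system comes from the extensive totals, not from averaging the two intensive values:

```python
# Rough handbook densities (kg/m^3); the piece volumes are invented.
gold_density, gold_volume = 19300.0, 0.001      # about 1 litre of gold
silver_density, silver_volume = 10490.0, 0.010  # about 10 litres of silver

# The extensive quantities (mass, volume) genuinely add:
total_mass = gold_density * gold_volume + silver_density * silver_volume
total_volume = gold_volume + silver_volume

# The density of the combined system comes from the extensive totals...
combined_density = total_mass / total_volume

# ...and is not the naive average of the two intensive values:
naive_average = (gold_density + silver_density) / 2

print(round(combined_density, 1), naive_average)
```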
So you think W/m2 and j/kg are extensive? Is that correct?
Absolutely!
If I have a cube radiating W/m^2 and I cut the cube in half the radiated power gets cut in half. If I have two conductors carrying an rf signal and I cut the driving voltage in half, the amount of signal goes down. If I cut the mass of an object in half it takes half as many joules to raise it the same distance as the original mass.
Extensive properties can be divided into parts and the extensive properties will change. Intensive properties will not change when the subject in question is partitioned.
It’s not obvious from your posts that you can tell the difference.
If you take a parcel of air and divide it into two halves, does the number of joules remain the same in each half? Does the mass of each half remain the same?
Intensive means the measurement stays the same when you combine or halve a given object. The two quantities that make up the specific value are extensive.
Ratios (indexes) are in many cases intensive. If you double both the joules and the kilograms, the ratio J/kg remains the same. If you halve both, the ratio still stays the same.
But, be careful, can you really average ratios and get a value that is meaningful? Ratios or indexes are most times used to calculate other values and are meaningless by themselves.
Both W/m2 and J/kg are intensive.
Steven ==> I may have mentioned this before, but it looks like you have not done your homework…please read the Essex et al paper referenced in the essay. Then check back here. Thank you.
Pip. I have given a proof that your general theory is wrong using the counter-example I gave. Your theory was not restricted to temperature; it was very general. All the above comments about temperature not being normally distributed miss the point entirely: one counter-example (e.g. a normal distribution) is enough to disprove a general theory. Just to repeat, your general theory is stated in your point 4. Let someone disprove my counter-example; then I will read your referenced paper. Your homework is to go and get some formal qualifications in mathematical statistics, then publish your theories in peer-reviewed journals, then write your essay. Science journalists and bloggers should not try to make up the science they are supposed to be reporting on, just as news reporters should simply report and not fabricate the news themselves.
Sorry that should be Kip. Typo
Steven ==> Read Essex et al and then explain to us their viewpoint. If you cannot do that, then you do not understand their paper. You are not required to agree — but you are NOT allowed to criticize if you do not understand.
I am responding to your essay, in which you propose a general theory of statistics. You have not addressed my counter-example to your theory. While my counter-example stands unchallenged because it is correct, your general theory is kaput! End of story!
Steven ==> I do no such thing. I report about an interesting and valid science question. I am waiting for you to respond to the paper being referenced in an adult way.
We’re trying to discuss your thesis here; not the thesis put forth in Essex et al. 2007.
bdgwx ==> You apparently misunderstand. I am a science journalist, I write about science — I don’t do original research, nor do I propose “theses”. I am writing about the ideas in the Essex et al. paper.
Got it. You’re just summarizing their thesis.
If you have correctly summarised points from the Essex et al paper with your thesis in Point 4, my counter-example still stands correct, and Point 4, and wherever you dug it out of that paper, is still wrong. A counter-example has to be disproved to resurrect a theory.
I accept and concede this point and withdraw the comment that it’s your theory. Even so, my counter-example to the contention that averages of Intensive variables have no real-world application or common-sense interpretation still stands. It is not really a theory, since no proof is given by Essex et al. of the second sentence, as far as it infers a sample average, in their contention on page 5: “A sum over intensive variables carries no physical meaning. Dividing meaningless totals by the number of components cannot reverse this outcome“. The discussion about temperature as an Intensive variable does not address the above general contention. I could find no proof of the above contention in their paper. If there was, my counter-example would not be correct, BUT it is in fact correct, so that would explain their lack of a proof. Some contentions are harder to prove than to disprove using the counter-example method. The issue they raise of different statistics for estimation of central tendency [their Eqn (9)], of simple average, harmonic mean, etc. (I could add the geometric mean), does not invalidate these as alternative estimates of the population mean (central tendency), and all have a real-world interpretation, whether for Intensive or Extensive variables. Which estimate is minimum mean square error depends on the underlying distribution.
“Read Essex et al and then explain to us their viewpoint”
I was criticised above for appealing to Authority, first by quoting Spencer, and then by invoking the WUWT (!) practice of posting average temperature plots on the front page, without error bars. But this seems to be a very strange appeal to Authority. Nobody actually quotes Essex et al, or seems to have a clue what they are saying.
I read Essex et al when it first came out in 2007. Despite a math veneer, it is really just an opinion piece like this one. Have you read the various rebuttals, e.g. here? I have.
This reminds me of a common contention on WUWT:
All those scientists over the years are wrong. Can’t quite explain why, but our man says so, and he is a professor!
Nick ==> Now, I suppose you understand their viewpoint, and thus have the right to disagree if you wish.
The others here didn’t bother to read, and having read, demonstrated that they still didn’t understand. One of them finally read to understanding, then he earned the right to disagree.
I happen to agree — though it is not clear to me why you don’t agree with Essex et al — you haven’t said.
“though it is not clear to me why you don’t agree with”
Well, here is one thing from their conclusion:
“The purpose of this paper was to explain the fundamental meaninglessness of so-called global temperature data”
I don’t think that is a slip. In rejecting the average, they say the data is meaningless, which is a logical conclusion from the arguments about the average. It’s true that if you aren’t allowed to do arithmetic with the data, it is meaningless. But then, how can we ever know anything? We have a huge temperature record, which we are enjoined to ignore.
There is another misapprehension that they have, in common with you and other commenters. That is, that the role of temperature is just as an indicator of internal energy, and so needs to be enhanced with information about enthalpy and maybe other things. But as I said above, the important role of temperature is as a potential for heat flux, where it stands on its own. Fourier’s law: Flux = −k ∇T.
Suppose you have a warm bath and a saucepan of boiling water. Which has the greater internal energy? – the bath. But it is the saucepan that can harm you. The reason is that contact with the water creates a temperature gradient, which allows heat inside you to reach temperatures which can denature proteins etc. That is a parable for the effect of an overheated atmosphere.
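Nick’s Fourier’s-law point can be illustrated with a one-dimensional finite-slab sketch (the conductivity is a rough value for copper; the slab geometry and temperatures are invented):

```python
# One-dimensional Fourier's law: q = -k * dT/dx.
# k is a rough value for copper; the slab and temperatures are invented.
k = 400.0          # thermal conductivity, W/(m K)
t_hot = 373.0      # K, temperature of one face of the slab
t_cold = 293.0     # K, temperature of the other face
thickness = 0.05   # m, distance between the faces

# Temperature gradient across the slab, in K/m:
dT_dx = (t_cold - t_hot) / thickness

# Heat flux through the slab; the positive sign means heat flows
# from the hot face toward the cold face.
q = -k * dT_dx
print(q, "W/m^2")
```

Note that only the temperature *difference* across a connected conducting path drives the flux; two disconnected readings produce none.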
“But then, how can we ever know anything?”
By doing it right. Use enthalpy, not temperature. Track minimum and maximum temps separately. Provide propagated uncertainties from the individual measurements through the calculations of means and anomalies. Provide variances for the base and derived distributions. Account for different variances between winter and summer temperatures. Account for the multi-modality created from combining northern hemisphere and southern hemisphere temps. Stop using mid-range values which can be caused by widely different climates thus masking actual climate.
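The “use enthalpy” suggestion can be sketched with the common first approximation h ≈ cp·T + Lv·q for specific moist enthalpy (cp and Lv are standard textbook values; both air parcels are invented):

```python
# Specific moist enthalpy, h ~ cp*T + Lv*q (a common first approximation).
# cp and Lv are standard textbook values; both air parcels are invented.
CP = 1005.0    # J/(kg K), specific heat of dry air at constant pressure
LV = 2.5e6     # J/kg, latent heat of vaporization of water

def moist_enthalpy(temp_k, mixing_ratio):
    """Approximate specific enthalpy of a moist air parcel, in J/kg."""
    return CP * temp_k + LV * mixing_ratio

# Two parcels with the SAME thermometer reading but different humidity:
desert = moist_enthalpy(303.0, 0.005)   # hot and dry
tropics = moist_enthalpy(303.0, 0.020)  # hot and humid

print(desert, tropics)
```

Same temperature, yet roughly 12% more energy per kilogram in the humid parcel; averaging the two identical temperatures would hide that difference entirely.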
“But as I said above, the important role of temperature is as a potential for heat flux, where it stands on its own. Fourier’s Law Flux=-k ∇T.”
The climate models don’t output flux, they output temperature.
“The law of heat conduction, also known as Fourier’s law, states that the rate of heat transfer through a material is proportional to the negative gradient in the temperature and the area, at right angles to that gradient, through which the heat flows.”
∇ is merely a vector differential operator, so you aren’t really snowing anyone with this. It translates to ∂/∂x, ∂/∂y, ∂/∂z.
So flux is associated with a temperature gradient – and how do the temperature in San Diego and the temperature in Denver create a temperature gradient, thus creating a heat flux?
“Suppose you have a warm bath and a saucepan of boiling water. Which has the greater internal energy? – the bath. But it is the saucepan that can harm you. “
So what does that have to do with averaging the temperatures of the bath and the saucepan to come up with an average temperature?
Essex covers this pretty well on Page 6. I have yet to see anyone refute his generalized analysis.
Nick ==> Do you see that your objection is not based on their findings or how they arrive at their conclusion — but you find the conclusion “objectionable”.
I’m not sure where the “misapprehension” bit is from — is that something you are saying, something you think I said (I did not), or something from some other reader?
I would disagree that Essex et al enjoin us to ignore the temperature record altogether — they object to a misuse or misanalysis of the data — averaging to get a GAST.
Your last paragraph is trivial (scientifically) and an inept parable. If one accepts the versions of GAST being circulated by various groups, the Earth hasn’t even reached what NASA and others consider to be an “Earth-like planet’s” average temperature of 15°C — which means that “overheated” is not an accurate description. 20°C might be overheated… but today’s temperature (if one accepts GAST) is still running a little cool.
“an ordinary arithmetic mean will enhance the common signal in all the measurements and suppress the internal variations which are spatially incoherent”
Really? Using the mean does nothing but smooth out variation. It decreases the variance in the data. How does that enhance the common signal?
“Temperature itself can be inferred directly from several physical laws, such as the ideal gas law, first law of thermodynamics and the Stefan-Boltzmann law,
In other words, temperature has to be inferred from extensive properties. How does that prove that temperature is not an intensive property that can’t be directly averaged? If you have to average some extensive properties in order to calculate an intensive property, which have you averaged, the extensive properties or the intensive properties?
If the rest of the rebuttals are like this then they really aren’t rebuttals!
“One counter example (e.g. a normal distribution) is enough to disprove a general theory.”
How do you get a normal distribution of temperature? Your counter-example is just plain wrong.
The *only* way to get a normal distribution is to have multiple measurements of the same thing. You can take all the measurements of the same thing you want and generate a normal distribution for either extensive or intensive values. E.g. you can take multiple measurements of density of a metal bar but if the bar is homogeneous all you’ll get is the same value multiple times with perhaps a small deviation because of measurement uncertainty. But you can’t create a normal distribution from measuring the density of an iron bar, a silver bar, a gold bar, and a lead bar! You’ll have multiple measurements but they will *not* be a normal distribution. In fact, if your measurement device has sufficient resolution you could measure the density of 100 lead bars and *still* not create a normal distribution!
But you *can’t* take multiple measurements of the intensive property of different things and create a normal distribution. Take a simple thermometer. Every time you take a reading you are taking a measurement of a different thing. How do you create a normal distribution from measuring different things?
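The contrast between repeated measurements of one object and single measurements of different objects can be simulated in a few lines (the densities are rough handbook values; the instrument noise level is invented):

```python
import random
import statistics

random.seed(0)

# Case 1: many measurements of ONE object with small instrument noise.
# Repetition clusters the readings around that object's single true value.
lead_density = 11340.0   # kg/m^3, rough handbook value for lead
repeats = [lead_density + random.gauss(0, 5) for _ in range(1000)]

# Case 2: one measurement each of FOUR different objects.
# There is no single true value for the readings to cluster around.
bar_densities = [7870.0, 10490.0, 19300.0, 11340.0]  # iron, silver, gold, lead
singles = [d + random.gauss(0, 5) for d in bar_densities]

print(round(statistics.mean(repeats)))  # near 11340, the density of lead
print(round(statistics.mean(singles)))  # near 12250, the density of nothing
```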
As Duane has pointed out: “Any statistical variable, including measured temperatures, must represent a sample population where all members of the population being measured are alike or share the same key attributes of concern. If the measurements are not derived from a real population of shared attributes, then any statistical calculation of that population is meaningless.” (bolding mine, tg)
If you can’t add intensive properties of multiple things then exactly how do you create an average since they must be summed in order to create an average!
I cannot seem to get through to some of you the fact that Kip’s theory is general, since it only specifies Intensive versus Extensive variables and the statistical operation of calculating a sample average. Therefore this theory assumes it applies for every and all statistical distributions of Intensive and of Extensive variables. To disprove this general theory in an incontrovertible and mathematically precise way, I only need to specify one distribution for which it is false for some notional Intensive variable, to disprove Kip’s general theory that sample means of Intensive variables have no practical (real-world) value, i.e. interpretation. I chose one distribution, the normal or Gaussian distribution, where it is straightforward to disprove this general theory in a simple and mathematically precise way. That’s how mathematical proofs work. I DO NOT need to show that one particular Intensive variable (i.e. temperature) has a normal distribution. So I falsified the theory for the case of a Gaussian distribution, so it’s up to Kip to mathematically prove it is valid for all non-Gaussian distributions or some defined subset of these distributions. You cannot just propose a general statistical theory without mathematically proving it, including any necessary restrictions on its generality. Good luck with that, Kip, because you are neither a mathematician nor a mathematical statistician nor any sort of bona fide statistician/scientist.
I gave you a simple, neat counter-example to your general theory on statistics of Intensive and Extensive variables, and every mathematician and mathematical statistician knows a valid counter-example is sufficient to disprove such a general theory. However, you gave no defence of your theory in the face of my counter-example but tried to obfuscate. Your silence on this is very telling. What would happen in peer review is that you would be forced to address a review that pointed out such a fatal flaw and have to withdraw the manuscript, go back to the “drawing board” a tad bit humbled. Oh, that’s right, humility is not one of your strong points.
“Please feel free to state your opinions in the comments – I will not be arguing the point – it is just too basic and true to bother arguing about.”
All your “valid counter-example” did was demonstrate that you understand very little about real temperature measurements.
Sheesh, you are thick or being deliberately disingenuous. The issue is about more than just temperature; it’s about so-called Intensive versus Extensive variables and modelling them with statistical methods. Have you got any critique of my counter-example? Do you even understand the issue and how mathematical proofs work?
Intensive attributes simply do not add. If I give you 100 metal bars can you add their densities to come up with a total density? If I give you 100 hot spheres can you add their temperatures to come up with a temperature that describes the total power being radiated from the 100 spheres? If I give you 100 balloons of different sizes can you add the pressures in each to come up with a total pressure?
Your counter-example assumes a normal distribution. How do you get a normal distribution of intensive attributes? You can take multiple measurements of the same object which will tend to a true value for that object. But how do you do that for multiple objects?
If you can’t add the pressures in 100 different sized balloons to get a total pressure then how do you come up with an average pressure? Especially an average that is a “true value” for the pressure inside a balloon.
At its base, an average is an expectation of what the next value might be. The standard deviation describes a range in which that next value might lie. How does averaging intensive properties of different things give me any expectation of what the density of the next object might be?
I am an engineer. Been one for going on 50 years. Math to me is a means to describe the real world. I’m sure that there are lots of things you can do with numbers that have no application to the real world. They are useless to me.
If you actually read and understood my counter-example you might have noted that it did not specifically reference temperature at all. So it could not demonstrate anything about my understanding, or otherwise, of “real temperature measurements”. BTW I have used ambient air temperatures quite a bit in my research Masters, PhD, and other professional work, starting off with thermo-hygrographs in the ’80s and Starlog dataloggers in the ’90s.
Yet you treat these measurements just as if you are taking random samples from a normally distributed population!
Did *ANY* of your research involve propagating measurement uncertainty into your results? Or did you just assume all measurements were 100% accurate?
In case you didn’t find any other information on intensive vs extensive.
“Way to Tell Intensive and Extensive Properties Apart: One easy way to tell whether a physical property is intensive or extensive is to take two identical samples of a substance and put them together. If this doubles the property (e.g., twice the mass, twice as long), it’s an extensive property. If the property is unchanged by altering the sample size, it’s an intensive property.”
http://www.thoughtco.com/intensive-vs-extensive-properties-604133
Intensive and extensive properties – Wikipedia
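The doubling test quoted above is easy to sketch (a toy model; the dictionary keys and sample values are invented for the illustration):

```python
# The "put two identical samples together" test from the quote above,
# as a toy model (all keys and values are invented for the illustration).
sample = {"mass_kg": 2.0, "volume_m3": 0.001, "temp_k": 300.0}

def combine(a, b):
    """Join two identical samples of the same substance."""
    return {
        "mass_kg": a["mass_kg"] + b["mass_kg"],        # doubles   -> extensive
        "volume_m3": a["volume_m3"] + b["volume_m3"],  # doubles   -> extensive
        "temp_k": a["temp_k"],                         # unchanged -> intensive
    }

double = combine(sample, sample)

# Density (mass/volume) is a ratio of two extensive properties and
# comes out unchanged, i.e. intensive:
print(sample["mass_kg"] / sample["volume_m3"])   # density of one sample
print(double["mass_kg"] / double["volume_m3"])   # density of the doubled sample
```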
I made a couple of critical comments about your post but agree with you in some ways at a deeper level about numbers. We live in a reality that has continuous properties. We never know exact masses, nor pressures, nor velocities, nor the circumference of a circle, nor the gravitational constant, etc. We assign numbers that approximate the real exact values. In many cases the number estimates are good enough for the purpose at hand, while in other cases not nearly good enough. If you run numerical models of real physical systems, there are always errors in the models of systems that evolve through time that include numerical estimates of physical properties that vary continuously. It is just the nature of the physical world that you cannot obtain exact results from assigning approximate estimates to real properties. The current problem is that too often people mistake model results for reality. Sometimes that is good enough, though not exact, and other times not nearly good enough.
Here is how it was explained to me. Measurements are not just numbers to manipulate. Measurements carry information. The amount of information is determined by the resolution of the measuring device. It is conveyed by following Significant Digit rules. If you extend the amount of information by mathematical manipulation, i.e., adding, subtracting, averaging to many decimal digits, statistical analysis, etc. you are creating information that was not there when the measurement was made. That is using mathematics in an incorrect way.
You don’t know what you are talking about. Give us some examples of empirical research you have done! Mathematical modelling and statistical modelling underpin all modern empirical science. The data is used to calibrate and validate (ideally with independent data) the models. I had, and still have, a 47-year career getting paid to do just that, and as a statistical consultant clients come to me to do just that, so it’s not just the government jobs I had for 40 years.
Projection.
You don’t know what you don’t know.
Not projection, based on comments which show no understanding of modern statistical modelling methods. I think I have more than a few runs on the board in that area, having published four senior- or sole-author papers in statistics journals and many more in the area of applied statistics in application-specific journals.
Your words:
First, it has nothing to do with the subject of extensive versus intensive properties; as a statistician, you see them all as just numbers.
Second, time-series temperature measurements do not and cannot*** have a normal distribution, so your counter example is a huge red herring.
***Do you know why this is true?
“First, has nothing to do with the subject of extensive versus intensive properties, as a statistician, you see them all as just numbers.”
My counter-example explicitly includes the statistic (sum of sample values) that discriminates between Intensive and Extensive variables (i.e. “irrespective of the interpretability of the sum of sample values”). Just to spell it out more for you: Intensive = NOT interpretable, Extensive = interpretable.
“Second, time-series temperature measurements do not and cannot*** have a normal distribution, so your counter example is a huge red herring.”
You don’t understand mathematical proofs, so there is no point discussing my counter-example that disproves Kip’s general theory with you until you do, whenever that might be!
“You don’t understand mathematical proofs, so there is no point discussing my counter-example that disproves Kip’s general theory with you until you do, whenever that might be!”
The argumentative fallacy of Argument by Dismissal.
It’s apparent you didn’t read Essex at all. Here’s one of his refutations of the ability to add temperatures and develop an average:
Essex: “Let us propose that an average over temperatures from both systems is required to be a temperature. This proposition introduces a contradiction. The state, and the temperature, of system a, say, is completely determined by the variables X_i^a and does not change in response to a change only in X_i^b. But any average is a function of both temperatures. Thus, while each temperature is a function of the extensive variables in its own system only, the average must depend explicitly on both sets of extensive variables, X_i^a and X_i^b. That is, it must depend on both states and it can change as a result of a change in either one. Then the average cannot be a temperature for system a, because system a is mathematically and thermodynamically independent of system b by assumption. Similarly, it cannot be a temperature for system b. Consequently, the average is not a temperature anywhere in the system, which contradicts the proposition that the average is a temperature.”
This is true for whatever intensive property you wish to use. It is pretty general in its approach.
Once again, where is the normal distribution that you refer to in your first sentence?
The key question you keep dodging.
+100!
“The data is used to calibrate and validate (ideally with independent data) the models. I had and still have a 47-year career getting paid to do just that and as a statistical consultant clients come to me to do just that so its not just the government jobs I had for 40 years.”
In my first electronics lab at college there were eight students. We were all assigned to design and build a single transistor amplifier and take measurements to describe the amplifier.
We all thought we would save time and each of us took one measurement at each of the various points in the circuit, averaged them, and we all put down the same answers in the lab book for bias voltage, gain, etc.
We all failed!
Multiple measurements of different things using different devices do not usually generate a distribution that is normal, and therefore the average and standard deviation simply don’t apply.
We each used different components with 10% tolerance and the transistors had a wide range of high frequency gain. Add in the uncertainty associated with the measuring devices at each station and all we did was generate a distribution with systematic biases which did not cluster around a true value because of the tolerances in the components used! The true value was different for each amplifier.
The “model” of that transistor amplifier was pretty simple and easy to calculate. But it didn’t match reality for any of the amps. Primarily because we hadn’t yet been taught how to handle uncertainty and to propagate it into the results.
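The amplifier story above can be simulated (all component tolerances, gains, and noise levels here are invented for the illustration):

```python
import random
import statistics

random.seed(42)

# Eight student amplifiers built from 10%-tolerance parts: each amplifier
# has its OWN true gain, so one reading per station does not sample a
# single shared population.
nominal_gain = 100.0
true_gains = [nominal_gain * random.uniform(0.9, 1.1) for _ in range(8)]

# One measurement per station, each meter adding its own small noise:
readings = [g + random.gauss(0, 1.0) for g in true_gains]
class_average = statistics.mean(readings)

# The shared class average is wrong for every individual amplifier:
errors = [abs(class_average - g) for g in true_gains]
print(round(class_average, 1), round(max(errors), 1))
```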
First, I suspect your models do not propagate uncertainty, the climate models certainly do not. Second, I suspect your models only focus on a similar object, not random, independent objects with different attributes conjoined into some kind of a distribution.
Science in the normal (vs quantum) world seldom uses statistical modeling unless it can provide determinant answers that can be verified. A real scientific hypothesis uses a mathematical foundation that predicts determinant answers that can be replicated and verified.
I am sure you have lots of experience in statistics and modeling certain things. However, using your bona fides will do nothing for most here. You need to provide, in this case, physical facts that either support or counter the assertions here. Many of us are trained engineers with varied educations, working in differing fields.
Climate science has spent 50+ years trying to obtain a trend that will accurately predict a Global Average Temperature. They have utterly failed after spending billions (and probably trillions) of dollars. Besides that, they are dealing with a physical quantity that shouldn’t even be averaged. How sad.
Maybe with your help they might do better, although I doubt it. Climate science is unwilling or unable to provide the statistical descriptors of the distributions they are using. Simple things like standard deviation, variance, kurtosis, and skewness that any statistical software package will spit out at the push of a button. Ask yourself why. It might be that an anomaly of 0.002 ±0.7 would upset the apple cart of global warming.
So according to you we should just collect data and not do anything at all with it because we might add spurious information to it! WRONG! Mathematical models and statistical models assist in extracting useful information from the data in the form of mathematical equations, statistical hypotheses, parameter estimates and their uncertainty and subsequent use of the models to answer important and interesting questions taking into account model and parameter uncertainty. The models are always up for review, revision, replacement as the scientific process of collecting more (and better fit-for-purpose) data proceeds, and all published models are subject to informed criticism (letters to editors, commentary articles etc). Some scientific fields have more fundamental laws, well-validated and understood processes to work with (e.g. physics, chemistry, physiology..) some less so (e.g. wildlife biology, psychology)…
“So according to you we should just collect data and not do anything at all with it because we might add spurious information to it! WRONG!”
You HAVE to understand what you are collecting. You can’t take data from multiple, independent things with different attributes and assume it creates a normal distribution that can be described by mean and standard deviation.
“ Mathematical models and statistical models assist in extracting useful information from the data in the form of mathematical equations, statistical hypotheses, parameter estimates and their uncertainty and subsequent use of the models to answer important and interesting questions taking into account model and parameter uncertainty.”
When you are creating a distribution from different things with different attributes, each with their own uncertainty, those uncertainties ADD. Sooner or later the uncertainties will overwhelm what you are trying to find. No amount of equations, hypotheses, or parameter estimates (which have intrinsic uncertainty) can prevent this from happening.
This is something that almost no statistician I have met or corresponded with understands. Most have *never* been trained in measurement uncertainties and how to propagate them. To a statistician the standard deviation of sample means is the uncertainty of the mean – never considering that those sample means have propagated uncertainty from the individual elements in the sample that must also be propagated forward!
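For reference, the standard GUM-style root-sum-square rule can be sketched; it gives a growing uncertainty for a sum, while dividing by N for a mean assumes the readings are independent samples of one population, which is exactly the assumption in dispute here (the per-reading uncertainty is an invented value):

```python
import math

# GUM-style root-sum-square propagation through sums and means
# (the per-reading uncertainty is an invented value).
u_single = 0.5                 # standard uncertainty of one reading, deg C
for n in (4, 100, 10000):
    # Uncertainty of the SUM of n independent readings grows with n:
    u_sum = math.sqrt(n) * u_single
    # Dividing by n for the mean shrinks it again, but ONLY under the
    # assumption of independent readings of one and the same population:
    u_mean = u_sum / n
    print(n, u_sum, u_mean)
```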
“The models are always up for review, revision, replacement as the scientific process of collecting more (and better fit-for-purpose) data proceeds,”
Any model that depends on iterative steps where inputs to the model have uncertainty will see that uncertainty grow with each iteration. That’s the problem with climate models. The parameterizations used all have uncertainty that grows with each iteration, and quickly makes the models useless as the uncertainty overwhelms the ability to determine the next value. No amount of temperature data collection can correct this: first, because the climate scientists assume all temperature data is 100% accurate, so no uncertainty is propagated forward and you can’t really validate the model against past data; and second, because the input factors to the model are inherently uncertain.
When did I say those statistical models I mentioned had to assume a normal distribution for the response? Most of my published work in the statistical literature is on generalized linear models and generalized linear mixed models, which use maximum likelihood and Fisher Scoring for estimation, where the distribution of the response can be Normal, Poisson, Gamma, binomial, or inverse Gaussian, combined with canonical and non-canonical link functions. Sorry, but you are not the first person to note the issue of measurement error and error propagation etc. You just don’t know the literature, whereas I spent my 47-year career poring over the literature, applying it where relevant and adding my small contribution to it.
“When did I say those statistical models I mentioned had to assume a normal distribution for the response?”
If they are not normal then the mean, i.e. the average of all the values, is not an acceptable statistical descriptor.
You totally left out multi-modal distributions, which is what you get from combining northern hemisphere temperatures with southern hemisphere temperatures. You left out the treatment of the different variances between winter and summer temperatures.
Besides, maximum likelihood and Fisher Scoring typically use the assumption of identically distributed data and equal variance for all random variables. Neither of these are met for a distribution formed from global temperature measurements. In fact, neither restriction would be met for most intensive values derived from non-similar objects.
“If they are not normal then the mean, i.e. the average of all the values is not an acceptable statistical descriptor.” By “all values” do you mean all values in the sample or in the population? Any univariate random variable will have an expected value, which may conditionally depend on a set of predictor variables (i.e. covariates). You can generalise this to a multivariate set of random variables. An estimate of the expected value based on a sample or set of samples is possible if that expected value exists.
“maximum likelihood and Fisher Scoring typically use the assumption of identically distributed data and equal variance for all random variables.” Again, you are out of your depth here. Generalized linear models (GLMs), along with a link function, also involve a variance function, which will depend on the expected value (so not constant) in a way that depends on the response distribution, and may also involve an unknown dispersion parameter. So maximum likelihood and Fisher Scoring do not require equal variances for a given set of expected values that depend on values of the linear predictor. Note that maximum likelihood and Fisher Scoring are more general than just their application in GLMs and can include other non-normal distributions and a systematic component that is nonlinear in the parameters. You also made some erroneous comments about MCMC estimation, but I haven't got the time or energy to correct all your misconceptions. I am not an engineer, so I know well enough not to espouse engineering principles and embarrass myself in the process.
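The non-constant variance function described here is easy to demonstrate for the Poisson case, where V(μ) = μ. A stdlib-only sketch (using Knuth's multiplication method to simulate the draws) shows the sample variance tracking the sample mean:

```python
import math, random, statistics

# For a Poisson response the variance equals the mean, so the GLM
# variance function V(mu) = mu is not constant. Simulated draws
# (Knuth's algorithm) show the sample variance tracking the mean.

def poisson(lam):
    """Draw one Poisson variate via Knuth's multiplication method."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

random.seed(0)
for mu in (2, 8, 32):
    draws = [poisson(mu) for _ in range(20000)]
    print(f"mu={mu:2d}: sample mean {statistics.fmean(draws):6.2f}, "
          f"sample variance {statistics.pvariance(draws):6.2f}")
```

A GLM with a log link fitted to such data would model the mean on the linear-predictor scale while the variance changes with it, which is exactly the "not constant" behaviour the comment describes.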
Another important point. When statisticians talk about a distribution for a random variable, what is implied is that it is a family of distributions that depend on the values of distribution parameters, where the probability density function has the same general form, e.g. normal, F-distribution, gamma, Poisson, negative binomial, binomial, hypergeometric, etc. Typically the expected value is a function of those distribution parameters, and it can be modelled conditionally as a linear (where this makes sense) or nonlinear function of covariates. The variance can similarly be jointly modelled, e.g. DGLMs (double GLMs). Non-statisticians are often confused by not being aware of, or not understanding, the above.
Any expected value must assume these distributions don’t change.
And you still run away from my point about temperature measurement distributions.
Technically all parameters defining the distribution must be constant to keep the exact same distribution. The point I make is that these parameters and thus the expected value and variance can change conditionally on the covariates but the resultant set of distributions all belong to the same family. Seriously this is Linear Modelling 1.01, first year undergrad stats course.
Most of my experience is in engineering and management. My training is in electrical engineering, in which analysis of cyclical phenomena is certainly emphasized. However, when I went to school, training in fossil-fuel power plants, including the thermodynamics required to design and operate plants efficiently at their best output, was required. Steam tables became a "take-with-you" necessity in the classes.
Temperature is a thermodynamic quantity. In the atmosphere, the variance in temperature is a multivariate problem. KH's essay is trying to show that many, many dollars have been spent trying to "trend" a metric or index of temperature called the Global Average Temperature, without anyone even questioning how you average an intensive quantity. It makes you wonder about the thermodynamic training supplied to climate scientists.
I don’t want to denigrate your knowledge of the math required for statistics and performing Fisher Scoring. However, this is still “trending” a single variable in time at heart and does nothing to help develop a functional relationship of cyclical behavior involving multiple phenomena.
This isn't exactly a statistics discussion anyway. KH wanted to elucidate what temperatures are and whether they can be averaged, averaged again, then averaged again, and finally one more time (daily, monthly, annual, global) to arrive at a number that has any meaning.
My thermodynamic training was that you can't average temperatures and arrive at anything resembling something you can count on. Temperature is a result of several things: mass density, specific heat, and the heat energy supplied. In other words, you need to deal with enthalpy to compare two different things.
To use temperature as a proxy means you must assume that water vapor is constant and well mixed globally which it is not.
You must assume that insolation is the same everywhere globally, yet that ignores several things like clouds and topography. It ignores that insolation varies with cos(θ) in latitude, and assumes that the earth is flat and everywhere receives an average insolation. You only need to look at the T^4 term in radiation equations to know an average is going to give you wrong answers.
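The T^4 point is Jensen's inequality in action: because radiative emission goes as T^4 (Stefan-Boltzmann), the flux computed from an average temperature is not the average of the fluxes. A sketch with two illustrative surface patches at 250 K and 300 K:

```python
# Radiative flux goes as T^4 (Stefan-Boltzmann), so the flux computed
# from a mean temperature is not the mean of the fluxes. Two
# illustrative surface patches at 250 K and 300 K:

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

t_cold, t_hot = 250.0, 300.0
flux_of_mean = SIGMA * ((t_cold + t_hot) / 2) ** 4   # flux at 275 K
mean_of_flux = (SIGMA * t_cold ** 4 + SIGMA * t_hot ** 4) / 2

print(f"flux at the mean temperature: {flux_of_mean:6.1f} W/m^2")
print(f"mean of the two fluxes:       {mean_of_flux:6.1f} W/m^2")
```

The two numbers differ by roughly 16 W/m^2 for this pair of patches, so averaging temperature first and then computing flux systematically understates the actual emitted energy.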
Things like altitudes, proximity to airports, proximity to large bodies of water, all kinds of geography can affect the enthalpy at different locations.
The current GAT calculations mix each December with the January and February of the same calendar year, even though those months belong to the previous winter, in order to maintain a calendar year. Climate science ignores seasons totally. You'd think they had never heard of equinoxes or solstices. Why wouldn't you make a temperature year run from December of one year through February of the next?
Plus they also stir in the southern hemisphere with its opposite seasons!
In the first line of your alleged counter example.
Do you want to play dueling resumes?
That statement was made in my general point on statistical modelling and was unrelated to my counter example to Kip’s general theory. Nice try!
BTW just got this from Academia
Dear Steven,
Congratulations on your 1015th Mention!
3,600 downloads of my PhD thesis.
3 sole author, 1 senior author statistics journal papers
23 senior author papers in other journals
++ second or lower order author journal papers but only statistician
1,278 citations according to ResearchGate
Please list all of the fundamental relationships/physical laws that have been deduced using statistical modeling of global air temperature measurements:
1
2
3
…
For instance the sufficient statistics contain all the relevant information in the sample data about the population parameters of interest. They do not add information to the sample data but summarise it without loss of information on those population parameters.
What are the sufficient statistics for a multimodal distribution? For a non-normal, skewed distribution?
Do you believe that mean and standard deviation are always sufficient statistics in all cases?
What is the mean, median, and standard deviation of a sine wave?
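For a unit-amplitude sine wave sampled over one full period, that question has a clean answer: the mean and median are zero, and the standard deviation is the RMS value, 1/√2 ≈ 0.707. A quick numerical check:

```python
import math, statistics

# One full period of a unit-amplitude sine wave, sampled uniformly.
# Analytically: mean = 0, median = 0, population st. dev. = 1/sqrt(2)
# (the familiar RMS value of a sinusoid, ~0.7071).

n = 10_000
samples = [math.sin(2 * math.pi * k / n) for k in range(n)]

print(f"mean:   {statistics.fmean(samples):+.4f}")
print(f"median: {statistics.median(samples):+.4f}")
print(f"stdev:  {statistics.pstdev(samples):.4f}")
```

The rhetorical force of the question survives the arithmetic: a mean of zero says nothing about a signal that spends its whole life swinging between -1 and +1.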
Go look at a GAT graph. Can you tell me the standard deviation, variance, kurtosis, or skewness of the distribution used to calculate the mean?
If you are saying that all the stations make up the "samples" of the population, then according to the CLT the distribution of the sample means should be normal, if the samples are taken properly and are of the correct size.
Show us where anyone calculating a GAT has shown what the distribution of their sample means is. What is the SEM of sample means distribution? Can you find it somewhere?
Why are they dividing the SEM by the √N where N is not the sample size but the number of samples?
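The CLT claim invoked above can be checked numerically. Reusing the made-up two-hemisphere mixture as a deliberately non-normal parent, the spread of sample means shrinks like σ/√n, and that spread is exactly what the SEM is supposed to estimate:

```python
import random, statistics

# CLT sketch: draw repeated samples of size n from a strongly bimodal
# parent (an equal mixture of two normals) and look at the sample
# means. Their spread shrinks like sigma/sqrt(n); the SEM is the
# standard deviation of this distribution of sample means.

random.seed(2)

def draw():
    """One draw from the bimodal parent population."""
    return random.gauss(-5, 3) if random.random() < 0.5 else random.gauss(25, 3)

population = [draw() for _ in range(100_000)]
sigma = statistics.pstdev(population)

for n in (10, 100):
    means = [statistics.fmean(draw() for _ in range(n)) for _ in range(2000)]
    print(f"n={n:3d}: spread of sample means {statistics.pstdev(means):5.2f}  "
          f"vs sigma/sqrt(n) {sigma / n**0.5:5.2f}")
```

Note what this does and does not show: the sample means become approximately normal and their spread matches σ/√n, but nothing in the construction makes the mean a physically meaningful temperature, which is the thread's larger complaint.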
And lastly, information is added in many databases through “infilling”.
What ==> "For many cases the number estimates are good enough for the purpose at hand." Yes, that is often so, but it will not do for science that affects national energy policies, etc.
A guess that the Earth climate is warming is OK with me — it is warming (thank goodness! The Little Ice Age was not pleasant in the US Northeast or Europe).
But GAST/GAT is not a scientific measure of that…
I would add that it is likely that the LIA was worldwide.
But according to the Essex paper there is no way of knowing if the LIA or the MWP was warmer or colder than today.
From proxies or thermometers, no. As for the LIA, there are historical records that can tell us that the period was colder than today. Probably much colder. Now can we get 0.002 degrees of uncertainty, gosh no. Somewhat the same thing goes for the MWP. There are historical records of what grew where. How civilization flourished. Again, history would lead one to believe that it was warmer. There are glaciers retreating and uncovering flora and fauna that tells us it was at least as warm as now and probably warmer.
If you cannot average temperature to get a global average, how on earth do historical records allow you to know what the global average was?
According to Essex it’s impossible to tell if current times are warmer or colder than any period in the past because the concept of a global temperature doesn’t even exist.
Nor can you tell by the global average temperature, because there are infinitely many ways of averaging the global temperature, which, if you try hard enough, can cause the change to reverse direction.
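The "infinitely many averages" claim traces to the power means (r-means) discussed by Essex, McKitrick and Andresen. A sketch with two hypothetical stations (temperatures in kelvin, numbers invented for illustration) shows two different mean rules assigning opposite signs to the same change:

```python
# Power means (r-means): r = 1 is the ordinary arithmetic mean;
# large positive r weights the warmest values. Hypothetical
# two-station data in kelvin (invented for illustration): station A
# warms 2 K while station B cools 1 K.

def r_mean(values, r):
    """Power mean of order r."""
    return (sum(v ** r for v in values) / len(values)) ** (1 / r)

year1 = [250.0, 300.0]   # station A, station B
year2 = [252.0, 299.0]

for r in (1, 25):
    change = r_mean(year2, r) - r_mean(year1, r)
    print(f"r={r:2d} mean changes by {change:+.2f} K")
```

The arithmetic mean (r = 1) says the pair warmed, while the r = 25 mean, dominated by the cooling warm station, says it cooled. Neither rule is privileged by physics, which is the paper's non-uniqueness argument in two lines of data.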
“There are glaciers retreating”
That doesn’t prove the cause is global temperature changes according to Essex. That would be tantamount to temperature at a distance.
“Somewhat the same thing goes for the MWP. There are historical records of what grew where. How civilization flourished. Again, history would lead one to believe that it was warmer.”
Essex et al, argue that it’s meaningless to argue the MWP was warmer than present because there the physical quantity of global warmth does not exist.
You don't have a clue what is needed to measure "heat," do you? Temperature alone will not do it. A temperature of 100°F at 15% humidity is not the same as 100°F at 90% humidity. Their enthalpies are different, i.e., the heat that the atmosphere contains is different. An average of 100 and 100 equals 100. But you will never know that the heat content of the atmosphere was drastically different. That is why a global average temperature is "ill-posed".
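The enthalpy gap can be estimated with standard psychrometric approximations: moist-air specific enthalpy h ≈ cp·T + w·(L + cpv·T), with the mixing ratio w obtained from relative humidity via Tetens' formula. This is a rough sketch with round textbook constants and an assumed sea-level pressure, not a precise calculation:

```python
import math

# Rough moist-air enthalpy sketch, kJ per kg of dry air, using textbook
# approximations: h ~ cp*T + w*(L + cpv*T), with mixing ratio w from
# relative humidity via Tetens' formula. Sea-level pressure assumed;
# constants are round textbook values, not authoritative.

def enthalpy(t_c, rh_percent, pressure_hpa=1013.25):
    e_sat = 6.112 * math.exp(17.67 * t_c / (t_c + 243.5))  # sat. vapour pressure, hPa
    e = rh_percent / 100.0 * e_sat                         # actual vapour pressure
    w = 0.622 * e / (pressure_hpa - e)                     # kg water per kg dry air
    return 1.006 * t_c + w * (2501.0 + 1.86 * t_c)         # kJ/kg dry air

t_c = (100.0 - 32.0) * 5.0 / 9.0   # 100 F in Celsius
print(f"h at 100 F, 15% RH: {enthalpy(t_c, 15):6.1f} kJ/kg")
print(f"h at 100 F, 90% RH: {enthalpy(t_c, 90):6.1f} kJ/kg")
```

Under these assumptions the humid parcel carries well over twice the enthalpy of the dry one, even though a thermometer reads 100°F in both.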
Have you ever wondered why you see papers and news articles claiming that everywhere on the globe is warming faster than everywhere else? Have you ever wondered why changes in species everywhere on the globe are blamed on global warming? Have you ever wondered why there are more violent storms everywhere on the globe because of global warming?
Why would enthalpy be the same globally and dependent only on a global temperature? Is the enthalpy in Australia the same in July as it is in the Southern U.S.?
Warmists somehow and for some reason believe that everywhere on the globe is experiencing unprecedented warming. You need to ask yourself if the sun’s insolation only affects temperature and doesn’t have any effects whatsoever on clouds or humidity which actually determine enthalpy.
What has this got to do with the LIA or MWP?
It doesn’t allow you to determine a temperature directly. It allows you to set boundaries.
During the LIA people skated and held ice fairs on the Thames River. Do you see that today? What judgement can you make from that?
Trees and human paraphernalia are being uncovered by melting glaciers. Does that probably mean that temperatures were at least as warm for some period of time in the past?
“It doesn’t allow you to determine a temperature directly. It allows you to set boundaries.”
According to the paper, the only way you can know if one planet is colder than another is if there is no overlap in the range of temperatures. Do you think that was true of Earth during the LIA?
“During the LIA people skated and had ice fairs on the Thames River. Do you see that today? What judgement can you make from that?”
Very little. The Thames was very different then. Even when temperatures have been colder in the 20th century, the Thames hasn’t frozen over.
And even if it did prove colder temperatures, that would only show one small part of London was colder, and you can’t say London was colder because there’s no such thing as an average London temperature, let alone a global one.
“Does that probably mean that temperatures were at least as warm for some period of time in the past?”
So now you’ll accept a probabilistic argument?
You won’t accept that temperatures have risen over the last 150 years, because they may have cooled if you use a geometric average with temperatures raised to the 25th power, but a few trees are enough to assure you that it was probably colder all over the earth during the entire LIA.
And that’s ignoring the arguments that a global temperature simply doesn’t exist, you know from that proof on page 6. So how can the statement it was colder globally have any physical meaning?