Guest Essay by Kip Hansen – 21 August 2022
This series of essays concerns Numbers. Not the government-controlled lottery game type of numbers, or the older version run by crime organizations in every U.S. city, but just this: “A number is a mathematical object used to count, measure, and label. The original examples are the natural numbers 1, 2, 3, 4, and so forth.” Much of science (in nearly all disciplines) concerns itself with measurements of all types – measurements most often expressed as numerical quantities – as numbers.
Part 1 of this series made the point that “Numbers are just Numbers”. Lots of interesting things can be done with numbers and lots of even more interesting things can be done with sets of numbers – data sets and time series – through the magic of statistical analysis and statistical maths programs. However, what can be done with the numbers is not the same as what can be done with the “things” that the numbers enumerate. Such things as kilograms, hertz–frequency as cycles per second, lengths, temperatures in various degrees, color as frequency of light emitted or reflected, density, hardness – all the measurable properties of physical matter including those that are qualities. When the numbers of a thing are treated as if they are (or are the same as) the thing(s) enumerated, troubles ensue – reification has taken place, someone has come to “…think of or treat something abstract as a physical thing.”
Part 2 of this series dealt with the reasons why “One cannot average temperatures”. This fact is a bit harder for most to understand, as it is a common everyday practice to average temperatures and speak of “the average temperature” of some day, city, region, or even the whole globe. Thus, when shown that the practice is scientifically improper and the results of such are nonsensical (except in the most simplistic, daily pragmatic senses), confusion and objection result.
This third and final part of the series will expand on the reasons – the underlying why — that temperatures cannot be averaged and why when it is attempted, the results do not represent what they claim to represent.
In this essay, I will limit the “averaging of temperatures” to its present-day use in Climate Science in which average surface temperatures, measured over time in disparate locations, are used as evidence that the Earth’s climate, as a whole, is retaining more energy and thus “becoming hotter”. As expressed at Climate.gov:
“By adding more carbon dioxide to the atmosphere, people are supercharging the natural greenhouse effect, causing global temperature to rise. According to observations by the NOAA Global Monitoring Lab, in 2021 carbon dioxide alone was responsible for about two-thirds of the total heating influence of all human-produced greenhouse gases.”
Or this from the NY Times section “The Science of Climate Change Explained: Facts, Evidence and Proof — Definitive answers to the big questions”:
“We know this is true thanks to an overwhelming body of evidence that begins with temperature measurements taken at weather stations and on ships starting in the mid-1800s. Later, scientists began tracking surface temperatures with satellites and looking for clues about climate change in geologic records. Together, these data all tell the same story: Earth is getting hotter.”
There are a lot of varying opinions about whether that statement is strictly factual, but my point in quoting it is only to show that “global temperature” is presented as a measure of “global heating”. But as I showed in Part 2, temperature is not a measure of heat (or heat content). So, even if global temperature (if there is such a thing) is rising, that metric [“a system for measuring something“] will not tell us whether the Earth’s climate is gaining heat or not.
[ Note: As I have said before, it is my understanding that the Earth’s climate has been warming since the mid- or late 1700s – as the Earth comes up out of the Little Ice Age. ]
How is temperature not a measure of heat?
The following definitions and formulas are taken from an engineering site, BrightHubEngineering.
“Total Heat Content of the Air — The total heat content of the air is the sum of the sensible heat of the air and the latent heat of the air. Thus,
Total heat of the air = SH + LH
The sensible heat (SH) depends on dry bulb temperature of air while latent heat (LH) depends on dew point temperature of the air, hence the total quantity of heat in the air depends on the dry bulb and dew point temperature of the air. Further, for any combination of the dry bulb and dew point temperature, there can be only one wet bulb temperature, hence the total quantity of heat in the air also depends on the wet bulb temperature.”
The current versions of global mean surface air temperature (and there are many) are often reported in “anomalies” (differences) of some current-period average temperature (daily, monthly, annual) over some previous 30-year base period average temperature (there is no standard – Earth Observatory – the previous link – uses 1951-1980 – other reported anomalies use 1981-2010 and 1991-2020). These anomalies are differences of averages from some other average, yet the numerical results are reported in degrees (usually °F or °C) as if the number were an actual temperature. In no case — even if the number actually represented a temperature — would the reported numerical figure represent any measure of heat, either greater or lesser. As in the paragraph above, to find heat from temperature one needs more information.
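For concreteness, the arithmetic behind such an anomaly is trivial; here is a minimal sketch, with invented monthly values standing in for a real station record:

```python
# Sketch of how a temperature "anomaly" is computed: a current-period
# average minus a 30-year base-period average. All values are invented
# for illustration, not real station data.

def anomaly(current_mean, base_period_means):
    """Return the current-period mean minus the base-period mean."""
    baseline = sum(base_period_means) / len(base_period_means)
    return current_mean - baseline

# Hypothetical July means (deg C) for a 30-year base period:
base = [21.0 + 0.1 * (i % 5) for i in range(30)]  # invented; mean = 21.2

print(round(anomaly(22.4, base), 2))  # difference of averages, not a temperature
```

Note that the result is a difference of averages; the essay's point is that reporting it "in degrees" does not make it a temperature, much less a measure of heat.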
[ Again, what follows are the formulas for determining the heat content of any quantity of air — think, maybe, the cubic meter of air surrounding a MMTS or Stevenson Screen at a weather station. It is not strictly necessary to understand these formulas to understand the point of this essay – readers can glance through them if not particularly interested in the gory details. ]
We need first to determine Sensible Heat (most simply, “the heat that can be felt”), which is calculated as follows:
SH = m*0.133*DBT
Where: m is the mass of the dry air, 0.133 is the specific heat of air in kcal/(kg·°C), and DBT is the dry bulb temperature of the air.
We also need to determine the Latent Heat, which is calculated as follows:
LH = m*w*hw
Where: m is the mass of dry air, w is the specific humidity of dry air, and hw is the specific enthalpy of water vapor taken from the steam tables as the enthalpy of water vapor at dew point temperature.
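The two formulas can be put into a few lines of code. This is only a sketch: the mass, specific humidity, and hw values in the example are invented placeholders (hw would normally be read from steam tables at the dew point temperature).

```python
# Minimal sketch of the quoted formulas, in the source's units
# (kcal, kg, deg C). Example inputs are invented placeholders.

def sensible_heat(m_kg, dbt_c, cp=0.133):
    """SH = m * 0.133 * DBT  (kcal)"""
    return m_kg * cp * dbt_c

def latent_heat(m_kg, w, hw):
    """LH = m * w * hw; w = specific humidity (kg water vapor per kg dry
    air), hw = enthalpy of water vapor at dew point (kcal/kg)."""
    return m_kg * w * hw

def total_heat(m_kg, dbt_c, w, hw):
    """Total heat of the air = SH + LH."""
    return sensible_heat(m_kg, dbt_c) + latent_heat(m_kg, w, hw)

# Example: ~1.2 kg of air (about one cubic meter at sea level) at 25 C,
# w = 0.010, hw = 600 kcal/kg (placeholder near steam-table values)
print(round(total_heat(1.2, 25.0, 0.010, 600.0), 2))
```

Notice that two parcels at the same dry bulb temperature can carry quite different total heat once the latent term is included, which is the essay's point.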
When we look at the temperature record of a weather station, we don’t always see the metrics we need to find out how much heat is in the air surrounding the Stevenson Screen or the MMTS weather sensor.
To calculate the Total Heat Content of Air (a specific volume of air) we need the following:
1. The mass of the air under question. The mass of the air requires “volume” and “air pressure” — the mass of air in one cubic meter of air will increase with an increase in air pressure.
2. The Relative Humidity – and here we are getting a little into the weeds as humidity is not simple. But, we are saved by modern technology — as “there’s a web site for that.” In order to sort out these metrics, we can use the handy calculator to Calculate Dewpoint, Wet-bulb Temperature from Relative Humidity.
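Both inputs can be sketched in code. The constants below are standard; the pressure and humidity readings in the example are invented, and the Magnus formula used here is one common approximation for dew point, not necessarily the one the linked calculator uses.

```python
import math

# Sketch of the two inputs listed above: air mass from pressure and
# temperature via the ideal gas law, and dew point from relative
# humidity via the Magnus approximation. Example readings are invented.

R_DRY = 287.05  # specific gas constant of dry air, J/(kg*K)

def air_mass(pressure_pa, temp_c, volume_m3=1.0):
    """Mass of (dry) air in a volume, from p*V = m*R*T."""
    return pressure_pa * volume_m3 / (R_DRY * (temp_c + 273.15))

def dew_point(temp_c, rh_percent):
    """Magnus approximation for dew point (deg C)."""
    a, b = 17.625, 243.04
    gamma = math.log(rh_percent / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

m = air_mass(101325.0, 20.0)   # ~1.2 kg in one cubic meter at sea level
dp = dew_point(20.0, 50.0)     # ~9.3 deg C
print(round(m, 3), round(dp, 1))
```

As the list above says, the mass changes with air pressure, so a station that does not report pressure cannot supply this input.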
I hope that readers aren’t expecting me to calculate the heat content of some air at some weather station at some particular time. I just want you to be aware of the fact that it can be done, but that it isn’t being done – and because it isn’t being done, we don’t have a reliable metric for the heat content of the air at any particular place and time, and thus cannot have a reliable measure of regional or global heat either.
Let’s try to see why it isn’t calculated and used even though the calculator on your smartphone is powerful enough to do the math. Here are the meteorological observations from a CO-OPS weather station, chosen because it reports Temperature, Barometric Pressure (air pressure) and Relative Humidity (not all stations do so or have the information publicly available). Note that this particular weather station is right on the waterfront – literally just meters from the river’s edge.
[Readers can just quickly scan the graphs and explanations – to the line of tildes (~~~)]
This weather station also reports wind speed and direction (hard to see wind direction in this image, see link above):
The wind speed is in meters per second. Our one cubic meter of air surrounding the MMTS sensor is usually not the same from one six-second reading to the next, much less for the six-minute averages.
Just to see the relationship between the three important metrics, I have overlaid them:
Temperature (blue) and Relative Humidity (amber) look to be opposing one another, while Barometric Pressure (green) is more-or-less independent. However, these relationships are tightly linked as this one-day graph shows:
That circled-in-red shift in barometric pressure is a front passing through around midnight causing a radical drop in temperature and a similar radical rise in relative humidity.
~~~~~~~~~~~~~~~~~~~~~
Heat is an extensive property of matter – it is an amount of energy – and thus can be added, divided, and averaged. This is in opposition to temperature, which is a qualitative intensive property. Temperature cannot be added to temperature, thus cannot be averaged (see Part 2).
Much of climate science is about energy retention in the climate system — which may be taking place — but of one thing we can be sure: averaged temperature records are not evidence of such.
Evidence of increasing heat content of the Earth climate requires scientific measurements of heat over the time period of climate – at least 30 years. There are a lot of proxies which the IPCC and others believe are usable in that regard, including various forms of temperature averages, and even combined averages of the temperatures of different types of objects, such as sea surface skin temperature calculated from satellite observations and kriged surface air temperature anomalies from averaged thermometer readings. None of these, of course, are valid in terms of the physics of thermodynamics (again, see Part 2).
[ Proxy: “An entity or variable used to model or generate data assumed to resemble the data associated with another entity or variable that is typically more difficult to research.” [ source ] ]
Some of these proxies for the heat (increasing or decreasing) in the Earth’s climate are known to be far from strictly scientific. Sea Surface Skin Temperature, from satellite readings, measures the temperature of the top few millimeters of the sea. It is not the temperature of some volume of sea water or the water below the surface, which changes temperature across depths. The actual temperatures of the sea are extremely complicated and some cannot even be measured.
Obviously, averaging sea surface skin temperatures with 2-meter surface air temperatures doesn’t produce a measure of heat in the Earth climate system either.
Bottom Lines:
1. To support a claim that the Earth’s Climate System is “getting hotter” one has to have a long-term time series of measurements of heat in the climate system.
2. Current Global Mean Temperature data sets do not measure heat and thus cannot supply evidence for #1.
3. The lack of such a time-series doesn’t mean that the Earth’s climate isn’t gaining energy (heat) – it simply means we don’t have any reliable measure of it.
4. Climate Science may have some evidence of long-term energy gain or what is commonly labelled “Earth’s Energy Budget” — energy in/energy out — but it doesn’t seem to be dominant in the ongoing climate controversy. The latest paper shows that we still cannot directly measure instantaneous radiative forcing. “This fundamental metric has not been directly observed globally and previous estimates have come from models. In part, this is because current space-based instruments cannot distinguish the instantaneous radiative forcing from the climate’s radiative response.” It is possible that future satellite missions will be able to measure directly and accurately Earth’s incoming and outgoing energy.
# # # # #
Author’s Comment:
This series has built upon the basics of quantification – counting the numbers of things. Huge and serious scientific errors come about when the things counted are not really the thing one thinks one is counting. One of these errors is the odd un-physical assertion that temperature is a proxy for measured heat.
As for the insistence that the Earth is getting “hotter” — the global average temperature (such as it is claimed) currently runs just under 15°C — or about 58.8°F. Coolish by my standards, certainly not hot.
In this specific case, I have presented the concept that temperatures, temperature measurements in whatever degrees, are intensive properties of matter and not subject to being added, multiplied or subsequently divided, which precludes creating averages of temperatures. One can surely find a number by adding the temperature of Los Angeles at noon today to the temperature of Chicago at noon yesterday and dividing by 2, but the result will not be a temperature of any place at any time. This extends to the problems of Global, Regional, State, National, weekly, and annual temperatures and their anomalies over various periods of time and space.
Temperature Averages (or their averaged anomalies) also share all the problems of averages in general (and Laws of Averages Part 2 and Part 3).
A lot of people are real fans of Global Average Temperatures….but let me remind you of their true application, as illuminated by Steven Mosher: “The global temperature exists. It has a precise physical meaning. It’s this meaning that allows us to say… The LIA was cooler than today…it’s the meaning that allows us to say the day side of the planet is warmer than the nightside…The same meaning that allows us to say Pluto is cooler than earth and mercury is warmer.” [ source ] And I agree wholeheartedly. But, just that and that alone.
# # # # #
Oh, the trendologists will be out in force whining about this one, Kip.
Monte ==> I suppose they will….I don’t object to trends of actual existing data, but do object to the common practice of pretending that trends extend beyond the data.
Kip—I use this word instead of climate science because the global average air temperature averages cannot tell you much of anything about climate. So instead of studying climate, many study nothing but trends.
Might as well be treading a hamster wheel, around and around it goes: nowhere.
And you are quite right about extrapolating into the future, many pretend that they don’t, but they do. Witness substituting “projections” for “predictions”.
Carlo ==> “The Name Game”…..
Monckton doesn’t comment that often, apart from on his own posts.
Look, the Ship of Fools has docked.
Kip,
how about a bit of consistency? You start by saying that you cannot average temperatures and thus global temperature doesn’t exist since you can’t average temperatures, and you end by stating that you agree wholeheartedly with Steven Mosher that “The global temperature exists”.
So which is it? Does a global temperature exist or not?
Izaak ==> One is a fact and one is a fantasy — I object to the fantasy — Global Average Surface Temperature as a calculated number is a fantasy.
Of course, the Earth as a celestial object must have a temperature. The questions are about the calculation of a numerical Global Average Surface Temperature (or any of its many many derivations).
The problem is that we cannot stick a thermometer into the Earth as an object and take its temperature.
Why not, in a manner of speaking? If one took a Fleer out into space to the Moon’s orbit and measured Earth’s temperature, what would it read?
James ==> You’d have to get a lot further away than that or you’d get a spot temperature. But in general, that’s how they determine planetary temperatures, I believe.
Kip: You said above that, “Earth, as a celestial object, must have a temperature.” In response to James you agreed that you could get a spot temperature with what he called a Fleer. (Did he mean FLIR?) What temperature would be measured? How would the measured value relate to the temperature at Earth’s core (~6000 degrees C), the temperature in Honolulu, or the air temperature at 10,000 metres above Honolulu? The idea that Earth has “a” temperature seems inconsistent with the premise of your three-part post.
I agree that a global average surface temperature (GAST), and even more so a global average temperature anomaly, is not a valid thermodynamic property. However, I also agree with what I believe is one of Nick Stokes’ points, which is that that an increasing GAST is a not-unreasonable indicator of “change.” To me, the problem with the GAST is less that it is thermodynamically invalid and more that it is touted by alarmists as a “doomsday” metric (“Earth has a fever,” etc.) and that the spatial and temporal structure of temperature changes is completely ignored by them. It’s hard to see much downside for humans from higher temperatures at high latitudes, during the winter, and/or at night. (Some might argue that a higher GAST means melting glaciers and sea-level rise, but an increase from -55 to -50 in Antarctica won’t melt any ice and tide-gauge data suggests no acceleration in the rate of rise due to CO2 emissions. Also, the Dutch solved sea-level rise hundreds of years ago.)
Part of the problem is that water vapor is not static nor does it stay in one place. So today I might have a high humidity but low temperature but tomorrow a low humidity and higher temperature. What has the heat done? Likewise, that humidity may have been moved by the wind and someplace else may be affected. Just averaging the temperatures won’t tell you where the “heat” is and what it is doing.
Jim ==> Yes, the reason I include the graphs for a weather station including the wind speed — the air usually changes at the temperature sensor every few seconds….
Randy ==> I said that we’d have to get pretty far from Earth to do that — in the same general way that NASA determines the surface temperature for distant celestial objects — such as the other planets in our solar system or exo-planets.
From a sufficient distance, Earth is reduced to a “spot” — suitable to having a temperature that can be spot measured. That measurement would tell us nothing about Earth’s core temperature.
Kip==> That’s what I expected you would say, and I agree. It just wasn’t clear to me what specific temperature you were referring to when you said Earth has “a” temperature. The clarification was helpful.
” Global Average Surface Temperature as a calculated number is a fantasy.”
But what you agreed with was
“The global temperature exists. It has a precise physical meaning. It’s this meaning that allows us to say… The LIA was cooler than today”
So how can it have that precise physical meaning if you can’t calculate it? How can you say that the LIA was cooler than today?
That is easy. The last Thames Ice Fair in London was in 1818.
Which proves what about global temperatures?
The fact the Thames changed a lot in character since then, including removing the old London Bridge might have more to do with frost fairs than global temperatures.
“That is easy. The last Thames Ice Fair in London was in 1818.”
And now we have 40°C in Lincolnshire. And buckling rails. Global warming is proved.
HORRORS!
RUN AWAY!
Continuously welded rail lines are a post-WW2 development and were introduced in the UK only around 1960, so buckling due to heat is a poor proxy for temperature.
Chris ==> Welded rails reduce the noise of trains passing close-by.
The local track near our home was recently welded (after some rails were replaced) to our great relief — no more clack-clack-clack.
And bad railroad engineering—the US has welded rail everywhere that doesn’t buckle.
The Poms are hogging the missing heat Nick-
Snow cover and freezing temperatures hit Blue Mountains, Lithgow, Orange and Bathurst, as winter nears its end – ABC News
That’s bushfire country in summer but you and your clever mates will average it out for us all and save us from the sneaky plant food moving around and creating havoc like it does.
The normal meaning of “global warming” in use today is ‘what humans done did’. No evidence of warming in itself can be considered evidence of any specific cause.
The ‘unprecedented’ temperatures predicted gave rise to the fear that rails would buckle but I am not aware of any incident where a welded line actually suffered damage from the temperature peak recorded at an airfield in Lincolnshire.
On the day in question I recorded a 38C peak near my home but less than half a mile away, in a more rural setting nearby an ancient Thames tributary, the temperature only managed 34C. Which temperature tells us most about the state of our climate? Neither IMO.
Correction: we have 40C at RAF Coningsby measured at a station next to an asphalt road and a few meters from a runway.
As a flight of 4 Typhoons took off.
ah! Now I understand, that is AGW!
As 4 Typhoons took off. Look at the bump in the data.
But not AGW. I noticed you left out that salient descriptor.
And now we have 40°C in Lincolnshire
At an RAF airbase and I don’t think we had many of those in 1818. RAF Coningsby would have been a field just like Heathrow was a collection of fields and the western limit of London was Hyde Park. But hey, let’s just ignore all those changes to the landscape in the past 200 years to pretend global warming is real.
Nick are you saying that the Thames Ice Fair of 1818 was in the in the summer? Or are you comparing apples and oranges?
You could insert a /sarc tag.
The point is not that an accurate measure of the earth’s surface temperature is invalid, it is that the raw data are manipulated and extended and estimated and infilled, etc. well beyond validity to produce the doomsday scenarios. Not to mention the malfeasance revealed by the ClimateGate emails. If that is as trustworthy as your associates are, I don’t trust them at all.
That is one winter in one city.
Not evidence of any global trend.
A small correction; the last frost fair was February 1814.
According to CET, the winter that year was 0.4°C, 4th coldest on record. But still warmer than 1962/3 which was -0.3°C.
Nick ==> For a smart guy you ask stupid questions. You conflate the concept of “a global temperature” with the pretense of calculating a Global Average Temperature.
Everything HAS a temperature — that doesn’t mean we can calculate it to any degree at all.
Or that our current methods or processes claiming to measure it are scientifically sound.
You seem to slip over into nonsensical trolling at times.
“Everything HAS a temperature”
I’m confused. I thought the point of the paper from last time was that “a” temperature didn’t exist unless everything was in equilibrium. Wasn’t that the point of the “proofs” on page six of the paper?
Bdellman ==> You’ll have to troll the Gormans and others….I don’t participate.
I’m not trolling anyone. I’m just trying to figure out what’s being said. Either it’s possible to have a temperature representing a system that isn’t in equilibrium, or it’s not.
I’ll give you a concise answer but won’t argue with you.
If this is what you learned from Part 2, and all the messages exchanged with you, then you are absolutely incapable of dealing with thermodynamic questions.
Read the essay again and again until you learn what is being asserted.
Here is another site that may help.
https://www.e-education.psu.edu/earth103/node/1005
This is the important part.
Jim ==> Thanks for that link!
You are welcome!
That’s not a concise answer. You just say I’m wrong and then give a quote that has nothing to do with the question.
Is it possible for a system that is not in thermal equilibrium to have a single temperature representing that system?
Sir, if the system is not in thermal equilibrium, then parts of it are at different temperatures by definition, and it would not be possible. Depending on the situation, you may be able to calculate some sort of weighted average or effective temperature that is representative of the whole system. For example, the radio tells me that the temperature in Houston is 93°F. Obviously, that is not the exact temperature for most of that city, but probably not too far off. For other systems, an average is not meaningful. One of the points being made in this series of essays is that temperature is not an indicator of the actual heat content of the system, which is what we should be monitoring.
Kip, writing on temperature in part 2: “Multiplying temperatures as numbers can be done, but gives nonsensical results partially because temperatures are in arbitrary units of different sizes but most importantly because the temperatures do not represent the heat energy of the object measured but rather relative “hotness” and “coldness””.
Then in the top post “part 3” I observe Kip “Multiplying temperatures as numbers” in:
SH = m*0.133*DBT
So…why then is Kip’s SH result (and thus SH + LH for the “Total heat of the air” in Kip’s terms) NOT a “nonsensical result”?
Trick ==> You misunderstand the formula presented. In words, the formula reads: “Sensible Heat equals mass times 0.133 times dry bulb temperature.” This formula converts the intensive property “temperature” into an extensive property “sensible heat”.
No misunderstanding Kip, as Kip writes confirming again: “0.133 times dry bulb temperature.”
Kip thus is shown to be “Multiplying temperatures as numbers” which according to Kip in part 2: “gives nonsensical results”.
Trick ==> Multiplying temperatures by temperatures…..or pretending that 60 degrees F times 2 results in 120 degrees F is “twice as hot”.
So…according to Kip, pretending “0.133 times dry bulb temperature” is 0.133 times as hot as the dry bulb temperature. No.
As Kip does correctly write, temperatures are an intensive property so cannot be added.
Correctly, the thermometer temperatures of the GHCN can be converted to effective temperatures to arrive at the global median Tse ~ 288K.
It does turn out that the same answer Earth’s global median Ts ~ 288K is obtained just by averaging relevant thermometer temperatures so the complex work to properly compute Tse each time does not change the global answer & is thus skipped by many authors.
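A sketch of that comparison, with invented kelvin readings spanning roughly polar-to-tropical surface values, shows the arithmetic mean and the effective (fourth-power, radiative) mean are close but not equal:

```python
# Compare the plain arithmetic mean of kelvin temperatures with the
# radiative "effective" mean, (mean(T^4))^(1/4). Sample values are
# invented for illustration only.

temps_k = [250.0, 270.0, 288.0, 300.0, 310.0]

arithmetic = sum(temps_k) / len(temps_k)
effective = (sum(t**4 for t in temps_k) / len(temps_k)) ** 0.25

print(round(arithmetic, 2), round(effective, 2))
# The fourth-power mean weights warm readings more heavily, so it is
# always >= the arithmetic mean (power-mean inequality); the gap grows
# with the spread of the temperatures.
```

So the two averages agree only approximately, and only when the spread of readings is modest.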
Trick ==> I can’t teach a class in thermodynamics here….or even one in the properties of matter.
The formula for Sensible Heat is correctly given in the essay.
Which thermometer temperatures of GHCN ? This year? Last year? 1950? Or can they all “be converted to effective temperatures to arrive at the global median Tse ~ 288K.”
Averaging which “relevant thermometer temperatures”?
Use the GHCN (or subset) thermometer temperatures for the period observed.
Kip, to be physically correct & more helpful, convert the top post essay in part 4 to only use enthalpy (H) which is an extensive property. LH and SH are archaic terms that have been more accurately replaced in modern times with the formula for H (the first letter of heat).
Kip will be more helpful to readers using enthalpies of vaporization, fusion, and sublimation. So-called heats of reaction are enthalpies of reaction. And so on.
The formula for Sensible Heat cannot be correctly given in the essay because as Kip correctly wrote earlier the formula “gives nonsensical results”.
Trick ==> Where are we going to get the data sets to convert currently recorded GHCN temperatures to Heat?
Where? Nowhere. Heat of the atm. would be its total thermodynamic internal (thermal) energy which cannot be known so is useless. Temperature is a useful measure of the local avg. thermal energy.
Trick ==> half agree….anytime the IPCC crowd wants to give up GMT or GAST and stop pretending it is a measure of heat. Temperature is a poor proxy for heat energy.
Temperature is not a good proxy.
Read this site.
https://www.e-education.psu.edu/earth103/node/1005
It tells you that air with a higher percent of water will both absorb more heat and conversely will lose heat slower.
Think about a site in a desert at 90 °F @ 10% humidity and one at 90 °F @ 70% humidity. Which one has more heat in a similar volume of air surrounding an LIG thermometer?
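A rough worked version of that question, using the essay’s SH + LH formulas. The mass, specific humidities, and hw below are assumed, approximate values (hw is a placeholder near the steam-table enthalpy of water vapor):

```python
# Same 90 F (32.2 C) reading, 10% vs 70% relative humidity: which
# parcel holds more heat? Specific humidities are rough sea-level
# approximations; hw ~600 kcal/kg is a placeholder.

def total_heat(m_kg, dbt_c, w, hw=600.0):
    """Total heat = SH + LH, per the essay's formulas (kcal)."""
    return m_kg * 0.133 * dbt_c + m_kg * w * hw

M = 1.2      # assumed mass of one cubic meter of air, kg
DBT = 32.2   # 90 F in C

desert = total_heat(M, DBT, w=0.003)  # ~10% RH -> w ~ 0.003 (assumed)
humid = total_heat(M, DBT, w=0.021)   # ~70% RH -> w ~ 0.021 (assumed)

print(round(desert, 2), round(humid, 2))
# The humid parcel carries far more heat at the identical temperature,
# almost entirely from the latent term.
```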
You left out the mass. The 0.133 fudge factor is important to the whole equation, not just part of it. Dry air (principally N and O) has a known heat capacity per unit mass, so the mass x temperature x fudge factor is what is important.
“As Kip does correctly write, temperatures are an intensive property so cannot be added.”
Multiplying is just consecutive addition. If you can’t add temps then you can’t multiply them together either.
Not only that, but if you do a dimensional analysis, multiplying temperature by temperature results in a power function. That is, K * K = K^2
It may be enlightening to know that specific heat values have units like:
J / (g·K)
J / (kg·°C)
When multiplying by temperature, the temperature units cancel. This is a functional relationship designed to use temperature as a measurement. Nowhere in Part 2 does it say that temperature cannot be part of a functional relationship. There are many thermodynamic relations that do so.
Multiplying 60 F by 2 is not multiplying temperature by temperature. It’s either scaling the temperature by 2, or adding the temperature to itself.
And apart from the Essex paper, who says it will be twice as hot? That paper claims it’s valid to square Celsius values in order to take a geometric average. I pointed out why that was wrong.
“Multiplying 60 F by 2 is not multiplying temperature by temperature. It’s either scaling the temperature by 2, or adding the temperature to itself.”
Temperatures don’t add. 60F here and 60F there don’t add to 120F.
It looks like I misread Kip’s post. He was talking about two different things, multiplying temperature by temperature, and scaling temperature.
Not sure who suggested multiplying temperature by temperature, or who thinks doubling temperature in Celsius would double the temperature. (Apart from that Essex paper).
Of course you can multiply temperature scales by arbitrary constants, how else would you convert Celsius to Fahrenheit?
Any particular numbers for temperature in different places under different conditions cannot be said to mean the same thing in terms of heat or energy, thus the numbers of those different places and times can’t be rationally used in calculations that are only meaningful in terms of heat or energy.
However, using a particular temperature in a particular place under a particular condition can be used in a calculation involving other measurements (humidity, wind speed, …, anything related to temperature) about that place at that time. This is not contradictory.
Willfully ignorant or stupid? A simple dimensional /units analysis shows that the formula is valid:
[kg] x [J/(kg·deg)] x [deg] — after canceling out, this leaves units of energy [J]
“This formula converts the intensive property “temperature” into an extensive property “sensible heat”.”
Which was the point I was trying to make last time. The whole “you cannot average intensive values” argument is a red herring, as you can always multiply them by an extensive property to get meaningful totals.
Multiply temperature by surface area to get the average surface temperature. Multiply temperature by time to get average monthly values.
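A sketch of that kind of area weighting on a regular latitude grid (cell temperatures and latitudes are invented; the weights are cos(latitude), since grid cells shrink toward the poles):

```python
import math

# Area-weighted mean temperature on a regular lat-lon grid: each cell's
# reading is weighted by cos(latitude), a proxy for its surface area.
# Latitudes and temperatures are invented for illustration.

lats = [-60.0, -30.0, 0.0, 30.0, 60.0]
temps_c = [-5.0, 18.0, 27.0, 20.0, -2.0]

weights = [math.cos(math.radians(lat)) for lat in lats]
weighted_mean = sum(w * t for w, t in zip(weights, temps_c)) / sum(weights)
plain_mean = sum(temps_c) / len(temps_c)

print(round(plain_mean, 2), round(weighted_mean, 2))
# The weighted mean leans toward the (larger) tropical cells, so the
# two averages differ even over the same readings.
```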
The catch is that you have to have the figures (mainly moisture content) to calculate those extensive values. That can’t be assumed to be approximately equal in all places at all times.
I’m not talking about averaging enthalpy, just temperature.
Degree-hours sort of count for cold or warmth requirements for plant germination or flowering. If you’re using degree·metres², how much area is allocated to each temperature reading? For that matter, what does 500 K·m² mean, and how do you average it? Is it 1 K over 500 m², 250 K over 2 m², 500 K over 1 m², or 1000 K over 0.5 m²?
If you are not dealing in enthalpy then you are not dealing in HEAT, i.e., energy.
An average temperature is just that, temperature and only temperature. Temperature does not tell anything about the energy that is being radiated away at any one location.
That was my point. This post keeps talking about heat when it should be talking about enthalpy.
“To support a claim that the Earth’s Climate System is “getting hotter” one has to have a long-term time series of measurements of heat in the climate system.”
This is both confusing heat with enthalpy and with hotness.
Jim and Bellman ==> “en·thal·py [ˈenTHalpē, ənˈTHalpē]
NOUN
physics
Heat is not enthalpy.
And I’m not interested in heat or enthalpy at this point, just temperature, the measure of hotness.
If you want to know if the world is hotter now than it was during the LIA, you need to know how the temperature has changed, not the enthalpy.
If you want to understand energy budgets or whatever, you might need to know about global enthalpy.
The irony is that whenever someone does try to show that enthalpy is rising, it’s immediately claimed to be a hoax to distract from “the pause”. Remember all Monckton’s “the oceans ate the warming” jokes?
Now all you have to do is show that this is in fact done. What size Stevenson Screen? That varies the column of air.
Certainly you can get numbers that are arithmetically correct, but what is the meaning of a temperature measure multiplied by (1200 m × 1200 m)?
AndyHce: "Certainly you can get numbers that are arithmetically correct, but what is the meaning of a temperature measure multiplied by (1200 m × 1200 m)?"
The most intuitive meaning is a barrier’s resistance to heat flow given one watt of power. Insulation effectiveness is typically specified by R-Value which has SI units of K.m2/W. The higher the K.m2 component the higher the R-Value. And the amount of heat transferred across the barrier is Q = A(Th – Tc) / R. Again, notice that higher K.m2 values increase R thus decreasing Q.
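The R-value relation quoted above is easy to put into a short sketch; the wall area, temperatures, and R-value below are made-up illustrations, not figures from any handbook:

```python
# Heat flow through a barrier: Q = A * (Th - Tc) / R
# R has SI units of K*m^2/W, so Q comes out in watts.
A = 10.0    # m^2, barrier area (illustrative)
Th = 293.0  # K, warm-side temperature
Tc = 273.0  # K, cold-side temperature
R = 2.5     # K*m^2/W, insulation R-value (illustrative)

Q = A * (Th - Tc) / R  # watts conducted through the barrier
print(Q)               # 80.0 (W); doubling R would halve Q
```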
LOL get out the handyman’s hand book bdgwx is in the house.
OK, but that has nothing to do with the practice of applying a thermometer reading to a large area and pretending the large area is at that temperature and pretending that the calculation tells something meaningful about the area in question.
It’s a description of temperature by area. Just as a degree day is a description of temperature by time.
The point is it allows you to determine an average temperature. The point of the previous post was that this wasn’t possible because intensive properties cannot be added and so cannot be averaged.
NOTE: ” A degree day compares the mean (the average of the high and low) outdoor temperatures recorded for a location [on a specified day] to a standard temperature, usually 65° Fahrenheit (F) in the United States.”
It is the number of degrees above or below 65°F of the (Tmax+Tmin)/2 pseudo-average temperature (Tmean) for the day.
It is a back-of-the-envelope guess-timate used to determine how much heating or cooling will be needed to maintain comfortable temperatures in buildings.
A degree day is a unit of temperature by time. You can use it for a specific purposes such as estimating the cost of heating or cooling, but my point is you can multiply any temperature by time to get a measure in degree days. An extensive property that can be added to get a meaningful sum, and the sum can be divided by time to get an average temperature over time.
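As a toy illustration of that arithmetic (the 65°F base is the standard US figure quoted above; the daily highs and lows below are invented examples):

```python
# Heating degree days: HDD = max(0, base - (Tmax + Tmin)/2), summed over days.
BASE_F = 65.0  # standard US base temperature, degrees Fahrenheit

def heating_degree_days(days):
    """days: list of (Tmax, Tmin) tuples in deg F; returns total HDD."""
    total = 0.0
    for tmax, tmin in days:
        tmean = (tmax + tmin) / 2.0   # the (Tmax+Tmin)/2 pseudo-average
        total += max(0.0, BASE_F - tmean)
    return total

print(heating_degree_days([(50, 30), (70, 60), (40, 20)]))  # 60.0
```

Dividing the total by the number of days recovers an average shortfall below the base, which is the sense in which degree-time quantities can be summed and averaged.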
"It is the number of degrees above or below 65°F of the (Tmax+Tmin)/2 pseudo-average temperature (Tmean) for the day."
Better not let Tim hear you say that.
It's still the *OLD* method. The newer method is much more accurate, which allows more efficient sizing of HVAC systems, meaning less cost to the consumer!
This is the calculation of the energy stored in the dry air. M is the mass of the air (not a temperature) and 0.133 is the heat capacity of air (also not a temperature) in appropriate units for the mass and the dry bulb temperature (this is a temperature). So we haven’t violated the rule of not multiplying two temperatures together.
I use a different equation since I learned my thermodynamics as a chemical engineer, rather than as a meteorologist, but it does the same thing and gives the same answer.
Loren, Kip wrote in part 2: “Multiplying temperatures as numbers can be done, but gives nonsensical results” which is not your “multiplying two temperatures together.” Kip violated his own rule in part 3.
Kip,
What do you mean by saying that “Everything HAS a temperature”? You spent all of part 2 saying that it was not scientifically correct to average temperatures and that the results produced were nonsense. Now you are saying the opposite.
Take for example a sealed box that is half filled with water and with air in the remaining volume. Does it have a temperature?
What about if I take an iron rod and heat one end. Does the rod have a temperature and if so can I calculate it by averaging the temperature along the length?
Izaak ==> Temperature is a SPOT measurement of an intensive property.
The box of water will HAVE a temperature….calculating it might be a problem, but not much if the temperature was at equilibrium.
Each spot of your iron rod would have a temperature. If you calculated the Average Temperature (a silly thing to do) and picked up the red-hot end, you would get burned.
Your average would mean nothing except at equilibrium when the rod is at one temperature.
If you apply heat at one end, a gradient ensues controlled by conduction. Conduction is seldom a linear function, so again an average would mean nothing.
Do you know what a gradient or a diffusion equation looks like?
OK Izaakie, you tell us: what is the earth's temperature?
Hansen: For a smart guy you wrote a stupid article.
Richard ==> I do so appreciate my fans….
Silly Hansen word games
Global average temperature is a statistic based on measurements, guessing (infilling) and adjustments. Any change of trend provides information that could be useful. Evidence that climates vary between warming or cooling trends. There is no “normal” temperature. Any unusual trend slope could be important.
The question of how accurate and useful that statistic is needs to be debated. To say that it is a fantasy is claptrap.
Richard,
I have submitted Part 1 a relevant article in 2 parts for WUWT to consider. Your last line theme is addressed more in Part 2. Geoff S
That brings up an interesting question.
Do a calculation based on one temperature measurement and some other relevant measurement(s) of the same place at the same time, and you can get a real result. However, it has an uncertainty that combines the uncertainty of each measurement. Its precision and accuracy are, at best, limited by the least accurate and least precise of the individual measurements.
Combine a large number of different surface temperature measurements, from different instruments in different places at different times (each value ±0.5°C), then find the difference between that average and some other multi-component average (an anomaly): the result is very small relative to the measurements' uncertainty. That is, the anomaly's uncertainty is far greater than the uncertainty of any individual measurement.
The interesting question is:
Ignore the particulars of the multiple uncertainties, but assume that the many measurements that have gone into the averages used to compute the anomalies were done consistently, with the same instruments, over time (i.e. by day, by month, by year, etc., but pick only one and stick to it). Then, if there is a trend in the values of the anomalies over a long period, when many such anomalies have been calculated for their period's duration, can that trend be reasonably interpreted, given that its value is much smaller than the noise?
If not, is there any explanation of how a trend could persist (e.g. some particular figment of the particular statistical analysis)?
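One way to explore the question is a toy simulation (all the numbers here are invented): give each yearly value a small true trend plus a random error on the ±0.5°C scale, average many stations, and fit a line. Note the sketch only models *independent* errors, which shrink when averaged; a shared systematic error would not shrink this way, so it illustrates the question rather than settling it.

```python
import numpy as np

rng = np.random.default_rng(42)
n_years, n_stations = 50, 1000
true_slope = 0.02            # deg C per year: the assumed underlying trend
years = np.arange(n_years)

# Each station reads the true value plus an independent error, sd = 0.5 C
truth = true_slope * years
readings = truth + rng.normal(0.0, 0.5, size=(n_stations, n_years))

# Averaging many stations shrinks *independent* error ~ 1/sqrt(n_stations);
# a systematic error common to all stations would not shrink at all.
annual_mean = readings.mean(axis=0)

est_slope = np.polyfit(years, annual_mean, 1)[0]
print(est_slope)  # close to 0.02 when, and only when, errors are independent
```

Whether real-world thermometer errors behave like the independent noise assumed here is exactly the point in dispute in this thread.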
Oh boy, did you step into quicksand trying to explain this to some of the denizens of this site.
Do you mean my question is so poorly described that many will fail to understand it?
“Do you mean my question is so poorly described that many will fail to understand it?”
“Combining a large number of different measurements (of surface temperature measurements) with different instruments in different places at different times (each value is +/- 0.5C), then finding the difference between the total’s average and some other multi-component average (an anomaly) gives a value that is very small relative to the temperature measurements’ uncertainty.”
Your question is well posed. If the uncertainty overwhelms the anomalies, then it also overwhelms any trend line developed from those anomalies.
+100
“One can surely find a number by adding the temperature of Los Angeles on noon today to the temperature of Chicago yesterday noon, and dividing by 2 but the result will not be a temperature of any place at any time. This extends to one of the problems of Global, Regional, State, National, weekly, annual temperatures and their anomalies over various periods of time and space.”
It seems that a number of those who have added to this discussion have either missed or misunderstood the above extract from Mr. Hansen’s paper.
Solomon ==> Yes, you have certainly spotted the problem in the comments….many readers seem to have failed to read the conclusions — Bottom Lines.
“One can surely find a number by adding the temperature of Los Angeles on noon today to the temperature of Chicago yesterday noon, and dividing by 2 but the result will not be a temperature of any place at any time.”
+100
Radiation is an important part of the earth’s system. So is conduction and convection.
Do you believe that temperature is a good proxy for HEAT, ie., energy?
Does the temperature on top of Pikes Peak and the temperature in Denver tell you anything when you average them? No infilling, no homogenization.
When the uncertainty of the data overwhelms the differences used to establish a trend line what good is the trend line?
Surely the Earth has a large multitude of temperatures. Any single number must be some combination of those multiple temperatures, whether average or some other calculation or selective measurement.
AndyHce ==> What you say is trivially true of ANY macro-object.
Andy
From a light year away the earth would have an average temperature; good luck measuring it with any accuracy, though.
biob ==> Yes, precisely — or “it would have a temperature that could be approximated by some means”.
But does that measurement have any real meaning?
The problem is that the combination assumes that there is a “field” of temperatures where they all interact and that there is an average.
Try this experiment: find a temperature map of a big state and draw a line between two fairly separated locations. Does the center of the line match the average of the two ends? You will find some that do, but many more that don't. This simple experiment shows that temperatures don't act together in unison to establish an “average temperature”. Now do the same thing with two cities, one in the northern hemisphere and one in the southern. Does the middle point have the average temperature?
I think that is an argument against homogenization.
It is. Part of the assumption about temperatures is that they are connected. They are not. They may be autocorrelated to a certain extent based on things like zones, seasons, etc., but they don't create the temperatures surrounding a location nor at a large distance from themselves; the sun and other interactions in the atmosphere do that.
That is my point. There are many different temperatures. What meaning does a single number for them have, no matter from where the measurement is made?
There is no global temperature that can be measured. There is a global temperature statistic. It does not predict the future climate, not even trends that last 35 years, like the global cooling from 1940 to 1975, which did not predict the global warming from 1975 through 2022.
The two periods are hardly comparable, either in duration or extent.
That’s what I said.
One trend did not predict the next trend.
I did not say they were identical lengths.
In addition, the warming from 1910 to 1940, assuming the measurements made sense, did not predict the cooling trend from 1940 to 1975, which was much steeper when originally reported in 1975.
I could also add that the originally reported 1940 to 1975 cooling trend, as reported in 1975, did not predict the current much flatter 1940 to 1975 cooling trend, as reported in 2022 !
I have often challenged people to look at the time series for the “Global Average temperature” for July 1935. When asked why, I suggest it must be a quantum number as it changes each time it is observed. Though I would think quantum changes would be more random, July 1935 has only ever gone down with each dataset release.
Owen ==> Nice — have you got a link or a graph showing this? Can you create it and post here? like Time against July 1935 average temp?
Kip,
Someone posted the original on this site years ago and I downloaded a copy to my work and home computers. (Both crashed earlier this year, and I now can't find it on the site.) The plot in question was a moving GIF of the whole timeseries from the late 1800s to 1972. Each frame was the published timeseries advanced one month in publication. I was struck by the changes in the 1930s and so picked a July and recorded the number, frame-by-frame. I had a PowerPoint slide that showed the “temperature” for that date as published up through about 2010. Unfortunately, until I can find the graphic I got it from, I can't recreate it. It didn't convert the true believers, but it did pique the curiosity of some of my coworkers, who started checking. They are all too busy with their teaching and personal research to spend much time on digging into it though.
It’s too bad the wayback machine doesn’t record database contents on the day of the snapshot. It would be nice to be able to pick the published output for each date of publication and compare. The person who made the original graphic (who was credited on my slide – my brain is fire and forget on names, so once I cited it, my brain decided it was done with it and purged it), had been actually saving the dataset to their computer for 20 years in separate files.
Owen ==> Thanks
READERS ==> Anyone remember the graphic in question, a GIF animation of “the time series for the “Global Average temperature” for July 1935″ which shows how the record of it has changed?
That’s because it is just as difficult to “predict” the past temperature as it is to predict the future temperature. Historical temperature data keep changing.
That’s because NASA GISS has been tampering with / adjusting the data.
Frank from NoVA said: “That’s because NASA GISS has been tampering with / adjusting the data.”
Would you mind showing me where in the code GISS is tampering with the data? I’ll modify that section of code and we’ll see what happens. You can download the code here. In fact, you get this up and running on your own machine in a just a few minutes and then run your own experiments where you disable or alter any sections of the code that you feel are nefarious. I have yet to find anything even remotely malicious or fraudulent.
It really isn’t about tampering so much as the way homogenization checking works. For some reason, each time the dataset is homogenized it works with the whole set to the beginning of time and adjusts based on the new criteria. So even though all the new data is for the current month plus stragglers from the last year or so, homogenization changes stations all the way to the beginning. I really think the past should stand and the new data should be analyzed against it, but that isn’t the way it works. (That’s an analysis decision made by the senior scientists.)
Other changes to the past were due to the Time of Observation changes, but I am still not convinced that for a min/max thermometer it really matters whether it is read at 9AM or 5 PM or some other time altogether, other than a high or low temperature might get attributed to the wrong day once in a while. If the high for the day is at one minute past midnight due to a cold front passing, that is still the high for the day.
I don’t believe most of the folks working the issue are purposely increasing the trend. There are a few that are, but they tend to generate the policies the working scientists have to follow. Cooling the past readings happens due to groupthink and no one having the gumption to say “hey! this doesn’t smell right.” They all expect the modern record to be hotter and thus when they see it, no warning flags go up. History started yesterday, or maybe the day before, and we are the smartest ever, so those folks in the early 20th century couldn’t possibly tell us anything.
+100
Chart is dishonest since the cooling was much deeper than .1C
More than dishonest. Total garbage. 1960 and 2000 were the same temperature.
That is HadAT2. At 500mb the 1960 value is 0.0 C vs 0.16 C in 2000. And the 5yr centered means are 0.03 C and 0.40 C respectively. You can download the data here.
“One of these errors is the odd un-physical assertion that temperature (is) a proxy for measured heat.”
It’s a good enough proxy for a qualitative argument like Mosher uses it. It’s not good enough for comparing the fractions of a degree changes of measurements of GTA with model predictions e.g. high confidence that at least half the 0.6 degree of warming since 1950 was due to fossil fuel use. That is treating a very poor proxy as a measure.
Or, for example, how the average temperature of the Moon is less than the Earth's. Even without an atmosphere, the storage and transport of heat around the globe by the oceans should make the Earth's mean about 30 degrees greater than the Moon's, even if everything else were equal (insolation, albedo, emissivity).
[I was tempted to write cooler and hotter]. In this case, it's just that the sum of T^4 over the whole surface area needs to be the same while T varies from 25 K at the poles of the Moon to 390 K at the equator at noon, against only a 30 K spread on the surface of the oceans. That average does nothing to explain any average thermodynamics; it just indicates that the two worlds are very different.
And I'll bring up my big bugbear. The modelling starts with what the average temperature of a black-body orb would be this far from the Sun, instead of what the average would be for a sphere made up of an infinite number of individual black bodies. My rough calculation suggests nearly a factor of two warmer (sorry, larger) for the former, with energy in and out being exactly the same.
The former would have a constant temperature over its surface, while the latter would be 0 K over half its surface, and some illuminated areas would be much hotter. If the illuminated side were at a single temperature, its T^4 would have to be twice the black body's to make up for the dark side being zero, but T itself only 2^(1/4) ≈ 1.19 times higher. Divide 1.19 by 2 and the mean temperature should be a little under 60% of the black body's. It is a bit more complicated to take into account that the spread of temperatures on the illuminated side is not constant but runs from near 0 K at the poles, dawn and sunset, to much hotter at noon on the equator.
This is a theoretical emission temperature, so not the same as the daily mean from a weather station.
The dog is hassling me to take him to the beach, so apologies for not spending a bit more time on this comment.
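The back-of-the-envelope argument above can be checked numerically. This sketch is my own construction, not the commenter's: compare an isothermal orb with a sphere of independent black-body patches, each re-radiating exactly the sunlight it absorbs, so T(θ) ∝ cos^(1/4)θ on the lit side and 0 K on the dark side:

```python
import numpy as np

# Isothermal orb: sigma*Te^4 = F/4, so use Te = 1 as the reference.
# Independent patches: sigma*T(theta)^4 = F*cos(theta) on the lit hemisphere,
# hence T(theta) = (4*cos(theta))**0.25 * Te, and T = 0 on the dark side.
theta = np.linspace(0.0, np.pi / 2, 200_000)   # solar zenith angle, lit side
dtheta = theta[1] - theta[0]
w = np.sin(theta) * dtheta                      # area weight on a sphere

lit_mean = np.sum((4.0 * np.cos(theta)) ** 0.25 * w) / np.sum(w)
sphere_mean = 0.5 * lit_mean                    # dark half contributes 0 K

print(sphere_mean)  # ~0.566: the patchwork sphere averages ~57% of Te
```

The exact value is 2√2/5 ≈ 0.566, consistent with the "a little under 60%" estimate above; it is one illustration of why an average temperature says little about the thermodynamics of a body with a wide spread of local temperatures.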
Robert B ==> Yes, indeed, mostly agree: “It’s a good enough proxy for a qualitative argument like Mosher uses it.”
And my best to the dog….
Excellent part three. You are of course correct. Anthropogenic global warming involves heat retention per air mass (via impaired radiative IR cooling). That is not the same as a temperature, or even temperature anomalies. At best, temperature is a poor proxy that is then improperly treated by ‘averaging’. And we also know there are still serious measurement problems with the temperature proxies.
IMO a worse ‘numbers’ sin is averaging the climate model spaghetti graphs and assuming the result is meaningful. It cannot be. Most of the models are provably wrong (having a tropical troposphere hotspot that does not exist in reality—the sole exception is the Russian INM CM4.8/5.0 in CMIP6 which means it is least wrong). Averaging provably wrong stuff does not make the resulting average meaningful, let alone ‘right’. And averaging anomalies hides the fact that the wrong models themselves also disagree by +/-3C at initialization. Judith had a post on that factoid years ago. I stuck her example into the ‘Climate models’ essay in ebook Blowing Smoke, with footnote attribution.
The mean of meaningless models is a meaningless mean.
Ed ==> Now that’s just . . . . mean.
Guilty as charged. 😉
“I know what I know if you know what I mean.”
“Don’t know much about history
Don’t know much biology
Don’t know much about a science book
Don’t know much about the French I took…
“Don’t know much about geography
Don’t know much trigonometry
Don’t know much about algebra
Don’t know what a slide rule is for…”
Huh…I’m pretty f*&#%@g stupid. Suddenly, I’m depressed.
Michael ==> GREAT song….
Segue into Paul Simon: “After all the crap I learned in high school, it’s a wonder I can think at all…”
Odd tangent, but Edie Brickell and Paul Simon are married.
Scissor ==> I like ““I know what I know if you know what I mean.” but not a fan of the Edie Brickell—-
Being unaware of so many things (true of everyone) strongly suggests that some unknown, but probably not small, number of the things that we “know” are not true. And, at least to some extent, the person least able to really understand an individual is that individual himself.
And green in a green and greening world, albeit with anomalous Green local and regional blight.
Rud ==> “a worse ‘numbers’ sin is averaging the climate model spaghetti graphs and assuming the result is meaningful.” Averaging chaotic results is a silly exercise…..
And yet that is exactly what weather models like GEFS, GEPS, EPS, and NBM (which is an ensemble of ensembles) do to extend the useful skill of weather forecasts from 6-8 days to 8-10 days. The technique is so effective that the NWS uses NBM almost exclusively for their official forecasts now. It’s also why the NHC uses TVCN and IVCN almost exclusively for their official forecasts now. Averaging chaotic results is a very powerful technique for improving predictions.
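A toy demonstration of why averaging helps (synthetic numbers of my own, not real model output): give each of K ensemble members an independent error around the truth; the ensemble mean's error shrinks roughly as 1/√K, so the mean beats a typical single member.

```python
import numpy as np

rng = np.random.default_rng(7)
truth = np.sin(np.linspace(0, 10, 500))          # stand-in "true" signal
members = truth + rng.normal(0, 1.0, (20, 500))  # 20 members, independent errors

def rmse(x):
    """Root-mean-square error of a forecast against the truth."""
    return float(np.sqrt(np.mean((x - truth) ** 2)))

single = rmse(members[0])          # one member's error, around 1.0
ensemble = rmse(members.mean(0))   # ensemble mean's error, ~1/sqrt(20) of that
print(single, ensemble)            # the mean lands much closer to the truth
```

Real NWP ensembles perturb initial conditions and physics rather than adding noise, and their errors are not fully independent, so the gain is smaller than 1/√K, but the averaging principle is the same.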
bdgwx ==> Read Real Science Debates Are Not Rare : https://wattsupwiththat.com/2014/10/06/real-science-debates-are-not-rare/
Thanks for that – I read it when posted, but it’s one to bookmark.
Have you ever kept track of the 10-day forecast for temperature and precipitation to see how correct it is?
Start on Jan 1 and write down what the previous 10-day forecast said would happen, then write down what actually happened. Do that every day of the year. You'll be very sad.
It is something I have been considering doing: making a matrix of false positives and false negatives for the 5-day precipitation forecast, moving forward one day at a time. Sometimes the forecast is wrong even on the day of the forecast.
Temperatures appear to be more reliable, but then historical averages are usually available even if model outputs aren’t.
This is already being done for us by the National Centers for Environmental Prediction here.
“IMO a worse ‘numbers’ sin is averaging the climate model spaghetti graphs and assuming the result is meaningful.”
Averaging the climate model predictions presents the consensus view of the future climate. Every model is programmed to predict whatever its programmers want it to predict, and whatever satisfies their “management”. The average is a consensus of opinions in numerical form. The average is very meaningful.
The model average / consensus is numerical evidence that humans cannot predict the future climate. If you looked at only one model, the Russian INM, you might think humans can predict the future climate. In my opinion, that one less-inaccurate prediction must be a lucky guess, or a 1975-to-2022 trend extrapolation. The knowledge of every climate change variable needed to build an accurate climate model does not exist. And that's assuming the climate can be predicted at all. Maybe it can never be predicted?
In my lifetime the world cooled, and then warmed, apparently having stopped warming after increasing global temperatures by ~0.8C. Plant life is prospering. Epidemic famines are a thing of the past. Grazing animals are procreating. Predator animals are feasting. Weather disasters are down from previous numbers. The only thing raging is the left's hysteria about a world that might reject them if they don't adopt authoritarian green orthodoxy. I really don't see where the deference to the hysterical is worth several trillions in taxpayer dollars.
They are coming back soon, but not due to climate but because the World Economic Forum wants to reduce the population by 6 billion or so. Government policy (looking at you Denmark and Canada) will provide the next epidemic famine.
So Kip, would it suffice to say, for the sake of an after-dinner discussion with family / friends, that a temperature value is NOT a physical reality, and therefore should not be used to determine whether there are any concerning changes in the ambient sensible + latent heat present in Earth's numerous climates?
Mr. ==> You have to be careful — it is anything presented as a Global Average Surface Temperature (in degrees or anomalies) that is not a physical reality — and certainly not a measure of heat contained in the Earth climate (or any portion of it).
Just tell them you have lived with global warming for the past 47 years and you loved it. Tell them you are sure the climate will get warmer, unless it gets colder. If that doesn’t shake them up, tell them Trump was the greatest president in American history, and Biden is the worst. If they are leftists, they will go berserk.
Yes all of that works, but what I would also like to see are the spittle-flecked rants about “the science” when I say the temperature values they present in numerous colorful graphs are not physical realities of heat energy, and “so what else have ya got?”
Here in Michigan we tell friends our property was under a mile of ice 20,000 years ago and all the ice melted in the next 10000 years in spite of no coal power plants and SUVs — no manmade CO2 emissions at all.
That usually ends the climate conversation. Leftist friends don’t want to hear about past climates. But they love scary predictions of the future climate.
To cheer them up I tell them scientists say the world is going to end from global warming in 27.85 years unless drastic actions are taken. And they believe it. Because “scientists say”. And two decimal places is “real science”. I have actually done this quite a few times in the past 25 years. I just make up some scary climate prediction and they believe it every time. Like children hearing a scary campfire story.
Yes, that's perplexing:
Why are climate catastrophists so incurious about past climate behaviors, where we can physically observe (geology etc.) what actually occurred, yet so fixated on conjectures about what might / could / maybe / perhaps occur in the future?
Rationality seems to be the ONLY casualty of the AGW conjecture.
Same people who touch the woodwork if they see a wet paint sign.
One should consider the possibility that they are just being polite and don’t want to get into a confrontational exchange.
Life well-lived is all about confrontational exchange.
But, some people will avoid confrontation at all costs. You can draw your own conclusions about the quality of their life.
Good article, Kip. I would like to add a point. Computation-intensive models are good for some things, and in this case I see the value of the ERA5 reanalysis produced by the ECMWF (European Centre for Medium-Range Weather Forecasts). A wide range of hourly values of atmospheric properties are available, on a 1/4 x 1/4 degree grid.
This link below is to a time series plot of the hourly “vertical integral of total energy” per square meter for a gridpoint near where I live for all of 2019. Total energy includes internal energy (involving temperature), the latent energy of water vapor, kinetic energy, and potential energy due to altitude.
I have expressed the vertical axis in Watt-hours per square meter. The rapid and very large changes are obvious in the plot. The single-digit theoretical “forcing” commonly used for the direct radiative warming effect of a doubling of CO2 concentration since pre-industrial times is 3.7 W/m^2.
So what is the point? The very small incremental increase in the radiative coupling between the surface skin and the atmosphere vanishes in the vertical scale as total energy increases and decreases thousands of times more than that on short time scales in the column of air above the surface.
So just as you emphasize that from temperature alone one cannot infer heat gain or loss, neither can one assume that the incremental energy involved in the static GHG radiative warming effect experienced at the surface looking toward space MUST result in a reduction in outgoing longwave energy. The energy available in the vertical column of the atmosphere is huge in comparison and it is mostly from that stored energy that longwave emission is powered.
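To put the comparison on a common footing (the column-energy magnitude below is an order-of-magnitude figure of my own, roughly 2.6 GJ/m² for the whole atmospheric column, not a value read off the plot): the ERA5 quantity is in J/m², and a change in J/m² over an hour, divided by 3600 s, gives an equivalent flux in W/m² that can be compared directly with the 3.7 W/m² forcing figure.

```python
# Comparing column-energy changes with a radiative forcing figure.
SECONDS_PER_HOUR = 3600.0

column_energy = 2.6e9   # J/m^2, rough total energy of the air column (assumed)
hourly_change = 0.001 * column_energy  # a 0.1% change in one hour (illustrative)

equivalent_flux = hourly_change / SECONDS_PER_HOUR  # W/m^2
forcing_2xCO2 = 3.7                                 # W/m^2, the cited figure

print(equivalent_flux)                  # ~722 W/m^2
print(equivalent_flux / forcing_2xCO2)  # ~195x the forcing figure
```

Even a fraction-of-a-percent hourly change in column energy dwarfs the forcing number, which is the scale comparison the comment is making.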
Dibble ==> Thanks for that….Can you give the link to the “ERA5 reanalysis produced by the ECMWF (European Centre for Medium-Range Weather Forecasts). A wide range of hourly values of atmospheric properties are available, on a 1/4 x 1/4 degree grid.”
https://cds.climate.copernicus.eu/cdsapp#!/dataset/reanalysis-era5-single-levels?tab=form
Dibbell ==> ERA5 is modelling and re-analysis….not direct observations….not sure it is anything like Reality Based.
Let me know if you have a different opinion.
The reanalysis is fed a large volume of acquired data to keep it as close to the real thing as possible.
It suffices to illustrate that the energy content of the entire column of the atmosphere (which can only be computed, not directly measured) is huge and highly variable, and is best understood as numbers that can be compared with the static GHG warming effect, also expressed as a number. This came to mind as you pointed out that it takes a computation involving not only temperature but also pressure and humidity to determine the heat content of air.
Dibbell ==> I only comment that ERA5 is not a measure of heat content or heat energy that could be averaged and then converted to a “temperature”
The ERA5 “vertical integral of total energy” is a computed estimate of a quantity that could indeed be summed and averaged and trended. But to what end? The reason I posted my comment was to jump to the logical extension of your valid point that temperature itself cannot be summed or averaged or trended to indicate accumulation or loss of heat content in the atmosphere. Even when you do estimate total energy properly, as in the ERA5 computations, you quickly discover that it varies so much and so rapidly above a location on the surface, as to make it absurd to claim a few watts per square meter of incremental static radiative warming effect can ever be isolated for reliable attribution. There is no suggestion here that the vertical integral of total energy would be or should be convertible into a temperature or a temperature trend.
Dibbell ==> Yes, I got that the first time….just wanted readers to know that ERA5 was not measured quantities.
ERA5 produces measured quantities. It just happens to use an extremely complex measurement model. Using the terminology from GUM JCGM 6:2020, it is a multi-stage measurement model. The fact that it is astonishingly complex does not make it any less of a measurement. Its vertical integral of total energy is a measurement.
“It is possible that future satellite missions will be able to measure directly and accurately Earth’s incoming and outgoing energy.”
So what?
They are not in balance
There are icehouse and greenhouse conditions
Lots of changes in between them
And a direct measurement on one day
that tells you nothing about the future climate
What percentage of people are living with rising, falling or steady local temperatures over time?
How about local warming or cooling that affects real people where they live and work?
Changes of local weather over time?
Not just an average temperature: the TMAX trend, the TMIN trend, % of days over 90 degrees F, % of days over 95 degrees F, etc.
Accurately measure Earth’s incoming and outgoing energy and you still don’t have the answer of how climate change is affecting people locally where they live and work. You don’t know if they like the changes in their local weather over time, or even if they have noticed. The local weather trend here in SE Michigan is warmer winters since the 1970s. We like that trend and want it to continue. We don’t live in the average temperature. No one does.
If I lived in Siberia and my TMIN was getting warmer, that’s good news. If I lived in the tropics and my TMAX was getting warmer, that’s bad news.
The theme of these articles: “One cannot average temperatures” is ridiculous. There are many monthly averages. So it can be done. And it is done. “Are they accurate averages” is another question. “Is a single global average useful” is another question.
Would a +1 degree C. increase in the average temperature over 100 years mean much? I doubt it. Would a +5 degree C. increase over one year mean much? You bet it would. These three articles serve no purpose in refuting the coming climate crisis fantasy or Nut Zero.
Richard ==> Why do you keep insisting (time after time) that every word here has to ” refut[e] the coming climate crisis fantasy”?
The original WUWT masthead was
“Watts Up With That? Commentary on puzzling things in life, nature, science, weather, climate change, technology, and recent news by Anthony Watts”
I stick close to that original outline.
I stated the articles do not refute climate fantasies or Nut Zero. THAT IS A FACT. We climate realists are losing the climate change propaganda war and the even worse Nut Zero project is being launched at great expense to ruin electric grids. Our climate realist side needs all the help we can get. When a good author like you devotes three articles to less important subjects, IMHO, that may make you happy, but it doesn’t help win the propaganda battle.
Richard ==> This series is about the fact that GMST (or GAST) is not proof or evidence for the basic assumption of Global Warming theory.
Hansen
CAGW is just a prediction
It does not exist.
CAGW is barely related to the global average temperature statistic. It is mainly a wild guess of ECS multiplied by 2x to 4x by a water vapor positive feedback fantasy.
It is not an extrapolation of the global average temperature statistic. If it was, the prediction would be for future AGW (not scary, and a reasonable theory), not for CAGW (climate fantasyland)
The average temperature statistic is evidence of warming or cooling. It is used for that reason and used to define ECS, which has about 365 different guesses — one for every scientist. It is an imperfect statistic. It does not represent heat, but it does represent an average of temperature measurements, with infilling and adjustments.
It is a real statistic, not a fantasy.
It will keep being used long after this series of articles is forgotten.
There should be a debate on the accuracy of the average temperature statistic. You could have pointed out that claimed margins of error are not reasonable margins of error. Or that margins of error can’t be calculated with infilling of numbers that can never be verified. But you did not.
And I suppose now we are armed with information to refute all the scientists in the world who use and debate the global average temperature statistic? I think not.
I think that the real issue is that the tribe believes the shaman, not that the shaman is wrong or right. People in general believe their party or their tribe spokesman. Anything that challenges that is just noise unless it has immediate and direct consequences. The CO2 myth is even better than the rumbling of the gods for effectiveness. Any conflicting claims, no matter how presented, are like the proverbial water off a duck’s back.
So many questions, Grasshopper . . .
They remove one scalpel from the climate anomaly quiver, which improves anthropogenic global probability of viability to breathe, to live another day amidst a social wash of wicked solutions.
Can you provide me with a gibberish decoder ring?
Richard ==> In your Fruit Loops cereal box….
Hansen hired a joke writer !
“Drink more Ovaltine.”
~Annie
H.R. ==> Ovaltine (malt) every morning for me!
“To support a claim that the Earth’s Climate System is “getting hotter” one has to have a long-term time series of measurements of heat in the climate system.”
This is just silly, and unsuccessful, playing with words. The whole essay is pointless. Hot does not mean that some object has high heat content, latent or otherwise. It means it has high temperature, as in red-hot etc. It is what burns you. It determines the speed of reactions.
The significance of temperature is that it moves heat around. It induces heat fluxes, which it is useful to be able to calculate, conserving energy. It will induce a heat flux into (or out of) you, which is the part of weather that we are most aware of.
Nick, “hot” and “cold” mean different things to different people.
The Missus and I can never agree whether our bedroom feels “hot” enough to warrant the ceiling fan on 1 or 3 setting, or on at all.
Same with temperatures values.
Our neighbor gets the vapors if her weather app says that it’s 28 C outside.
Me, I reckon that’s ideal conditions for mowing the lawn.
Don’t ask Nick if the wind is blowing, and how fast.
Oh, am I psychic or what?
Nick ==> Come on now, you know perfectly well that the whole Global Warming “thing” is based on the assertion, the hypothesis, that the Earth’s climate is gaining heat (energy) because of increasing CO2 concentrations in the atmosphere. I quoted two statements to that effect.
It is not I that uses Global Average Surface Temperature to support that hypothesis, but others.
THEY, not I, use the term “getting hotter” and point to the non-physical metric GMST (GAST or whatever).
If I want to know how comfortable I will be on any given day when I go outside, I look at the weather stats (in present time) to see the temperature and the humidity which will tell me how “hot” it will feel — because those two measurements will indicate relative heat content.
Only in the wild wild world of CliSci is temperature alone considered a proxy for heat.
Almost 8 billion first hand witnesses to up to 47 years of global warming since 1975 and no one asks them if they liked or hated their local climate change, or even if they noticed. We noticed here in SE Michigan and we love the small amount of warming (mainly the winters) since the 1970s.
Kip,
“I quoted two statements to that effect.”
Neither of the statements you quoted was about gaining heat energy. They were both about temperature rise. They say so.
“to see the temperature and the humidity which will tell me how “hot” it will feel — because those two measurements will indicate relative heat content.”
They don’t. To echo the buzzword of a few days ago, they are intensive variables, and are potentials. Temperature sets up a gradient that conducts heat into your body, and humidity inhibits a moisture gradient that might take it away. Doesn’t involve the heat content of anything.
Nick ==> The IPCC et al (NYTimes and Climate.gov) both clearly state that the energy retained by the Earth system due to CO2 is proven by the alleged increase in temperature.
That is the entire basis of the Global Warming Hypothesis – can’t see how you deny it….
Kip,
You have it totally the wrong way around. Global warming, as the name says, is about gain in temperature. That is what affects us. To sustain a temperature difference, you need a flux of heat, a generally conserved quantity. That you can study and measure, and so work out energy retained. It tells you how the world works, but the end result is temperature.
Nick ==> You are playing loose and fancy with the arrow of cause and the IPCC et al. definitions of Global warming.
Have you ever been to, say, Galveston when it is 90 and 90%? How about the desert when it is 90 and 10% humidity? If you have, you shouldn’t be saying it is all about temperature. It is not.
How do you ever expect to analyze the atmosphere properly if you ignore water vapor and its causes and results? You are just looking at part of the sun’s energy when you ignore water vapor.
It is energy content that drives the atmosphere, not temperature. It is energy content that tells you how many joules are available to drive a thunderstorm, i.e. how much moisture can you lift and how far you can lift it. Temperature won’t tell you that!
Why do you suppose there is a tornado alley and a hurricane alley? Why aren’t tornadoes common everywhere if it is only temperature that drives the climate. Minnesota has 80F temps just like Kansas but MN has few tornadoes. What’s the difference?
Nick,
What are the units for incoming radiation? The last I looked it was W / m^2 or more explicitly, Joules per second per square meter. Now if the incoming energy is in Joules, don’t you think you should be dealing in Joules? That is enthalpy, and not temperature!
As I’m still learning about thermodynamics, could you be clear that when you say “heat” you actually mean enthalpy?
Bellman, heat is energy in transit. Heat is only recognized as it crosses the system boundary. Heat is really internal energy that moves from one system to another due to a temperature gradient.
Enthalpy is H = U + PV.
U is internal energy and PV is pressure times volume. So no, when they say heat, they should not mean enthalpy.
I hope this helps. A web site called hyperphysics may help.
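The temperature-versus-heat-content distinction in this exchange can be made concrete with a rough psychrometric sketch. The formula and coefficients below are standard textbook approximations, and the two humidity scenarios are hypothetical illustrations, not data from the thread:

```python
# Rough psychrometric sketch: specific enthalpy of moist air per kg of
# dry air, h = cp_d*T + w*(L + cp_v*T). Coefficients are standard
# textbook approximations; the two scenarios are hypothetical.
CP_DRY = 1.006   # kJ/(kg*K), specific heat of dry air
CP_VAP = 1.86    # kJ/(kg*K), specific heat of water vapor
L_VAP = 2501.0   # kJ/kg, latent heat of vaporization near 0 C

def moist_enthalpy(t_celsius, mixing_ratio):
    """Approximate specific enthalpy in kJ per kg of dry air."""
    return CP_DRY * t_celsius + mixing_ratio * (L_VAP + CP_VAP * t_celsius)

humid = moist_enthalpy(30.0, 0.020)  # 30 C at 20 g/kg (muggy coastal air)
dry = moist_enthalpy(30.0, 0.002)    # 30 C at 2 g/kg (desert air)
print(round(humid, 1), round(dry, 1))  # same temperature, very different h
```

Two parcels at the same 30 C differ by more than a factor of two in enthalpy once the latent (moisture) term is included, which is why temperature alone is a poor stand-in for heat content.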
Kip ==> I get your point regarding averaging atmospheric temperatures where pressure and humidity mean equal temperatures don’t mean equal heat content. However, in the lab we frequently are required to precisely control temperatures in incubators, ovens, furnaces and baths. In most cases we need to control both average temperature and temperature distribution (uniformity). We therefore must measure temperature with a number of distributed sensors – e.g. a 4 x 4 x 4 grid of 64 sensors*. The data recorded from these at regular intervals can certainly be averaged since both pressure and humidity are equal throughout the space. Thus equal temperature between two sensors does imply equal heat content and variation in temperature implies proportional variation in heat content.
*Typically such gridded measurements are done in a calibration process and related to a single point control sensor so that the space is useable.
Rick ==> Of course, there are many things that apply “in the lab” that do not apply in the Rest of World.
In any small, tightly controlled space, one can pragmatically guess that averaging the readings from several sensors inside that space can be used to guesstimate “average temperature”.
You could, of course, with a few more sensors, actually calculate the heat content of the entire furnace.
The books on my shelf Mesoscale Meteorology by Markowski and Richardson and Dynamic Meteorology by Holton and Hakim use average temperatures. And it’s not just temperature. It’s other intensive properties like density, pressure, vorticity, etc. The uses are prolific and encompass spatial extents of the atmosphere.
And I already mentioned this above but the ensemble models GEFS, GEPS, EPS, NBM, etc. all average temperature and all show superior forecasting skill as compared to their non-ensemble counterparts. Averaging temperature (plus hundreds of other properties) clearly works in the real world too. It’s why the NWS uses the NBM almost exclusively for their official forecasts now.
Short-term weather forecasting (< 4 days outlook) is reasonably reliable.
In places whose climates are fairly docile.
(It’s a lot trickier in climates like the American Pacific North West – ask Prof Cliff Mass).
But weathers are not climates.
Climates forecasting has been atrocious.
(The Farmers Almanac seems to have been somewhat useful, with ~ 50% near misses)
I do enjoy reading Cliff Mass’ blog. He’s a smart guy.
1) I’m not appealing to authority. I’m appealing to evidence. And I always cite my sources so that I’m not accused of making stuff up, so that I’m not accused of plagiarism, and so that others can quickly find, verify, and learn the details of the evidence themselves.
2) And yet averaging model outputs has proven to be so effective that ensembling techniques are used almost exclusively now for weather forecasts especially beyond 3 days.
3) Weather forecast skill is quantified by various skill scores including root mean squared error (RMSE), anomaly correlation coefficient (ACC), brier skill score (BSS), equitable threat score (ETS), etc. For example, notice that the GEFS (multi-model average) destroys the GFS (single-model counterpart) in forecast skill of 500mb height forecasts extending the useful skill (ACC > 0.6) from 7 days to 10 days in 2021. It is undeniable that averaging is an incredibly powerful tool for reducing prediction error and increasing prediction skill.
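For readers unfamiliar with the skill scores named above, here is a minimal sketch of two of them, RMSE and an anomaly correlation, computed on made-up toy numbers (the forecasts, observations, and climatology value are all hypothetical):

```python
import math

# Toy illustration of two forecast skill metrics: root mean squared
# error (RMSE) and an anomaly correlation coefficient (ACC).
# Forecasts, observations, and the climatology value are hypothetical.
forecast = [1.0, 2.0, 3.5, 2.5]
observed = [1.2, 1.8, 3.0, 2.6]
climatology = 2.0  # assumed reference climatology

rmse = math.sqrt(sum((f - o) ** 2 for f, o in zip(forecast, observed))
                 / len(forecast))

fa = [f - climatology for f in forecast]  # forecast anomalies
oa = [o - climatology for o in observed]  # observed anomalies
acc = sum(x * y for x, y in zip(fa, oa)) / math.sqrt(
    sum(x * x for x in fa) * sum(y * y for y in oa))

print(round(rmse, 3), round(acc, 3))
```

An ACC above roughly 0.6, as mentioned in the comment, is the conventional threshold for a "useful" medium-range forecast.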
That seems the principle defect of the “Greenhouse Effect” itself. Believing an open system to behave like a small, contained one is inherently nonsensical.
As I tried to point out in the other threads, a triple-point bath in a laboratory is completely different from averaging air temperatures in Timbuktu and Kalamazoo. The average of the bath temperature is meaningful because it is not changing, or changing only tiny fractions of a degree. The bath temperature is needed for calibrating thermometers, for example.
Rick C
Thank you for that comment, even though it represents a special set of ceteris paribus type conditions.
If you have measurement data on the uncertainty of the constancy of temperature in a water bath, esp near room temperature, would be keen to see the results.
Interesting e.g. to compare with claims for Argo floats in open oceans.
Geoff S
This is a special case because the medium is water and the system is close to equilibrium, small, and well mixed. Therefore, if you calculated the heat content at each sensor position and then recalculated an average temperature from the average thermal energy content, everything would just drop out of the calculation and you would be left averaging temperatures.
I think that the thrust of this essay is summed up very well by a quote from Goldratt’s The Goal:
‘I don’t check the calculations, the math is nearly always correct, if there are mistakes they are in the assumptions’ (or something like that)
Certainly small spaces make such an assumption more viable but let me point out a small logical inconsistency. If you assume the temperature between any two sensors is equal to the average, then you can eliminate two sensors and put only one in between where the two would have been. Then, following that same assumption, you put one sensor in between each of the remaining. What you end up with is one sensor located with the device being calibrated.
The only other conclusion is that there are gradients throughout the water bath and multiple sensors are needed to validate that a constant temperature has been achieved throughout. My surmise is that those gradients are in constant flux and an average doesn’t tell you how the middle is actually changing.
However, that is far from saying that we have constant temperatures throughout the globe.
Actually, the use of multiple sensors is often required to determine the magnitude of temperature gradients and variability. Often a test procedure specification will specify the required mean temperature, say X +/- 0.2, and the maximum deviation of any single location, say X +/- 0.5. Or, in a standard calorimeter where high-precision water bath temperature change is measured, the bath is constantly stirred to assure uniformity and a single sensor is used. In the old manual type, a mercury-in-glass thermometer with graduations of 0.001 C read with a microscope was used.
How is it possible for the LIA to be cooler than today if average temperatures don’t exist?
According to the Essex paper from the last post
Bellman ==> “It makes problematic the claim that Earth’s temperature field is warmer or cooler today than it was a hundred years ago, or that one century is hotter than another century.” It is problematic — because Global Average Temperature is a non-physical concept.
It does not mean that there might not be a reality to warmer and cooler climates.
I have expressed my opinion freely — it is not, of course, based on any one’s calculated GAST or GMST.
Well, these events aren’t held on the Thames any more for one thing –
Yes. Not even in December 2010 or the winter of 1962/3. Despite them being a lot colder than when some of these frost fairs were happening.
Fear of drowning.
What was intended was that you could not average, let’s say Lagos, Nigeria and somewhere in Iceland and arrive at an average temperature at a spot equidistant from each of the locations. TEMPERATURES do not act at a distance to warm or cool a spot far removed.
I’ll say it again, if you want to address the energy arriving at earth in terms of “joules per second per square meter”, then you must also address the spatial distribution of those joules. That means both sensible heat and latent heat.
Temperature only addresses sensible heat, guess what deals with both latent and sensible heat?
“TEMPERATURES do not act at a distance to warm or cool a spot far removed.”
No, but temperatures do have spatial auto-correlation over short distances. That is why gridding programs for making isopleth maps use some form of inverse weighting that is a function of the distance from the interpolated point.
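A minimal sketch of the inverse-distance weighting idea described above (the station coordinates and values are hypothetical, and the power p = 2 is just a common default, not a universal standard):

```python
# Minimal inverse-distance-weighting (IDW) sketch of the kind gridding
# programs use; stations are hypothetical (x, y, value) tuples.
def idw(point, stations, p=2):
    """Interpolate a value at `point` from (x, y, value) stations."""
    num = den = 0.0
    for x, y, v in stations:
        d2 = (point[0] - x) ** 2 + (point[1] - y) ** 2
        if d2 == 0:
            return v  # exactly on a station: use its reading
        w = 1.0 / d2 ** (p / 2)  # weight falls off with distance^p
        num += w * v
        den += w
    return num / den

stations = [(0, 0, 10.0), (10, 0, 20.0)]
print(idw((5, 0), stations))  # midpoint: equal weights give 15.0
print(idw((2, 0), stations))  # nearer station dominates: closer to 10
```

Nearby readings dominate the interpolated value, which is the spatial auto-correlation point being made.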
I would appreciate the minuscule amount of effort to at least point out what in the above the down-voter disagrees with.
Don’t know who downvoted but it was wrong to do so.
I thought more about what you said. Here is my take.
Temperatures are not fundamental properties that occur on their own. Temperatures are created out of other properties and thus can’t tell you anything about the properties that create them, nor define the properties of separate “objects”.
An example is PV = nRT. “T” becomes PV / nR. If you measure the same T at two locations, you still can’t determine the other values. Averaging T won’t even give you the averages of any of the other variables. Averaging T is simply a fruitless exercise.
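The point can be illustrated with the ideal gas law itself: two parcels in entirely different states can share one temperature, so knowing T recovers none of P, V, or n. A quick sketch with assumed round-number values:

```python
# Ideal gas law sketch: T = PV/(nR). The two parcels below (assumed,
# round-number values) are in different states yet share one temperature.
R = 8.314  # J/(mol*K), universal gas constant

def temperature(p_pa, v_m3, n_mol):
    """Temperature in kelvin from pressure, volume, and moles."""
    return p_pa * v_m3 / (n_mol * R)

t1 = temperature(101325.0, 1.0, 40.0)  # ~1 atm in 1 m^3
t2 = temperature(50662.5, 2.0, 40.0)   # half the pressure, twice the volume
print(round(t1, 2), round(t2, 2))      # identical temperatures
```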
Temperature cannot be added to temperature, thus cannot be averaged
======
Speed is an intensive property like temperature. It can be averaged.
Speed = distance/time thus
Avg(speed) = total distance/ total time.
Reduce temp to energy/mass then it can be averaged
ferd ==> Of course, you are absolutely correct.
You are not reducing temperature to anything with energy/mass; you are converting a temperature to some measure of heat — which is the whole point. Energy/mass is not measured or reported in degrees (on any scale) but in kcal/kg.
If one converts intensive properties to extensive properties, the extensive properties CAN be averaged.
Speed is not a fundamental property, it is a combination of length and time which are.
Geoff S
A more correct analogy would be
Avg Speed = (D1/T1 + D2/T2) / 2
This will give you an average speed, but it doesn’t actually exist. You are computing a number that is not real.
Nowhere in the distance traveled did that average speed exist. This is the equivalent of the Essex paper saying that the end points don’t work at a distance. Velocity 1 doesn’t determine the velocity at all points, and neither does Velocity 2.
Take 2 cubic meters of dry air at 10C and 1 cubic meter of air at 20C; if you mixed these, the average temp of the resulting 3 cubic meters of dry air would be 13.3C.
Intensive properties like speed, temperature and density can be averaged by converting them to extensive properties.
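The mixing example above can be sketched directly. Assuming (as the example implicitly does) equal density for the two parcels, the result is the volume-weighted mean:

```python
# Volume-weighted mixing sketch for the 2 m^3 + 1 m^3 example, assuming
# equal density for both parcels (a simplification: colder air is denser).
volumes = [2.0, 1.0]   # m^3
temps = [10.0, 20.0]   # C
mixed = sum(v * t for v, t in zip(volumes, temps)) / sum(volumes)
print(round(mixed, 2))  # 13.33, not the naive (10 + 20) / 2 = 15
```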
ferd ==> “Intensive properties like speed, temperature and density can be averaged by converting them to extensive properties..”
Yes, one MUST convert them to extensive properties to be able to average them. But one is no longer averaging temperatures, but heat in joules or calories etc.
That would be energy in joules, which can be known and can be correctly averaged. Heat is the total thermodynamic internal energy, which is not known. Temperature is a measure of local avg. energy.
A car travels for 2 hours at 10mph and 1 hour at 20mph. What is the average speed?
The average speed is D/T = 40 miles / 3 hours ≈ 13.3 mph.
This is the exact same problem I presented above using speed in place of temperature.
Note you cannot calculate average speed as (10+20)/2
The correct answer is (10+10+20)/3
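The arithmetic above, as a one-liner sketch:

```python
# Time-weighted average speed: total distance over total time,
# not the arithmetic mean of the leg speeds.
legs = [(10.0, 2.0), (20.0, 1.0)]  # (speed in mph, duration in hours)
total_distance = sum(s * t for s, t in legs)  # 20 + 20 = 40 miles
total_time = sum(t for _, t in legs)          # 3 hours
print(total_distance / total_time)  # 13.33... mph
print((10.0 + 20.0) / 2)            # 15.0: the naive, incorrect average
```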
ferd ==> The analogy doesn’t apply; speed is a function of distance and time.
ferd ==> “dividing an extensive property of matter such as the space traveled (x) between another extensive property of matter such as time (t)” results in an intensive property: speed.
To get back to an averageable extensive property, one has to reverse that process. Speed is already enumerated in something like miles per hour. When you add two distances and two times, you are working with extensive properties and then converting them back into an intensive property.
You cannot however average the speed that the Earth travels around the Sun with that of Mercury and the other planets and claim to have found the average speed of planets — there is no such thing.
It still gives you a number that is not real. Did you ever travel at 40/3 mph?
An average is 13.3 mph. Were you traveling at 13.3 mph at the 20-mile midpoint in your journey? According to your problem you were traveling at 10 mph at 19.99999+ miles and 20 mph at 20.00001- miles. Now in real life you had a gradient from 10 to 20 mph. The average there would be 15 mph, assuming a linear increase.
Your point is well taken. I am just trying to point out that there are other issues involved when solving physical problems. Simple arithmetic averages generally don’t work out well unless your data has a normal distribution.
intensive properties of matter and not subject to being added, multiplied or subsequently divided,
=====
Most intensive properties are fractions created by dividing an extensive property by another extensive property.
You can add, subt, mult, div these so long as you first convert them to a common denominator.
For example, addition:
a/b + c/d is not (a+c)/(b+d)
It is (ad+bc)/(bd)
Thus you can add intensive properties. The rules of fractions can be used for +-/*
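The fraction arithmetic cited above is, as arithmetic, the standard rule (whatever one makes of its application to intensive properties); Python's `fractions` module demonstrates it:

```python
from fractions import Fraction

# The common-denominator rule quoted above, a/b + c/d = (a*d + b*c)/(b*d),
# is exactly what exact rational arithmetic does:
a_b = Fraction(1, 3)
c_d = Fraction(1, 4)
print(a_b + c_d)               # 7/12 = (1*4 + 3*1) / (3*4)
print(Fraction(1 + 1, 3 + 4))  # 2/7: the "(a+c)/(b+d)" rule, which is wrong
```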
ferd ==> This is the last time that I am going to point out that what you say is exactly what I am saying. You needn’t keep saying it.
Only this bit “Thus you can add intensive properties. The rules of fractions can be used for +-/*” is wrong.
Temperature itself is enumerated in degrees (F, C, K). Heat, however, is in joules and is a measure of the extensive property “heat” (heat content or energy).
It is not a matter of “common denominators — but of different properties of matter.
To be more technical, air can also have gravitational potential energy (height above earth) and kinetic energy (wind). The height & wind are less important because they don’t contribute to long term energy changes. The compression or decompression of air at varying altitudes & locations change temperatures while overall energy stays the same. Solar emissions & gravity from all other celestial bodies have some influence over water/atmospheric movement (fluctuations).
Being unable to model all the physics & influences accurately means the models resort to shortcuts, approximations, factors ignored & unscientific limits applied to parts of the models so the model gives sensible output with the semblance of possibility & scientific basis.
Measuring the earth’s temperature from space relies upon the S-B law, which is a 4th power relationship.
Because this is non-linear, the average temperature as measured from space will be affected by both the mean and the variance. Surface temperatures are not corrected for this.
1^4+3^4=82
2^4+2^4=32
In the above example, the average temp is 2 in both cases but there is a difference in outgoing radiation.
This also applies in reverse. For the exact same outgoing average radiation you can get different average surface temperature.
This is referred to as the rectification effect in academic literature like that of Trenberth et al. 2009. For Earth the rectification effect is about 1 K or 6 W/m2. You can avoid the rectification error by doing a spatial integration of the SB law. However, to do the reverse, determine the average temperature from the average outgoing radiation, you must know and apply a correction for the rectification effect.
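The numeric example above can be restated as a short sketch: because emission scales as T^4, two fields with the same average temperature radiate differently:

```python
# Because emission scales as T^4 (Stefan-Boltzmann), the mean of T^4
# differs from (mean T)^4 whenever temperature varies. Using the toy
# numbers from the comment above:
temps_a = [1.0, 3.0]  # varying field
temps_b = [2.0, 2.0]  # uniform field with the same mean

def mean_fourth_power(ts):
    return sum(t ** 4 for t in ts) / len(ts)

print(sum(temps_a) / 2, sum(temps_b) / 2)  # same mean temperature: 2.0
print(mean_fourth_power(temps_a))          # 41.0 (82 / 2)
print(mean_fourth_power(temps_b))          # 16.0 (32 / 2)
```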
Thanks for this, bdgwx!
“Rectification effect” — what an odd name for it.
But 6 W/m² = 1°C seems reasonable, to me. However, it seems surprising that Trenberth 2009 would acknowledge that such a large difference in RF makes such a small difference in temperature. I found where they estimate 6 W/m², but I didn’t find the 1K estimate:
AR6 suggests a much smaller ratio (larger temperature difference):
“One cannot average temperatures”
Actually it is not hard to do ;))
Three articles to prove heat and temperature are different?
Perhaps the author is being paid by the word?
The explanation should have required one paragraph:
Is temperature the same as heat?
Heat is a measure of change, never a property possessed by an object or system. Therefore, it is classified as a process variable. Temperature describes the average kinetic energy of molecules within a material or system and is measured in Celsius (°C), Kelvin (K), Fahrenheit (°F), or Rankine (R).
Add to that Article 2’s erroneous assertion about what you cannot do sensibly with a sum of any intensive variable (contrary to what every empirical scientist knows: a useful statistic, the sample mean, can be obtained by dividing such a sum by N), plus the fact that it ignored that sums of extensive variables can also be meaningless. Thus Article 2 gives you an extensive-vs-intensive “nothing-burger”, and worse than that, mathematical nonsense.
My disappointment is that one of the best authors here thought three articles on this subject were a valuable use of his time.
Fortunately, a new article here on Wednesday analyzed the uncertainty of temperature measurements themselves:
Uncertainty Estimates for Routine Temperature Data Sets. – Watts Up With That?
Richard ==> Yes, a fine essay.
Reread my series on The Laws of Averages
No, read the statistical literature, including textbooks. “Averaging averages is only valid when the sets of data — groups, cohorts, number of measurements — are all exactly equal in size (or very nearly so), contain the same number of elements, from…”
https://wattsupwiththat.com/2017/07/24/the-laws-of-averages-part-3-the-average-average/
This is WRONG. Where do I start? You could try reading this paper by colleagues of mine on averaging across random and fixed effects in Linear Mixed Models (https://doi.org/10.1111/j.1467-842X.2004.00334.x). My paper here (https://doi.org/10.9734/arrb/2021/v36i1230460) uses a meta-analytic LMM approach using MCMC sampling to model (i.e. average via the LMM) means with unequal sample sizes at the first sampling level, which is handled by using an extra fixed variance component. I do not need to read your series; I have seen enough of your amateur statistical theory prognostications.
I thought you were a science journalist. You have a penchant for lecturing the world on mathematical statistics principles, and I saw a post where you threatened to lecture us on thermodynamics as well. What exactly are your qualifications to lecture us all on these technical areas? I have seen enough of your misconceptions in my field of mathematical statistics (applied and theory) and your lack of knowledge of the peer-review literature (unless you can cite some references to your papers in the peer-review literature, or say a textbook you have published) to be concerned about WUWT articles like these spreading fake-news statistical methods.
Who are you talking to?
The appeal to authority logical fallacy won’t sway many opinions here. Why should it?
If the data are baloney, their average is baloney too. That’s my “statistics”
I gave references to the statistical literature on why Kip’s statement I quoted was wrong. That is argument to peer-review literature that has been tested and validated by professional statisticians. That is how science works. Not argument to personal authority but to published authority. These blogs do not reference fairly the peer review literature.
That’s because they are just blogs. No discipline of fairly considering and referencing the published peer-review literature is required or even encouraged. Publishing in the peer-review literature is hard work and takes scholastic discipline unlike blogs like this.
Steven G. Candy ==> Your paper (linked above) “Long-term Trend in Mean Density of Antarctic Krill (Euphausia superba) Uncertain” demonstrates the near impossibility of obtaining sensible, real world results, when attempting to do what you claim — whether it is with linear or chaotic data sets. In the end, you yourself find that using seven different models gives results that are not only not numerically precise or particularly accurate but are, in the end, still quite uncertain. Such models do not and cannot give precise numerical results and are only “possibly useful”.
I appreciate that there are lots of ways to attempt to find answers to questions asked of disparate data sets and that your paper demonstrates a good faith attempt when forced to work with poorly constrained data — one has to use what one has.
R G Brown, at Duke, is working on ways to make predictions from known-to-be-chaotic data sets. He hasn’t had much luck, other than some vague statistical probabilities that the future might lie more often in this direction than that.
But the idea that statistics provide real world answers to real world problems or even accurate and precise answers to scientific questions “because we are really good with numbers and models” is extremely questionable.
Statistics and modelling have their uses — but cannot perform magic such as making the “averaging of averages of averages” a valid mathematical scientific practice.
Kip ==> “Statistics and modelling have their uses — but cannot perform magic such as making the “averaging of averages of averages” a valid mathematical scientific practice.”
Here is a simple 2-level nested model and Y_bar is the average of the sampling-level 1 averages (i.e. averaging of averages), assuming repeat sampling with n_i fixed then you get the following exact results. For n_i random an approximation can be used assuming say a truncated Poisson for the n_i. So what is mathematically invalid about the results below? (I can also post the truncated-Poisson n_i approximate version and its accuracy. NB approximations are a valid mathematical tool when their accuracy can be quantified).
There was a notational error in the last line. Corrected below.
In my paper modelling long-term trends in mean krill densities (which, in terms of the nature of the data as unbalanced cross-sectional and longitudinal samples, has similarities to long-term weather station measurements, where stations are added or removed and there are confounding effects on the “true” long-term trend, e.g. urban heat island effects), I considered the n_i fixed. There are justifiable reasons for doing this, and statistical inference is all about making supportable assumptions for the inferences that are required. In this case what was of interest is the mean trend and its uncertainty given (i.e. conditional on) the realisation of the survey sampling design (with “design” used quite loosely for that particular multi-national dataset), which was not of interest in and of itself.
Statisticians and empirical scientists are always debating the merits or otherwise of the particular assumptions made for the required inferences given the data and competing models. What they do not debate, however, is patently incorrect generalised assertions such as “averaging averages is mathematically and/or scientifically invalid”.
In a consultancy I am currently working on, the mean is not of interest but the survey design and how it determines the variance of the mean for future surveys is the statistical inference of interest. In that case I am treating the n_i as random and using the zero-truncated Poisson approximation. It doesn’t make a huge difference whether the n_i are considered fixed or random, but “horses for courses”, as they say.
It is incorrect when you are averaging intensive variables as a proxy for thermodynamic heat. There is nothing keeping one from averaging temperatures if all you are after is an average temperature. What you can’t do is then say, “I know what the energy distribution of the earth is.” Temperature is NOT a complete measure of energy, nor can total energy be inferred from temperature; only enthalpy can give you that.
Jim, intensive variable temperature (local avg. energy) can be converted to extensive variable energy then physically averaged as energy and converted back to temperature. That step isn’t needed when the emissivity at each thermometer site doesn’t vary much; nature is being kind in that regard.
If there are enough thermometers measuring local avg. thermodynamic internal energy in the total space of interest and they avg. higher after a certain time with enough statistical confidence, then the total thermodynamic internal energy in the space of interest will be confidently higher even though it is unknown. In that specific case as applied to climate, temperature is indeed a good proxy for whether the total thermodynamic internal energy in the space of interest has increased (or not) over climate timeframes.
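One way to sketch the convert-then-average step described above: treat radiant emittance per unit area (proportional to T^4 by Stefan-Boltzmann, assuming uniform emissivity) as the extensive stand-in, average that, then convert back. The two site temperatures below are invented for illustration:

```python
# Two sites at different temperatures (kelvin); emissivity assumed uniform.
temps_k = [250.0, 300.0]

# Averaging the intensive variable directly:
arithmetic_mean = sum(temps_k) / len(temps_k)

# Convert to the extensive stand-in (emittance ~ T**4), average it,
# then convert the averaged energy flux back to a temperature:
radiative_mean = (sum(t ** 4 for t in temps_k) / len(temps_k)) ** 0.25

print(arithmetic_mean, round(radiative_mean, 2))  # 275.0 vs ~278.35
```

The two means differ, which is the point: averaging the intensive variable directly is not the same operation as averaging the underlying extensive quantity.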
NB: FYI, I wasn’t the downvote on your 5:57 am noted at time of my reply.
But you don’t have a clue about the value as compared to other distant locations. You can’t assume that humidity increases the same everywhere. This was an assumption of the GHE theory and it hasn’t proven to be correct. The last time I checked, total global rainfall had not increased significantly.
The point of science is to eliminate change in as many variables as possible when dealing with just one. In other words a controlled experiment. If you do not include water vapor in your measurement, then all you are doing is guessing.
Jim, the water vapor effect on local temperature is included in the thermometer measurements. The increase and trend in global water vapor and other pertinent global climate variables have now been measured with 95% confidence during the satellite era.
You will have to find something else to guess about that is left out of the measured GHE warming trend other than humidity, rainfall, and wv.
You need to go study some physics and thermodynamics. Water vapor latent heat cannot be measured by a thermometer. Latent means “hidden”. Sensible heat, which a thermometer can read, is a measure of the translational (movement in space) kinetic energy of atoms and molecules. Latent heat does not result in translational movement, so it is not “included” in thermometer readings.
Read this paper.
Thermo.2007.pdf (utah.edu)
Jim 3:33 pm, I have encouraged commenters to use enthalpy of vaporization instead of the archaic and confusing term latent heat. Using enthalpy of vaporization will help you see your phrase “Water vapor latent heat” is not used coherently in your comment.
Jim also writes at 4:20 am: “Temperature is an intensive variable and it only measures part of the energy that our atmosphere holds”.
Jim will need to brush up on meteorological CAPE in sec. 7.3 of his own link.
Steven Candy ==> Just to be clear, I never claimed that “averaging averages is mathematically and/or scientifically invalid”. Just to be sure I didn’t mis-speak in any of the three parts of this essay, I checked (text search) and confirmed I had not used the word “invalid” even once, about anything.
What I did say was “mean temperature”, as is being used in CliSCi today, is a physically improper metric.” and “Part 2 of this series dealt with the reasons why “One cannot average temperatures”. This fact is a bit harder for most to understand as it is a common everyday practice to average temperatures, speak of “the average temperature” of some day, city, region, or even the whole globe. Thus, when shown that the practice is scientifically improper and the results of such are nonsensical (except in the most simplistic, daily pragmatic senses), confusion and objection results.”
And “this is just about the improper practice of averaging the intensive property temperature.” and ““mean temperature”, as is being used in CliSCi today, is a physically improper metric.”
I have written in my series on The Laws of Averages that one has to be very careful if one attempts to average averages…. “Averaging averages is fraught with danger and must be viewed cautiously. Averaged averages should be considered suspect until proven otherwise.”
Kip Hansen ==> “averaging averages is mathematically and/or scientifically invalid” is a fair paraphrase of this direct quote from your article “Averaging averages is only valid when the sets of data — groups, cohorts, number of measurements —
are all exactly equal in size (or very nearly so), contain the same number of elements, from…” (my bolding)
https://wattsupwiththat.com/2017/07/24/the-laws-of-averages-part-3-the-average-average/.
So the “only valid” restriction under the conditions you set is an incorrect assertion, mathematically and/or scientifically. You checked for “invalid” but not for the term “valid”, and “only valid” means “invalid” outside of the “only”! Also, I gave the direct quote for you to confirm anyway.
You did not acknowledge my simple example, which disproves your assertion involving the words “only valid”. Playing semantics does not cut it in the real world of empirical science.
And this also is a direct quote from your earlier post
““Statistics and modelling have their uses — but cannot perform magic such as making the “averaging of averages of averages” a valid mathematical scientific practice.”
Steven ==> You didn’t complain about that specifically — but nonetheless, my statement above is true…. averaging of averages of averages is not “valid” in the sense discussed in my previous essays on averaging. I am not, of course, the only person in the world who understands that there are deep problems with the willy-nilly averaging of averages of averages.
True for engineers and pragmatists — statisticians have a different view (about most everything….)
Part of the problem many have is that climate scientists and statisticians never, ever carry through with determining the variance at each stage of processing. Each mean, when used as a distribution descriptor, must also have a variance, or the mean is meaningless.
I just picked a couple of days from my records this summer. Averaging Tmax and Tmin gives a mean in the mid-80’s and a variance around 300, and that’s just for individual days. Averaging 3 or 4 days did not change the descriptors to any extent.
Since these are samples, the SEM is the standard deviation, which is in the 17–20 degree range.
Does anyone ever publish these figures for daily, monthly, annual, or global averages? Not that I’ve ever seen. It seems like they are assumed to diminish through the numerous averages. If you can’t quote a variance with a calculated mean, then the mean is meaningless!
“It seems like they are assumed to diminish through the numerous averages.” That is often the case. In the nested sampling design theory I posted you can see (let’s assume for simplicity n_i = n for all i) that the level-1 variance is divided by m*n whereas the level-2 variance is divided by just m, so the contribution of the level-1 variance to the variance of Y_bar approaches zero at a much faster rate than that of the level-2 variance as m increases. That is not just an assumption; it is a mathematical property of the sampling design and the corresponding basic linear mixed model.
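That rate property can be checked by simulation; this is a minimal Monte-Carlo sketch of the balanced nested design (the variance components and sample sizes below are made-up values, not from any dataset in the thread):

```python
import random

random.seed(42)
m, n = 50, 10                    # m level-2 groups, n level-1 obs each
s2_lvl2, s2_lvl1 = 4.0, 9.0      # made-up variance components

def y_bar():
    """One realisation of the average of group averages under the model."""
    group_means = []
    for _ in range(m):
        b = random.gauss(0, s2_lvl2 ** 0.5)              # level-2 effect
        obs = [10 + b + random.gauss(0, s2_lvl1 ** 0.5) for _ in range(n)]
        group_means.append(sum(obs) / n)
    return sum(group_means) / m

draws = [y_bar() for _ in range(2000)]
mu = sum(draws) / len(draws)
var_hat = sum((d - mu) ** 2 for d in draws) / (len(draws) - 1)

# Theory for the balanced design: Var(Y_bar) = s2_lvl2/m + s2_lvl1/(m*n),
# so the level-1 component shrinks n times faster as m grows.
var_theory = s2_lvl2 / m + s2_lvl1 / (m * n)
print(round(var_hat, 4), var_theory)
```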
You are avoiding the issue. The statement in your blog/essay was WRONG specifically “Averaging averages is only valid when the sets of data — groups, cohorts, number of measurements — are all exactly equal in size (or very nearly so), contain the same number of elements, from…”
Of course what you are averaging has to be considered and be sensible (e.g. percentages), whether it comes from multiple levels of sampling or not. The issue of weighting (by the n_i) is that the unweighted and weighted estimators are both unbiased, so both are valid and useful; which is the minimum-variance estimator can be studied. (I will leave that for your homework.)

I gave specific mathematical results and peer-reviewed statistical references; all you gave were incorrect sweeping generalisations (“only valid”) with no technical support, plus internet blog posts. You seem to have some animus toward statisticians, since they are apparently not pragmatic. I can assure you that empirical scientists (including applied statisticians) need to be pragmatic to know which mathematical/statistical issues are more important than others in delivering best-achievable real-world results.

If your version of pragmatism is “don’t worry about being technically correct in the fundamentals of the science/maths”, well, I am pleased not to be that sort of “pragmatist”. That is my description of your unstated position, given your avoidance of my maths example that proves your statement was incorrect, your unwillingness to admit as much and correct it, and the fact that you instead double down and will not admit that specific technical statement was wrong.
Hubris much: “but nonetheless, my statement about is true…. averaging of averages of averages is not “valid” in the sense discussed in my previous essays on averaging”
Despite its many flaws, the peer-review process forces the author(s) to address errors in methods unlike a blog like this where the author can ignore, deflect, or “bluster” their way out of demonstrating scholastic discipline and integrity.
You quote some blogs eg https://math.stackexchange.com/questions/95909/why-is-an-average-of-an-average-usually-incorrect
I checked some of the supposed answers and they were incorrect or incomplete. The answer that considered X and Y samples and combined them failed to use a correct nested sampling design. For the answer that used the school averages example and correlated n_i with Y_bar_i, it’s not weighting that saves the day, as that poster asserted; rather, you have to consider the n_i as a random variable and a predictor variable, as well as the level-1 sample sizes, in a linear mixed model if Y_bar_i is a linear function of n_i. You can fit nonlinear mixed models or generalized linear mixed models as well.

Mixed models are commonly used in serious empirical research when appropriate. They were a major part of my published stats and applied research. Unless you have done a course in them, or applied them with knowledge of the underlying theory, you haven’t got off first base. Moral: DO NOT trust everything you read on the internet.
The problem with those lacking skills in statistical theory is that they do not know how to calculate expectations, variances, and distributions of sample statistics/parameter estimates under experimental/survey designs for population or super-population sampling frames. Also they do not understand principles of statistical inference adequately. These give the theory which justifies/validates/disproves the various practices proposed for data analysis. That’s why you need to go to the statistical methods literature including text books.
I still think you are one of the best authors here, but wonder why you thought this subject was so important that it needed three articles. I think the global average temperature statistic is only accurate enough to tell us if there is a long-term warming trend or long-term cooling trend. A coin flip would be right half the time.
Besides the haphazard temperature measurements, no one lives in the global average temperature. And even more important, so far no one can predict the future climate. CAGW is nothing more than a prediction of climate doom. Wrong for over 50 years so far. Not reality.
Show a reference where you can divide the “sample mean” by N to get a statistical descriptor of the sample mean.
Read this.
Sampling Distributions (stattrek.com)
Pay attention to this equation.
σx = σ / sqrt(n)
where σx is the sample mean standard deviation
σ is the population standard deviation
n is the sample size
One needs to decide if the temperature database is a population or a group of samples. Most folks call it a group of samples and then calculate the mean and standard deviation of the sampling distribution. In turn they divide by N to get an “SEM”. When you divide a sample distribution by N, you get a meaningless number.
If you declare the temperature database as a population, there is really no reason to do sampling. You have all the data, calculate the mean and the standard deviation and you have all the statistical descriptors that you need.
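The σ/√n relationship quoted above is easy to demonstrate by resampling a synthetic population; all numbers here are illustrative, not temperature data:

```python
import math
import random

random.seed(0)

# A synthetic "population" with known spread, and repeated samples of size n.
population = [random.gauss(15, 5) for _ in range(100_000)]
pop_mean = sum(population) / len(population)
sigma = (sum((x - pop_mean) ** 2 for x in population) / len(population)) ** 0.5
n = 25

# The sd of many sample means should match sigma / sqrt(n),
# the standard error of the mean from the formula quoted above.
means = []
for _ in range(4000):
    sample = random.sample(population, n)
    means.append(sum(sample) / n)
mu = sum(means) / len(means)
sd_of_means = (sum((m - mu) ** 2 for m in means) / (len(means) - 1)) ** 0.5

print(round(sd_of_means, 2), round(sigma / math.sqrt(n), 2))
```

The point of contention in the thread, of course, is not the formula itself but what population the n temperature readings are a sample of.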
I never said “divide the sample mean by N”; I was referring to dividing the sum of the sample values by N. I don’t need you twisting my statements and I definitely don’t need a lecture on statistical methods 1.01 from you. I have a PhD in applied statistics, 3 sole-author and 1 senior-author papers in applied statistics journals, and many, many more in applied journals.
Pay attention to this: y = Xα + Zβ + e (details in https://doi.org/10.9734/arrb/2021/v36i1230460). This is one modern statistical approach to modelling a time series of means with a high degree of spatial × temporal imbalance.
Part of the problem you have is you don’t follow what Kip is trying to relate. Temperature is an intensive variable, and it only measures part of the energy that our atmosphere holds. Temperatures can’t meaningfully be multiplied or divided, operations which are simply analogs of summing and subtracting. This is the case regardless of the math of statistics.
As I’ve said several times the sun’s energy is apportioned between sensible heat (temperature) and latent heat (water vapor). Much of what climate scientists deal with is temperature which only measures part of what the sun provides. Latent heat varies over the surface of the earth so averaging only temperature leaves a large amount of energy missing in the calculations.
Imagine scooping a net of krill and measuring the mass to determine how many are in that scoop. Now imagine there are krill that have no mass and that they can be a large part of what you captured, or there may be none in the scoop. Call them latent krill. They can be eaten and provide energy. How do you, through statistics, deal not only with what you do measure but also include what you don’t measure? You can’t! Your only option is to deal with the massless krill in another manner, i.e., measuring the latent krill in another fashion (call it humidity) and including that in your calculations.
So “there are krill that have no mass” (i.e. imaginary krill) that “can be eaten and provide energy”!! Go figure, you are only going to get imaginary krill oil out of that imaginary lunch!
Bottom line: No one can say for sure what a global average surface temperature anomaly is, what its accuracy limit is, or what the perfect number should be for the earth.
So all political efforts designed to “reduce heat in the atmosphere” can never be measured to see if they are effective or not. Seems like a waste of trillions of dollars for the effort in my mind. Doing nothing accomplishes the same results.
The Earth receives energy and entropy from the sun and radiates energy and entropy to outer space.
However, as with any open fluid thermodynamic system, the Earth exports vastly more entropy than it receives.
This is due to the material thermodynamic response in the system to absorbed and emitted radiation.
Vast motions and diffusion of mass, frictional dissipation of winds, molecular diffusion of heat, the phase changes of water, and the existence of biological systems all represent expressions of entropy production.
In its simplest terms, a change in entropy dS is equal to the heat transfer divided by temperature: dS = dQ/T.
The fundamental constraint for the purpose of understanding Earth’s climate is that the rate of entropy production (dS/dt) must be greater than or equal to zero. This is an expression of the second law. Only in steady state will dS/dt ≈ 0.
Some people confuse this concept. For example, some argue that a cold object influencing the temperature of a warmer object is incompatible with second law.
What they are really arguing about is the rate of entropy production. In the context of climate, their arguments depend on the fluid thermodynamic climate systems of the Earth.
In terms of a ‘backradiation’ perturbation in the atmosphere, the backradiation itself is not the violation. The violation of second law is omitting the consequent change in entropy production.
The Earth system response to backradiation perturbation must be a positive increase in entropy production. This, then, must minimize the temperature change.
For example, a slight increase in moisture content may be sufficient to minimize temperature rise. This is an apparent paradox and poses interesting questions pertaining to the relevance of Clausius-Clapeyron assumptions.
What we’re really interested in is total water suspended in air, vapour or not, to understand climates.
Some physics curricula and most engineering programs teach the concept of exergy, the inverse of entropy production, to aid understanding.
In the climate literature exergy has been called ‘available energy’ or ‘available enthalpy’.
Inversely, entropy is defined as the ‘available potential energy’, or ‘unavailable enthalpy’ i.e. ‘unavailable energy’.
The definitions are useful tools to allow application of second law in discussion of climates. Not least the interactions of radiations, mass, winds, and water in all its phases.
The material interacting dynamic systems will not remain static in response to a forcing. Coincident changes in the rates of dissipation, diffusion of heat and mass, and water phase changes must persist according to second law.
What is often omitted in discussion of climates is that both radiative process and material process are responsible for ‘energy availability’, or best described by ‘energy unavailability’, IMO.
The question is, how much is ‘temperature’ change minimized by the material response in the total system i.e. all dynamic interacting sub-systems.
The backradiation deniers should argue that S increases sufficiently as to offset any observable change in temperature of the Earth system, at whichever layer of interest.
JCM ==> Interesting, thank you.
The question I raise here is whether or not GMST (even if it could be calculated) would be a direct proxy for energy in the Earth system.
GMST guesstimates may give some semblance of thermal available energy near the surface. This is a function of energy fluctuations between different reservoirs, both between available energy reservoirs, and the partitioning to ‘unavailable’ reservoirs.
Any change to GMST certainly represents a type of climate change, but it tells us nothing of energy partitioning, or total Earth system energy.
The root of Clausius’ statement on the second law, and its relation to the first law (total energy conservation), is that ‘Heat can never pass from a colder to a warmer body without some other change‘.
This is why climate accounting must include entropy budgets, to account for partitioning to unavailable reservoirs. Energy conservation (first law) doesn’t work without also including associated entropy budgets (second law).
Clausius frequently wrote of water phases, knowing its critical importance to quantifying such variables.
Most of the energy in Earth’s climate system is in the oceans. It is the reason the surface temperature varies so little.
There is reasonable indication that the heat content of oceans is increasing. ARGO buoys have wide coverage now.
Where climate science fails is understanding WHY the heat content is increasing.
The dominant reason is the slowing down of the water cycle as the average temperature of the northern land masses increases due to increasing solar intensity. The slower water cycle results in atmospheric water increasing while evaporation reduces and the ocean thermocline deepens. Global runoff has been reducing for the last 70 years. No climate model predicts this trend.
It is impossible to warm the ocean surface from the top down in a matter of decades; that requires conduction working against convection, and it does not happen in decades to depths of 2000 m. The only way to warm the deep ocean is to slow the rate of evaporation. That reduces the cool deep currents that draw cool water from high latitudes to low latitudes. The oceans warm up because the circulation slows down.
RickWill ==> The oceans are one of the two coupled non-linear chaotic systems that make the Earth climate. We don’t really understand the oceans enough to make broad claims — though we know more each year.
“It is impossible to warm ocean surface from the top down in a matter of decades”
Even 1000 years won’t be enough.
How long does it take for ocean water to cycle?
1000 years
“According to NOAA It takes almost a 1000 years to complete a cycle”
Humans are having zero effect on the deeper ocean cycle unless they did over a 1000 years ago. Of course no humans had any effect back then either, and the deeper oceans are now responding to what happened almost a 1000 years ago.
Kip, thanks for a very informative series. Having dealt with trying to extract meaningful data from weather stats over the last 30 years in irrigation, I appreciate how you can only compare apples with apples. One often forgets to differentiate between the intensive and extensive qualities. When considering your series as a whole and digesting it, clisci needs to be thoroughly re-evaluated. If one considers average temperature, say in a desert with a 45 deg day and close to zero at night, versus a tropical day of 33 with a night of 12, they are nowhere near the same “average” temperature.
(45 + 0) / 2 = 22.5 and (33 + 12) / 2 = 22.5. What am I missing here?
Why did you add the Tmin to the Tmax in both cases?
That was the only averaging method I was able to perform given the limited amount of data provided. My guess is that Taschas’ point is that given more information and a more accurate method of computing the daily average you could discover that the true average and the trivial approximation of it using only Tmin and Tmax diverge.
For example consider the following sets of values.
A: {45, 40, 35, 25, 15, 0}
(max + min) / 2 = 22.5
Σ[X_i, 1, 6] / 6 = 26.7
and
B: {33, 30, 25, 18, 15, 12}
(max + min) / 2 = 22.5
Σ[X_i, 1, 6] / 6 = 22.2
Notice that the (max+min)/2 approach only approximates the true average and that even though A and B have the same value they clearly have different true averages.
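The arithmetic above is easy to verify directly:

```python
A = [45, 40, 35, 25, 15, 0]
B = [33, 30, 25, 18, 15, 12]

def midrange(xs):
    """The (max + min) / 2 approximation used for daily temperatures."""
    return (max(xs) + min(xs)) / 2

def mean(xs):
    """The true arithmetic mean of all readings."""
    return sum(xs) / len(xs)

# Both sets share the same midrange but have different true means.
print(midrange(A), round(mean(A), 1))  # 22.5 26.7
print(midrange(B), round(mean(B), 1))  # 22.5 22.2
```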
Only Taschas can clarify the post. I’m only speculating. That’s why I asked what I was missing.
I was just curious in the comment above. Bdgwx – you have done a fine job of finding the mathematical mean between 2 sets of numbers but it isn’t an average temperature in either case: you need to look at the mode average, not the mathematical mean. When you find the most commonly recorded temperature for each location over the time period in question, then you will have the average temperature, not some meaningless number.
In my example above with sets A and B what are their modes?
Why would the modes of A and B above be more meaningful than mean or median?
In the above example where you just have a high and low temperature figure, I’d agree that you would be unable to find the mode but with most automatic temperature stations there are readings every few minutes or every hour so it is possible to find the mode. The difference being that the question involved finding the average temperature, not an abstract number in the middle of a range of numbers. And in the case of the sets of numbers you provided above, they are obviously not a set of temperatures taken over a day so not particularly valid either really.
I think the mode could be very misleading. It is not uncommon for daytime highs to clamp out and hover for a while at the superadiabatic point. The mode in that case would be very close to the Tmax. It is not uncommon for the nighttime lows to clamp out and hover for a while at the dewpoint as well. The mode in that case would be very close to the Tmin.
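A toy illustration of that clamping effect (the hourly readings below are invented, not station data):

```python
from collections import Counter

# A hypothetical day of hourly readings that "hovers" near the daytime
# high, showing how the mode can sit close to Tmax rather than mid-range.
readings = [12, 13, 15, 18, 22, 26, 29, 31, 32, 32, 32, 32,
            32, 31, 29, 26, 22, 19, 17, 15, 14, 13, 12, 12]

mode = Counter(readings).most_common(1)[0][0]
mean = sum(readings) / len(readings)
midrange = (max(readings) + min(readings)) / 2

print(mode, round(mean, 1), midrange)  # mode sits near Tmax, not mid-range
```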
Have you ever experienced such temperatures? This emphasises exactly what Kip said about averaging non-averageable numbers. In the desert you are hot as hell all day and freezing at night, whilst in the tropics you just sweat like a pig all day and night. Try averaging that!
Yes, I think so. I’ve been to both desert and tropical climates so I know what they feel like. The way it feels subjectively, or the fact that the diurnal range can be large (desert) or small (tropical), does not adversely affect our ability to determine the daily average. What affects our ability to determine the daily average the most is the limited observations we might have to work with.
You are lost in the forest with no way out! Do you not recognize that the variances of the two are far, far from equal?
How do you combine two means with different variance? Do you know?
I’ll give you a headstart.
45 – 0 –> s^2 = 1012.5
33 – 12 –> s^2 = 220.5
I thought so, no answer! Math is HARD.
It’s not hard, but as usual you are vague about what you want.
If you want to average two random variables (X and Y) with different variances, the variance of the average will be
var((X+Y) / 2) = (var(X) + var(Y)) / 4.
If you want to take the average of two independent instances of the same variable (X_1 and X_2, each distributed as X), then we have
var((X_1 + X_2) / 2) = (var(X) + var(X)) / 4 = var(X) / 2.
So taking each of these averages and averaging them, you get
(var(X) + var(Y)) / 8
But this isn’t what you are doing if you take the mean daily temperature. You are not taking two random measurements in each location, but two very specific values. The variance of the day is not (max – mean)^2, let alone what you said. It makes more sense to me to treat TMean as a set value you are interested in (even if it isn’t the exact true mean) and then look at the variance of that value over the period you are averaging.
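The var((X+Y)/2) = (var(X) + var(Y))/4 identity for independent variables can be checked empirically, using the two variances computed earlier in the thread as illustrative inputs:

```python
import random

random.seed(7)
N = 100_000

# Independent X and Y with the variances quoted earlier in the thread.
X = [random.gauss(22.5, 1012.5 ** 0.5) for _ in range(N)]
Y = [random.gauss(22.5, 220.5 ** 0.5) for _ in range(N)]

def var(xs):
    """Sample variance with the n-1 (Bessel) correction."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

avg = [(x + y) / 2 for x, y in zip(X, Y)]

# var((X+Y)/2) should come out near (1012.5 + 220.5) / 4 = 308.25
print(round(var(avg), 1), (1012.5 + 220.5) / 4)
```

Note this only models the independent-random-variables case; as the comment above says, a day's Tmax and Tmin are two specific values, not random draws.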
Taschas ==> You’re welcome — I take it you work in some field of agriculture or soil moisture?
If you found this interesting, you might read my series on the Laws of Averages, which starts : https://wattsupwiththat.com/2017/06/14/the-laws-of-averages-part-1-fruit-salad/
Kip- As ridiculous as this entire discussion is, even more ridiculous is the notion that you seem to think this is somehow going to improve people’s ability to understand atmospheric physics or climate change. All you are doing is giving voice to the science heretics who claim there is no greenhouse effect, or that we can’t possibly know if the atmosphere is warming or not because there is no such thing as an average temperature, and if there is, we can’t possibly know its value.
Define “science heretic”.
Tom.1 ==> You’ll have to re-read from Part 1 — this time for understanding (as my high school science teacher said so many times.)
When you finally get through Part 3 again (this essay), read the Bottom Lines.
Thank you.
By saying that we should only ever combine “extensive” (that is, additive) quantities, the author appears at times to lock himself into an untenable position. In his summation he says,
“temperature measurements in whatever degrees, are intensive properties of matter and not subject to being added, multiplied or subsequently divided, which precludes creating averages of temperatures.”
However, the author doesn’t really mean the above quote, since he immediately turns around and says that planetary temperature averages do generally have some sort of meaning. At some point he even makes the common sense observation that a claimed average temperature of 15 degrees Celsius just doesn’t seem that hot to him.
All of which just reminds me of how problematic the usual practice of climate trending really is, with practitioners always plotting arbitrary comparisons, or ‘anomalies’, with no emphasis on absolute temperatures and no error bars. Just include best-practice absolute error estimates, gentlemen, whether you are determining average sea level, average temperature, or any other statistical average deemed to be of interest, and maybe we’d all have a better perspective.
David ==> You are conflating two very different concepts — which are not contradictory.
Refer to Parts 1 and 2 of this series to clear up your confusion.
The “radiation” temperature from a long distance away from earth will give somewhat of a consistent temperature. However, one must recognize that there are lots of variation within the sphere we call earth. It is kind of like the sun or the moon. You can readily see that there are different temperatures at spots, yet the totality is pretty constant.
Dealing with averages within the sphere can lead one astray on what is happening at different points. If I told you that the lighted part of the moon was 400 degrees on average, what do you think the shaded portions from the tall mountains might be? 400 degrees also?
Kip,
As you say, temperature is a poor proxy for energy content. It is more accurate to say that temperature is a proxy for the energy density in the immediate area around the sensor. Because it does not take into account latent heat, it’s still not a good general proxy, but when the humidity is known, it can be quite useful, depending on the situation.
Kip,
Another way of looking at this is that because of the high correlation between enthalpy and the temperatures of surficial materials, particularly air, temperatures have utility as an index for retained global heat. In effect, temperature acts as a proxy for enthalpy, and probably is better than tree rings.
However, it argues against claiming that high precision in averaged temperature measurements have utility in analyzing the problem. That is, temperature measurements representing different materials with different specific heat capacities, may give a sense for the direction (sign) of heat change and approximate magnitude. Similarly, sampling only air temperatures, with variable absolute humidity, averages provide some insight on the movement of heat. However, the practice of claiming precision to two or three places to the right of the decimal point is implying more value than is warranted.
It is not unlike determining the weight of various pieces of fruit in a fruit salad. The information can be used to arrive at an estimate of the average weight, but because of variations in the size and proportion of the various kinds of fruit, the estimate is, by necessity, of low accuracy and precision. The estimate should be used accordingly.
Kip,
A very good series.
I would hope this would provide a kick in the butt to climate science, which has wasted 50+ years dealing with simple arithmetic averages, simple statistics, and simple trends.
Thermodynamics IS a very complicated subject by itself. Complicating it with clouds, winds, topography, etc. puts simple answers out of reach. GCMs will never be good until they begin to deal with these complications.
You mentioned steam tables. How many complainers do you reckon have a clue as to what they are and how they are used? I’ll bet you carried a book of these in college, as did most of us from before the mid-’80s.
50+ years making always wrong predictions of climate doom unrelated to any prior climate change observations.
Relative humidity often moves in the opposite direction from temperature because of its definition. A fixed mass of air at a constant temperature can hold a certain amount of water vapor. When this limit is reached, the air is said to be “saturated”. No more water can be added to the air as water vapor; any additional water vapor will condense (ignoring supersaturation). The relative humidity is the current amount of moisture in the air divided by the amount at saturation, usually expressed as a percentage.
If you plot the amount of water vapor in air at saturation versus temperature, the graph is an exponential curve, concave upwards. This is because the saturation point of water is directly related to its vapor pressure, and the vapor pressure of water (and everything else) is an exponential function of temperature.
When the air temperature goes up during the daytime, the air can carry a lot more water before it becomes saturated. This causes the relative humidity to drop, because the divisor is getting larger. If you calculate the actual mass of water in the air as vapor, it does not change that much during the day/night cycle unless a front comes through bringing drier or wetter air, or it rains.
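This behavior is easy to demonstrate numerically. The sketch below uses the Magnus approximation for saturation vapor pressure (my choice of formula, not the commenter’s; the constants are a common published fit and vary slightly by source). A parcel warms from morning to afternoon with its water content held fixed, and the relative humidity drops sharply:

```python
import math

def saturation_vapor_pressure(t_c):
    """Magnus approximation for saturation vapor pressure over water, in hPa.
    Valid roughly -40 to +50 degC; constants are one common published fit."""
    return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

# A parcel of air at dawn: 15 degC at 90% relative humidity.
e_actual = 0.90 * saturation_vapor_pressure(15.0)  # actual vapor pressure, hPa

# By afternoon the same parcel has warmed to 30 degC. Its water content
# (vapor pressure) is unchanged, but the saturation limit has risen.
rh_afternoon = e_actual / saturation_vapor_pressure(30.0)

print(f"Vapor pressure held constant at {e_actual:.1f} hPa")
print(f"Relative humidity falls from 90% to about {rh_afternoon:.0%}")
```

The divisor (saturation pressure) grows exponentially with temperature, so relative humidity falls from 90% to roughly a third even though not a gram of water left the air.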
On the subject of energy content, to calculate the enthalpy (thermodynamic definition of energy content) you integrate the heat capacity of air from absolute zero to the current temperature of air, and then add in the mass of water in the air multiplied by its heat of vaporization in appropriate units. The change in air pressure must also be included as P x V. This calculation assumes an ideal gas, which is close enough for this application. Fortunately, we can set the enthalpy to a fixed value at a convenient temperature and pressure, and then integrate from there to the current temperature. NIST has a program which will calculate the enthalpy of dry air over a wide range of temperature and pressure so we don’t have to.
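The comment describes the rigorous route (integrating heat capacity from absolute zero, or letting NIST do it). As a rough sketch of the same point, the standard psychrometric linearization referenced to 0 °C is enough to show that two air parcels at the identical temperature can carry very different amounts of energy. The coefficients below are the usual textbook approximations, not values taken from this comment:

```python
def moist_air_enthalpy(t_c, mixing_ratio):
    """Approximate specific enthalpy of moist air, kJ per kg of dry air,
    referenced to dry air and liquid water at 0 degC.
    Standard psychrometric constants: cp_dry ~ 1.006 kJ/(kg K),
    cp_vapor ~ 1.86 kJ/(kg K), latent heat ~ 2501 kJ/kg at 0 degC."""
    return 1.006 * t_c + mixing_ratio * (2501.0 + 1.86 * t_c)

# Two parcels at the same 25 degC, but with different moisture content:
h_dry   = moist_air_enthalpy(25.0, 0.005)   # 5 g water vapor per kg dry air
h_humid = moist_air_enthalpy(25.0, 0.018)   # 18 g/kg, near-tropical humidity

print(f"Drier parcel:  {h_dry:.1f} kJ/kg")    # about 38 kJ/kg
print(f"Humid parcel: {h_humid:.1f} kJ/kg")   # about 71 kJ/kg
```

A thermometer reads the same 25 °C in both cases, yet the humid parcel holds nearly twice the enthalpy, almost all of it in latent heat, which is exactly why temperature alone is a weak proxy for energy content.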
One can certainly calculate an average of temperatures, but there are multiple ways of doing so.
For instance, you’ve already discussed an “average temperature” which takes differing heat capacities into account. A useful definition of “average temperature” for multiple bodies with different heat capacities would be a weighted average, weighted according to their heat capacities. That would give the eventual resulting uniform temperature if the bodies being averaged were in contact (or mixed) and were all allowed to come to equilibrium, with no external energy flows.
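A minimal sketch of that weighted average, using hypothetical bodies and round-number heat capacities of my own choosing, shows how far the equilibrium temperature can sit from the naive arithmetic mean:

```python
def equilibrium_temperature(bodies):
    """Heat-capacity-weighted average temperature: the uniform temperature
    the bodies would reach if allowed to exchange heat to equilibrium,
    with no external energy flows.
    Each body is a tuple (heat_capacity_J_per_K, temperature_C)."""
    total_capacity = sum(c for c, _ in bodies)
    return sum(c * t for c, t in bodies) / total_capacity

# Hypothetical example: 1 kg of water (c ~ 4186 J/K) at 10 degC in contact
# with 1 kg of dry sand (c ~ 835 J/K) at 30 degC.
bodies = [(4186.0, 10.0), (835.0, 30.0)]
t_eq = equilibrium_temperature(bodies)

print(f"Naive average:           20.00 degC")
print(f"Equilibrium temperature: {t_eq:.2f} degC")  # about 13.3 degC
```

The water’s much larger heat capacity dominates, so the mixture settles near 13 °C rather than the 20 °C a simple average of the two thermometer readings would suggest.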
There are also multiple possible definitions for an “average temperature” for the surface of the Earth. The common definition is simply the obvious area-weighted average temperature. But another definition would be a uniform temperature which would result in the same magnitude as the actual observed radiative emission strength. Rather than being a simple area-weighted average temperature, that would be the fourth root of the area-weighted average of the fourth power of temperature in Kelvins.
That second sort of “average” temperature would be higher (“warmer”) than the first.
I’ve actually never seen a discussion of this, anywhere; everyone seems to just ignore it.
Let’s consider the simplest case, in which there are two identical plots of land, at two different surface temperatures. Let’s make their temperatures 10°C and 20°C, respectively. Their “average temperature,” calculated by the usual (simple) method, would be 15°C.
But the total radiative emissions with both plots at 15°C would be lower than with one plot at 10°C + the other at 20°C.
I think “everyone” calculates simple, area-weighted averages, for the Earth’s surface temperature, for simplicity. But if you want the right “average temperature” for the Earth’s actual surface emissions, you have to do the calculation the other way.
Let’s calculate the “average temperature” which would result in the same radiative emission strength as one plot at 10°C and the other at 20°C:
10°C = 283.15K
20°C = 293.15K
∜(((283.15 K)⁴ + (293.15 K)⁴) / 2) = 288.280059 K = 15.130059 °C
That’s only slightly above 15°C, which might be why the difference between the two sorts of “averages” is usually ignored.
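The arithmetic above can be checked in a few lines. This sketch computes both kinds of “average” for the two-plot example (emission scaling as T⁴ per Stefan–Boltzmann):

```python
# Two equal-area plots of land at 10 degC and 20 degC.
t1_k = 283.15   # 10 degC in Kelvin
t2_k = 293.15   # 20 degC in Kelvin

# Simple area-weighted average temperature:
t_simple = (t1_k + t2_k) / 2

# Emission-equivalent average: the uniform temperature giving the same
# total radiative emission, i.e. the fourth root of the mean fourth power.
t_radiative = ((t1_k**4 + t2_k**4) / 2) ** 0.25

print(f"Simple average:    {t_simple - 273.15:.6f} degC")    # 15.000000
print(f"Radiative average: {t_radiative - 273.15:.6f} degC") # 15.130059
```

The radiative average comes out higher, confirming both the worked figure of 15.130059 °C and the earlier point that two plots at 15 °C would emit less than one plot at 10 °C plus one at 20 °C.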
Addendum:
I see that this has already been discussed by ferdberple and bdgwx, here:
https://wattsupwiththat.com/2022/08/23/numbers-tricky-tricky-numbers-part-3/#comment-3584728