Daily Averages? Not So Fast…

Guest Essay by Kip Hansen (with graphic data supplied by William Ward)

 

One of the advantages of publishing essays here at WUWT is that one’s essays get read by an enormous number of people — many of them professionals in science and engineering.

In the comment section of my most recent essay concerning GAST (Global Average Surface Temperature) anomalies (and why it is a method by which Climate Science tricks itself) — it was brought up [again] that what Climate Science uses for the Daily Average temperature from any weather station is not, as we might have thought, the average of the temperatures recorded for the day (all recorded temperatures added together and divided by the number of measurements) but is, instead, the Daily Maximum Temperature (Tmax) plus the Daily Minimum Temperature (Tmin) added and divided by two.  It can be written out as (Tmax + Tmin)/2.

Anyone versed in the various forms of averages will recognize the latter is actually the median of Tmax and Tmin — the midpoint between the two.  This is obviously also equal to the mean of the two — but since we are dealing only with a Daily Max and Daily Min from a record which, in modern times, contains many measurements per day, when we align all the measurements by magnitude and find the midpoint between the largest and the smallest we are finding a median.  (We do this, however, by ignoring all the other measurements altogether and finding the median of a two-number set consisting only of Tmax and Tmin.)
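To make the distinction concrete, here is a minimal sketch, using invented readings rather than data from any real station, comparing the mean of all of a day's measurements with the (Tmax + Tmin)/2 midpoint:

```python
# Hypothetical two-hourly readings (deg F) for one day at one station.
readings = [51, 52, 55, 61, 70, 82, 90, 93, 88, 75, 64, 56]

# True daily average: sum of all readings divided by their count.
true_mean = sum(readings) / len(readings)

# What Climate Science records: the midpoint of the two extremes only.
tavg = (max(readings) + min(readings)) / 2

print(true_mean)  # 69.75
print(tavg)       # 72.0 -- every other reading is ignored
```

For this invented profile the two values differ by more than two degrees; the size of the difference depends entirely on how skewed the day's temperature profile is.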

This certainly is no secret and is the result of the historical fact that temperature records in the somewhat distant past, before the advent of automated weather stations,  were kept using Min-Max recording thermometers — something like this one:

[Image: Min-Max recording thermometer]

Each day at an approximately set time, the meteorologist would go out to her Stevenson screen weather station, open it up, and look in at a thermometer similar to this.  She would record the Minimum and Maximum temperatures shown by the markers — often she would also record the temperature at the time of observation — and then press the reset button (seen in the middle), which would return the Min/Max markers to the tops of the mercury columns on either side.  The motion of the mercury columns over the next 24 hours would move the markers to their respective new Minimums and Maximums for that period.

With only these measurements recorded, the closest to a Daily Average temperature that could be computed was the median of the two.  To be able to compare modern temperatures to past temperatures, it has been necessary to use the same method to compute Daily Averages today, even though we now have measurements recorded by automated weather stations every six minutes.

Nick Stokes discussed (in this linked essay) the use and problems of Min-Max thermometers as it relates to the Time of Observation Adjustments.  In that same essay, he writes

Every now and then a post like this appears, in which someone discovers that the measure of daily temperature commonly used (Tmax+Tmin)/2 is not exactly what you’d get from integrating the temperature over time. It’s not. But so what? They are both just measures, and you can estimate trends with them.

And Nick Stokes is absolutely correct — one can take any time series of anything, find all sorts of averages — means, medians, modes —  and find their trends over different periods  of time.

In this case, we have to ask the question:  What Are They Really Counting?  I find myself having to refer back to this essay over and over again when writing about modern science research which seems to have somehow lost an important  thread of true science — that we must take extreme care with defining what we are researching — what measurements of what property of what physical thing will tell us what we want to know?

Stokes maintains that any average of temperature measurements is apparently just as good as any other — that the median (Tmax+Tmin)/2 is just as useful to Climate Science as a true average of more frequent temperature measurements, such as today’s six-minute records.  What he has missed is that if science is to be exact and correct, it must first define its goals and metrics — exactly and carefully.

So, we have raised at least three questions:

1. What are we trying to measure with temperature records? What do we hope the calculations of monthly and annual means and their trends, and the trends of their anomalies [anomalies here always refers to anomalies from some climatic mean], will tell us?
2. What does (Tmax+Tmin)/2 really measure? Is it quantitatively different from averaging all the six-minute (or hourly) temperatures for the day? Are the two qualitatively different?
3. Does the currently-in-use (Tmax+Tmin)/2 method fulfill the purposes of any of the answers to question #1?

I will take a turn at answering these questions, and readers can suggest their answers in comments.

What are we trying to measure?

The answer to question #1 depends on who you are and what field of science you are practicing.

Meteorologists measure temperature because it is one of the key metrics of their field.  Their job is to know past temperatures and use them to predict future temperatures on a short-term basis — tomorrow’s Hi and Lo, weekend weather conditions and seasonal predictions useful for agriculture.  Temperature predictions of extremes are an important part of their job — freezing on roadways and airport runways, frost and freeze warnings to agriculture, high temperatures that can affect human health and a raft of other important meteorological forecasts.

Climatologists are concerned with long-term averages of ever-changing weather conditions for regions, continents and the planet as a whole.  Climatologists concern themselves with the long-range averages that allow them to divide various regions into the 21 Köppen Climate Classifications and watch for changes within those regions.  The Wiki explains why this field of study is difficult:

“Climate research is made difficult by the large scale, long time periods, and complex processes which govern climate. Climate is governed by physical laws which can be expressed as differential equations. These equations are coupled and nonlinear, so that approximate solutions are obtained by using numerical methods to create global climate models. Climate is sometimes modeled as a stochastic [random] process but this is generally accepted as an approximation to processes that are otherwise too complicated to analyze.”     [emphasis mine — kh]

The temperatures of the oceans and the various levels of the atmosphere, and the differences between regions and atmospheric levels, are — along with a long list of other factors — drivers of weather, and the long-term differences in temperature are thus of interest to climatology.  The momentary equilibrium state of the planet with regard to incoming and outgoing energy from the Sun is currently one of the focuses of climatology, and temperatures are part of that study.

Anthropogenic Global Warming scientists (IPCC scientists)  are concerned with proving that human emissions of CO2 are causing the Earth climate system to retain increasing amounts of incoming energy from the Sun and calculate global temperatures and their changes in support of that objective.  Thus, AGW scientists focus on regional and global temperature trends and the trends of temperature anomalies and other climatic factors that might support their position.

What do we hope the calculations of monthly and annual means and their trends will tell us? 

Meteorologists are interested in temperature changes for their predictions, and use “means” of past temperatures to set an expected range to know and predict when things are out of these normally expected ranges.  Temperature differences between localities and regions drive weather which makes these records important for their craft.  Multi-year comparisons help them to make useful predictions for agriculturalists.

Climatologists want to know how the longer-term picture is changing — Is this region generally warming up, cooling off, getting more or less rain?  — all of these looked at in decadal or 30-year time periods.  They need trends for this. [Note:  not silly auto-generated ‘trend lines’ on graphs that depend on start-and-end points — they wish to discover real changes of  conditions over time.]

AGW scientists need to be able to show that the Earth is getting warmer and use temperature trends — regional and global, absolute and anomalies — in the effort to prove the AGW hypothesis  that the Earth climate system is retaining more energy from the Sun due to increasing CO2 in the atmosphere.

What does (Tmax+Tmin)/2 really measure? 

(Tmax+Tmin)/2, meteorology’s daily Tavg, is the median of the Daily High (Tmax) and the Daily Low (Tmin) (please see the link if you are unsure why it is the median and not the mean).  The monthly TAVG is, in fact, the median of the Monthly Mean of the Daily Maxes and the Monthly Mean of the Daily Mins.  The Monthly TAVG — the basic input value for all of the subsequent regional, statewide, national, continental, and global calculations of average temperature (2-meter air over land) — is calculated by adding all the daily Tmaxs for the month and finding their mean (arithmetical average), doing the same for the daily Tmins, and then finding the median of those two values.  (This definition is not easy to find — I had to go to the original GHCN records and email NCEI Customer Support for clarification.)
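The procedure described above can be sketched in a few lines (the daily highs and lows here are invented, and the "month" is shortened to five days for illustration):

```python
# Hypothetical daily highs and lows (deg F) for a short five-day "month".
daily_tmax = [70, 75, 80, 72, 68]
daily_tmin = [50, 55, 60, 52, 48]

# Step 1: mean of the daily Tmaxs for the month.
tmax_mean = sum(daily_tmax) / len(daily_tmax)   # 73.0
# Step 2: mean of the daily Tmins for the month.
tmin_mean = sum(daily_tmin) / len(daily_tmin)   # 53.0
# Step 3: the midpoint (two-value median) of those two monthly means.
monthly_tavg = (tmax_mean + tmin_mean) / 2      # 63.0

print(monthly_tavg)
```

Note that for a month with no missing days this comes out identical to averaging the daily (Tmax+Tmin)/2 values; the two procedures diverge only when some days lack a recorded Tmax or Tmin.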

So now that we know what the number called monthly TAVG is made of,  we can take a stab at what it is a measure of.

Is it a measure of the average of temperatures for the month?  Clearly not.  That would be calculated by adding up the Tavg for each day and dividing by the number of days in the month.  Doing that might very well give us a number surprisingly close to the recorded monthly TAVG — unfortunately, we have already noted that the daily Tavgs are not the average temperatures for their days but are the medians of the daily Tmaxs and Tmins.

The featured image of this essay illustrates the problem; here it is blown up:

[Image: illustration of the difference between the mean and the median for differently shaped distributions]

This illustration is from an article defining Means and Medians.  If the purple traces were the temperature during a day, the median would be identical for wildly different temperature profiles, but the true average, the mean, would be very different.  [Note: the right-hand edge of the graph is cut off, but both traces end at the same point on the right — the equivalent of a Hi for the day.]  If the profile is fairly close to a “normal distribution”, the Median and the Mean are close together — if not, they are quite different.

Is it quantitatively different from averaging all the six-minute (or hourly) temperatures for the day?  Are the two qualitatively different?

We need to return to the Daily Tavgs to find our answer.  What changes Daily Tavg?   Any change in either the daily Tmax or the Tmin.  If we have a daily Tavg of 72, can we know the Tmax and Tmin?  No, we cannot.   The Tavg for the day tells us very little about the high temperature for the day or the low temperature for the day.  Tavg does not tell us much about how temperatures evolved and changed during the day.

Tmax 73, Tmin 71 = Tavg 72
Tmax 93, Tmin 51 = Tavg 72
Tmax 103, Tmin 41 = Tavg 72

The first would be a mild day with a very warm night; the second a hot day and an average sort of night.  The second could have been a cloudy warmish day with one hour of bright direct sunshine raising the high to a momentary 93, or a bright clear day that warmed to 93 by 11 am and stayed above 90 until sunset, with only a short period of 51-degree temps in the very early morning.  Our third example, typical of the high desert in the American Southwest, is a very hot day followed by a cold night.  (I have personally experienced 90+ degree days and frost the following night.)  (Tmax+Tmin)/2 tells us only the median between two extremes of temperature, each of which could have lasted for hours or merely for minutes.
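The three example days can be checked in a couple of lines:

```python
# The three (Tmax, Tmin) pairs from the examples above (deg F).
days = [(73, 71), (93, 51), (103, 41)]

# Wildly different days, one identical daily "average".
tavgs = [(tmax + tmin) / 2 for tmax, tmin in days]
print(tavgs)  # [72.0, 72.0, 72.0]
```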

Daily Tavg, the median of Tmax and Tmin, does not tell us about the “heat content” or the temperature profile of the day.  And if the daily Tmaxs, Tmins and Tavgs don’t tell us the temperature profile and “heat content” of their days, then the Monthly TAVG — being the median of the means of the Tmaxs and Tmins — cannot tell us either.

Maybe a graph will help illuminate this problem.

[Image: Boulder — difference between daily Tavg ((Tmax+Tmin)/2) and the true daily mean, Tmean]

This graph shows the difference between the daily Tavg (by the (Tmax+Tmin)/2 method) and the true mean of daily temperatures, Tmean.  We see that there are days when the difference is three or more degrees, with an eye-ball average of a degree or so and rather a lot of days in the one-to-two degree range.  We could produce a similar graph for Monthly TAVG and real monthly means (either means of the actual daily means, or from averaging all temperature records for the month).
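A synthetic example (not the Boulder data in the graph — an invented, deliberately skewed diurnal profile) shows how the size and sign of the difference follow from the shape of the day:

```python
import math

# Invented hourly profile: a long cool night near 55 deg F with a
# relatively brief afternoon warm-up peaking near 90 deg F.
temps = [
    55 + 35 * max(0.0, math.sin(math.pi * (h - 9) / 9))
    for h in range(24)
]

tavg  = (max(temps) + min(temps)) / 2   # the (Tmax+Tmin)/2 metric
tmean = sum(temps) / len(temps)         # true mean of the hourly readings

# For this skewed profile, (Tmax+Tmin)/2 overstates the true mean.
print(round(tavg - tmean, 2))
```

How large the gap is, and which way it goes, depends entirely on the profile; for a perfectly symmetrical day the two measures coincide.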

The currently-in-use Tavg and TAVG (daily and monthly) are not the same as actual means of the temperatures during the day or the month; they are both quantitatively and qualitatively different — they tell us different things.

So,  YES, the data are qualitatively different and  quantitatively different.

Does the currently-in-use (Tmax+Tmin)/2 method fulfill the purposes of any of the answers to question #1?

 Let’s check by field of study:

Meteorologists measure temperatures because it is one of the key metrics of their field.  The weather guys were happy with temperatures measured to the nearest full degree.  One degree one way or the other was no big deal (except at near freezing).  Average weather can also withstand an uncertainty of a degree or two.  So, my opinion would be that (Tmax+Tmin)/2 is adequate for the weatherman — it is fit for purpose in regards to weather and weather prediction.  The weatherperson knows the temperature will vary naturally by a degree or two across his area of concern, so a prediction of “with highs in the mid-70s” is as precise as he needs to be.

Climatologists are concerned with long-term, ever-changing weather conditions for regions, continents and the planet as a whole.  Climatologists know that past weather metrics have been less-than-precise — they accept that (Tmax+Tmin)/2 is not a measure of the energy in the climate system, but it gives them an idea of temperatures on a station, regional, and continental basis, close enough to judge changing climates.  One degree up or down in the average summer or winter temperature for a region is probably not a climatically important change — it is just annual or multi-annual weather.  For the most part, climatologists know that only very recent temperature records get anywhere near one or two degree precision.  (See my essay about Alaska for why this matters.)

Anthropogenic Global Warming scientists (IPCC scientists) are concerned with proving that human emissions of CO2 are causing the Earth climate system to retain increasing amounts of incoming energy from the Sun.  Here is where the quantitative and qualitative differences between (Tmax+Tmin)/2 and a true Daily/Monthly mean temperature come into play.

There are those who will (correctly) argue that temperature averages (certainly the metric called GAST) are not accurate indicators of energy retention in the climate system.  But before we can approach that question, we have to have correct quantitative and qualitative measures of temperature reflecting changing heat energy at weather stations.  (Tmax+Tmin)/2 does not tell us whether we have had a hot day and a cool night, or a cool day and a warmish night.  Temperature is an intensive property (of air and water, in this case) and not properly subject to addition, subtraction and averaging in the normal sense.  The temperature of an air sample — such as in an Automated Surface Observing System (ASOS) station — is related to, but not the same as, the energy (E) in the air at that location, which is in turn related to, but not the same as, the energy in the local climate system.  Using (Tmax+Tmin)/2 and TMAX and TMIN (monthly mean values) to arrive at monthly TAVG does not even accurately reflect what the temperatures were.  It therefore cannot inform us properly (accurately and precisely) about the energy in the locally measured climate system, and, when combined across regions and continents, cannot inform us properly about the energy in regional, continental or global climate systems — not quantitatively in absolute terms, and not in the form of changes, trends, or trends of anomalies.

AGW science is about energy retention in the climate system — and the currently used mathematical methods, all the way down to the daily average level, are not fit for the purpose of determining changing energy retention by the climate system to any degree of quantitative or qualitative accuracy or precision, despite the fact that, for much of the historical climate record, they are all we have.

Weathermen and women are probably well enough served by the flawed metric as being “close enough for weather prediction”.  Hurricane forecasters are probably happy with temperatures within a degree or two — as long as all are comparable.

Even climate scientists — those disinterested in the Climate Wars — are happy to settle for temperatures within a degree or so, as there are a large number of other factors, most of which are more important than “average temperature”, that combine to make up the climate of any region.  (See again the Köppen Climate Classifications.)

Only AGW activists insist that the minuscule changes wrested from the long-term climate record of the wrong metrics are truly significant for the world climate.

 

Bottom Line:

The methods currently used to determine both Global Temperature and Global Temperature Anomalies rely on a metric, used for historical reasons, that is unfit in many ways for the purpose of determining with accuracy or precision whether or not the Earth climate system is warming due to additional energy from the Sun being retained; unfit for determining the size of any such change; and, possibly, not even fit for determining the sign of that change.  The current method does not properly measure a physical property that would allow that determination.

# # # # #

Author’s Comment Policy:

The basis of this essay is much simpler than it seems.  The measurements used to form GAST(anomaly) and GAST(absolute) — specifically (Tmax+Tmin)/2, whether daily or monthly — are not fit for the purpose of determining those global metrics as they are presented to the world by AGW activist scientists.  They are most often used to indicate that the climate system is retaining more energy and thus warming up… but the tiny changes seen in this unfit metric over climatically significant periods of time cannot tell us that, since they do not actually measure the average temperature, even as experienced at a single weather station.  The additional uncertainty from this factor increases the overall uncertainty about GAST and its anomalies to the point that the uncertainty exceeds the entire increase since the mid-20th century.  This uncertainty is not eliminated through repeated smoothing and averaging of either absolute values or their anomalies.

I urge readers to reject the ever-present assertion that “if we just keep averaging averages, sooner or later the variation — whether error, uncertainty, or even just plain bad data — becomes so small as not to matter anymore”.  That way leads to scientific madness.

There would be different arguments if we actually had an accurate and precise average of temperatures from weather stations.  Many would still not agree that the temperature record alone indicates a change in retention of solar energy in the climate system.  Energy entering the system is not auto-magically turned into sensible heat in the air at 2-meters above the ground, or in the skin temperature of the oceans.  Changes in sensible heat in the air measured at 2-meters and as ocean skin temperature do not necessarily equate to increase or decrease of retained energy in the Earth’s climate system.

There will be objections to the conclusions of this essay — but the facts are what they are.   Some will interpret the facts differently,  place different importance values on different facts and draw different conclusions.  That’s science.

# # # # #

 

 

504 Comments
Roy Spencer
October 2, 2018 11:11 am

For climate, this is not very relevant because we only have long records from many stations of Tmax and Tmin. So, we are forced to use what we have. You can use hourlies over the last 40 to 50+ years or so if you want, but then the time of observation isn’t exactly on the hour, either…it varies. In fact, as a former NWS weather observer, I can tell you it’s generally not even at the reported time (e.g. 1753 GMT) because of observer laziness. And ALL results will change if the sensor height is only 1 meter rather than 2 meters. This stuff is splitting hairs. There are bigger issues to deal with (UHI) which are being ignored.

Editor
Reply to  Roy Spencer
October 2, 2018 11:56 am

Roy ==> It is only relevant if we are concerned with whether or not the tiny change — less than 1 degree — is fit-for-purpose for judging AGW validity and its effects.

John Francis
Reply to  Kip Hansen
October 2, 2018 1:35 pm

I really liked this essay. It’s a factor I have often thought about, but this is a good discussion of a very real issue

Greg
Reply to  John Francis
October 3, 2018 5:09 am

Anyone versed in the various forms of averages will recognize the latter is actually the median of Tmax and Tmin — the midpoint between the two.

Anyone versed in the various forms of averages will know that this is a tie breaker solution where the MEAN is SUBSTITUTED for the median due to lack of data !!

When you have only two data points , talking of the median is meaningless. As anyone versed in the various forms of averages will recognize. Apparently the author is not so versed.

Clyde Spencer
Reply to  Greg
October 3, 2018 8:24 am

Greg,
You said, “…the MEAN is SUBSTITUTED for the median due to lack of data !!” it seems to me that you stated it backwards. Two points are selected from a much larger (potential) sample population (that could be used to calculate a useful mean), and used in lieu of the many points that could be used to construct a PDF. One goes from a collection of a large number of values, to two values, that are then further reduced to the mid-point of those two values!

Reply to  Greg
October 5, 2018 9:43 am

Kip, please go back to your Khan Academy definitions. They say a Median is: “Median: The middle number; found by ordering all data points and picking out the one in the middle (or if there are two middle numbers, taking the mean of those two numbers).” And, they are correct.

Please notice that even your source says that, if there are two middle numbers, then take the Mean of those two. For a two number set (Tmax & Tmin), it may be a difference without a distinction, but the proper naming of the process – either way – comes down to taking the Mean of the two numbers involved. You may call it what you like, but using Median when Mean is proper may devalue your essay in the eyes of many readers.
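Both commenters are describing the same arithmetic; for a two-value set the median and the mean coincide, as Python's standard statistics module will confirm:

```python
import statistics

tmax, tmin = 93, 51

# For exactly two values, the median is defined as the mean of the two.
print(statistics.median([tmax, tmin]))  # 72.0
print(statistics.mean([tmax, tmin]))    # also 72
```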

JRF in Pensacola
Reply to  Kip Hansen
October 2, 2018 3:18 pm

I think you were very clear that the data are:

1) fit for the purposes of the meteorologist; 2) fit for the purposes of the climatologist; but, 3) not fit for the purposes of those splitting hairs of fractions of a degree. Roy is correct that it’s all we have for long-term data but that doesn’t mean it’s fit for purpose. Similar argument: getting significant figures correct.

JRF in Pensacola
Reply to  JRF in Pensacola
October 2, 2018 3:32 pm

Clarification: The sentence “Roy is correct that its all we have for long-term data but that doesn’t mean its fit for purpose” should have said “fit for purpose for analyzing change in fractions of a degree.” My apologies.

Also, Kip, I see some discussion about median/not a median farther down and you were very clear also in stating you were talking about a median of a two-point dataset. People don’t read anymore.

steven mosher
Reply to  Kip Hansen
October 2, 2018 7:09 pm

tiny change?

hardly.

the LIA was only about 1.5c cooler than today. is it safe to go back to that cool time.

you sure?

show your work, if you answer

Bruce
Reply to  Kip Hansen
October 3, 2018 8:32 am

Using Tmax NOAA says USA:

August 2011 is warmest August. only .04F warmer than 1936.

July 1936 is warmest July (followed by 1934 and 1901/2012 tied.)

It isn’t hotter than the 1930s.

Editor
Reply to  Kip Hansen
October 3, 2018 9:37 am

I seem to have truncated a sentence in mine above. The first sentence should read:
“All the absolute GAST values for the current century fall within the uncertainty range for GASTabsolute — thus cannot properly be said to be larger or smaller than one another.”

Bruce ==> Quite right — current temperatures are about the same as the Dust Bowl days, but without the horrible drought in the midwest.

Phoenix44
Reply to  steven mosher
October 3, 2018 1:29 am

Says the man who never shows his work! And if you want us to answer a question, make it answerable – what is “safe”? How was it “cool”?

And you have to tell us then why you believe temperatures 20-50 years ago were optimum, since that is what your question implies.

Oh and show your working for your assumption.

MarkW
Reply to  Roy Spencer
October 2, 2018 11:58 am

All errors matter.
Having a large list of reasons why the numbers aren’t fit for the purpose they are being used for helps to drive home the point.

coaldust
Reply to  Roy Spencer
October 2, 2018 1:17 pm

It’s true that the historical records are not what we would like. That doesn’t mean we should stay with the current system of measurement. We can create a better measurement system now even though we are stuck with Tmin and Tmax for the historical record. After all in 30 years the measurements we make now will be part of the history. Why not fix the issue since we have the capability?

D. J. Hawkins
Reply to  coaldust
October 2, 2018 4:46 pm

You can get the data from the recently implemented USCRN on a 5-minute basis if you want, but the fact is, if you want to compare today to the same date in 1885, Tmax and Tmin are all you have to work with. Only Mosher or Stokes would claim you can “infill” or “reconstruct” a daily temperature profile from 133 years ago.

steven mosher
Reply to  D. J. Hawkins
October 3, 2018 4:56 pm

huh.
you can in fact estimate the second by second temperatures using tmin and tmax and an empirically derived diurnal cycle.

wont be especially accurate. isnt needed however
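The idea in the comment — scaling an assumed diurnal "shape" between the day's Tmin and Tmax to estimate intra-day temperatures — can be sketched as below. The sinusoidal shape and the 5 am low / 5 pm high timings are assumptions for illustration, not any published method:

```python
import math

def estimate_temp(hour, tmin, tmax):
    """Rough hourly temperature estimated from Tmin/Tmax.

    shape runs from 0 at the assumed daily low (5 am) to 1 at the
    assumed daily high (5 pm), following a simple cosine curve.
    """
    shape = 0.5 - 0.5 * math.cos(2 * math.pi * (hour - 5) / 24)
    return tmin + shape * (tmax - tmin)

print(estimate_temp(5, 51, 93))   # 51.0 at the assumed low
print(estimate_temp(17, 51, 93))  # ~93 at the assumed high
```

As the comment itself says, such a reconstruction is only an estimate: the real shape varies with season, latitude, and cloud cover.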

Retired_Engineer_Jim
Reply to  coaldust
October 3, 2018 11:17 am

Let’s hope that all those every-six-minute data points have been archived as raw data and that no one has homogenized them. Then, if future researchers want to look at the actual average temperature, the data will be available.

gnomish
Reply to  Roy Spencer
October 2, 2018 1:55 pm

tmax = apples
tmin = oranges.
you don’t average them to get anything meaningful
you do 2 separate charts.
this is an example — not so well performed, but properly done

Reply to  Roy Spencer
October 2, 2018 2:18 pm

Roy,
“In fact, as a former NWS weather observer, I can tell you it’s generally not even at the reported time (e.g. 1753 GMT) because of observer laziness.”
Actually, DeGaetano made a neat study of observing times. In the US, the observers filled out (B-19) not only the min/max, but also the temperature at time of observation. Analysing the diurnal pattern, you can deduce the average time of obs and compare with the stated time. It compared pretty well.

ATheoK
Reply to  Nick Stokes
October 2, 2018 3:22 pm

“Analysing the diurnal pattern, you can deduce the average time of obs and compare with the stated time. It compared pretty well”.

Tosh.

As, proven by your following statement; “It compared pretty well”.
It is neither “pretty” nor “well”.
It is sloppy reasoning.

Reply to  ATheoK
October 2, 2018 4:06 pm

Here is the plot of trend of observations, using both stated times and times inferred from the temperature recorded. Judge for yourself. I think the match is good.

[image: plot of observation times, stated vs. inferred]

Michael Jankowski
Reply to  Nick Stokes
October 2, 2018 5:05 pm

“…stated times and times inferred…”

What is missing is the actual time standard for comparison.

Even aside from that, your plot shows “Percent of HCN stations with morning and afternoon observation times.”

Geoff Sherrington
Reply to  Nick Stokes
October 3, 2018 3:22 am

Nick, “It compared pretty well.”
I’d say that it was a bloody horrible result.
Also, it does not answer the criticism put.
Geoff

Reply to  Roy Spencer
October 2, 2018 2:59 pm

so lets add “observer laziness”

to the long list of reasons

not to trust surface temperatures,

not to mention more infilled grids
than those with actual data.

It’s too bad so many surface “thermometers”
do automatic readings now because if they were
still the old fashioned glass thermometers,
the global warmunists would be trying
get more “global warming”
by finding shorter and shorter people
to read those thermometers —
preferably dwarfs and midgets,
to get a sharper upward vision angle
= more global warming!

But seriously now:

we should all give Dr. Spencer three cheers
for being one of the last honest climate scientists left,

for providing unbiased estimates
of surface temperatures
that are real science,
not junk science
over-adjusted,
excessively infilled nonsense,
with a pro-warming bias,
political agenda.

It is a HUGE conflict of interest
that the same government bureaucrats
who make warming predictions,
also own the surface temperature
actuals, and can adjust them
at will, to make their predictions
come true.

HotScot
Reply to  Richard Greene
October 2, 2018 5:08 pm

Richard Greene

I’m not a climate scientist, nor a scientist, nor even well educated, but I am a keen observer of human activities.

I have posted time and again that the historic records of temperatures are wholly unreliable as human intervention was vital and as you pertinently pointed out, the height of the one reading the thermometer is but one interesting variable.

When the global significance of temperature data wasn’t quite as closely scrutinised by the media, the public and everyone with a profitable interest in climate change itself, record keeping would have been a hit and miss affair.

The scientist with the responsibility for reporting local temperature measurements over the last 100 years or so couldn’t possibly have been in attendance for every hourly measurement 24/7/365 so the job would have been delegated.

The delegated individual was probably not as conscientious as the reporting scientist, so he probably despatched the tea boy, who went out in the snow/wind/rain/heat for a quick ciggy in a sheltered place and recorded the temperature as it was the day before.

The guy who chucked the bucket overboard to sample water temperature wouldn’t have been the officer on watch, it would have been the cabin boy, when conditions allowed, with readings taken on a heaving deck, in the wind/snow/rain/heat when he would rather be having his tot of rum.

All this, of course, in addition to the other work they had to do.

Then there’s the condition of the screens themselves. Were they painted with the correct material? Highly doubtful, as even localised paint makers had their own versions of white paint. Indeed, were they maintained at all, and if so, where’s the evidence of that?

We know that modern satellite data isn’t perfect. We know that modern land based temperature data is riddled with UHI distortions. And we know that modern ARGO buoy data isn’t conforming to the party line so is largely sidelined.

So why do we imagine that data from anything before satellite and digital data recordings should be accurate to within less than 1˚C? Instead, those calculations are, as far as I can gather, relied upon to within the margin of error reserved for contemporary digital, 24/7/365 measuring devices.

Do I see allowances for this made in historical data? Well, from a layman’s perspective, no I don’t, but perhaps allowances have been made, I just don’t see them in any error bars which should be enormous from 100 years ago.

Spalding Craft
Reply to  Roy Spencer
October 2, 2018 4:22 pm

Also, when we talk about temperature change, we talk about anomalies, right? So the actual composition
of the metric is less important. The factors that make the metric less accurate, like UHI and other poor siting, would seem more important.

Kristi Silber
Reply to  Spalding Craft
October 3, 2018 2:34 pm

Spalding Craft,

If you haven’t already, you might check out this page, some of which talks about siting issues:
https://www.ncdc.noaa.gov/monitoring-references/faq/temperature-monitoring.php

Herbert
Reply to  Roy Spencer
October 2, 2018 6:26 pm

Roy,
At the National Centers for Environmental Information, ncei.noaa.gov, we are informed that the August 2018 temperature across global land and ocean surfaces was 1.33 degrees Fahrenheit above the 20th Century average of 60.1 degrees Fahrenheit.
August is said to mark the 404th consecutive month above the 20th Century Average.
Does it matter if the stated 20th Century Average is wrong or only roughly correct?
What if, in terms of this post, the 20th Century Average is not exactly known but lies between 59.6 degrees Fahrenheit and 60.6 degrees Fahrenheit, or some wider margin?
Is it just that we have a smaller or larger anomaly going forward, or are other issues in play?

James G Gorman
Reply to  Herbert
October 2, 2018 7:06 pm

+10

Kristi Silber
Reply to  Herbert
October 3, 2018 3:27 pm

I’m obviously not Roy, but I might take a stab at this question.

It seems to me that since anomalies aren’t based on the global average, but on the difference between absolute temperatures and the local baseline average, it would depend on whether the local averages are differently biased relative to each other.

The period taken as the baseline is largely irrelevant to the calculation of anomalies – whether the average is 15 or 16 C won’t make a difference to the slope or scatter (variance) of the trend as a whole. This would in turn imply that if all the averages (for whatever baseline period) for all the sites were off by 1.5 C due to error, the anomalies would still follow the same trend.

However, if only some of the baselines were off by 1.5 C, that could make a difference, as it would add to the error (variance) in the anomalies.

Actual baseline, station A: 15.5
Monthly mean, station A: 18
Anomaly: 2.5
Actual baseline, station B: 17
Error in baseline, station B: -1.5
Apparent baseline: 15.5
Monthly mean, station B: 19.5
Anomaly: 4.0

So, in reality the anomalies are the same for these two stations, and the apparent baseline is the same (due to error in measurement) but in station B the error in the baseline gets transferred to the anomaly. This would result in artificial scatter of the data, and a higher “error” (variance due to actual and measurement-error differences) calculated in the trend. If the baseline were off mainly in sites that only have older (or newer) measurements (i.e., the record is only for part of the time period), it could also change the trend of the line. If, on the other hand, all baselines had the same error, that error would be transferred to the anomalies across the board, and the slope of the trend would be the same (just offset by 1.5 degree).
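The two-station arithmetic above can be checked with a short sketch (an illustration only; the numbers come from the example, and the `anomaly` helper is just monthly mean minus baseline):

```python
# Anomaly = monthly mean minus baseline. An error in a station's baseline
# is transferred, with opposite sign, into every anomaly computed from it.

def anomaly(monthly_mean, baseline):
    return monthly_mean - baseline

# Station A: baseline measured correctly.
a_anomaly = anomaly(18.0, 15.5)          # 2.5

# Station B: true baseline 17.0, measured 1.5 too low (apparent 15.5).
b_apparent = anomaly(19.5, 17.0 - 1.5)   # 4.0 -- baseline error leaks in
b_true = anomaly(19.5, 17.0)             # 2.5 -- same true anomaly as A

print(a_anomaly, b_apparent, b_true)
```

The true anomalies match (2.5 at both stations), but station B’s baseline error reappears in its reported anomaly as artificial scatter.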

So, baseline measurements do matter, not only to anomalies going forward but those in the past.

Does that make sense? Hopefully others will chime in.

Paramenter
Reply to  Herbert
October 4, 2018 1:24 am

“Does it matter if the stated 20th Century Average is wrong or only roughly correct?
What if,in terms of this post, the 20th Century Average is not exactly known but lies between 59.6 degrees Fahrenheit and 60.6 degrees Fahrenheit or some wider margin?”

Well, the canonical answer to that is as follows: we may indeed have significant measurement uncertainties. However, with a very large sample size all those errors should average out and cancel. Therefore we should be able to detect changes very accurately.
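That canonical answer covers only independent random error; a shared systematic bias does not average away. A rough sketch with made-up numbers (the ±0.5 °F noise and 0.3 °F bias are arbitrary choices for illustration):

```python
import random

random.seed(42)

TRUE_VALUE = 60.1   # hypothetical "true" average, in degrees F
N = 100_000

# Independent random read errors, uniform in +/-0.5 F, largely cancel in the mean...
readings = [TRUE_VALUE + random.uniform(-0.5, 0.5) for _ in range(N)]
random_error = abs(sum(readings) / N - TRUE_VALUE)

# ...but a systematic bias shared by every reading (0.3 F high) does not cancel.
biased = [r + 0.3 for r in readings]
systematic_error = abs(sum(biased) / N - TRUE_VALUE)

print(random_error)      # tiny; shrinks roughly as 1/sqrt(N)
print(systematic_error)  # stays near 0.3 no matter how large N gets
```

Larger samples help only with the first kind of error, which is why the nature of the instrument and siting errors matters so much to this debate.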

Kristi Silber
Reply to  Kip Hansen
October 4, 2018 2:49 pm

Kip,

Why does that not hold? Error does get smaller with a bigger sample size, which is one reason error bars are wider in the early part of the century – that and the less precise measurement instruments, but even before they were switched the error narrowed. As I understand it, one reason there are fewer stations in the U.S. than there were decades ago is because they found statistically that coverage was ample.

“The size of our sample dictates the amount of information we have and therefore, in part, determines our precision or level of confidence that we have in our sample estimates. An estimate always has an associated level of uncertainty, which depends upon the underlying variability of the data as well as the sample size. The more variable the population, the greater the uncertainty in our estimate. Similarly, the larger the sample size the more information we have and so our uncertainty reduces.”
https://select-statistics.co.uk/blog/importance-effect-sample-size/

When anomalies are calculated, there are two main sources of error: that associated with the baseline averages, and that of the monthly average of the station measurements. As I’ve said before, one of the purposes of using anomalies is that it reduces the variance due to geographic differences, decreasing the error that is simply a function of where on the globe the station is (latitude, altitude, proximity to ocean, etc.).

Maybe if you think error is incorrectly calculated you should analyze the methods given here https://www.ncdc.noaa.gov/monitoring-references/docs/smith-et-al-2008.pdf, (or in some other relevant paper) then write up and submit your results. If you are right, you should have no trouble publishing it, as scientists want to get their statistics correct – I imagine climate scientists in particular are worried enough about bad publicity that they don’t want to be caught doing things poorly. If rejected and you think it’s reviewed improperly, post the reviews or have a statistician look over it – that is the time to make accusations of wrongdoing. Until then, it may be a bit presumptuous to say that climate scientists are fooling themselves, especially since you have not shown in what way they are doing so without comparing their methods with yours. Or am I missing something here? Where have you actually calculated error based on real-world data or described how error is calculated by climate scientists?

(A related question: when did you compare statistically the results of your way of analyzing monthly means with their way, and find that their way results in significant bias or greater error? How do you know your way is better?)

Geoff Sherrington
Reply to  Roy Spencer
October 3, 2018 3:04 am

But Roy,
Was not the MaxMin thermometer designed to lessen, hopefully eliminate, the errors arising from the time of day that the observer acted?
One can understand the establishment description of TOBS corrections, but surely the time-of-day bias operated on only a few days, while the MaxMin thermometer overcomes most or all of the problem on the many other days.
It is hard to understand a TOBS correction applied on days when it is not needed.

Geoff.

Robert B
Reply to  Roy Spencer
October 3, 2018 6:32 am

I go into more detail below, but if the thermometer record were good, the larger uncertainty would just be something that needs to be calculated; otherwise, it’s still a useful indicator.
The thermometer record is a dog’s breakfast, and getting a global average requires considering this mean/median as an intensive property, which it’s not.

Admin
October 2, 2018 11:13 am

Thanks Kip

Editor
Reply to  Anthony Watts
October 2, 2018 4:51 pm

Anthony ==> My pleasure and labor of love….

Rocketscientist
October 2, 2018 11:15 am

One would have to assume that in order to determine the maximum and minimum temperatures for each day that numerous measurements and recordings are made throughout the day and then reviewed for max/min values. To where did all the other measurements disappear, and why are they not used?

Roy W. Spencer
Reply to  Rocketscientist
October 2, 2018 11:31 am

The longest-running technology was analog (obviously), and the liquid-in-glass thermometers had little tiny “sticks” in the liquid that got pushed up (for the Tmax) and down (for Tmin), showing the highest and lowest temperatures the thermometers experienced. There did not have to be any intermediate recording of temperatures.

Anthony Banton
Reply to  Roy W. Spencer
October 2, 2018 4:07 pm

Maximum thermometers used by the UKMO that I read back in the day were mercury-in-glass, but instead of an indicator being pushed up the stem by the mercury column, they had a constriction in the capillary that broke the mercury column when the max temp was passed.

jorgekafkazar
Reply to  Rocketscientist
October 2, 2018 11:45 am

Rocket: The min and max were read off a mechanical device which “records” them automatically. See the diagram and the text.

Van Doren
October 2, 2018 11:28 am

“She would record the Minimum and Maximum temperatures shown by the markers” – was it really necessary to follow the PC gender madness? I doubt that many maintainers were female.

Editor
Reply to  Van Doren
October 2, 2018 12:07 pm

Van Doren ==> Many volunteer weather stations are manned by women.

Michael S. Kelly LS, BSA Ret.
Reply to  Kip Hansen
October 2, 2018 6:13 pm

“Manned” by women?

D. Anderson
October 2, 2018 11:31 am

Wouldn’t a median be the temperature where an equal number of samples are greater and less than that?

(Tmax + Tmin)/2.

D. Anderson
Reply to  D. Anderson
October 2, 2018 11:42 am

ie. Not = (Tmax+Tmin)/2

commieBob
Reply to  D. Anderson
October 2, 2018 1:07 pm

Googling, I can’t find any definition of median that is other than the middle value in a set. In particular, I focused on university sites in case there was a special meaning that I didn’t know about.

commieBob
Reply to  D. Anderson
October 2, 2018 1:32 pm

The article explains why the ‘average’ temperature matters for different applications. At the top of the article is an illustration that shows why the difference between mean and median may matter a lot depending on how the data is distributed.

David Stienmier
Reply to  D. Anderson
October 2, 2018 2:02 pm

(Tmax + Tmin)/2 is the mid-range value.

The median is the value with half the samples greater and half the samples less.

The mean is the arithmetic average.

For a two sample set, all three values are the same.
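These three statistics can be compared side by side; a minimal sketch (the hourly-style readings are invented for illustration):

```python
import statistics

def mid_range(xs):
    # (Tmax + Tmin)/2 -- the statistic actually used for daily "averages".
    return (max(xs) + min(xs)) / 2

# A skewed set of hourly-style readings (invented numbers):
temps = [51, 52, 53, 55, 58, 64, 70, 74, 73, 68, 60, 93]

print(mid_range(temps))          # 72.0  -- pulled toward the 93 extreme
print(statistics.median(temps))  # 62.0  -- half the readings above, half below
print(statistics.mean(temps))    # 64.25 -- the arithmetic average

# For a two-value set {Tmin, Tmax}, all three coincide:
pair = [51, 93]
assert mid_range(pair) == statistics.median(pair) == statistics.mean(pair)
```

The single hot reading drags the mid-range well above both the median and the mean, which is the sensitivity to outliers discussed elsewhere in this thread.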

Editor
October 2, 2018 11:33 am

Kip, I got as far as the following comment and I stopped reading. You say:

Anyone versed in the various forms of averages will recognize the latter [ (Tmax + Tmin)/2 ] is actually the median of Tmax and Tmin — the midpoint between the two.

I’m sorry, but the median is NOT the midpoint between the max and the min. It’s the value which half of the data points are above and half below.

For example, the median of the five numbers 1, 2, 3, 4, 41 is three. Two datapoints are larger and two are smaller.

The median is NOT twenty-one, which is the midpoint between the max and the min [(Tmax + Tmin)/2].

And since you started out by making a totally incorrect statement that appears to be at the heart of your argument … I quit reading.

Regards,

w.
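Willis’s five-number example is easy to verify with the standard library (added check, not part of the comment):

```python
import statistics

data = [1, 2, 3, 4, 41]

print(statistics.median(data))      # 3    -- two data points above, two below
print((max(data) + min(data)) / 2)  # 21.0 -- the mid-range, a different statistic
```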

Dan
Reply to  Willis Eschenbach
October 2, 2018 11:52 am

Actually, if you have only 2 data points, then mean (average) and median are the same. This article was a little pointless and just added confusion. Mean (average) of TMAX and TMIN is what is used over ANY time frame.

Lonny Eachus
Reply to  Dan
October 2, 2018 2:26 pm

I’m glad Willis and Dan said this because I was going to.

I think the explanation is more than vague… it is downright misleading.

First, as Dan points out, (Tmax + Tmin)/2 is BOTH the median AND the mean… but just for those two values.

The “illustrative” graph further confuses the issue. When you change the temperature profile, the position of the median doesn’t change, but its value can (as shown). That contradicts the statement that the median doesn’t change… it can. At least its value can. Only the position is necessarily the same.

The mean’s value can obviously change but its position can vary, with the caveat that it must lie somewhere on the curve.

The second part of the illustration that might confuse is that it’s stated that the right-hand endpoints correspond (and so they must if X is time)… but given the shape of the dashed profile, that endpoint must be some distance off the page, in order for the mean to be shown where it is.

BCBill
Reply to  Dan
October 2, 2018 3:06 pm

Well, if we are going to be annoyingly pedantic, an average is not necessarily a mean:
Average
noun
1.
a number expressing the central or typical value in a set of data, in particular the mode, median, or (most commonly) the mean, which is calculated by dividing the sum of the values in the set by their number.
“the housing prices there are twice the national average”
synonyms: mean, median, mode; More

D. Anderson
Reply to  Willis Eschenbach
October 2, 2018 11:53 am

I see we responded to that at almost the same time (I was first).

itocalc
Reply to  Willis Eschenbach
October 2, 2018 11:58 am

It is called a mid-range.

Clyde Spencer
Reply to  Willis Eschenbach
October 2, 2018 12:01 pm

Willis,

You said, “… the median is NOT the midpoint between the max and the min. It’s the value which half of the data points are above and half below.”

When one interpolates the midpoint between Tmax and Tmin, half the data points ARE above and below the interpolated median. As I have pointed out previously, when dealing with a set of even-numbered points, it will always be necessary to interpolate between the two innermost values in the sorted list. Tmax and Tmin can be thought of as a degenerate, even-numbered list consisting of only the two innermost intermediate values.

You complain that the median is “NOT twenty-one.” Yet, as a measure of central tendency, 21 is closer to the arithmetic mean of 25.5 than 3 is, which is what one would normally expect.

In your example, depending on just what is being measured, one might justifiably consider the “41” to be an outlier, and be a candidate for being discarded as a noise spike or malfunction in the measuring device.

I think that you are being unnecessarily critical. The point that Kip was making is that interpolating the midpoint between two extreme values (Whatever you want to call it!) results in a metric that is far more sensitive to outliers than an arithmetic mean of many measurements.

D. Anderson
Reply to  Clyde Spencer
October 2, 2018 12:10 pm

I don’t think it is unnecessarily critical to point out an error in his post. It should be easy to correct.

Clyde Spencer
Reply to  D. Anderson
October 2, 2018 1:29 pm

D. Anderson,
If it NEEDS to be corrected. Willis has yet to respond to defend his complaint.

Editor
Reply to  Willis Eschenbach
October 2, 2018 12:15 pm

w. ==> Nonsense — ANY time one arranges the data in a data set in value order, largest to smallest, and then finds the mid-point, one is finding the median. The median of a two value set is found by adding the two and dividing by two. It is the same as the mean of the two values, but not the same as the mean of the whole set. It is the MEDIAN of the Max and the Min. It is the procedure that tells us.

D. Anderson
Reply to  Kip Hansen
October 2, 2018 12:22 pm

Ok, so the next question is, are two samples a day enough to characterize the daily temperature?

Editor
Reply to  D. Anderson
October 2, 2018 12:59 pm

Andersen ==> Read the essay.

D. Anderson
Reply to  Kip Hansen
October 2, 2018 1:51 pm

I got distracted by your bizarre definition of median.

steven mosher
Reply to  D. Anderson
October 3, 2018 4:53 pm

yes.
You have tmin
you have tmax
you have TAVG
you don’t have TMEAN,
but the trend in TAVG is an unbiased estimator of the trend in TMEAN.

trend is what we care about.

would TMEAN be best? yup, but not needed.

we can after all test against TAVG.

Clyde Spencer
Reply to  steven mosher
October 3, 2018 5:01 pm

Mosher,
On what do you base the claim that “the trend in TAVG is an unbiased estimator of the trend in TMEAN.”? Medians are not amenable to parametric statistical analysis. Variance and SD are not defined for a median. Yet, from what I have read here, the monthly ‘average’ is the median of the arithmetic mean of the monthly Tmax and the arithmetic mean of the monthly Tmin.
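One way to probe Mosher’s claim: if the diurnal cycle keeps a fixed shape while the baseline warms, the daily mid-range and the true daily mean differ only by a constant offset, so their long-term trends agree. The toy simulation below assumes exactly that fixed shape (all numbers invented; a changing diurnal profile, e.g. nights warming faster than days, would break the equivalence):

```python
import math
import random

random.seed(0)

def ols_slope(ys):
    """Ordinary least-squares slope of ys against 0..n-1."""
    n = len(ys)
    xbar = (n - 1) / 2
    ybar = sum(ys) / n
    num = sum((x - xbar) * (y - ybar) for x, y in enumerate(ys))
    den = sum((x - xbar) ** 2 for x in range(n))
    return num / den

DAYS = 30 * 365
TREND = 1.0 / 3650          # 1 degree C per decade, for illustration

midrange, truemean = [], []
for day in range(DAYS):
    base = 15 + TREND * day + random.gauss(0, 2)   # daily weather noise
    # 24 hourly readings on a fixed, asymmetric diurnal shape:
    hourly = [base + 5 * math.sin(t) + 2 * math.sin(2 * t + 1)
              for t in (2 * math.pi * h / 24 for h in range(24))]
    midrange.append((max(hourly) + min(hourly)) / 2)
    truemean.append(sum(hourly) / 24)

# Per-decade trends of the two daily series:
print(ols_slope(midrange) * 3650)   # close to 1.0
print(ols_slope(truemean) * 3650)   # close to 1.0
```

Under these assumptions both series recover the built-in 1 °C/decade trend; whether real diurnal shapes are stable enough for this to hold is the empirical question Clyde is raising.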

Willis Eschenbach
Reply to  Kip Hansen
October 2, 2018 2:52 pm

Thanks, Kip. In that case you really need to emphasize that that is only true for a two-point dataset. However, for most temperature datasets these days that is far from the truth. Most temperatures are taken with thermistors sampled at regular intervals, and in that case, your statement is far from true.

And as you yourself say:

In the comment section of my most recent essay concerning GAST (Global Average Surface Temperature) anomalies (and why it is a method for Climate Science to trick itself) — it was brought up [again] that what Climate Science uses for the Daily Average temperature from any weather station is not, as we would have thought, the average of the temperatures recorded for the day (all recorded temperatures added to one another divided by the number of measurements) but are, instead, the Daily Maximum Temperature (Tmax) plus the Daily Low Temperature (Tmin) added and divided by two. It can be written out as (Tmax + Tmin)/2.

However, given that there are a number of “temperatures recorded for the day”, then (Tmax + Tmin)/2 is NOT the median of the daily temperatures.

You then say:

“… we are only dealing with a Daily Max and Daily Min for a record in which there are, in modern times, many measurements in the daily set, when we align all the measurements by magnitude and find the midpoint between the largest and the smallest we are finding a median (we do this , however, by ignoring all the other measurements altogether, and find the median of a two number set consisting of only Tmax and Tmin. )

Here, you claim that when you “align all the measurements by magnitude and find the midpoint between the largest and the smallest we are finding a median”, but then you say you are only finding the median of a two number set. In that case, you are NOT finding a median of “all the measurements”. And you note this later, which makes your earlier statement very misleading.

Are your statements correct? I guess so, if you read them in a certain way and kinda gloss over parts of them. You say that “we are finding a median” of all of the measurements, and then immediately contradict that and say we are finding a median of just two points.

Are they confusing as hell? Yep, and if you look at the comments you’ll see that I’m not the only one who is confused.

OK, now that I understand your convoluted text, I’m gonna go back and read the rest.

My thanks for the very necessary clarification,

w.

Editor
Reply to  Willis Eschenbach
October 2, 2018 3:05 pm

w. ==> You have confused yourself.

gnomish
Reply to  Kip Hansen
October 2, 2018 6:51 pm

lol @ convoluted…
fonzie says: w…w…w…willis

Kristi Silber
Reply to  Kip Hansen
October 2, 2018 9:06 pm

Kip, but in this case the two values are the whole set.

I don’t know why you insist on using the term “median” if it ends up being confusing for people.

“This illustration is from an article defining Means and Medians, we see that if the purple traces were the temperature during a day, the median would be identical for wildly different temperature profiles, but the true average, the mean, would be very different.[Note: the right hand edge of the graph is cut off, but both traces end at the same point on the right — the equivalent of a Hi for the day.] ”

This doesn’t make sense to me. The graph is of temperature on the X axis and frequency on the Y axis, right? Could you send the link? Take a look at this illustration, you might see my confusion: http://davidmlane.com/hyperstat/A92403.html (Another thing is that what you’re calling the high for the day can’t be right, because the line on the more normal distribution drops to zero – the min and max for each line are different)

Tmax 73, Tmin 71 = Tavg 72
Tmax 93, Tmin 51 = Tavg 72
Tmax 103, Tmin 41= Tavg 72

Are these not all showing the same estimates of daily heat radiating from the Earth’s surface? Sometimes the heat is much higher during the day, sometimes it’s spread out. It’s not exact, no, but given the number of estimates it seems to me you get a pretty good total estimate.

I think there’s probably a reason monthly average is calculated the way it is. It surprises me that it’s the median of daily averages, and I can’t figure it out at the moment, but I’m inclined to give the experts the benefit of the doubt. Silly, huh? Naive to trust the researchers to know what they’re doing, rather than assume they’re frauds, eh?

“This graph show the difference between daily Tavg (by (Tmax+Tmin)/2 method) and the true mean of daily temperatures, Tmean. ”

How is the “true mean of daily temperatures” calculated in your graph with the blue lines?

……………………………………………..

“Anthropogenic Global Warming scientists (IPCC scientists) are concerned with proving that human emissions of CO2 are causing the Earth climate system to retain increasing amounts of incoming energy from the Sun and calculate global temperatures and their changes in support of that objective. ”

So those who worked on the IPCC are now “Anthropogenic Global Warming scientists” rather than climate scientists? All versions? That includes the skeptics?

The idea that climate scientists are out to “prove” (a completely non-scientific term) anything is just more propaganda, Kip. Scientists try to discover what is happening. What they are finding is that most of the warming in the last several decades is anthropogenic.

Scientists don’t prove a hypothesis, they test it. They accumulate evidence through hypothesis testing, and if enough evidence supports it, they eventually call it a theory.

If a scientist came up with a different explanation and had lots of supporting evidence for it, and others validated the results, he would be instantly famous. Nobody has.

Scientists have tested the theoretical foundations developed over a hundred years ago through satellites that measure outgoing radiation at the top of the atmosphere, statistical models that look at different forcing mechanisms that might account for global temperature change, paleoclimate reconstructions, and GCMs. Scientists have been working on this steadily for half a century. Researchers from Exxon and Shell were estimating the temperature increase due to anthropogenic fossil fuel emissions in the 1980s (and kept their findings from the public). Were they out to “prove” AGW, too?

You are trying to discredit the ability of 1000s of scientists and spread distrust of the science. Do you really think they are all idiots??? It’s either that or all frauds. I just don’t understand!!! This question is more important to me, and to our society as a whole, than whether AGW is a problem. When people distrust any scientist that believes AGW is true, and trusts anyone who thinks scientists are making things up, no matter how little evidence they can muster, it shows how little truth matters in society today and how driven we are to see the Other as the enemy. And it shows how pervasive and successful the propaganda has been. Likewise, the alarmist liberal media profit from spreading propaganda and hatred. What is the country coming to?

I don’t want my fellow Americans to be my enemies. I don’t want them to think of me as the enemy. I bet if I sat down with most of you (not all at once) over a beer or a coffee, we could have a nice chat. I like all kinds of people, and people generally like me (believe or not!). I DON’T like manipulation, which is rampant on both the right and the left. …Sigh. I’m sorry. This is off topic.

gnomish
Reply to  Kristi Silber
October 2, 2018 10:47 pm

“If a scientist came up with a different explanation and had lots of supporting evidence for it, and others validated the results, he would be instantly famous. Nobody has.”

deliberate logical fallacy ^
your attempts to cause disturbance of sane consciousness are aggressively manipulative and disrespectful.
you are saying that truth = popularity.
i have an allergy to stupid.
don’t give me tourettes.

Kristi Silber
Reply to  gnomish
October 3, 2018 1:12 pm

gnomish,

Logical fallacy? Where? It’s a simple “if…then” statement.

Editor
Reply to  gnomish
October 3, 2018 4:55 pm

Kristi ==> The fallacy is that because someone hasn’t come up with a “better” explanation for the warming since the end of the LIA, the current obviously wrong explanation (pixies, unicorns, evil spirits, or CO2 concentrations) must be true.

Nonsense, no truer than your grandmother’s folk wart remedy, which, after all, “worked” for Uncle George in 1902.

Remy Mermelstein
Reply to  gnomish
October 3, 2018 5:06 pm

Kip, to date, CO2 offers the best explanation for the current warming. If you have something better to offer that the majority of scientists will agree to/accept, please post it.

Clyde Spencer
Reply to  Remy Mermelstein
October 3, 2018 5:19 pm

Remy,
You claimed, “…to date, CO2 offers the best explanation for the current warming.” There are quite a number of people here who would disagree with your assertion. Can you succinctly make your supporting argument, or cite something that does? Myself, I tend to lean toward Occam’s Razor.

Remy Mermelstein
Reply to  gnomish
October 3, 2018 5:26 pm

Clyde, unless you can offer a “better” explanation than CO2, I’m not going to change my mind. You would need to provide an alternative theory, and data to back it up. I don’t care if you disagree with what I’ve said , if you can’t meet my challenge, go away.

Clyde Spencer
Reply to  Remy Mermelstein
October 3, 2018 5:42 pm

Remy Mermelstein,

You said, “…unless you can offer a “better” explanation than CO2, I’m not going to change my mind. You would need to provide an alternative theory, and data to back it up. I don’t care if you disagree with what I’ve said , if you can’t meet my challenge, go away.”

I did offer an alternative theory. Perhaps it was too subtle for you. Occam’s Razor basically says that the simplest explanation is usually the best. Earth started warming after the end of the Maunder Minimum, well before CO2 from the industrial revolution and the population explosion, and has continued warming. The simplest explanation is that whatever initiated the warming after the Little Ice Age continues to be at least the predominant driver of warming. There is no reason to believe that the natural cycles suddenly stopped working and were replaced exclusively by anthropogenic forcing.

If you don’t want to play nice, I’ll gladly go away.

Remy Mermelstein
Reply to  gnomish
October 3, 2018 5:30 pm

Clyde:

1) I don’t have a wife.
2) Occam’s razor doesn’t explain the recent/current warming

Kristi Silber
Reply to  gnomish
October 3, 2018 5:49 pm

Kip,

There is absolutely nothing in this statement:
““If a scientist came up with a different explanation and had lots of supporting evidence for it, and others validated the results, he would be instantly famous. Nobody has.”

to suggest:
“The fallacy is that because someone hasn’t come up with a “better” explanation for the warming since the end of the LIA, the current obviously wrong explanation (pixies, unicorns, evil spirits, or CO2 concentrations) must be true.”

That is YOUR logical fallacy!

Remy Mermelstein
Reply to  gnomish
October 3, 2018 5:52 pm

Clyde:
.
1) Occam’s razor is not a “theory.”
..
2) The following is not a “theory”: “The simplest explanation is that whatever initiated the warming after the Little Ice Age, it continues”….. WHATEVER is not specified. For all we know “whatever” could be unicorns in your “theory.”
..
3) “natural cycles” is not an explanation. We have a “natural” 24 hour cycle, but that does not explain the recent warming. We have a “natural” 365.25 day cycle, but that doesn’t explain the recent warming. What “natural cycle” are you talking about?????

You have not provided a viable alternative to CO2 to explain the recent warming.

Kristi Silber
Reply to  gnomish
October 3, 2018 6:05 pm

Clyde,

‘The simplest explanation is that whatever initiated the warming after the Little Ice Age, it continues to be at least the predominant driver of warming.”

“Whatever” is not an explanation. “Natural variation” is not an explanation. The null hypothesis is a randomly changing (or an unchanging) climate.

The best-supported hypothesis for the LIA that I know of is that it was triggered by a period of high volcanic activity and exacerbated by another big volcano. After that the influence of relatively strong solar radiation (in the absence of high aerosols) led to warming, but that ended in about 1940, and since then CO2 has been a main forcing agent. In mid-20th C there was a period of high aerosols due mostly to anthropogenic air pollution, leading to cooling, but in the ’70s several countries enacted pollution control measures, and that cooling decreased in importance.

This is not the definitive explanation, but it is Occam’s Razor. “Something did it” is not.

Clyde Spencer
Reply to  Kristi Silber
October 3, 2018 8:43 pm

Kristi,

Carl Sagan was fond of saying, “Extraordinary claims require extraordinary evidence.” It isn’t necessary to look for new and different forcing agents if the climate is within the normal range of temperature changes, which it is. It has been much warmer in the past, before humans evolved. It was warmer during the Holocene Optimum than it currently is. There is poor to no correlation between CO2 and prehistoric temperature reconstructions. More recent temperature proxies from ice cores strongly suggest that temperature increases occur 800 years before CO2 increases.

The fact that we may not know the complex interrelationships between all the natural forcing agents, or have names for them, doesn’t make them any less real. Even the IPCC admits that climate may be chaotic and unpredictable. Basically, to apply Occam’s Razor simply requires one to accept that climate is what it is, and in the absence of unprecedented temperatures, or unusual rates of warming, there is no need to appeal to some new agent forcing temperatures. The most recent episode of warming started before humans began using large quantities of fossil fuels. If temperatures had been declining or flat until WWII, and then suddenly started climbing, then I’d say that there was a need to explain the change. However, warming started a century before then, and it was probably warmer in the 1930s than it currently is!

The impact of historic volcanic activity has been shown to last only two or three years, even for the largest, such as Krakatoa. That hardly explains a period of exceptional cold that may have extended from 1300 to 1850. Many competing hypotheses have been offered. But, once again, just because we can’t be sure which one(s) is correct doesn’t mean that they weren’t in play. Clearly, something happened for which we have multiple lines of evidence. Just because we can’t definitively assign the cold to something in particular doesn’t mean that we can’t use a ‘place holder’ such as “natural variability”. The question is whether recent changes are great enough to warrant a different explanation, such as anthropogenic influences. Looking at sea level rise for about the last 8,000 years strongly suggests a linear rise that doesn’t require “Extraordinary Claims.”

Remy Mermelstein
Reply to  gnomish
October 3, 2018 6:12 pm

Thank you Kristi.

William Ward
Reply to  gnomish
October 3, 2018 9:51 pm

Hi Kristi,

Regarding CO2:

The ancient reconstruction from proxies shows that CO2 was between 7,000 and 12,000 ppm. Over 600 million years there appears to be no correlation between CO2 and temperature. See graph here:

comment image

No one knows how accurate the proxies are, but they provide no evidence of such a correlation.

If we look at the 800k year ice core records from Vostok, we do see a correlation between CO2 and temperature, but the cause is the temperature and the effect is atmospheric CO2. Not the other way around. The lag for the effect is about 800 years. I assume you know this and know the reason, but let me know if my assumption is bad.

If we look at the modern instrument record, we see no correlation between CO2 and temperature. CO2 has been rising for 150 years, with the rise accelerating over the past 20 years. During those 150 years we have 30- to 40-year cooling periods, warming periods, and “pause” periods with no upward or downward trend.

Climate sensitivity is defined as the expected increase in average global atmospheric temperature (degrees C) for a corresponding doubling of atmospheric CO2 (ppm). As it relates to the scientific thought around this, I can show you over 30 peer reviewed scientific papers that claim zero or near zero sensitivity. I can show you another 45 that claim a very low sensitivity (0.02C, 0.1C, 0.3C, etc.). I can show you a half dozen that claim the atmosphere will cool with increasing CO2 concentration. I’m sure there are hundreds more papers with higher figures. The IPCC probably gives us the maximum figure – which keeps changing – but I think they are up to 8C. So, the world of science gives us a 400:1 range of results as determined by their “science”. Actually, the range is infinite if I include the papers claiming zero sensitivity. Darned divide by zero! This shows that the world of science doesn’t have a shadow of a clue about climate sensitivity. Man, who paid for all of these garbage papers?
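The spread described here can be made concrete with a small sketch. The logarithmic relationship below (warming = sensitivity × log2 of the CO2 ratio) is the conventional textbook form, and the sensitivity values and CO2 concentrations are illustrative assumptions, not figures from any particular paper:

```python
import math

def warming_at_doubling(sensitivity_c, c_new_ppm, c_old_ppm):
    """Warming implied by a given climate sensitivity under the
    conventional logarithmic assumption: dT = S * log2(C_new / C_old)."""
    return sensitivity_c * math.log2(c_new_ppm / c_old_ppm)

# Claimed sensitivities spanning the ~400:1 range described above,
# applied to an illustrative rise from ~280 ppm to ~410 ppm.
for s in (0.02, 0.3, 1.5, 3.0, 8.0):
    dt = warming_at_doubling(s, 410.0, 280.0)
    print(f"S = {s:4.2f} C/doubling -> {dt:.2f} C of CO2-attributed warming")
```

With this assumed form, the same CO2 rise implies anywhere from a few hundredths of a degree to several degrees, which is the point being argued.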

When V=IR was derived by Ohm, how many papers did it take to finally know he was right? Were there hundreds of competing equations, like V=0.267IR, V=5.937IR, V=1×10^29IR? No. You can test this in any physics lab. When the charge of an electron was first measured, did other scientists come out with values that varied by 400:1? No. There are many, many more examples I could give. If you review the real world of repeatable science, we don’t have these problems that climate “science” brings to us.

There is no record (ancient, long ago or recent) that provides evidence to the theory that CO2 drives climate. We don’t even have an equation that tells us what the relationship is. Many scientists do tell us that CO2 drives climate, but if they are honest, they tell you it is a theory with no actual support. Many scientists speak but speak not in their capacity as scientist. Instead they speak as advocates for a social and political ideology. They propagate a narrative.

You don’t need to solve the riddle to point out that the theory is unsupported. A fair statement is that CO2 might drive climate, but we have no historical or current proof and have no mathematical relationship figured out that would define the process.

Kristi Silber
Reply to  gnomish
October 4, 2018 7:31 pm

Kip,

” You are often way to literal, and can not, apparently, see analogies and parallels.”

When we are talking about a logical fallacy, the only way to address it is literally.

“Alternate explanations are not required in real science to say or show that another hypothesis does not hold up to close scrutiny.”

But when hypotheses do hold up to close scrutiny, both on theoretical and observational grounds, the burden is on the doubters to provide an alternative explanation. “Natural variation” and “coming out of the LIA” are not explanations.

“Climate science has no supportable explanation for the advent or end of the LIA — they have some suggested possibilities, none with strong evidence.”

Depends what you call “strong evidence.” I never suggested, and never will suggest, anything is “proven.” That’s not a scientific word. It’s hard to demonstrate with confidence what happened in the distant past; there is no denying that. But there is the process of looking at what factors we know changed (and their effects in the modern record), lining those up with the past temperature record, and making plausible, supported arguments. Aerosols, solar activity, ice extent, vegetation, written records…these are the kinds of things scientists can take into account. Then they can make a hypothesis. Then others can look at the hypothesis and debate it, come up with other hypotheses, debate those, etc. If over time and after debate a hypothesis is still the best explanation, one can take it as a “working hypothesis” and build on it.

Considering all the evidence available, there is no better hypothesis for the events of the last 80 years than the one that was posited 120 years ago, despite work by many thousands of scientists over the last 50 years. The evidence keeps accruing. The alternate hypotheses offered have been refuted.

So when is the public going to accept the ideas of the vast majority of climate scientists? Even many “skeptic” scientists agree that AGW is the best explanation, they just don’t necessarily agree on the dangers or sensitivity.

“You are conflating ‘possible causes’, suggested ‘it might have been…causes’ with proven or scientifically supported causes. You seem to grant these possibilities the value of proven due to coming from the right side of the Climate Divide.”

That’s all nonsense! That’s what you want to think, what you assume. You think I’m just a brainless parrot because that makes you able to dismiss me. It says far more about you than about me.

I read your posts and I consider them carefully. Even if you are right about the error problem, you simply aren’t at all convincing, because you don’t at any point demonstrate your hypotheses – you don’t do the statistics. You don’t show that the way scientists calculate error (or average) is statistically significantly different using the actual data. (You don’t even know that sometimes you CAN average averages! There is NOTHING WRONG with averaging the averages of sets of 30 (or 31 or 29) numbers – as long as the sets have the same number of values. Your way would be incorrect if even one 5-minute reading were absent.) Without the statistics, you have nothing, but you still make firm conclusions: scientists are fools. All you are demonstrating is your own bias. It’s very odd.
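The statistical condition invoked here can be checked with made-up numbers: the mean of group means equals the overall mean exactly when every group holds the same number of values, and generally not otherwise.

```python
from statistics import mean

equal_groups = [[1, 5], [3, 7], [2, 10]]      # every group holds 2 values
unequal_groups = [[1, 5], [3, 7, 2, 10]]      # same data, unequal group sizes

def mean_of_means(groups):
    """Average the per-group averages."""
    return mean(mean(g) for g in groups)

def grand_mean(groups):
    """Average every value in one pass."""
    return mean(x for g in groups for x in g)

# Equal-sized groups: the two answers agree (both 14/3).
# Unequal groups: mean_of_means gives 4.25 while grand_mean is still 14/3.
print(mean_of_means(equal_groups), grand_mean(equal_groups))
print(mean_of_means(unequal_groups), grand_mean(unequal_groups))
```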

Alan Tomalty
Reply to  Kristi Silber
October 2, 2018 11:09 pm

The climategate emails prove you are wrong Kristi Silber. You should read them sometime.

Remy Mermelstein
Reply to  Alan Tomalty
October 3, 2018 5:14 pm

Alan, what proof do you have that the stolen emails have not been tampered with?

Clyde Spencer
Reply to  Remy Mermelstein
October 3, 2018 5:24 pm

Remy,
What proof do you have that you have quit beating your wife? 🙂

Remy Mermelstein
Reply to  Alan Tomalty
October 3, 2018 5:37 pm

Clyde, see my comment above re “wife beating.”

gnomish
Reply to  Alan Tomalty
October 3, 2018 5:49 pm

remy, being as how absolutely nobody disputes the veracity of the emails, your question is moot.
like a used condom kind of moot.

Remy Mermelstein
Reply to  Alan Tomalty
October 3, 2018 6:00 pm

Gnomish, being that multiple authorities investigating the emails have found no fraud, deceit or deception on the part of the climate scientists, your response is also moot. If you have a point, could you please make it? For example, do you consider stolen property authoritative?

Kristi Silber
Reply to  Alan Tomalty
October 3, 2018 6:11 pm

Alan,

What makes you think I haven’t?

Remy:
“Alan, what proof do you have that the stolen emails have not been tampered with?”

This is not a very good argument. What has been “tampered with” is the meaning and significance of the emails. Some of the worst accusations based on them are faulty. A few are legitimate. There was a lack of professionalism, but that doesn’t mean there was scientific misconduct.

gnomish
Reply to  Alan Tomalty
October 3, 2018 6:19 pm

do try to keep the focus, [pruned].
nobody disputes the emails veracity.
gish gallop right off, now.

Remy Mermelstein
Reply to  Alan Tomalty
October 3, 2018 6:36 pm

[yes, it was pruned. .mod]
Sticks and stones.

Name calling is a logical fallacy.

Per our host:
https://www.realskeptic.com/2013/12/23/anthony-watts-resort-name-calling-youve-lost-argument/

Gnomish has lost the argument.

Geoff Sherrington
Reply to  Kristi Silber
October 3, 2018 3:16 am

Kristi,
It concerns me that I have seldom seen a climate researcher delve into the fine detail of Tmax and Tmin in the way that Kip has here. I have the impression, be it right or wrong, that the topic is glossed over by establishment workers.
If you can show me publications where these points of Kip’s are dissected and discussed and conclusions drawn, then I might agree with you. Until then, I think you are being too kind to the assumption of logical processes in climate science.
It is a little like formal errors. Have you ever seen a (Tmax + Tmin)/2 with an associated error envelope? Ever read how the envelope was constructed? I have not, but there is a high probability that I have not read the appropriate papers. Geoff.

MrZ
Reply to  Geoff Sherrington
October 3, 2018 5:21 am

Geoff,

Here is Boulder last July using CRN 5 minute measurements. http://cfys.nu/graphs/Boullder_CRN_JulyFull_2018.png

As you can see, (Tmax + Tmin)/2 is not very precise and the deviation is random. I am not sure it matters for trends over longer time periods, but it does, in my mind, disqualify adjustments like TOBS!
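The same point can be illustrated without the Boulder data. The profile below is purely synthetic (a smooth diurnal curve deliberately shaped to linger near the minimum, as real stations often do); with that assumed shape, the midrange (Tmax + Tmin)/2 sits well above the mean of all 288 five-minute readings:

```python
import math

# Synthetic daily profile, illustration only, not MrZ's Boulder data.
# T = Tmin + (Tmax - Tmin) * s^2, where s traces a smooth 0..1..0 diurnal
# cycle; squaring s makes the day spend more time near the minimum.
TMIN, TMAX = 8.0, 24.0
samples = []
for i in range(288):                   # 288 five-minute readings per day
    h = i * 24.0 / 288                 # hour of day
    s = (1.0 - math.cos(2 * math.pi * h / 24.0)) / 2.0
    samples.append(TMIN + (TMAX - TMIN) * s ** 2)

true_mean = sum(samples) / len(samples)            # mean of all 288 readings
midrange = (max(samples) + min(samples)) / 2.0     # the (Tmax + Tmin)/2 metric

print(f"mean of 288 readings: {true_mean:.2f} C")  # 14.00
print(f"(Tmax + Tmin)/2:      {midrange:.2f} C")   # 16.00
```

A symmetric sinusoid would show no gap at all; the 2 C gap comes entirely from the asymmetric shape, which is why the deviation varies from day to day on real data.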

MrZ
Reply to  Geoff Sherrington
October 3, 2018 9:29 am

Hi Kip!

You are welcome to use the graph. I have put the Excel here: http://cfys.nu/graphs/Boulder_July_2018.xls
Please note Excel does not like 8000 entries in a graph and it crashes sometimes if you edit too fast.
I’ll send you a hello mail later today.

Kristi Silber
Reply to  Geoff Sherrington
October 3, 2018 2:19 pm

Geoff,

Ascertaining the error in such a basic calculation is something that scientists would learn in school, not discuss in a research paper. The absence of such a discussion in the literature is no reason to assume that scientists don’t know how to do it. I, for one, am not going to ASSUME that scientists would make such a basic mistake, and in so doing, discredit all of climate science.

More difficult is calculating the errors in a reanalysis of the full dataset. My knowledge of statistics is not good enough to evaluate these. I rely on scientists who read these papers to do such evaluations, and where they find errors in the statistics or better ways of doing reanalysis to account for errors, to publish their results.

In other words, I have trust in the scientific community to make improvements or corrections where applicable. That’s what science is about: improvement. Even if I found an error somewhere, there is no guarantee that the error wouldn’t have already been corrected in another publication. This is why part of a scientist’s job is to keep up with the relevant literature. In my experience, it takes hundreds of hours/year to do so, and that is with the expertise to understand it all.

Am I being kind in trusting scientists? Not in my opinion. I just have the humility to realize that they know more than I do, and I’m not going to distrust them based on no evidence. Nor will I buy into the assumptions made by others that the whole profession is populated by fools and frauds. To me that doesn’t seem a reasonable assumption, especially coming from those who will use any means, however prejudicial, to convince others it’s true.

But that’s just me. Others are welcome to their own opinions.

Clyde Spencer
Reply to  Geoff Sherrington
October 3, 2018 3:24 pm

Kristi Silber,

You said, “Am I being kind in trusting scientists? Not in my opinion. ”

The problem is, you have admitted that you don’t have experience in programming, and have a weak statistics background, so you trust published scientists. However, you dismiss scientists and engineers here who raise issues with what is being done. Implicitly, you are appealing to the authority of those who go through formal peer review because you are personally unable to critique what they are doing. That is unfortunate, because in science, an argument or claim should stand on its own merit and not be elevated unduly because someone is recognized as an authority. There is the classic case of Lord Kelvin pronouncing the age of the Earth based on thermodynamic considerations, and his stature was such that no one would challenge him. It turns out he wasn’t even close!

Kristi Silber
Reply to  Geoff Sherrington
October 3, 2018 5:42 pm

Clyde,

It’s not that I automatically dismiss the scientists and engineers around here or I wouldn’t read and consider the arguments. However, when people have shown repeatedly that their arguments are intended to promote distrust in the majority of scientists, it diminishes their credibility.

I’m not devoid of ability to evaluate science. Kip’s analysis of anomalies, concluding, “Thus, their use of anomalies (or the means of anomalies…) is simply a way of fooling themselves…” (etc.), was not convincing to me because I know enough to realize that anomalies are a far better alternative to absolute temperatures for calculating trends, and Kip didn’t provide any better way of doing it.

Nor am I convinced that, “The methods currently used to determine both Global Temperature and Global Temperature Anomalies rely on a metric, used for historical reasons, that is unfit in many ways…”

simply because it is assumed that scientists don’t know how to handle error given the way the measurements were taken. If he had found a recent paper discussing methods of reanalysis and found statistical errors in it, that would be different.

Many of the posts here include assumptions about how the science is done while bearing little or no demonstrated relation to how it’s actually done. It’s not the same as critiquing a method described in a paper using scientific methods to show that it’s wrong.

There is also a lot of evaluation of science based not on an actual publication, but on press releases, and everyone here should know by now that press releases are not adequate representations of what’s in the original literature. This results in countless cases of erroneous dismissal based purely on assumption (I often do read the original, when available).

Although I’m not able to evaluate the more complex statistics, I do know something about simpler analyses. I am, for instance, aware that the tests available in Excel (and elsewhere) rely on assumptions to be valid, a fact often ignored or unknown, resulting in the use of such tests indiscriminately and sometimes erroneously.

Unlike many here, I think there is reason to respect the “authority” (expertise) of the scientific community as a whole. I believe the vast majority of climate scientists have scientific integrity. That doesn’t mean I think scientists can’t be wrong – it’s a given that mistakes are made. But that just means taking individual results as “working hypotheses,” not “proof,” and certainly not *assuming* they are wrong or that scientists in general are idiots and frauds. Skepticism is fine; assumptions are not.

And yes, the formal peer review process does matter to me. I believe scientific debate is most productive when done by those who have demonstrated expertise in the field they are debating. Peer-reviewed papers put the research in context of the published literature, and often discuss caveats and limitations of the results.

This doesn’t mean others aren’t able to evaluate climate science, but it takes a lot of effort to educate oneself enough to do it well, and especially to say something original.

Geoff Sherrington
Reply to  Geoff Sherrington
October 3, 2018 9:24 pm

Thank you MrZ,
A picture is worth 10 to the power of 3 words, as we scientists say. Geoff

Clyde Spencer
Reply to  Geoff Sherrington
October 3, 2018 9:33 pm

Geoff,
I thought that it was 10^4 words! But, what’s an order of magnitude among friends? 🙂

Kristi Silber
Reply to  Kip Hansen
October 3, 2018 12:35 pm

Kip

(I understand that your use of median for 2 numbers is technically correct, but that wasn’t my question, which was why you use the word in the first place. NCL uses “mean.”
https://www.ncl.ucar.edu/Document/Functions/Contributed/calculate_daily_values.shtml)

Regarding the illustration defining Means and Medians with the green and purple lines:

This doesn’t make sense to me. The graph is of temperature on the X axis and frequency on the Y axis, right? Axes are not labeled. This couldn’t be talking about the same day, since the highs and lows for each line are different. Could you send the link?

Take a look at this illustration, you might see my confusion: http://davidmlane.com/hyperstat/A92403.html

Or look at these two “temperature profiles.” Each has 13 numbers arranged in order, just for convenience of finding the median. They both have a high of 17 and a low of 1, and both have the same mean, but the medians are different (unless the median is defined as (Tmax + Tmin)/2). So what is your point in the graph you show?
1 1
1 2
1 4
3 4
3 4
4 4
6 4
6 7
6 7
7 7
10 7
13 10
17 17
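The two profiles above check out exactly as described:

```python
from statistics import mean, median

profile_a = [1, 1, 1, 3, 3, 4, 6, 6, 6, 7, 10, 13, 17]
profile_b = [1, 2, 4, 4, 4, 4, 4, 7, 7, 7, 7, 10, 17]

for name, p in (("A", profile_a), ("B", profile_b)):
    midrange = (max(p) + min(p)) / 2    # the (Tmax + Tmin)/2 metric
    print(name, mean(p), median(p), midrange)
# Both means are 6 and both midranges are 9.0, but the medians differ: 6 vs 4.
```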

Kristi Silber
Reply to  Kip Hansen
October 3, 2018 12:44 pm

Kip,

I don’t know anything about programming, but this appears to use the daily temperature reanalysis dataset to calculate monthly means:
https://www.ncl.ucar.edu/Document/Functions/Contributed/calculate_monthly_values.shtml

Is the information you got based on what is done in reanalyses? Perhaps you could post the reply email you got so we can see just what they said, to avoid confusion?

Kristi Silber
Reply to  Kip Hansen
October 3, 2018 12:52 pm

Kip,

Please explain how you would find daily and monthly averages.

Clyde Spencer
Reply to  Kip Hansen
October 3, 2018 4:53 pm

Kip,
I agree that one should not average averages. However, isn’t the automatic output at 5 minute intervals an average of the temperatures over the 5 minute period? That would be in contrast to taking an instantaneous reading at 5 minute intervals.

Clyde Spencer
Reply to  Kip Hansen
October 3, 2018 5:21 pm

Kip,
And in doing so, reduces the variance!

Kristi Silber
Reply to  Kip Hansen
October 3, 2018 1:04 pm

Kip,

Maybe you could also explain how you would find long-term (at least 30 years) trends using absolute temperatures. Would you account for the variance among stations and seasons (or months) in order to be able to quantify trends in annual temperatures? If so, how so? If not, how would you discern between spatial/temporal variance and the variance due to measurement error?

William Ward
Reply to  Kip Hansen
October 3, 2018 6:37 pm

Clyde, Kip,

I also agree, we should not average averages.

For the USCRN, this link gives you the notes page (decoder ring) for the Sub-hourly records:

https://www1.ncdc.noaa.gov/pub/data/uscrn/products/subhourly01/README.txt

For the field “Air_Temperature” we are referred to notes “F” and “G”, which state respectively:

F: The 5-minute values reported in this dataset are calculated using multiple independent measurements for temperature and precipitation.

G: USCRN/USRCRN stations have multiple co-located temperature sensors that make 10-second independent measurements used for the average.

I suggest two alternative approaches, both better. 1) Filter the output of the thermocouples electrically in the analog domain to limit the bandwidth going into the A-D converter. This prevents aliasing. 2) Define and implement a standard thermal mass for the instrument front end. Thermal mass acts as a filter in the analog domain – actually the thermal domain. I don’t think this has been done. To minimize differences from the older records obtained with mercury-in-glass thermometers, electronic thermometers could be modeled to respond similarly. In both cases the sample would occur every 5 minutes without processing.
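The effect note G describes (averaging thirty 10-second readings into each 5-minute value) can be sketched with a synthetic signal. Everything below is an illustrative assumption, not station data: a slow drift plus a fluctuation too fast for one sample every 300 seconds to capture. Averaging the 10-second readings suppresses the fast component, while taking a single instantaneous reading every 5 minutes lets it alias through at nearly full amplitude:

```python
import math

# Illustrative synthetic signal (assumed, not station data): a slow drift
# plus a fast fluctuation with a 70-second period.
def temp_at(t_sec):
    return 12.0 + 0.001 * t_sec + 1.5 * math.sin(2 * math.pi * t_sec / 70.0)

WINDOW = 300   # one 5-minute reporting interval, in seconds
STEP = 10      # USCRN-style 10-second measurements (note G above)

averaged, decimated = [], []
for start in range(0, 3600, WINDOW):                  # one hour of data
    readings = [temp_at(start + k * STEP) for k in range(WINDOW // STEP)]
    averaged.append(sum(readings) / len(readings))    # mean of 30 readings
    decimated.append(temp_at(start))                  # one instantaneous sample

# Wiggle left in consecutive differences, after removing the known
# 0.3 C-per-window drift: averaging acts as a crude low-pass filter.
avg_wiggle = max(abs((averaged[i + 1] - averaged[i]) - 0.3) for i in range(11))
dec_wiggle = max(abs((decimated[i + 1] - decimated[i]) - 0.3) for i in range(11))
print(f"averaged 5-min values wiggle:  {avg_wiggle:.2f} C")
print(f"instantaneous samples wiggle:  {dec_wiggle:.2f} C")
```

Averaging is only a crude anti-alias filter, which is why the analog-domain filtering proposed above would still be an improvement; but the sketch shows why the 30-sample mean behaves very differently from plain decimation.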

Kristi Silber
Reply to  Kip Hansen
October 3, 2018 10:06 pm

Kip,

The problem I see with your way of averaging is that then you are potentially computing the average for individual stations differently across time. That means calculating error differently. It also means you could not get a correct yearly average if the two different systems (5 min. intervals vs. min/max avg) are both used within the same year. Nor could you calculate a baseline average over 30 years.

It is fine to average averages as long as the averages have the same number of values (i.e., 2).

While I see your point, I’d guess it’s quite probable that scientists have tested to see whether various systems of averaging produce significantly different results for the intended use.

Another point is that just because one dataset uses a particular way of computing monthly averages doesn’t mean this same method is used in every study. For instance, if researchers are using NCL (a computer language) and its way of computing monthly averages, the results could be different.

I’d already found the link ftp://ftp.ncdc.noaa.gov/pub/data/ghcn/v3/ on my own, but couldn’t find the documentation for averages (nor could I find the link you posted for GHCN_Monthly, but never mind).

You must feel frustrated getting the zillions of comments you have, often repeated. I admire your capacity for patience in answering them all.

Clyde Spencer
Reply to  Kristi Silber
October 4, 2018 8:39 am

Kristi,
There is something seriously wrong if two different computer languages, implementing the same algorithm, get significantly different results!

Kristi Silber
Reply to  Kip Hansen
October 3, 2018 11:01 pm

Kip, regarding the comment about intensive properties…

Is the difference between August 1999’s average temperature and the average August temperature for 1950-1980 a meaningless property? This is not a temperature, but an index of change.

Anomalies are not temperatures.

Can we not average the changes?

I understand that each station is only really representative of a point. But it seems to me that when you have enough points, and enough change in temperature at those points over time, the average is a meaningful estimate of the total change.
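The “index of change” idea can be sketched with invented numbers: each station’s anomaly is measured against that station’s own baseline, so stations with very different absolute climates contribute comparable changes to the average:

```python
# Invented numbers for illustration: two stations with very different
# absolute climates. Each station's baseline is its own long-term August
# mean; the anomaly is this August minus that station's own baseline.
stations = {
    "coastal": {"baseline_aug": 18.0, "aug_1999": 18.6},
    "inland":  {"baseline_aug": 27.5, "aug_1999": 28.1},
}

anomalies = {name: s["aug_1999"] - s["baseline_aug"] for name, s in stations.items()}
mean_anomaly = sum(anomalies.values()) / len(anomalies)

print(anomalies)       # each station changed by about +0.6 from its own baseline
print(mean_anomaly)    # about 0.6: an index of change, not a temperature
```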

Kristi Silber
Reply to  Kip Hansen
October 4, 2018 3:02 pm

Kip,

Regarding your “feature illustration”: you say in the text of your post, “the median would be identical for wildly different temperature profiles,” but as I (and perhaps others) have shown, it’s more accurate to say, “the median CAN be the same.” They are very different statements; the one in the post is very misleading. Perhaps you’d like to correct it.

Kristi Silber
Reply to  Kip Hansen
October 4, 2018 3:14 pm

Clyde,

“Kristi,
There is something seriously wrong if two different computer languages, implementing the same algorithm, get significantly different results!”

1) They are not implementing the same “algorithm.”
2) The way averages are calculated may vary depending on the aim of the research. The computer language function is just a short cut; there is no rule that they have to use it.
3) It hasn’t been demonstrated in this post (or in the comments, as far as I’m aware) that the results would be statistically different using the real-world data.

Kristi Silber
Reply to  Kip Hansen
October 4, 2018 7:35 pm

Clyde Spence, Michael Moon, others –

I don’t mean to ignore any of your comments. Too tired and irritated right now.

Reply to  Kristi Silber
October 3, 2018 10:41 am

Kristi Silber,

“What they are finding is that most of the warming in the last several decades is anthropogenic.”

They have found no such thing. They assume that, and cannot prove that their assumption is true. CO2 absorption of surface radiation has been saturated at less than 10 meters altitude since before the Industrial Revolution. The effect of increasing CO2 in the atmosphere occurs at the TOA, where the altitude at which the atmosphere is free to radiate to space increases, thus lowering the temperature at which the atmosphere radiates to space, thus retaining more energy in the atmosphere.

No one has ever calculated the magnitude of this effect from First Principles, so all the papers about Climate Sensitivity are based on the assumption that all warming since 1850, or 1880, or sometime, IS anthropogenic.

No one can prove that, though…

Kristi Silber
Reply to  Kristi Silber
October 3, 2018 10:20 pm

Clyde and William,

I’ve heard all those arguments. Have you heard all the counterarguments? Maybe you should do some investigating. I’m not going to get into a debate about CO2 right now. Maybe I should write a post I can refer to when these things come up.

Oy, the worst is the “no correlation between CO2 and temperature in historic times”! As if CO2 is the ONLY variable!

William Ward
Reply to  Kristi Silber
October 3, 2018 10:48 pm

Hi Kristi,

You said: “Have you heard all the counterarguments?”

I have heard a lot of them, I assume all of them … but none that made any sense or that actually did anything to change the facts. I can forget the past records and just focus on the past 50-100 years. Still, no correlation – so no causation.

You said: “As if CO2 is the ONLY variable!”

Hey, you are starting to sound like us? Did you do that on purpose? That is what we are saying (and then some). The alarmist position is that CO2 is “*THE* climate control knob”. We are just pointing out that there is no evidence of it, nor any scientific agreement about the mathematical relationship that governs it. If CO2 is one of the variables, but other variables are dominating its effect, then it “ain’t *THE* control knob”.

Sadly, I don’t think many scientists are even looking to introduce other variables into their equations.

You mentioned in a recent post: “If a scientist came up with a different explanation and had lots of supporting evidence for it, and others validated the results, he would be instantly famous.”

I suspect it would go more like this: If a scientist came up with a different explanation and had lots of supporting evidence for it, then he(/she) would be ostracized by his peers, would lose his research funding, would be threatened with his job, perhaps a lawsuit, and have his name added to a dozen websites dedicated to profiling and doxing him as a climate and science denier. His excessive drinking in high school would also be documented. Sad but very true.

Clyde Spencer
Reply to  Kristi Silber
October 4, 2018 8:46 am

Kristi,
You said, ” As if CO2 is the ONLY variable!” As William has already remarked, that is the problem. Advocates of AGW behave as if CO2 is the primary driving force. The reality is, its absorption bands overlap with water vapor and instead of forecasting warming based on a doubling of CO2, the forecasts should be based on a doubling of the sum of ALL ‘greenhouse gases,’ including water vapor. But, there are other influences such as land use changes, which largely get ignored. You act as though us card-carrying skeptics are unaware of the big picture.

Kristi Silber
Reply to  Kristi Silber
October 4, 2018 4:10 pm

Clyde,
“Advocates of AGW behave as if CO2 is the primary driving force. The reality is, its absorption bands overlap with water vapor and instead of forecasting warming based on a doubling of CO2, the forecasts should be based on a doubling of the sum of ALL ‘greenhouse gases,’ including water vapor. But, there are other influences such as land use changes, which largely get ignored. You act as though us card-carrying skeptics are unaware of the big picture.”

“Reality”? Whose “reality”?

I don’t care what “advocates” say, I’m talking about science. Most climate researchers believe that CO2 is the primary (the one having more than half the influence) driving force at least since the 1970s, perhaps since the 1940s (it would depend on estimates of the relative forcing of aerosols vs. CO2; I don’t know the figures). That is not the same as saying CO2 and temperature should be directly correlated! There are other factors involved that affect the relationship. These can be statistically teased out.

Land use change is not ignored! What makes you think that???

CO2 absorbs infrared radiation at wavelengths where there is a “window” in the absorption spectrum of water vapor, which is why it’s important. This is why the atmospheric temperature warms, and that in turn affects the amount of water vapor it can hold. While most believe that water vapor will increase (creating a positive feedback), it is not likely to double because much of it will precipitate out. A rise in aerosols may interact with water vapor to enhance cloud cover, thereby having a cooling effect beyond the direct effects of the light-scattering properties of aerosols. It’s all very complex, but scientists are gradually understanding it better all the time as more data come in and are analyzed.

It’s not that skeptics aren’t aware of the big picture, but we all have varying degrees of understanding of the big picture. In addition, those who get their information primarily from sites like this may have a different version of the big picture from that held by most scientists. It pays to keep in mind that this site and the people who post here are dedicated to influencing opinion in a certain direction.

(It’s worth looking at this paper I found yesterday just for the photo of a volcanic eruption. My uncle is one of the authors; he’s a physicist who studies atmospheric aerosols https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2010JD014447)

Kristi Silber
Reply to  Kristi Silber
October 4, 2018 5:27 pm

William,

“I can forget the past records and just focus on the past 50-100 years. Still, no correlation – so no causation.”

A direct correlation between CO2 and temp is not expected. There are other factors, which is why multivariate procedures are necessary to tease them out. All climate scientists recognize this. This doesn’t change the importance of CO2 in climate change.

You said: “As if CO2 is the ONLY variable!”

“Hey, you are starting to sound like us? Did you do that on purpose? That is what we are saying (and then some). The alarmist position is that CO2 is ‘*THE* climate control knob’”.

I don’t care what alarmists say, I’m interested in the science.

“We are just pointing out that there is no evidence of it”

Yes, there is. You know why I don’t provide evidence? I’m sick of people shrugging it off. “That’s not evidence!” “Scientists are frauds!” “It’s just natural variation!” “The error of satellite measurements of TOA radiation make it meaningless!” “Anyone can cherry-pick their studies!” etc. Plus, it’s not my job to teach people who could easily find the evidence if they actually were interested.

“…nor any scientific agreement about the mathematical relationship that governs it.”

Not sure what you mean by this.

“If CO2 is one of the variables, but other variables are dominating its effect, then it ‘ain’t *THE* control knob’.”

I never liked the “control knob” metaphor.

“Sadly, I don’t think many scientists are even looking to introduce other variables into their equations.”

This, more than anything else you’ve said, shows that you don’t know what climate scientists do. It is a huge false assumption.

Sadly, I think false assumptions about scientists are all too prevalent among skeptics. It demonstrates how they have been taught to think. This is a major problem! Skeptics are not dumb! Many of them are extremely intelligent. But anybody can be swayed by subtle (or not so subtle) forms of manipulation if they aren’t aware of it.

A great deal of effort goes into teaching scientists to be constantly aware of sources of bias, and ways to minimize it through the tools of science. That doesn’t mean they are immune, but it does make them different from the average.

“I suspect it would go more like this: If a scientist came up with a different explanation and had lots of supporting evidence for it, then he(/she) would be ostracized by his peers, would lose his research funding, would be threatened with his job, perhaps a lawsuit, and have his name added to a dozen websites dedicated to profiling and doxing him as a climate and science denier. His excessive drinking in high school would also be documented. Sad but very true.”

This is your second big assumption. This is mine: if someone really did demonstrate AGW was wrong, using high-quality science that was independently verified, he would be a hero. Think of the publicity! Think of the reaction from politicians! Skeptics would say, “I told you so!” And as long as it meant that the future was less troubling than people suspect, the world would breathe a sigh of relief. People are genuinely afraid! That is one thing many skeptics don’t seem to consider, instead pushing the idea that it’s all about money and politics.

I’ve looked into this question of retribution by going through lists of scientists who people say have suffered negative professional consequences and trying to see what really happened. I’m convinced it’s usually because of the way they have voiced their criticism. It’s not about disagreement, it’s about promoting the idea that scientists can’t be trusted. Dr. Ridd, for example, said on the TV news that whole institutions shouldn’t be trusted. Dr. Curry became active in skeptic blogs and testified to Congress that scientists were subject to all kinds of bias and cognitive errors. Patrick Michaels, Fred Singer, Sherwood Idso and Robert Balling were active in fossil fuel propaganda campaigns to spread distrust. And many of them were spokesmen for anti-AGW think tanks who spread distrust.

I don’t doubt that some vocal skeptics are not popular with their colleagues, but that’s a different thing from getting fired or losing funding. Besides, it goes both ways – look at all the complaints here about Gavin Schmidt and James Hansen and Michael Mann (whom everyone thinks committed fraud, when actually he’s just an unpleasant, unprofessional egotist, in my opinion. And yes, I have read the damaging emails.) I don’t doubt they have received threats – I know that Mann has.

…But maybe I shouldn’t be saying this. I often get attacked for writing what I think. I thank you very much for not making it personal and insulting me. We all have the right to express our opinions and ideas.

William Ward
Reply to  Kristi Silber
October 4, 2018 6:17 pm

Kristi,

You said: “I don’t care what “advocates” say, I’m talking about science.”

Science is just a process. Science is carried out by humans who can and do inject their politics and religious dogma into their work. Calling something science doesn’t make it so.

You said: “Most climate researchers believe that CO2 is the primary (the one having more than half the influence) driving force at least since the 1970s…”

Belief and faith are for religion, not science. Scientists may have a theory, but when no evidence can be produced to support the theory, then it’s fair game for criticism. It doesn’t matter if you *believe* 2+2 = 4. It is so, whether you believe it or not. F=ma governs how force works whether you believe it or not. F=ma did not become true after a majority of scientists voted it into existence.

You said: [CO2 is > 50% of the climate driving force]…but “that is not the same as saying CO2 and temperature should be directly correlated!”

If one thing is a function of another – if their relationship behaves according to a mathematical equation, then by definition they are correlated.

Many relationships involve multiple variables:
PV=nRT, for example. Any one of those quantities can be defined as a function of the other three variables and a constant. They are correlated. They are not “statistically” “teased out”. Either the variables and their relationship are understood, or they are not. With CO2, they are not.

You said: “It’s all very complex, but scientists are gradually understanding it better all the time as more data come in and are analyzed.”

I disagree. As time goes on, the number of papers with competing models and predictions only increases. We have no greater understanding of the effect of CO2 in the atmosphere today than we did 50 years ago. 0 = 0.

You said: “It pays to keep in mind that this site and the people who post here are dedicated to influencing opinion in a certain direction.”

It’s a site where anyone can express any opinion. Some here are dedicated to stopping people from hijacking science for the purpose of implementing their social and political agendas. But we don’t do so by lying and manipulating. We use the science that our opponents have abandoned.

William Ward
Reply to  Kristi Silber
October 4, 2018 7:04 pm

Hi Kristi,
My reply of a few minutes ago was actually to what you said to Clyde, but it turns out that it covers some of what you said in your reply directly to me. Let me try to cover the things I missed because our posts “crossed in the mail”.

I’ll begin with your ending comment: “I thank you very much for not making it personal and insulting me.”

I appreciate you saying that. I’m pretty forceful with my views but it is not intended to be insulting. If I insult then I fail my own personal standards of conduct. But forceful statements can be perceived as insulting, so I run that risk.

You said: “Not sure what you mean by this” [Referring to scientific agreement about a mathematical relationship that governs CO2 and climate].

Everything in science that is useful to humans eventually arrives at an understanding of a mathematical relationship, an equation. Example: F=ma. Force = mass times acceleration. If you lift a 10 lb weight vertically, you must counter the “acceleration” of gravity. If you know the value of the acceleration (a) and you know the value of the mass of the object (m), then you can calculate the force (F). This is useful. We don’t have 100 scientists with competing views of this equation (F=0.5ma, F = square root of mass * 1/a, etc.). With CO2 we want an equation that explains how it heats the atmosphere: T=(a)CO2(b)/(c), with a, b and c being variables or constants. Of course this could be a differential equation, etc. We don’t have this with CO2. Nor can we see a distinct relationship with measurements that show CO2 and temperature acting in such a way that T looks to be a function of CO2. Yes, there could be other variables in the equation, and if so, they appear to have a much larger effect on T, such that CO2 and T don’t appear to be related. They might “dominate” the equation. If so, it shoots down the CO2 theory.

CO2 is something that appears to be increasing because of humans. The view on that is not unanimous – nor is it proven. But, much of the climate alarmism thrust is around showing that humans are screwing up the works. The world would be such a nice paradise if humans would just go away – or at least use “green energy”. So there is definitely a strong current in the climate studies field to force the warming to be from CO2. Not all scientists do this, of course. But the current is strong in that direction.

Look into the cases of well-known scientists who have bucked the system and see what their peers did to them. I won’t expand here or mention names but I’m sure you can find them. Many have participated here and their stories are well documented. I read your comments on this and it is too big to address in one post. I suggest you search further. There are plenty of examples of losing jobs, losing funding, suffering lawsuits, being profiled on “denier” websites.

Many here have worked to do research or apply science for 3,4, 5 decades. We know a lot and have seen a lot. Our criticism is based upon substance. Not every “skeptic” is skeptical because of substance, just as not every alarmist is alarmed because of substance. I don’t consider myself a skeptic. I certainly don’t consider myself a “denier”. I don’t like labels but if someone has to give me one then I chose “climate lie rejector”. I have the background to evaluate the claims of climate alarmism and I reject it.

That’s it for now.

Rick C PE
Reply to  Willis Eschenbach
October 2, 2018 1:03 pm

Willis: While I agree that the Tmax/Tmin average is properly a “mean”, it is also true that it is the “median” when there are only two data points available. However, I was taught that use of the term mean is better in this situation. But I do agree with Kip’s main point that use of just the min/max average is a poor way to characterize a daily average temperature. This is just an issue of inadequate sample size. It certainly results in bias as it ignores the length of time during the diurnal cycle when temperatures are on the warm side vs. the cool side of the true average.

This would, I think, tend to bias daily averages lower during mid/upper latitudes in the summer and higher during the winter. Someone with access to high frequency temperature data should be able to compare daily averages using hourly or more frequent readings to the min/max average.
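Rick’s suggested comparison is easy to sketch. Here is a minimal example in Python with 24 invented hourly readings for a hypothetical day that is cool and flat overnight with a brief afternoon spike (all numbers are made up purely to illustrate the point):

```python
# 24 hourly readings (deg C) for a hypothetical day: flat and cool
# overnight, with a brief afternoon warm spike. Invented numbers.
hourly = [8, 8, 7, 7, 7, 7, 8, 9, 10, 12, 14, 16,
          18, 20, 19, 17, 15, 13, 12, 11, 10, 9, 9, 8]

t_max, t_min = max(hourly), min(hourly)
midrange = (t_max + t_min) / 2          # the historical (Tmax+Tmin)/2 value
true_mean = sum(hourly) / len(hourly)   # mean of all 24 readings

print(midrange)               # 13.5
print(round(true_mean, 2))    # 11.42
```

Because the warm spike is brief, (Tmax + Tmin)/2 sits well above the all-readings mean here; a day with a brief cold snap instead would bias it the other way.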

Steve O
Reply to  Rick C PE
October 3, 2018 5:05 am

As long as the measurements are made according to a consistent method, it should not matter. Researchers don’t care about the temperature, per se. What researchers care about is how the measurements change over time.

If you use a thermometer that consistently adds 0.27 degrees to every measurement, the data will be just as useful to a researcher looking at trends as if the thermometer were perfectly accurate.
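That intuition is easy to verify for a constant offset. A minimal sketch (the readings are invented):

```python
readings = [14.1, 14.3, 14.2, 14.6]        # hypothetical true temperatures
biased = [t + 0.27 for t in readings]      # thermometer consistently reads 0.27 high

# Successive changes are what a trend is built from; a constant offset
# cancels out of every difference.
changes = [b - a for a, b in zip(readings, readings[1:])]
biased_changes = [b - a for a, b in zip(biased, biased[1:])]

assert all(abs(c - b) < 1e-9 for c, b in zip(changes, biased_changes))
```

The caveat, of course, is that the bias must actually be constant; an offset that drifts over time ends up in the trend itself.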

Reply to  Willis Eschenbach
October 2, 2018 1:58 pm

Willis,
“I’m sorry, but the median is NOT the midpoint between the max and the min. It’s the value which half of the data points are above and half below.”

That was my reaction too, here. Some responded by pointing to a rule which is sometimes used to resolve a situation where there is an even number of values, and there are two candidates for the middle, so take their mean. But this is just something done to resolve an awkward situation, and usually has no real effect in a large set. It is not in the spirit of the median, which should be a member of the set, and it is absurd to apply such an exception to sets with only two data points.

Editor
Reply to  Nick Stokes
October 2, 2018 3:11 pm

Nick ==> You are not facing up to the actual procedure involved, but are hung up halfway, short of the controlling factor of whether we are in fact finding a median.

Whenever one has to put a data set in order of magnitude first and then find the midpoint between the largest and smallest values, one is finding a Median, regardless of the number of data points in the set. It is coincidental that the median and the mean of a two value set are the same. But, it is the case that the two values are selected by ordering the set by magnitude/size, and taking the highest and lowest. This is the process of finding a median.

I explained this as many times as I could without insulting the average reader….

In modern meteorological use, it is the absurdity of throwing out/ignoring all the intermediate values that is the big point.
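The coincidence for two values, and the divergence once intermediate readings are kept, can be seen directly with Python’s `statistics` module (the temperatures are invented for illustration):

```python
from statistics import mean, median

two_value_day = [7.0, 12.5]          # just Tmin and Tmax
# For a two-value set, median and mean coincide at the midpoint.
assert median(two_value_day) == mean(two_value_day) == 9.75

# Keep the intermediate readings and the two measures part company:
full_day = [7.0, 8.0, 8.5, 9.0, 12.5]
assert median(full_day) == 8.5                        # middle of the ordered set
assert (max(full_day) + min(full_day)) / 2 == 9.75    # the (Tmax+Tmin)/2 figure
```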

D. J. Hawkins
Reply to  Kip Hansen
October 2, 2018 5:03 pm

“In modern meteorological use, it is the absurdity of throwing out/ignoring all the intermediate values that is the big point.”

If the purpose is to preserve data for its own sake, I think you have a case. However, your own conclusion in your essay is that for meteorological purposes, it is, in fact, good enough. I empathize with your distress at the binning of perfectly good data, but you can’t have it both ways. It either matters, or it doesn’t.

steven mosher
Reply to  Willis Eschenbach
October 2, 2018 5:39 pm

yup,
funny that he goes on to misquote nick
and then pound the issue about being precise

kip own goal hansen

steven mosher
Reply to  Kip Hansen
October 3, 2018 5:02 pm

wrong

YOU: “Stokes maintains that any data of measurements of any temperature averages are apparently just as good as any other — that the median of (Tmax+Tmin)/2 is just as useful to Climate Science as a true average of more frequent temperature measurements”

He never says both are equally useful.

never says EQUALLY.
YOU added that.

he says

“. It’s not. But so what? They are both just measures, and you can estimate trends with them”

you added a judgement not in his text.
not precise reading kip

Joel O'Bryan
Reply to  Willis Eschenbach
October 2, 2018 6:43 pm

In the US CRN Daily data sets they calculate 2 values:
Field 8 T_DAILY_MEAN [7 chars] cols 55 — 61
Mean air temperature, in degrees C, calculated using the typical
historical approach: (T_DAILY_MAX + T_DAILY_MIN) / 2. See Note F.

Field 9 T_DAILY_AVG [7 chars] cols 63 — 69
Average air temperature, in degrees C. See Note F.

In the US CRN Monthly data sets they also calculate 2 average/mean values.

Field 8 T_MONTHLY_MEAN [7 chars] cols 57 — 63
The mean air temperature, in degrees C, calculated using the typical
historical approach of (T_MONTHLY_MAX + T_MONTHLY_MIN) / 2. See Note
F.

Field 9 T_MONTHLY_AVG [7 chars] cols 65 — 71
The average air temperature, in degrees C. See Note F.

All Note F says is:
F. Monthly maximum/minimum/average temperatures are the average of all
available daily max/min/averages. To be considered valid, there
must be fewer than 4 consecutive daily values missing, and no more
than 5 total values missing.

So, Pick your poison.
8 is the blue ball. Swallow it and all is well.
9 is the red ball. swallow that and down the rabbit hole you go.
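For what it’s worth, the validity rule in Note F is simple enough to sketch in a few lines of Python. The rule itself is quoted above; the `None`-for-missing convention and the function name are my own framing:

```python
def month_valid(daily_values):
    """Note F's rule: a month is valid if fewer than 4 consecutive daily
    values are missing and no more than 5 are missing in total.
    None marks a missing day."""
    missing_total = sum(v is None for v in daily_values)
    run = longest = 0
    for v in daily_values:
        run = run + 1 if v is None else 0
        longest = max(longest, run)
    return longest < 4 and missing_total <= 5

month = [15.0] * 30
month[3] = month[4] = month[5] = None   # 3 consecutive days missing, 3 total
assert month_valid(month)
month[6] = None                         # now 4 consecutive days missing
assert not month_valid(month)
```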

Joel O'Bryan
Reply to  Kip Hansen
October 2, 2018 8:19 pm

With CRN, a GHCN read-out on paper is only about as useful as toilet paper.

I realize that is still what most of the world has, but the UHI and the adjustments applied to GHCN make it little more than dirty toilet paper even before the wipe.

I just wrote a Letter to Editor to make my local paper in response to an article they published today claiming September 2018 was the hottest September on record. They got that info of course from the NWS and their HCN station at Tucson International Airport.

Tucson has had a CRN station since late 2002.

here’s what I wrote to the AZ Daily Star (aka the “AZ Red Star,” as it has long been known):

Subject: Tucson’s September not the hottest, sorry Alarmists

Tucson dot Com reports September was hottest on record, by AZ Star report Mikayla Mace, on 2 Oct 2018.
Wrong.
The problem — the NWS station at Tucson Airport has a significant Urban Heat Island (UHI) problem inflating its readings, like many readings the NWS uses from the Historical Climate Network.
NOAA in the early 2000’s created the Climate Reference Network (CRN) of modern automated weather stations away from UHI contamination sites.
Since 2003, Tucson has had a CRN station (#53131) just south of the AZ-Sonoran Desert Museum.
For the month of September, 2010 actually holds the record at average 29.4 deg C (84.9 F), with 2018 in 2nd at 29.1 deg C (84.4 F). 2003 is 3rd.
For 2012-2017 Septembers have been right at the average of 27.6 deg C; that is no trend – up or down.
Truth please, not climate lies.”

Reply to  Joel O'Bryan
October 2, 2018 8:41 pm

“Pick your poison”
No, according to what is said there they will normally give the same answer. It’s just the order in which you do addition, which is commutative. The difference is in the treatment of missing values. If you calculate the daily averages and then average, you’ll discard both Tmax and Tmin for that day if either is missing. If you average separately, you probably won’t. There will be a certain number (for each of max and min) required for the month, but they might not match. This may occasionally cause digestive problems, but is unlikely to be fatal.
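Nick’s point about commutativity, and the missing-value wrinkle, can be checked with a toy month of three invented (Tmax, Tmin) pairs:

```python
days = [(20.0, 10.0), (22.0, 11.0), (30.0, 12.0)]   # (Tmax, Tmin) per day, invented

# Route 1 (Field 8 style): average the monthly maxes and mins, then halve.
route1 = (sum(mx for mx, _ in days) / len(days)
          + sum(mn for _, mn in days) / len(days)) / 2

# Route 2: take each day's (Tmax+Tmin)/2, then average those.
route2 = sum((mx + mn) / 2 for mx, mn in days) / len(days)

assert abs(route1 - route2) < 1e-12   # identical with complete data

# Lose day 3's Tmin and the routes diverge: route 2 drops the whole day,
# while route 1 can still keep that day's Tmax.
route1_m = (sum([20.0, 22.0, 30.0]) / 3 + sum([10.0, 11.0]) / 2) / 2
route2_m = sum([(20.0 + 10.0) / 2, (22.0 + 11.0) / 2]) / 2
assert route1_m != route2_m           # 17.25 vs 15.75
```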

Geoff Sherrington
Reply to  Nick Stokes
October 3, 2018 3:27 am

Nick,
How does this ‘fix’ cope with the problem when values are missing because they were far enough from expectations to be considered outliers and culled?
If you are infilling in any way, you will get a potentially large error unless you sometimes infill with an implausible value similar to the one that was culled in the first place, but was valid.
Geoff

Reply to  Kip Hansen
October 3, 2018 9:41 pm

Kip,
“Didn’t you say you started your program with the GHCN_Monthly TAVG data?”
Yes I do. So do GISS and the others. But someone has to have done the monthly averaging, sometime.

Normally a max/min thermometer will return both values. However, pins stick, writing fails (can even be illegible). Etc. And if you read in the morning, a missed day will probably be spread over two, with yesterday’s max and today’s min missing.

Don K
Reply to  Willis Eschenbach
October 3, 2018 6:17 am

Willis,

Quite by chance, I happen to have looked up the Python statistics package median function about six hours ago.

—-
“Return the median (middle value) of numeric data.
When the number of data points is odd, return the middle data point. When the number of data points is even, the median is interpolated by taking the average of the two middle values:”

>>> median([1, 3, 5])
3
>>> median([1, 3, 5, 7])
4.0
—-

OK, let’s try R.

> med <- c(1, 3, 5, 7)
> print(med)
[1] 1 3 5 7
> median(med)
[1] 4

—-

Not exactly what I would have expected, but apparently we (you and I) have been going through life with a modestly incorrect understanding of “median”.

Clyde Spencer
Reply to  Don K
October 3, 2018 8:31 am

Don K,
Note also that Python reports an integer for the odd-number set (the original data), but reflects the interpolation step for the even-number set by adding a ‘decimal zero.’

Don K
Reply to  Clyde Spencer
October 3, 2018 10:37 am

Yep. Clearly Python has decided that the median is a float. Which seems reasonable to me. I’m not very familiar with R, but here’s an example where the interpolated value has a fractional part.

> med <- c(…)
> median(med)
[1] 4.5

Editor
Reply to  Don K
October 3, 2018 9:08 am

Don K ==> (To say I am sick of this little sticking point would be a great understatement…)

Khan Academy gives this method:

“To find the median:

Arrange the data points from smallest to largest.

If the number of data points is odd, the median is the middle data point in the list.

If the number of data points is even, the median is the average of the two middle data points in the list.”

The fact of finding the median is somewhat disguised by ordering the data and throwing out all but the Max and Min.
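The textbook recipe is mechanical enough to write down directly. A short sketch that reproduces the Python and R outputs quoted upthread:

```python
def textbook_median(values):
    """The Khan Academy procedure: sort, then take the middle point
    (odd count) or the average of the two middle points (even count)."""
    s = sorted(values)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else (s[mid - 1] + s[mid]) / 2

assert textbook_median([1, 3, 5]) == 3
assert textbook_median([1, 3, 5, 7]) == 4.0
assert textbook_median([12.5, 7.0]) == 9.75   # two-value set: identical to the mean
```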

Don K
Reply to  Kip Hansen
October 4, 2018 1:37 am

Kip, It came to me after a while that the “median” temperature isn’t really a median if there are more than a high and low temperature available. Although it’s hopefully close enough for alarmist work. The difference being that the true median is either the middle value or the mean of the two middle values. The “climate median” is instead the mean of the two end points. To give it a name, I propose “naidem” — a “backwards” median. (It does have an actual name, I’ve since realized: statisticians call the midpoint of the extremes the “midrange”.)

Clyde Spencer
October 2, 2018 11:37 am

Kip,

You remarked, “Climatologists concern themselves with the long-range averages that allow them to divide various regions into the 21 Koppen Climate Classifications and watch for changes within those regions.” I have previously made the point that we should be looking at all the Koppen climate regions and looking for patterns of changes in temperature and precipitation. However, I’m unaware that anyone is actively looking at this in the context of a warming Earth. I have looked at the BEST database and don’t recollect seeing anything as granular as the 21 regions. Are you aware of any of the various databases addressing this concern?

Editor
Reply to  Clyde Spencer
October 2, 2018 12:57 pm

Clyde ==> See the link for Koppen Climate Classifications .

Clyde Spencer
Reply to  Kip Hansen
October 2, 2018 1:34 pm

Kip,
Okay, I went to the link, and it is, as I suspected, a description of the Koppen climate regions. I was asking if anyone is tracking global climate change at a finer granularity than land, SST, and Arctic, which is all that I have seen reference to.

Editor
Reply to  Clyde Spencer
October 2, 2018 3:28 pm

Clyde ==> Dr. Uma Bhatt, at the University of Alaska Fairbanks did some work dividing Alaska into climate regions — my link to her presentation is currently broken, and I have emailed her for a workable link.

Reply to  Clyde Spencer
October 2, 2018 3:39 pm

Clyde, that analysis has been done by Swedish researchers (both named Chen, surprisingly).
The paper is: Using the Köppen classification to quantify climate variation and change: An example for 1901–2010
By Deliang Chen and Hans Weiteng Chen
Department of Earth Sciences, University of Gothenburg, Sweden
http://hanschen.org/uploads/Chen_and_Chen_2013_envdev.pdf

Their website is: http://hanschen.org/koppen/#home

My synopsis is https://rclutz.wordpress.com/2016/05/17/data-vs-models-4-climates-changing/

Clyde Spencer
Reply to  Ron Clutz
October 2, 2018 6:48 pm

Ron Clutz,
Thank you for the link. It is good to know that someone is looking at the situation at a regional scale. Their conclusion is that there is an increase in desert and a decrease in Arctic/tundra zones. That is not surprising.

Clyde Spencer
Reply to  Ron Clutz
October 2, 2018 8:38 pm

Ron Clutz,
I just read your synopsis at your website. I think that there is a problem with the color labels on the maps, particularly maps b and c.

Joe Born
October 2, 2018 11:42 am

For me that’s a new definition of “median.” Until today I’d have thought that it’s the value the temperature was below for a total of 12 hours and above for a total of 12 hours.

That’s why I come to this site. I learn new stuff all the time.

Alan Tomalty
Reply to  Joe Born
October 2, 2018 1:40 pm

I don’t know what you would call your metric, but it would make sense if we could disregard fast-moving storm systems. However, since fast-moving storm systems result in a rollercoaster ride of temperatures, your metric would not be sensible to use. It would also suffer from having to throw out the historical records, as Roy Spencer has pointed out.

Joe Born
Reply to  Alan Tomalty
October 2, 2018 2:24 pm

I don’t, either. I was just making an obviously too-obscure comment about how odd it was to introduce a discussion of median into this context.

However, if you did want to extend the concept to continuous quantities–and I can’t immediately supply a reason for wanting to–I do think the way I defined it seems pretty good.

D. J. Hawkins
Reply to  Joe Born
October 2, 2018 5:16 pm

The only issue I would raise is that even when speaking of “continuous quantities” you quickly discover you really aren’t. The best you can do is to pick a sample interval short enough to capture the rate at which the sampled quantity is changing. The Nyquist Sampling Theorem is useful here. However, I think that you are considering only the condition where the temperature is changing in a regular fashion over the diurnal period, where the lowest temperature is near midnight or sometime after, and the highest is in the early- to mid-afternoon. With a fast-moving weather front, it could be possible to have a double maximum or minimum for the day, like a two-humped camel. How do you derive the median in that case, since the rank ordering of values is definitely not in sync with their time order?
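A quick sketch of such a two-humped day (24 invented hourly readings) shows how far the (Tmax+Tmin)/2 midpoint can sit from the value that has half the readings above it and half below:

```python
from statistics import median

# Hypothetical frontal-passage day: a warm morning hump, a cold dip,
# then a second afternoon hump. Invented hourly values (deg C).
hourly = [10, 10, 11, 13, 15, 16, 15, 12, 8, 7, 7, 9,
          12, 14, 16, 17, 16, 14, 12, 11, 10, 10, 9, 9]

midrange = (max(hourly) + min(hourly)) / 2   # the (Tmax+Tmin)/2 figure
sample_median = median(hourly)               # half the readings above, half below

print(midrange, sample_median)   # 12.0 11.5
```

With hourly samples the median of the readings approximates the time-based median Joe Born describes, and it need not equal the midrange.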

Usurbrain
October 2, 2018 11:43 am

One has to look at the shape of the curve. One of the curves has a large portion of the entropy [the energy of warmth under the curve] at a low temperature, and the other has a large portion of the entropy at a point closer to the median, but still is not a good measurement of the entropy.
Experiment: take a very large set of random numbers, grouped into sets of ten, with five numbers less than zero and five greater than zero. Now, using the Hi/Lo of each set, make one graph of the medians and another of the means using all the numbers. Guaranteed the absolute average of all the numbers will be very, very close to ZERO. However, the graph of the medians will look like the sawtooth graph above, and the graph of the means will look like a smoothed version of the median graph.
Years ago I knew how to do that, back in the days of Lotus 123, but I have not played with that kind of stuff for decades. The graph will amaze you about the farce of AGW and how it is all chaos. Chaos on several levels.

October 2, 2018 11:51 am

I also found a lot of confusion on the issue of the true average. When analysing data I stick with looking at maxima and minima. Too much variation in Tavg.
Click on my name to read my final report.

Clyde Spencer
Reply to  HenryP
October 2, 2018 12:07 pm

HenryP,
Personally, I think that it is more instructive to see what Tmax and Tmin are doing over time than to collapse the information into a single number, Tavg, and only use that. It may double the processing to work with both Tmax and Tmin, but that is what computers are for!
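A toy example of why collapsing to Tavg loses information: if all the warming is in the nights, Tavg shows a trend half the size and says nothing about where it came from. (The annual values are invented; the helper is plain least-squares, my own framing.)

```python
def slope(ys):
    """Least-squares trend of ys against 0, 1, 2, ... (degrees per step)."""
    n = len(ys)
    xbar, ybar = (n - 1) / 2, sum(ys) / n
    return (sum((x - xbar) * (y - ybar) for x, y in enumerate(ys))
            / sum((x - xbar) ** 2 for x in range(n)))

tmin = [5.0 + 0.1 * yr for yr in range(10)]   # nights warming at 0.1/yr, invented
tmax = [20.0] * 10                            # days flat
tavg = [(mx + mn) / 2 for mx, mn in zip(tmax, tmin)]

print(round(slope(tmin), 3), round(slope(tmax), 3), round(slope(tavg), 3))
# 0.1 0.0 0.05
```

Tracking Tmax and Tmin separately recovers the full picture that the single Tavg number hides.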

D. J. Hawkins
Reply to  Clyde Spencer
October 2, 2018 5:21 pm

I agree (for what that’s worth). I do not know where I ran across it, but when someone pointed out that the GAST increase, such as it was, was mostly about warming nights and winters, suddenly I understood how the warmunists were trying to pull a fast one. But you could only see that by analyzing the separate trends of Tmax and Tmin.

steven mosher
Reply to  Clyde Spencer
October 3, 2018 5:04 pm

thats why we look at both and discuss both tmin and tmax in ipcc reports

hey where is uah record of tmin?

Clyde Spencer
Reply to  steven mosher
October 3, 2018 5:08 pm

Mosher,
You asked, “hey where is uah record of tmin?” You should ask that of my distant relative.

October 2, 2018 12:04 pm

The information lost when tmin and tmax are combined

https://tambonthongchai.com/2018/04/05/agw-trends-in-daily-station-data/

Editor
Reply to  Chaamjamal
October 2, 2018 1:02 pm

Chaamjamal ==> Thanks for the link.

C. Paul Pierett
October 2, 2018 12:21 pm

Kip, breaking out “AGW scientists” as a separate and distinct cohort is wrong. They belong to the class of people you label “climatologists”. Furthermore, ascribing a pre-determined result to their studies makes your categorization of them incorrect.

Editor
Reply to  C. Paul Pierett
October 2, 2018 1:04 pm

C. Paul Pierett ==> That’s one of the things that people may have differing opinions about.

jim hogg
Reply to  C. Paul Pierett
October 2, 2018 1:15 pm

Yeah: confirmation bias is the unfairly implied crime in that category. Confirmation bias is a temptation for all scientists – which they should resist of course. Many climate scientists genuinely believe that the recent recorded increase in temperature is driven by extra CO2 added to the atmosphere by us. And they’re as entitled to their theories/opinions/hypotheses as any other scientists, where the explanation isn’t known for certain. Implying confirmation bias is a pointless ad hominem that simply distracts from the job of identifying and evaluating the data effectively.

The essay raises a worthwhile question about whether or not the raw data is being used in the most accurate way. But as Mr Spencer pointed out at the top it’s difficult to see a better way that doesn’t invalidate earlier records.

Editor
Reply to  jim hogg
October 2, 2018 3:34 pm

Jim ==> There is a point in time when we could start using the better data, even if it must be in parallel with the poor data. There is little modern excuse for continuing to use faulty data once it is possible to use good data.

The same applies to the continued attempts to use the problematic (being nice here) GAST Land and Sea when we have satellite temperature records — the satellites were sent up as a solution to the problem of troublesome surface data.

Editor
Reply to  Kip Hansen
October 2, 2018 4:54 pm

— a point in time —

Don K
Reply to  Kip Hansen
October 3, 2018 8:03 am

At the risk of belaboring the obvious, satellites and surface measurements aren’t measuring the same thing. You shouldn’t switch data sets in mid-series or all sorts of unfortunate things are likely to happen, as with Mann et al. and that misbegotten hockey-stick.

Editor
Reply to  Don K
October 3, 2018 9:30 am

Don K ==> If you are speaking to me, I am referring in the first paragraph to the fact that all automated weather stations keep five-minute temp records and have for twenty or so years. We could/should be using those, at the very minimum.

The second paragraph points out the satellites were meant to resolve all this carping about the lousy surface record and its myriad problems.

Don K
Reply to  Don K
October 4, 2018 2:39 am

Kip

I guess I am speaking to you, although at the time I posted, I thought I was responding to Jim. Anyway, the point I was trying to make was that satellite temperatures are atmospheric temperatures, not surface temperatures. The two sets are surely correlated. But they are not the same thing. Extrapolating the lowest-altitude satellite temperature to the ground probably has some slop. The one isn’t a plug-in replacement for the other. Or even necessarily better.

If satellite temperatures are a replacement for anything, it’s radiosondes.

BTW, satellites are complicated and don’t generally work the way people tend to assume. For example, one of the satellites with MSUs is Landsat. I looked it up. As I suspected, it’s in a sun synchronous orbit which means that at low latitudes you will get two relatively brief passes a day roughly 12 hours apart. High latitude locations will be oversampled. Except for the poles which probably won’t be sampled at all. There may be other satellites that do sample the poles.

There’s probably an article somewhere that goes into the situation in detail. Maybe someone can point us at it.

Don K
Reply to  Don K
October 5, 2018 2:46 pm

Kip – “Most of the readers here — the knowledgeable ones — are quite familiar with the differences between sat and land station sets and what they measure.”

At one level, yes. At a lower level, no, with a few exceptions, they aren’t. Trouble is you’re dealing with a weak, noisy temperature signal and the numerous peculiarities of satellite temperature measurement may make a significant difference.

I’m not saying, don’t use satellite temperatures. I’m saying if you plan to do so to any great extent, take the time to learn quite a lot about them. If you want to discuss this further, let’s try email. You can get my email address from the second header line on my web site home page — http://donaldkenney.x10.mx/

Regards, Don

Kristi Silber
Reply to  Kip Hansen
October 4, 2018 3:21 pm

Kip,

Have you demonstrated statistically somewhere in the comments that some of the data (or the analysis) are faulty, using all the available data? My apologies if I missed it.

Geoff Sherrington
Reply to  jim hogg
October 3, 2018 3:48 am

Jim Hogg,
When you are uncertain of data quality for a given use, you turn to proper error estimation to assist your understanding of whether the numbers are fit for purpose.
We seldom see proper error analysis. Geoff.

October 2, 2018 12:38 pm

Clyde
In fact I am not sure what exactly is happening at the moment, as computers can now take a measurement every minute and report the true average for the day.
That is why I said that you cannot really compare data from now with data from the past, as per Kip’s min and max thermometer…which is a very relevant observation by Kip.
Does every country make its own rules??
So I stick with minima and maxima.

Editor
Reply to  HenryP
October 2, 2018 3:35 pm

Henry ==> Maxes and Mins are probably better than the fake Tavg — but still don’t tell us what we want to know about daily temperatures or about energy retention.

October 2, 2018 12:53 pm

The median is not (max + min)/2, except on a two-point data set.
The median of a two-point data set equals the mean of that two-point data set.

You cannot say “the average temperature of the day is not the same as (max + min)/2” unless you have a more-than-two-point data set, at which point the median of that set is NOT (Tmax + Tmin)/2.

I know what you are saying, but you are using the wrong terms to say it.

Editor
Reply to  Leo Smith
October 2, 2018 1:08 pm

Leo ==> That is rather pedantic.

For historic records, we have only a two-point data set — thus we find the median/mean which is the same — in this special case.

For modern records, for which there are 6-minute values for the whole 24-hour span, the procedure used is to take ONLY the Max and Min, ignore the rest of the set, and find the median of the fake two-value set and just call it the Average Daily Temperature.
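The difference Kip describes is easy to see numerically. Below is a minimal Python sketch with invented 6-minute readings (illustrative values, not real station data): the true mean uses every reading, while the official "daily average" discards all but the two extremes.

```python
import statistics

# Invented day of 6-minute readings (240 values): cool for most of the
# day, with a brief warm spike in the afternoon.
readings = [10.0] * 200 + [15.0] * 30 + [20.0] * 10

t_min, t_max = min(readings), max(readings)

true_mean = statistics.mean(readings)    # average of all 240 readings
midpoint = (t_max + t_min) / 2           # median of the two-point set {Tmin, Tmax}

print(f"true mean of all readings: {true_mean:.2f}")   # 11.04
print(f"(Tmax + Tmin)/2:           {midpoint:.2f}")    # 15.00
```

The two statistics agree only when the daily temperature curve is symmetric about its midpoint; any skew in the curve pulls them apart.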

Reply to  Kip Hansen
October 2, 2018 1:31 pm

Kip
Are you sure this is the same in every country of the world?

Editor
Reply to  HenryP
October 2, 2018 3:37 pm

It is the standard for the GHCN.

D. J. Hawkins
Reply to  Kip Hansen
October 2, 2018 5:30 pm

Kip;

Are you sure about the 6-minute interval? The USCRN page says they use a 5-minute average of 2-second readings.

taxed
October 2, 2018 1:12 pm

I think we get far too hung up on man-made data like “global mean temps” being the be-all and end-all of climate, and end up losing focus on what’s going on in the real world. According to the man-made data there has been noticeable warming over recent years.
But what’s been happening in the real world does not support this claim, as shown by the NH snow cover extent, which has been tracking sideways since the early 90’s. Also I myself have kept a 41-year record of the date of the first snow of winter in my local area, and over that time this has shown no warming trend. It’s things like this that make me doubt just how much use things like “global mean temps” really are when it comes to understanding climate.

Editor
Reply to  taxed
October 2, 2018 3:39 pm

taxed ==> If you have your first snow date record in tabular form, I’d like to see it. (list, spreadsheet, csv, etc)

taxed
Reply to  Kip Hansen
October 3, 2018 7:49 am

Kip Hansen
I have kept this record in a book, so am only able to write it down as a list. The record is from my local area in North Lincolnshire, England. It is as follows:

77/78 21st Nov 9.05am
78/79 27th Nov evening time
79/80 19th Dec night time
80/81 28th Nov early morning
81/82 8th Dec 11.15am
82/83 16th Dec 2.20pm
83/84 11th Dec 8.00pm
84/85 2nd Jan around 11.00pm
85/86 12th Nov morning
86/87 21st Nov around 1.00am
87/88 22nd Jan 1.05am
88/89 20th Nov about 12.30am
89/90 12th Dec dawn
90/91 8th Dec early morning
91/92 19th Dec 6.40pm
92/93 4th Jan early morning
93/94 20th Nov 10.30pm
94/95 31st Dec 10.50am [corrected per author, .mod]
95/96 17th Nov 10.27am
96/97 19th Nov 11.30am
97/98 2nd Dec 10.40pm
98/99 5th Dec 8.40am
99/00 18th Nov evening
00/01 30th Oct 9.00am
01/02 8th Nov morning
02/03 4th Jan early morning
03/04 22nd Dec early morning
04/05 18th Jan early morning
05/06 28th Nov afternoon
06/07 23rd Jan night
07/08 23rd Nov early morning
08/09 23rd Nov early morning
09/10 17th Dec morning
10/11 25th Nov about 9.30am
11/12 5th Dec morning
12/13 27th Oct 12.20am
13/14 27th Jan morning
14/15 26th Dec night
15/16 21st Nov morning
16/17 18th Nov morning
17/18 29th Nov 7.50am

taxed
Reply to  Kip Hansen
October 3, 2018 11:43 am

Yes, you can use the record for any future essay; just use my post name and that will be OK.

P.S. As you may wish to use it for a future essay, I rechecked the list for any errors, and with the 94/95 31st Dec date the time should have been 10.50am rather than 10.30am.

[Is not 10.50 am = 10:30 am ? 8<) .mod]

tty
October 2, 2018 1:16 pm

As for the discussion of medians it is true that strictly mathematically ”median” cannot be used for a continuum (like temperatures) since there is an infinite number of values in any interval, no matter how small.
In the real world however it is not possible to measure anything with infinite precision. In any range there is therefore only a limited number of possible measurement values, and a value with an equal number of possible higher and lower values can safely be regarded as a median.

But there are much worse problems with temperatures as a measure of energy in the climate system, particularly that the proper measure is actually enthalpy, not temperature. The enthalpy of air for a given temperature can be very different depending on pressure and the amount of water vapor.

Alan Tomalty
Reply to  tty
October 2, 2018 2:42 pm

You cannot measure enthalpy directly. Enthalpy is equal to internal energy + (pressure* volume)
If the pressure and volume are constant you can then calculate the change in enthalpy by measuring the difference between heat absorbed or released as the case may be. Since the atmosphere is not a closed system at equilibrium, the pressure is different at all levels and the volume constantly expands and contracts. Thus you cannot measure the difference in enthalpy in the atmosphere at 2 different points in time. The previous sentence points up an extremely important point as to why the earth’s climate system can never result in runaway global warming. The earth has a constantly changing volume of it’s atmosphere with pressure differentials depending on altitude. Because our planet atmosphere is dominated by N2 and O2 (which for the most part are non radiating gases) and the other small planets’ atmospheres are dominated by greenhouse gases, the earth’s atmosphere has a much smaller diurnal change in temperature from night and day. The 2 large planets of Jupiter and Saturn have atmospheres of hydrogen and helium which are not greenhouse gases but that is because their size creates a huge gravitational field which locks in the hydrogen and helium from escaping. The only way that the earth system could have runaway global warming would be for the earth’s atmosphere to contain an extreme amount of a greenhouse gas. That clearly is not going to happen for 2 reasons . 1) The sinks of CO2 are too large, oceans and vegetation on land. 2) The earth’s gravity is large enough to keep in nitrogen and oxygen from escaping but small enough to be able to respond easily from changes in water vapour ( which is a potent greenhouse gas) content; so that the changing volume of the earth’s atmosphere acts as a pressure release of any increased temperature. The earth’s atmosphere is NOT a greenhouse in the traditional sense. 
If there is a temperature increase, the volume expands to keep the pressure differentials more or less constant , and with conduction the warm air rises to carry the energy higher to eventual outer space. The volume then retracts. Because water vapour vastly outnumbers and outpowers the other greenhouse gases in our atmosphere , the above process is dominant. Until greenhouse gases become the majority, the above process will not change. Don’t hold your breath or waste your time waiting for that to happen.

D. J. Hawkins
Reply to  Alan Tomalty
October 2, 2018 5:37 pm

For atmospheric purposes, you can come pretty darn close by measuring the wet bulb temperature, or by using an enthalpy chart and measuring the dry bulb temperature and humidity. Stick in a correction for local barometric pressure and you should be golden. That would work for anywhere on the planet. Well, this one, anyway.

Alan Tomalty
Reply to  D. J. Hawkins
October 3, 2018 12:18 am

” or by using an enthalpy chart and measuring the dry bulb temperature and humidity.”

Dry bulb temperature is the actual air temperature as measured by climate scientists. The temperature stations are shielded from radiation and moisture. The wet bulb temperature measurement is no big deal. It is simply the lowest temperature that can be reached under the current ambient air conditions and still have evaporation. If the humidity is less than 100%, the wet bulb temperature will always be lower than the dry bulb temperature. When relative humidity is 100%, the wet bulb temperature is the same as the dry bulb temperature.

Dew point, on the other hand, is simply the lowest temperature to which the air can be cooled before condensation happens at 100% relative humidity. Relative humidity (expressed as a %) is the ratio of the partial pressure of water vapour to the equilibrium vapour pressure of water at a given temperature. It depends both on temperature and pressure. 100% relative humidity means that that parcel of air is saturated with water vapour and evaporation stops at that point. If the air cools below that point, condensation begins.

Absolute humidity, on the other hand, is the actual density of the water content in air, expressed in g/m^3 or g/kg. Specific humidity is the ratio of water vapour mass to the total mass of the moist air parcel.

All that to say, you won’t be able to calculate the enthalpy of the air parcel simply by taking wet bulb temperature measurements and using an enthalpy chart, because the atmosphere is 6200 miles high and has different pressures and temperatures all the way up. Interestingly, there are 5 layers of the atmosphere: troposphere, stratosphere, mesosphere, thermosphere and exosphere. Each successive layer behaves differently in temperature, switching the lapse rate from positive to negative or vice versa. The highest layer, the exosphere, is interesting. It is composed of hydrogen, helium, CO2, and atomic oxygen. There has been very little research carried out on this layer. It is actually the largest layer; it is 10,000 km thick, or 6,200 miles.

Alan Tomalty
Reply to  Alan Tomalty
October 3, 2018 12:25 am

I should have said the total atmosphere is 6700 miles thick or 11000 km.

Editor
Reply to  tty
October 2, 2018 3:41 pm

tty ==> “there are much worse problems with temperatures as a measure of energy in the climate system” — yes, of course there are…. I am just pointing out this one, which is the metric on which all Global Average Temperatures and Anomalies are based — not fit for purpose!

Bruce Lilly
October 2, 2018 1:25 pm

There are many errors in the essay. First, for the specific case of only two values, the mean and median are both given by the sum divided by two (as has been noted) (or, to avoid possible overflow issues, the lower value plus one half the difference between the upper and lower values, or equivalently the upper value minus one half the difference). In the case of more than two values, it is absolutely not the case that the median is unchanged for a given range. Here are two sets of 5 values with the same range:
3, 4, 5, 6, 12 and
3, 9, 10, 11, 12
The medians are 5 and 10, respectively (and the means are 6 and 9, respectively). The midpoint of the range is 7.5 for both sequences. In this particular pair of data sets, the median shows greater variation than the mean.

One commenter remarked that in the sequence 1, 2, 3, 4, 41, the value 41 may be an outlier. The value of the true median is that it is insensitive to up to 50% outliers in the data (this is known as the breakdown point), whereas the mean is affected by any outlier (the breakdown point for the mean is 0%). The median of these five numbers remains 3 whether the 41 is changed to 4 or to 400; a single outlier has no effect on the median of 5 values. Likewise, the median is unchanged if the minimum value is changed from 1 to -100 or to 2. Indeed, the median remains unchanged if both of those values are changed, because those two values comprise less than 50% of the five data points.
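Bruce's examples can be checked directly with Python's statistics module (the data sets are his; the assertions simply restate his claims):

```python
import statistics

# Two 5-value sets with the same range (3 to 12):
a = [3, 4, 5, 6, 12]
b = [3, 9, 10, 11, 12]
assert statistics.median(a) == 5 and statistics.median(b) == 10
assert statistics.mean(a) == 6 and statistics.mean(b) == 9
# The midpoint of the range, (3 + 12) / 2 = 7.5, is the same for both.

# Breakdown point: one outlier shifts the mean but leaves the median alone.
assert statistics.median([1, 2, 3, 4, 41]) == 3
assert statistics.median([1, 2, 3, 4, 400]) == 3
assert statistics.median([-100, 2, 3, 4, 400]) == 3  # two of five changed: still 3
assert statistics.mean([1, 2, 3, 4, 41]) == 10.2     # mean dragged by the outlier
```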

Clyde Spencer
Reply to  Bruce Lilly
October 2, 2018 7:28 pm

Bruce Lilly,
You misrepresented what I claimed. What I said was, “… interpolating the midpoint between two extreme values (Whatever you want to call it!) results in a metric that is far more sensitive to outliers than an arithmetic mean of MANY measurements.”

Bruce Lilly
Reply to  Clyde Spencer
October 3, 2018 6:50 am

Clyde Spencer,
I agree that the mid-range value is highly sensitive to outliers; somewhat more so than the mean (in the example set 1,2,3,4,41, the mid-range is 21.5 and the mean is 10.2).

I was referring to your comment:

In your example, depending on just what is being measured, one might justifiably consider the
“41” to be an outlier, and be a candidate for being discarded as a noise spike or malfunction in
the measuring device.

I was simply pointing out that the true median (not the mid-range value) is much less sensitive to outliers than the mean (the median of the full example set is 3, which is representative of the values with 41 excluded whereas neither the mean (10.2) nor the mid-range value (21.5) of the full set are representative of the values 1,2,3,4). The median can be used to filter outliers, either by a windowed median filter or by a repeated median filter, which is often used with noisy data.
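The windowed median filter Bruce mentions can be sketched in a few lines of Python; this is a generic illustration of the idea, not any particular implementation:

```python
import statistics

def median_filter(xs, window=3):
    """Running median: replace each point by the median of a window
    centred on it. Edges shrink the window (one common convention)."""
    half = window // 2
    return [statistics.median(xs[max(0, i - half): i + half + 1])
            for i in range(len(xs))]

noisy = [1, 2, 3, 400, 4, 5, 6]    # 400 is a noise spike
print(median_filter(noisy))        # [1.5, 2, 3, 4, 5, 5, 5.5] (spike removed)
```

Note how the single spike never survives a window of three, which is exactly the robustness property being described.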

Michael Carter
October 2, 2018 1:27 pm

Personally, I find the actual min and max more interesting and relevant than the median. Also, e.g., the number of days/year above/below x.

In my latitude the number of frosts recorded per year is worthwhile too.

Cheers

M

taxed
Reply to  Michael Carter
October 2, 2018 1:45 pm

100% agree.
It’s things like the number of days of frost, sunshine hours, days of rain, and the number of days of snow cover that are really the sort of things we should be looking at, rather than some man-made figure like “global mean temp”.

Usurbrain
October 2, 2018 1:57 pm

A much better measure of the warming is heating degree days and cooling degree days.
HDD alone would show if an area was warming or cooling. I have seen them on the internet going back to the early 40’s. I have tried in vain to find the DB again and have had no luck. Only trouble is that it was changed some time in the past. It is now based on 65 degrees, and I think it used to be hours below 50 degrees. Also it is not a true measure of the area under the graph, as it is determined by simply taking the average (mean) of the temperature below 65 degrees.

Example: The high temperature for a particular day was 33°F and the low temperature was 25°F. The temperature mean for that day was: ( 33°F + 25°F ) / 2 = 29°F

Because the result is below 65°F: 65°F - 29°F = 36 Heating Degree Days.

Note that this example calls using only two numbers the mean and not the median.
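The worked example above maps directly to code. A minimal sketch (the 65 °F base and the flooring at zero follow the convention described; the function name is illustrative):

```python
def heating_degree_days(t_max, t_min, base=65.0):
    """One day's heating degree days from the (Tmax + Tmin)/2 midpoint,
    as in the worked example above (base 65 degrees F, floored at zero)."""
    midpoint = (t_max + t_min) / 2
    return max(0.0, base - midpoint)

print(heating_degree_days(33, 25))  # (33 + 25)/2 = 29; 65 - 29 = 36.0 HDD
print(heating_degree_days(70, 60))  # midpoint above the base: 0.0 HDD
```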

D. J. Hawkins
Reply to  Usurbrain
October 2, 2018 5:50 pm

I don’t believe the base was ever 50°F.

See below for the EIA’s databases, going back to 1949:

https://www.eia.gov/totalenergy/data/annual/index.php#summary

CarloN
October 2, 2018 2:01 pm

If a systematic error is assumed, the absolute numbers (taken as median, mean or other central tendency measurements) may very well be wrong, BUT their variations (i.e. temperature increase/reduction) should be considered valid (since the systematic error applies equally to all values).

From a climatological standpoint the “mortal sin” would be the occurrence of a non-systematic error (i.e. unevenly distributed) with variable weight on the measured values, so as to produce a statistically significant trend in the time series.
I am not aware of any definitive argument that rejects the hypothesis of a significant increase of temperatures in recent decades; I also do not think that changing the measurement method will affect the direction of the trend.

Of course, on the causes of this warming, the science is far from settled too.

October 2, 2018 2:12 pm

The meteorological surface air temperature (MSAT) is a measurement of the air temperature inside a ventilated enclosure placed at eye level above the ground. The minimum and maximum MSATs contain information about two very different physical processes. The minimum MSAT is a rather approximate measure of the bulk air temperature of the local weather system as it is passing through. The maximum MSAT is some measure of the mixing of the warm air produced at the surface with the cooler air at the level of the thermometer.

The proper way to analyze the temperature record is to consider the minimum MSAT and the delta or difference between min and max MSAT as evidence of separate physical process. The average of these two temperatures has little physical meaning.

Although the details are complex, the underlying physical processes are relatively straightforward to understand. At night, when the surface and air temperatures are similar, surface cooling is limited mainly to net long wave IR (LWIR) emission from the surface through the atmospheric LWIR transmission window. This is nominally 50 +/- 50 Watts per sq. m. The magnitude of the cooling flux increases with decreasing humidity and decreases with increasing cloud cover. This is a consequence of the surface exchange energy. The downward LWIR flux from the lower troposphere balances out most of the blackbody emission from the surface. Almost all of the downward LWIR flux from the troposphere to the surface originates from within the first 2 km layer in the troposphere. The LWIR emission to space is decoupled from the lower troposphere by convection and the decrease in molecular linewidth. The upward and downward LWIR fluxes through the atmosphere are not equivalent. The concept of radiative forcing of an equilibrium average climate state is invalid.

During the day, the troposphere acts as an open cycle heat engine that transports the solar heat from the surface to the middle troposphere by convection. As the air ascends from the surface it expands and cools. This establishes the local lapse rate. The so called ‘greenhouse effect’ temperature is just the cooling produced by the ascent of an air parcel to a nominal 5 km level at a lapse rate of about -6.5 K per km.

The troposphere is unstable to convection. When the surface temperature exceeds the air temperature, convection must occur. The land surface must warm up until the excess heat is dissipated by convection. Under dry, full summer or tropical solar illumination conditions, the land surface temperature may reach 50 C or more. The net LWIR flux may reach 200 W per sq. m, but this is insufficient to cool the surface. 80% of the daily land surface cooling flux can easily come from convection. In addition, the surface heating creates a subsurface thermal gradient that conducts heat below the surface. As the surface cools in the afternoon, this thermal gradient reverses and the subsurface heat is returned to the surface.

A key concept here is the night time transition temperature at which the air and surface temperatures equilibrate and convection more or less stops. Cooling can then only occur by net LWIR emission. The transition temperature is normally set by the bulk air temperature of the weather system as it is passing through. This may change with local conditions (for example adiabatic compression during Santa Ana conditions in S. California). In many regions of the world, the prevailing weather systems are formed over the oceans and ‘carry’ the information of the ocean surface temperature with them as they move overland. For example, the minimum MSAT in most of California is set by the Pacific Ocean, and the fingerprint of the Pacific Decadal Oscillation (PDO) can be clearly seen in the weather station records.

There is also another piece of information in the MSAT record. This is the seasonal phase shift or time delay between the peak solar flux at summer solstice and the peak MSAT temperatures. This is normally 4 to 8 weeks after solstice. This phase shift can only come from the ocean temperature coupling. The penetration depth of the diurnal solar flux temperature change over land is only about 0.5 m. The land heat capacity in this case cannot produce the seasonal phase shift.

(There is also a phase shift or time delay of up to 2 hours or so between the peak solar flux at local noon and the maximum MSAT, but that is not recorded as part of the normal temperature record).

Over the last 200 years or so, the atmospheric concentration of CO2 has increased from about 280 to 400 ppm. This has produced an increase in downward LWIR flux to the surface of about 2 W per sq. m. It is simply impossible for this small change in LWIR flux to couple into the climate system in a way that can produce a measurable change in MSAT temperature. The penetration depth of the LWIR flux into water is less than 100 micron – the width of a human hair. The net cooling from the LWIR flux is mixed at the ocean surface with the cooling from the wind driven evaporation. There is almost no thermal gradient at the air-ocean interface to drive convection. The ocean surface must warm up until the water vapor pressure is sufficient to support the wind driven evaporation. The magnitude and variability in the wind speed is so large that it will obliterate any change in near surface temperature from 2 W per sq. m produced by CO2 – before it can couple into the bulk ocean below.

Please stop averaging temperatures. The climate information is in the minimum and delta (max-min) MSAT of each weather station in its local climate zone. The climate models must predict the real measurable variables at the station level – not some mythical average.

This is a rather short summary of a very complex topic. For further information please see:

Clark, R., 2013a, Energy and Environment 24(3, 4) 319-340 (2013) ‘A dynamic coupled thermal reservoir approach to atmospheric energy transfer Part I: Concepts’
http://venturaphotonics.com/files/CoupledThermalReservoir_Part_I_E_EDraft.pdf
Clark, R., 2013b, Energy and Environment 24(3, 4) 341-359 (2013) ‘A dynamic coupled thermal reservoir approach to atmospheric energy transfer Part II: Applications’
http://venturaphotonics.com/files/CoupledThermalReservoir_Part_II__E_EDraft.pdf

Reply to  Roy Clark
October 2, 2018 2:59 pm

“The troposphere is unstable to convection. When the surface temperature exceeds the air temperature, convection must occur.”

That’s wrong; it’s quite stable. Convection occurs if the temperature gradient exceeds the dry adiabatic lapse rate, about 10°C/km. There is an exception where moisture is condensing, and so the latent heat component of vertical transport is significant. But otherwise convection is pretty much limited to local discrepancies which generate thermals.

Greg Cavanagh
October 2, 2018 2:13 pm

This from an Engineering draftsman. i.e. a practical approach to design needs.

1. What are we trying to measure with temperature records?
I figure they started recording min/max because they had nothing else, and probably had no goal other than hope that it might be useful in the future for some reason.
2. What does (Tmax+Tmin)/2 really measure?
Nothing. It’s not a measurement of anything, and it doesn’t represent any temperature of any part of the day. I don’t know why they don’t do studies on the maximums and minimums. At least then they have real values to work with.
3. Does the currently-in-use (Tmax+Tmin)/2 method fulfill the purposes of any of the answers to question #1?
It’s a statistical curiosity only. Inferring any information from it at all is risky and likely to cause confusion.

Editor
Reply to  Greg Cavanagh
October 2, 2018 3:45 pm

Greg ==> “It’s a statistical curiosity only. ” +10

DaveW
Reply to  Greg Cavanagh
October 3, 2018 5:28 pm

I have to agree with Greg: I don’t see the point in using the midpoint between Tmin and Tmax for anything. It would seem a poor estimator of any actual average temperature at any one spot, given the daily variation in the distribution of actual temperatures depending on clouds, fronts, etc. Apologies for any misconceptions, but out of curiosity I just checked the last two days at my closest weather station (Gympie, Qld) – they record temperature to the nearest 0.1 degree Celsius every half hour.

So 48 observations a day: (sum / 48) for the mean; the median must be inferred as the midpoint between the two central temperatures (I hope I have this right); and the midpoint is (high + low)/2. Daily mean temperature (C): [Daily midpoint between Tmax and Tmin]: Inferred median were (yesterday) 17.7: [18.2]: 16.5; (day before) 16.6: [17.4]: 16.4.

Small sample size, but in both cases [(Tmax+Tmin)/2] overestimates both the 48-sample mean and the inferred 48-sample median. So what is the purpose of using this measure, and why call it a daily average? It is an average only of two non-randomly selected temperatures, so I can’t see any logic supporting calling it a mean temperature.

As median and midpoint are calculated the same way for a sample of two, then I guess it doesn’t matter which term you use, but I think ‘Daily Temperature Midpoint’ gives a clearer idea of what is actually being reported and used in calculations (‘median’ makes it sound more statistically relevant). Why anyone would care if the midpoint varied a few tenths of a degree, or even a degree, over time I have no idea. It seems a very poor estimator of even station daily average temperature let alone the heat content of the atmosphere.
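DaveW's three statistics can be reproduced with invented half-hourly numbers (illustrative values standing in for a 48-reading day, not the Gympie observations):

```python
import statistics

# 48 invented half-hourly readings: cool overnight, brief warm afternoon peak.
temps = [14.0] * 20 + [16.0] * 16 + [20.0] * 8 + [24.0] * 4

mean = statistics.mean(temps)                   # uses all 48 readings
median = statistics.median(temps)               # midpoint of the two central readings
midpoint = (max(temps) + min(temps)) / 2        # the "(Tmax + Tmin)/2" statistic

print(f"mean={mean:.1f}  median={median:.1f}  midpoint={midpoint:.1f}")
# mean=16.5  median=16.0  midpoint=19.0
```

As in DaveW's two real days, a brief warm spike inflates the midpoint well above both the mean and the median of the full sample.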

October 2, 2018 2:32 pm

“Maybe a graph will help illuminate this problem.”
That graph of Boulder data would illuminate better if the x-axis were clearer. It shows daily values. And there is a lot of scatter on that scale. But climate scientists deal with much longer period averages, and the scatter disappears. What could matter is a bias.

Well, there is some. I also analysed that Boulder data here, with a key plot here

It shows running annual averages, compiled either as min/max average, or average of the hourly readings (in black). With min/max, it depends on when you do the reading. I showed the effects of reading at various times (notionally, picking 24-hour periods in the hourly data). It does make a difference – this is the well-known bias that requires that a correction be made if the time of observation changes. But there is no qualitative difference between black and colors; the black just sits in the middle and tracks with the others.

The bias from TOBS just corresponds to the change that you might get from putting the thermometer in a different nearby location. It subtracts out when you take the anomaly.

Alan Tomalty
Reply to  Nick Stokes
October 2, 2018 2:59 pm

“The bias from TOBS just corresponds to the change that you might get from putting the thermometer in a different nearby location. It subtracts out when you take the anomaly.”

See below link as to why Nick Stokes is wrong again.

https://realclimatescience.com/2017/05/the-wildly-fraudulent-tobs-temperature-adjustment/

MrZ
Reply to  Nick Stokes
October 2, 2018 11:51 pm

“The bias from TOBS just corresponds to the change that you might get from putting the thermometer in a different nearby location. It subtracts out when you take the anomaly”

Why bother with it then?

Especially since TOBS must be really difficult to calculate accurately when (MIN+MAX)/2 relates to a true average like this:
http://cfys.nu/graphs/Boulder%20CRN%20July%202018.png

Reply to  MrZ
October 3, 2018 3:13 am

”Why bother with it then?”
We don’t. There is just a requirement for an adjustment if it changes. Just as when a station moves.

MrZ
Reply to  Nick Stokes
October 3, 2018 4:49 am

“We don’t” ???

On GHCN I don’t know but for USHCN surely a huge % of the stations are TOBS adjusted, especially before 1970.

Reply to  MrZ
October 3, 2018 6:52 pm

Yes, as I said, they are adjusted for known changes in TOBS. No change – no adjustment. Just as there is no adjustment for where a station is (except maybe UHI). Only if it moves.

Geoff Sherrington
Reply to  Nick Stokes
October 4, 2018 1:37 am

Nick,
Having trouble comprehending your calcs.
Have you included the act of resetting the MaxMin thermometers at some time each day?
Geoff

RCS
October 2, 2018 2:34 pm

I’m with Willis Eschenbach on this.

In a symmetric distribution the mean and median are identical (e.g.: Normal, rectangular) while in all other distributions (e.g.: Chi squared, exponential, etc), the mean and median will be different and the difference will depend on the distribution parameters (variance, degrees of freedom).

The argument when applied to temperature becomes slightly more complex, because looking at Delta T one is looking at the difference between samples from two different distributions. Depending on how the question is posed, this could tend towards a normal distribution with many samples, as a result of the Central Limit Theorem. However, there is no reason to suppose that the median of the difference between samples from two different distributions is any more informative than the mean.

Temperature is a time series and, while peak-to-peak differences tell one something about the signal, one cannot infer other measures, such as the mean, without making some pretty wild assumptions.

RCS
October 2, 2018 2:35 pm

The median is the value at which the integral of the probability distribution of a variable, taken with respect to that variable, equals one half.

Editor
Reply to  RCS
October 2, 2018 3:50 pm

RCS ==> Well, you can have your own definition if you want — I use standard everyday definitions and have relied on the Khan Academy to supply examples.

RCS
Reply to  Kip Hansen
October 3, 2018 5:08 am

Thank you. I use Kendall and Stuart, “The Advanced Theory of Statistics”, or, slightly easier, Hoel, “Introduction to Mathematical Statistics”. It’s not my definition. It is the proper definition, based on the integral of the probability distribution. From this follows the coincidence of the mean and median in symmetric distributions, and a difference that is a function of distribution parameters in non-symmetric distributions.

I think you will find that this definition is accepted by statisticians, and it is also the definition given in Wikipedia (which is probably not an authority).

gnomish
Reply to  RCS
October 3, 2018 7:52 am

when a man writes an essay and defines his terms so you know exactly what he means…
but you pick an argument over his choice of definitions…
is there a euphemism for that?

Clyde Spencer
Reply to  RCS
October 3, 2018 8:17 am

RCS,
What is the shape of the probability distribution of Tmax and Tmin? What do these two samples tell us about the shape of the PDF of the population from which they were drawn?

Editor
Reply to  RCS
October 3, 2018 8:34 am

RCS ==> For the rest of the world, the definitions and procedures used in the Khan Academy page are sufficient.

Distinguishing (Tmax+Tmin)/2 as the Median helps to disambiguate it, especially in modern records, from a true average (arithmetic mean). It is distinguished by the method of calculation:

“To find the median:
Arrange the data points from smallest to largest.
If the number of data points is odd, the median is the middle data point in the list.
If the number of data points is even, the median is the average of the two middle data points in the list.”

In historic records, the number of data points is even (there are only two), Tmax and Tmin.

In modern records, there are often records every 5-minutes. The procedure used to find Daily Tavg is:
1. “Arrange the data points from smallest to largest”
2. Ignore all the intermediate data points, creating a new data set consisting of only two points, Tmax and Tmin.
3. Since the number of data points in this new data set is even, the median is the average of the two middle data points in the list, the only data points there are.

The procedure followed is that for finding a Median of a data set, not that for finding a mean. It is by procedure a median, and closely related to a “mid-point” if we were doing something a bit different with classes and histograms.
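The three-step procedure Kip describes can be sketched in a few lines of code. This is a minimal illustration, assuming a hypothetical day of readings; the temperature values are made up for demonstration only.

```python
# Sketch of the procedure described above: the daily "average" in the
# historical record is the median of the two-point set {Tmin, Tmax},
# not the arithmetic mean of all readings.
def daily_tavg_historical(readings):
    """Collapse a day's readings to (Tmax + Tmin) / 2."""
    ordered = sorted(readings)             # 1. arrange smallest to largest
    two_point = [ordered[0], ordered[-1]]  # 2. keep only Tmin and Tmax
    return sum(two_point) / 2              # 3. median of this even, two-point set

def daily_tavg_true_mean(readings):
    """Arithmetic mean of every reading taken during the day."""
    return sum(readings) / len(readings)

# Hypothetical day of readings (e.g. abbreviated 5-minute samples):
readings = [50, 51, 53, 58, 60, 57, 52]
print(daily_tavg_historical(readings))  # (60 + 50) / 2 = 55.0
print(daily_tavg_true_mean(readings))   # 381 / 7 ≈ 54.43
```

Note that the two results differ whenever the intermediate readings are not distributed symmetrically between Tmin and Tmax.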

Clyde Spencer
Reply to  Kip Hansen
October 3, 2018 9:59 am

Kip,
Another argument for the naysayers is as follows:
Variance (s^2) is not defined for the median. However, for small samples (and two qualifies as small!), it is always recommended that the summation of squared differences, used to calculate variance from the mean, be divided by n-1, or in this case, 1. That is, the calculated variance would equal twice the square of half the range, i.e. range^2/2. This is also at odds with the Empirical Rule, which says the SD should be approximately range/4.

For example: Let’s assume that the daily Tmin and Tmax are 50 and 60, respectively. Assuming that we are calculating the mean, it would be 110/2, or 55. The variance would be ((50-55)^2 + (60-55)^2)/1, or 50. The SD would be ~7.1. The Empirical Rule would suggest that the SD should be about 10/4, or 2.5. It appears that the formula for calculating SD (s) is not applicable for only two samples. Thus, it loses its utility when treated as a mean. For that reason, and others already given, I think that the interpolated mid-point of only two samples is best thought of as a median, rather than a mean.
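Clyde’s arithmetic can be checked directly. This short sketch reproduces his example numbers (50 and 60 are his illustrative values, not real data):

```python
import math

# Two-sample "day": Clyde's illustrative Tmin and Tmax
tmin, tmax = 50.0, 60.0
mean = (tmin + tmax) / 2                               # 55.0
# Sample variance with the n-1 divisor (n = 2, so divide by 1)
var = ((tmin - mean) ** 2 + (tmax - mean) ** 2) / (2 - 1)  # 50.0
sd = math.sqrt(var)                                    # ~7.07
# Empirical Rule estimate: SD should be roughly range / 4
empirical_sd = (tmax - tmin) / 4                       # 2.5
print(mean, var, sd, empirical_sd)
```

The large gap between the computed SD (~7.1) and the Empirical Rule estimate (2.5) is the point: two samples are simply too few for the usual SD formula to be informative.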

RCS
Reply to  Kip Hansen
October 4, 2018 11:29 am

It depends on the dynamics of temperature during the day. The point is that temperature is a continuous signal, and its mean (which requires knowledge of the signal over the whole period) cannot necessarily be inferred from the extremes unless one knows the shape of the signal
(i.e. [max+min]/2 is exact for a sine wave).
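RCS’s point can be demonstrated numerically. The sketch below compares the midrange (max+min)/2 against the true mean for a pure sine and for an arbitrarily skewed waveform; both signals are made up for illustration, not real temperature data.

```python
import math

# Sample two periodic "daily" signals densely over one full period.
N = 100_000
ts = [2 * math.pi * k / N for k in range(N)]

def midrange(xs):
    return (max(xs) + min(xs)) / 2

def true_mean(xs):
    return sum(xs) / len(xs)

sine = [math.sin(t) for t in ts]                            # symmetric cycle
skewed = [math.sin(t) + 0.3 * math.cos(2 * t) for t in ts]  # skewed cycle

print(midrange(sine) - true_mean(sine))      # ~0: midrange matches the mean
print(midrange(skewed) - true_mean(skewed))  # ~-0.29: midrange is biased
```

For the symmetric sine, the midrange and the true mean coincide; add a modest second harmonic and the two diverge, even though both signals have the same period and zero true mean.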

RCS
Reply to  Kip Hansen
October 4, 2018 11:37 am

Astonishing!
This web page is aimed at 12-year-olds as far as I can see. There is a little more underlying mathematics in statistics than you appear to imagine.

I wouldn’t worry about it though.

[Be specific. Which web page are you complaining about? .mod]

RCS
Reply to  Ki