Climate Science Double-Speak: Update

Update by Kip Hansen

 

Last week I wrote about UCAR/NCAR’s very interesting discussion on “What is the average global temperature now?”.

[Adding link to previous post mentioned.]

Part of that discussion revolved around the question of why current practitioners of Climate Science insist on using Temperature Anomalies — the difference between the current average temperature of a station, region, nation, or the globe and its long-term, 30-year base period, average — instead of simply showing us a graph of the Absolute Global Average Temperature in degrees Fahrenheit or Celsius or Kelvin.

Gavin Schmidt, Director of the NASA Goddard Institute for Space Studies (GISS) in New York and co-founder of the award-winning climate science blog RealClimate, has come to our rescue to help us sort this out.

In a recent blog essay at RealClimate titled “Observations, Reanalyses and the Elusive Absolute Global Mean Temperature”, Dr. Schmidt gives us the real answer to this difficult question:

“But think about what happens when we try and estimate the absolute global mean temperature for, say, 2016. The climatology for 1981-2010 is 287.4±0.5K, and the anomaly for 2016 is (from GISTEMP w.r.t. that baseline) 0.56±0.05ºC. So our estimate for the absolute value is (using the first rule shown above) is 287.96±0.502K, and then using the second [the first and second rules have to do with estimating the uncertainties – see Gavin’s post], that reduces to 288.0±0.5K [2016]. The same approach for 2015 gives 287.8±0.5K, and for 2014 it is 287.7±0.5K. All of which appear to be the same within the uncertainty. Thus we lose the ability to judge which year was the warmest if we only look at the absolute numbers.”

You see, as Dr. Schmidt carefully explains for us non-climate-scientists, if they use Absolute Temperatures the recent years are all the same — no way to say this year is the warmest ever — and, of course, that just won’t do — not in “RealClimate Science”.
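For readers who want to check the arithmetic themselves, here is a minimal Python sketch (not Dr. Schmidt’s code, just the standard quadrature rule his quoted 0.502 figure implies) that reproduces the 288.0 ± 0.5 K estimate from the numbers quoted above:

```python
import math

# Numbers quoted from the RealClimate post
climatology, clim_unc = 287.4, 0.5   # 1981-2010 climatology (K) and its uncertainty
anomaly, anom_unc = 0.56, 0.05       # 2016 GISTEMP anomaly (K) and its uncertainty

# Central estimate: add the anomaly to the climatology
absolute_2016 = climatology + anomaly                  # 287.96 K

# Treating the two uncertainties as independent, combine them in quadrature
combined_unc = math.sqrt(clim_unc**2 + anom_unc**2)    # ~0.502 K

print(f"2016 absolute estimate: {absolute_2016:.2f} ± {combined_unc:.3f} K")
print(f"Rounded to the reported precision: {round(absolute_2016, 1)} ± {round(combined_unc, 1)} K")
```

Because the ±0.5 K climatology uncertainty dominates, 2014, 2015 and 2016 all overlap within the error bars in absolute terms, which is exactly the point being made.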

# # # # #

Author’s Comment Policy:

Same as always — and again, this is intended just as it sounds — a little tongue-in-cheek but serious as to the point being made.

Readers not sure why I make this point might read my more general earlier post:  What Are They Really Counting?

# # # # #

 

289 Comments
August 19, 2017 7:41 pm

Why use anomalies instead of actual temperatures?
They produce swingier trends?

Walter Sobchak
August 19, 2017 8:22 pm

Another reason to use anomalies instead of temperatures is that the graph of anomalies can be centered at zero and show increments of 0.1°. It can make noise movements look significant. If you use temperatures, any graph should show kelvin from absolute zero. Construct a graph using those parameters, and the “warming” of the past 30 years looks like noise, which is what it is. A 1 K movement is only ~0.35%, not much. It is just not clear why we should panic over a variation of that magnitude.
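To make the scaling point concrete, here is an illustrative sketch using made-up numbers (a synthetic series near 288 K, not any official dataset) that plots the same data once on a tight anomaly axis and once on a Kelvin axis starting at zero:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
years = np.arange(1987, 2017)
# Synthetic "global mean" series: ~288 K with a gentle trend plus noise
temps_k = 288.0 + 0.01 * (years - years[0]) + rng.normal(0, 0.1, years.size)
anoms = temps_k - temps_k.mean()

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

ax1.plot(years, anoms)
ax1.set_ylim(-1, 1)              # tight anomaly axis: the wiggles look large
ax1.set_ylabel("Anomaly (K)")
ax1.set_title("Anomaly scale")

ax2.plot(years, temps_k)
ax2.set_ylim(0, 300)             # full Kelvin axis: the same data looks flat
ax2.set_ylabel("Temperature (K)")
ax2.set_title("Absolute scale from 0 K")

for ax in (ax1, ax2):
    ax.set_xlabel("Year")
plt.tight_layout()
plt.show()

print(f"A 1 K change is {1/288:.2%} of 288 K")   # about 0.35 %
```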

Greg
Reply to  Walter Sobchak
August 20, 2017 12:57 am

” If you use temperatures, any graph should show Kelvin with absolute zero. ”
nonsense, you scale the graph to show the data in the clearest way with appropriately labelled axes.

hunter
Reply to  Greg
August 20, 2017 4:36 am

“Clearest”? or “most dramatic for our sales goals?”
There is a fine line between the two.
If 0.1 degree actually made an important difference to anything at all, then maybe scales used today would be informative.
Instead they are manipulative, giving the illusion of huge change when that is not the case.

hunter
Reply to  Walter Sobchak
August 20, 2017 4:40 am

If the scale simply reflected the real range of global temps, the graph would represent the changes honestly and people could make informed decisions.
That is counter to the goals of the consensus.

Alan Davidson
August 19, 2017 8:22 pm

Isn’t the real answer that if actual temperatures were used, graphical representations of temperature vs time would be nice non-scary horizontal lines?

hunter
Reply to  Alan Davidson
August 20, 2017 4:37 am

Yep.
“Keep the fear alive” is an important tool in the climate consensus tool kit.

BigBubba
August 19, 2017 8:53 pm

From a management perspective it always pays to hire staff that give you 10 good reasons why something CAN be done rather than 10 good reasons why something CAN’T be done:
So the question is: Why has the temperature data not been presented in BOTH formats? Anomaly AND Absolute.

crackers345
Reply to  BigBubba
August 19, 2017 8:59 pm

it has been presented
in both formats.
see karl et al’s 2015 paper in
Science.

hunter
Reply to  crackers345
August 20, 2017 4:41 am

But only the manipulative, fear-inducing scary scale is used in public discussions.

crackers345
Reply to  crackers345
August 22, 2017 9:27 pm

hunter – conclusions are independent of scale.
obviously.

crackers345
Reply to  crackers345
August 22, 2017 9:28 pm

kip – link?

crackers345
Reply to  Kip Hansen
August 22, 2017 9:35 pm

kip – giss doesn’t quote an
absolute temperature
their site has a long faq answer
about why not. read it.
https://data.giss.nasa.gov/gistemp/faq/abs_temp.html

TheOtherBobFromOttawa
August 19, 2017 9:12 pm

This is a very interesting discussion. I’ve been thinking about this for some time. Consider the following.
The temperature anomaly for a particular year, as I understand it, is obtained by subtracting the 30-year average temperature from the temperature for that year. Assuming both temperatures have an error of +/- 0.5C, the calculated anomaly will have an error of +/- 1.0C. When adding or subtracting numbers that have associated errors, one must ADD the errors of the numbers.
So the anomaly’s “real value” is even less certain than either of the 2 numbers it’s derived from.

Greg
Reply to  TheOtherBobFromOttawa
August 20, 2017 1:00 am

If you can argue that the errors are independent and uncorrelated you can use the RMS error but yes, always larger than either individual uncertainty figure.

richard verney
Reply to  Greg
August 20, 2017 1:31 am

Let me correct that: it is not whether you can argue that the errors are independent and uncorrelated, but whether they truly are independent and uncorrelated.
Yet in the climate field it would appear that the errors are neither independent nor uncorrelated. There would appear to be systematic biases, such that the uncertainty is not reduced.
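The two cases being discussed here can be written down in a couple of lines; a minimal sketch with assumed ±0.5 °C figures (illustrative only):

```python
import math

u_station = 0.5     # assumed uncertainty of a single yearly value (°C)
u_baseline = 0.5    # assumed uncertainty of the 30-year baseline (°C)

# Fully correlated (systematic) errors: uncertainties add linearly, worst case
u_correlated = u_station + u_baseline                       # 1.0 °C

# Independent, uncorrelated errors: combine in quadrature (root-sum-square)
u_independent = math.sqrt(u_station**2 + u_baseline**2)     # ~0.71 °C

print(f"Fully correlated:  ±{u_correlated:.2f} °C")
print(f"Independent (RSS): ±{u_independent:.2f} °C")
# Either way the anomaly is no more certain than the numbers it is built from.
```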

crackers345
Reply to  TheOtherBobFromOttawa
August 22, 2017 9:37 pm

+/- 0.5 C is way too high, esp
with modern equipment

Mark Johnson
August 19, 2017 9:40 pm

The take-away, and this is to be found in other disciplines as well, is “never let the facts get in the way of a good story.” The Left and the media just love to apply it.

August 19, 2017 10:00 pm

The desperation is to find some sort of signal that shows something important about the climate, so they are treating sample noise as data these days.

August 19, 2017 10:53 pm

There is nothing anomalous about the global average temperature as it changes from year to year (well there wouldn’t be if such a thing as global average temperature existed). The global average temperature has always varied from year to year, without there being any anomalies.

crackers345
Reply to  Phillip Bratby
August 22, 2017 9:37 pm

explain the long-term trend

Aristoxenous
August 19, 2017 11:28 pm

The alarmists [NASA, NOAA, UK MET Office] cannot even agree among themselves what ‘average global temp.’ means. Freeman Dyson has stated that it is meaningless and impossible to calculate – he suggests that a reading would be needed for every square km. Like an isohyet, is the measurement going to be reduced / increased to a given density altitude? What lapse rates – ambient or ISO?
The satellite observations are comparable because they relate to the same altitude with each measurement, but anything measuring temps near the ground is a waste of time and prohibitively expensive at one station per square km, or even per 100 square km.

crackers345
Reply to  Aristoxenous
August 22, 2017 9:39 pm

why every sq km?
temperature stations aren’t free.
so the question is, what station density gives
the desired accuracy?
and your answer is?
show your math

Aristoxenous
August 19, 2017 11:30 pm

ISA not ISO.

August 19, 2017 11:34 pm

As Dr Ball says, and I agree, averages destroy the accuracy of data points.
Given we need accuracy for science (and we do), absolute temperatures would be used and need to be used; science is numbers, actual numbers, not averaged numbers. If science worked with averages we’d never have had steam engines.
Take model runs.
100 model runs. Out of those 100 runs, one is the most accurate (no two runs are the same), so one must be the most accurate, but we can’t know which one, and accuracy is really just luck given the instability of the produced output.
Because we do not understand why, or which run is the accurate one, we destroy that accuracy by averaging it with the other 99 runs.
Probability is useless in this context, as the averages and probabilities conceal the problem: we don’t know how accurate each run is.
This is then made worse by using multiple model ensembles, which serve to dilute the unknown accuracy even more, to the point where we have a range of 2 C to 4.5 C or above. This is not science; it is not probability; it is guessing.
The only use of piling on model ensembles is to increase the range of “probability”, and this probability does not relate to the real physical world; it’s a logical fallacy.
The ranges between different temperature anomaly data sets perform the same function as the wide-cast net of model ensembles.
Now you know why they don’t use absolute temperatures: using those increases accuracy, reduces the “probabilities” and removes the averages which allow for the wide-cast net of non-validated “probabilities”.
The uncertainty calculations are rubbish. We are given uncertainty from models, not the real world; the uncertainty exists only in the averages and probabilities, not in the climate and actual real-world temperatures.

Reply to  Mark - Helsinki
August 19, 2017 11:42 pm

NOAA’s instability and wildly different runs prove my point. An average of garbage is garbage.
If NOAA performs 100 runs, take the two that vary most; that is your evidence that they have no idea.

richard verney
Reply to  Mark - Helsinki
August 20, 2017 1:27 am

Or at any rate, it gives an insight into the extent of error bounds.

Reply to  Mark - Helsinki
August 20, 2017 4:10 am

Speaking of models and the breathtaking circularity inherent in the reasoning of much contemporary Climate Science!
The reliability of sampling error estimates (in the application of anomalies to large-scale temperature averages in the real world) is tested using temperature data from 1000-year control runs of GCMs! (Jones et al., 1997a)
And that is a real problem, because the models have the same inbuilt flaw; they only output gridded areal averages!
Thus, the tainting of raw data occurs in the initial development of the station data set, because spatial coherence is assumed for nearby series in the homogenisation techniques applied at this stage (Where many stations are adjusted and some omitted because of “anomalous” trends and/or “non climatic” jumps).
The aggregation of the “raw” data (Gridding in the final stage.) yet again fundamentally changes its distribution as well as adding further sampling errors and uncertainties. Several different methods are used to interpolate the station data to a regular grid but all assume omnidirectional spatial correlation, due to the use of anomalies.

Reply to  Mark - Helsinki
August 20, 2017 7:39 am

Grids set to a preferred size and position only serve to fool people.
We need a scientifically justified distance circumference for each data point, grounded in topography and site location conditions (Anthony’s site survey would be critical for such).
Mountains, hills and all manner of topography matter, as do local large water bodies, as well as the usual suspects of urbanisation etc.
This is a massive task and we are better off investing everything into satellites and developing that network further to solve some temporal issues for better clarity.
Still, sats are good for anomalies if they pass the same location at the same time each day, but we should depart from anomalies because they are transient and explaining why is nigh impossible.
A 50 km deep chunk of the atmosphere is infinitely better than the surface station network for more reasons than not.
Defenders of the surface data sets are harming science.

Reply to  Mark - Helsinki
August 20, 2017 7:42 am

With regard to surface data sets, a station with a local lake, hills and a town: all of that needs to be accounted for and solved. Wind speed data also needs to be incorporated to improve the data.
This is not happening. It is never going to happen.

Reply to  Mark - Helsinki
August 20, 2017 12:30 pm

I agree, Kip, that was my point about it all: models are not for accuracy, but still, out of 100 runs one is the most accurate and the other 99 destroy that lucky accuracy.
My point also is that they don’t want accuracy (as they see it), because what if a really good model ran cool?
That won’t do.
They need a wide-cast net to catch a wide range of outcomes in order to stay relevant.

Reply to  Mark - Helsinki
August 20, 2017 12:32 pm

and to say, oh look, the models predicted that.
Furthermore, NOAA’s model output is an utter joke: if, as I said, you take the difference between the two most different runs from an ensemble, they vary widely, which shows the model is casting such a wide net that it is hard to actually say it’s wrong (or way off the mark).
Of course, we can’t model chaos. 🙂
Giving an average of chaos is what they are doing, and it’s nonsense.

crackers345
Reply to  Mark - Helsinki
August 22, 2017 9:41 pm

Mark – Helsinki –
>> With regards to surface data sets, a station with a local lake hills and town, all of that needs to be accounted for and solved. Wind speed data also needs to be incorporated to improve the data. <<
not if the station
hasn't moved.
no one is interested in
absolute T.

Reply to  Mark - Helsinki
August 19, 2017 11:44 pm

you can simply calculate the uncertainty for real in the 100 model runs by measuring the difference between the two most contrary runs. Given the difference in output per run at NOAA, that means the real uncertainty in that respect is well in excess of 50%

crackers345
Reply to  Mark - Helsinki
August 22, 2017 9:42 pm

no.
that’s like saying you can flip a coin 100 times, and do this 100 times, and the
uncertainty is the max of the max and min counts.
that’s simply not how it’s done — the standard deviation
is easily calculated.
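For what it is worth, the two spread measures being argued about are easy to compare on a toy ensemble; the sketch below uses 100 synthetic “runs” drawn from a normal distribution (illustrative only, not NOAA output):

```python
import numpy as np

rng = np.random.default_rng(42)

# Pretend ensemble: 100 model runs of a projected warming figure (°C), synthetic
runs = rng.normal(loc=3.0, scale=0.6, size=100)

spread_range = runs.max() - runs.min()   # the "two most contrary runs" measure
spread_std = runs.std(ddof=1)            # the standard-deviation measure

print(f"max - min of 100 runs: {spread_range:.2f} °C")
print(f"standard deviation:    {spread_std:.2f} °C")
# For ~100 draws from a normal distribution the full range is typically about
# five standard deviations, so the two measures tell very different stories.
```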

Tom Halla
Reply to  crackers345
August 22, 2017 9:52 pm

crackers, that is an invalid use of statistics. It is more analogous to shooting at 100 different targets with the same error in aim, not like measuring the same thing 100 times. The error remains the same, and does not even out.

crackers345
Reply to  Mark - Helsinki
August 22, 2017 9:58 pm

no tom. shooting isn’t random; its results
contain several biases.
a true coin, when flipped sufficiently, does not

Tom Halla
Reply to  crackers345
August 22, 2017 11:33 pm

With both shooting and taking a temperature reading multiple times over a span of time, one is doing or measuring different things multiple times, not measuring the same thing multiple times. Coin tosses are not equivalent.

Reply to  Mark - Helsinki
August 19, 2017 11:48 pm

As in, take 100 runs and calculate how far the model can swing in either direction; for this you only need the two most different runs, and there is your uncertainty.

Reply to  Mark - Helsinki
August 22, 2017 2:37 am

Kip ==> This following part of my comment was about data collection in the real world:

Thus, the tainting of raw data occurs in the initial development of the station data set, because spatial coherence is assumed for nearby series in the homogenisation techniques applied at this stage (Where many stations are adjusted and some omitted because of “anomalous” trends and/or “non climatic” jumps).
The aggregation of the “raw” data (Gridding in the final stage.) yet again fundamentally changes its distribution as well as adding further sampling errors and uncertainties. Several different methods are used to interpolate the station data to a regular grid but all assume omnidirectional spatial correlation, due to the use of anomalies.

I was trying to show how the “fudge” is achieved in the collection of raw data and how circular it is to then use gridded model outputs to estimate the sampling errors of that very methodology! 😉

August 19, 2017 11:48 pm

For your readers not familiar with physics. Gavin Schmidt says “…The climatology for 1981-2010 is 287.4+/-0.5K, and the anomaly for 2016 is (from GISTEMP w.r.t. that baseline) 0.56+/-0.05ºC.”
C stands for Celsius or Centigrade. One degree C is the same size as one kelvin (K), except that zero degrees C = 273.15 K. (In theory no temperature can be less than 0 K, absolute zero.)
Why this is important for climate is that the governing equation, the Stefan-Boltzmann law, expresses temperature (T) in kelvin, in fact T to the power of 4 (T^4), or (T*T*T*T).
https://en.wikipedia.org/wiki/Stefan–Boltzmann_law
You can argue that this can be handled by using 14.2 degrees Celsius (287.4 minus 273.2) in the equation, because all the temperatures can be converted by adding 273.2 to the measurements.
But then you have to argue that the error in 0.56+/-0.05 is acceptable. An error of 0.05 in 0.56 is about nine per cent of the anomaly, whereas 0.5 K is less than 0.2 per cent of 287.4 K. So it seems that Gavin Schmidt has won his argument: using temperature anomalies, with their much smaller stated uncertainty of +/-0.05, gives a more precise and accurate result than the +/-0.5 K absolute value.
But hold on a minute. Can Dr Schmidt really estimate the temperature anomaly to within +/-0.05 degrees from pole to pole and all the way around the globe?
Richard Lindzen has addressed this question by reference to a study by Stanley Grotch published by the AMO.
You will find the reference here and in Richard Lindzen’s YouTube lecture, Global Warming, Lysenkoism, Eugenics, at the 30:37 minute point.
Grotch’s paper claimed that the land (CRU) and ocean (COADS) datasets pass his tests of normality and freedom from bias. His presentation is reasonable.
However, his Figure 1 shows that the 26,000 datapoints range between plus and minus 2 degrees Celsius, while the signal (the mean temperature) ranges from approximately -0.2 C to +0.2 C over a period of 130 years, a rate of about 0.3 C per century. The signal is swamped by noise.
Dr Schmidt is basing his claims on spurious precision in the processing of the data.
https://geoscienceenvironment.wordpress.com/2016/06/12/temperature-anomalies-1851-1980/
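As a quick back-of-envelope check of the relative uncertainties implied by the figures quoted from Dr. Schmidt (a sketch, nothing more):

```python
climatology, clim_unc = 287.4, 0.5    # K, from the quoted climatology
anomaly, anom_unc = 0.56, 0.05        # K, from the quoted 2016 anomaly

rel_anomaly = anom_unc / anomaly       # ~8.9 % of the anomaly
rel_absolute = clim_unc / climatology  # ~0.17 % of the absolute temperature

print(f"Relative uncertainty of the anomaly:        {rel_anomaly:.1%}")
print(f"Relative uncertainty of the absolute value: {rel_absolute:.2%}")
# The anomaly is small, so its percentage uncertainty is large even though
# its absolute uncertainty (±0.05 K) is ten times tighter than ±0.5 K.
```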

crackers345
Reply to  Frederick Colbourne
August 22, 2017 9:43 pm

link/cite to schmidt’s quote?

Reply to  Frederick Colbourne
August 20, 2017 1:11 am

yeah, where is the CRU raw?
What have they done with the data in the last 20 years.
Were they not caught cooling the 40s intentionally just to reduce anomalies? Yes, they were caught removing the blip from the data, something NASA, JMA, BEST etc. have also done.
The level of agreement between these data sets over 130 years shows either (1) collusion or (2) reliance on the same bad data.
Nonsense.

Reply to  Frederick Colbourne
August 20, 2017 1:13 am

as you probably already know, they are using revised history to assess current data sets. As such any assessments are useless.
We need all of the pure raw data, most of which does not exist any more.

Reply to  Frederick Colbourne
August 20, 2017 7:29 am

Good post tbh.
“However, his Figure 1 shows that the 26,000 datapoints range between plus and minus 2 degrees Celsius , while the signal (the mean temperature) ranges from approximately -0.2 C to +0.2 C over a period of 130 years, a rate of about 0.3 C per century. The signal is swamped by noise.
Dr Schmidt is basing his claims on spurious precision in the processing of the data.”
The logical fallacy is real world temperature anomalies vs what GISS says they are.
The certainty that GISS is accurate, is actually unknown, which means uncertainty is closer to 90% than 5%

Reply to  Frederick Colbourne
August 20, 2017 7:30 am

Schmidt must keep the discussion within the confines of GISS output.
Avoid bringing in the real world at every stage, in terms of equipment accuracy and lack of coverage.

crackers345
Reply to  Mark - Helsinki
August 22, 2017 9:45 pm

all the groups get essentially the same surface trend — giss, noaa, hadcrut, jma, best
so clearly giss is not an outlier. this isn’t rocket
science

Reply to  Kip Hansen
August 20, 2017 12:27 pm

Indeed, data processing. It produces the GISS GAMTA and, funnily enough, also produces cosmic background radiation for NASA.

Dan Davis
August 19, 2017 11:59 pm

Possible new source for temperature data: River water quality daily sets.
Graphs of temp. data across the regions and the globe would be quite interesting.
Probably a much more reliable daily set of records…

crackers345
Reply to  Dan Davis
August 22, 2017 10:05 pm

why more reliable?

knr
August 20, 2017 12:24 am

Gavin Schmidt was, let us not forget, hand-picked by Dr Doom to carry on his ‘good work’.
Given we simply lack the ability to take any such measurements in a scientifically meaningful way, all we have is a ‘guess’; therefore, no matter what the approach, what is being said is ‘we think it’s this but we cannot be sure’.

crackers345
Reply to  knr
August 22, 2017 9:46 pm

so why can’t
temperature be
measured, in your
opinion?

C.K. Moore
August 20, 2017 1:19 am

Over the years I’ve noticed one thing about Gavin Schmidt’s explanations in RealClimate–they are excessively thorough and generally cast much darkness on the subject. If he was describing a cotter pin to you, you’d picture the engine room of the Queen Mary.

richard verney
August 20, 2017 1:20 am

This article raises a more fundamental issue and problem that besets the time series land based thermometer record, namely how do you calculate an anomaly when the sample set is never the same over time but instead it is constantly changing?
I emphasise that
the sample set used to create the anomaly in 1880, is not the same sample set used to calculate the anomaly say in 1900 which in turn is not the same sample set used to calculate the anomaly in 1920 which in turn is not the same sample set used to calculate the anomaly in 1940 which in turn is not the same sample set used to calculate the anomaly in 1960 which in turn is not the same sample set used to calculate the anomaly in 1980 which in turn is not the same sample set used to calculate the anomaly in 2000 which in turn is not the same sample set used to calculate the anomaly in 2016
If one is not using the same sample set, the anomaly does not represent anything of meaning.
Gavin claims that “The climatology for 1981-2010 is 287.4±0.5K”; however, the sample set (the reporting stations) in, say, 1940 is not the set of stations reporting data in the climatology period 1981 to 2010, so we have no idea whether there is any anomaly in the data coming from the stations used in 1940. We do not know whether the temperature is more or less than in 1940, since we are not measuring the same thing.
The time series land based thermometer data set needs complete re-evaluation. If one wants to know whether there may have been any change in temperature since say 1880, one should identify the stations that reported data in 1880 and then ascertain which of these have continuous records through to 2016 and then use only those stations (ie., the ones with continuous records) to assess the time series from 1880 to 2016.
If one wants to know whether there has been any change in temperature say as from 1940, one performs a similar task, one should identify the stations that reported data in 1940 and then ascertain which of these have continuous records through to 2016 and then use only those stations (ie., the ones with continuous records) to assess the time series from 1940 to 2016.
So one would end up with a series of time series, perhaps a series for every 5 year interlude. Of course, there would still be problems with such a series because of station moves, encroachment of UHI, changes in nearby land use, equipment changes etc, but at least one of the fundamental issues with the time series set would be overcome. Theoretically a valid comparison over time could be made, but error bounds would be large due to siting issues/changes in nearby land use, change of equipment, maintenance etc.
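A rough sketch of the filtering Richard describes; station_records is a hypothetical mapping of station ID to a year-indexed series, not any real dataset:

```python
import pandas as pd

def continuous_stations(station_records, start_year, end_year):
    """Keep only stations with a value for every year in the window."""
    years = range(start_year, end_year + 1)
    return {
        sid: series
        for sid, series in station_records.items()
        if all(year in series.index and pd.notna(series.loc[year]) for year in years)
    }

def mean_series(stations, start_year, end_year):
    """Simple (unweighted) mean across the surviving stations, year by year."""
    frame = pd.DataFrame(stations)
    return frame.loc[start_year:end_year].mean(axis=1)

# Usage, with a made-up two-station example:
s1 = pd.Series({1940: 10.1, 1941: 10.3, 1942: 10.0}, name="A")
s2 = pd.Series({1940: 12.4, 1941: 12.6, 1942: 12.5}, name="B")
kept = continuous_stations({"A": s1, "B": s2}, 1940, 1942)
print(mean_series(kept, 1940, 1942))
```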

Reply to  richard verney
August 20, 2017 3:46 am

Re: richard verney (August 20, 2017 at 1:20 am)
To your charge Richard, James Hansen “doth protest too much” for my liking.

…a charge that has been bruited about frequently in the past year, specifically the claim that GISS has systematically reduced the number of stations used in its temperature analysis so as to introduce an artificial global warming. GISS uses all of the GHCN stations that are available, but the number of reporting meteorological stations in 2009 was only 2490, compared to [circa]6300 usable stations in the entire 130 year GHCN record. (Hansen et al. 2010)

He doesn’t address the problem (In that paper) to my satisfaction, because elsewhere in the literature it is made clear that the change in number and spatial distribution of station data is a source of error larger than the reported (Or purported!) trends.

Nick Stokes
Reply to  richard verney
August 20, 2017 9:02 am

“how do you calculate an anomaly when the sample set is never the same over time”
Because you don’t calculate the anomaly using a sample set. That is basic. You calculate each station anomaly from the average (1981-2010 or whatever) for that station alone. Then you can combine in an average, which is when you first have to deal with the sample set.
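A minimal sketch of the procedure Nick describes, with made-up numbers for two stations (real analyses add gridding and area weighting on top of this):

```python
import pandas as pd

# Hypothetical annual-mean temperatures (°C) for two stations in a few years
data = pd.DataFrame(
    {"station_A": [10.2, 10.5, 10.9, 11.0],
     "station_B": [2.1, 2.3, 2.6, 2.9]},
    index=[1981, 1990, 2000, 2010],
)

# Step 1: each station's own baseline average (here just the mean of its rows;
# the real analyses use a fixed 30-year window such as 1981-2010)
baselines = data.mean()

# Step 2: anomaly = station value minus that station's own baseline
anomalies = data - baselines

# Step 3: only now combine the stations into a regional/global average
regional_anomaly = anomalies.mean(axis=1)
print(regional_anomaly)
```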

crackers345
Reply to  richard verney
August 22, 2017 9:48 pm

richard verney – >> how do you calculate an anomaly when the sample set is never the same over time but instead it is constantly changing? <<
you take an average.
giss uses '51-'80.
but the choice is arbitrary

AndyG55
August 20, 2017 1:34 am

“0.56±0.05ºC”
RUBBISH. No way is the GISS error anywhere near that level.

Clyde Spencer
Reply to  AndyG55
August 20, 2017 8:37 am

AndyG55,
Yes, I have read the Real Climate page that Kip linked, and the links that Gavin provides to explain why anomalies are used, and nowhere do I see an explanation of how the stated uncertainty is derived, or of how it can be an order of magnitude more precise than the absolute temperatures. My suspicion is that it is an artifact of averaging, which removes the extreme values and thus makes it appear that the variance is lower than it really is.
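Whether or not that is what GISS actually does, the general effect Clyde suspects is easy to demonstrate with synthetic numbers: the spread of an average of N independent values shrinks by roughly the square root of N, even though each individual value stays just as noisy (a toy sketch):

```python
import numpy as np

rng = np.random.default_rng(1)

n_stations = 1000
n_trials = 2000

# Synthetic station anomalies, each with ±0.5 °C of independent scatter
samples = rng.normal(loc=0.0, scale=0.5, size=(n_trials, n_stations))

spread_single = samples.std()                 # ~0.5 °C for one station value
spread_of_mean = samples.mean(axis=1).std()   # ~0.5/sqrt(1000) ≈ 0.016 °C

print(f"Spread of individual values:     {spread_single:.3f} °C")
print(f"Spread of the 1000-station mean: {spread_of_mean:.3f} °C")
# This 1/sqrt(N) shrinkage is legitimate only if the errors really are
# independent; shared (systematic) biases do not average away.
```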

crackers345
Reply to  Clyde Spencer
August 22, 2017 9:49 pm

so write GS and ask

August 20, 2017 1:53 am

The postulate that global temperatures have not increased in the last 100 years is easily supported after a proper error analysis is applied.
People driven by a wish to find danger in temperature almost universally fail to do a proper error analysis. There is a deal of scattered, incomplete literature about using statistical approaches and 2 standard deviations and all that type of talk, but this addresses the precision variable more than the accuracy variable. These two variables act on the data and both have to be estimated in the search for proper confidence limits to bound the total error uncertainty.
This is not the place to discuss accuracy in the estimation of global temperature guesses, because that takes pages. Instead, I will raise but one ‘new’ form of error and note the need to investigate this type of error elsewhere than here in Australia. It deals with the transition from ‘liquid in glass’ (LIG) thermometry to the electronic devices whose Aussie shorthand is ‘AWS’, for Automatic Weather Station. These largely replaced LIG here in the 1990s.
The crux is in an email from the Bureau of Meteorology to one of our little investigatory group.
“Firstly, we receive AWS data every minute. There are 3 temperature values:
1. Most recent one second measurement
2. Highest one second measurement (for the previous 60 secs)
3. Lowest one second measurement (for the previous 60 secs)
Relating this to the 30 minute observations page: For an observation taken at 0600, the values are for the one minute 0559-0600”
When data captured at one second intervals are studied, there is a lot of noise. Tmax, for example, could be a degree or so higher than the one minute value around it. They seem to be recording a (signal+noise) when the more valid variable is just ‘signal’. One effect of this method of capture is to enhance the difference between high and low temperatures from the same day, adding to the meme of ‘extreme variability’ for what that is worth.
A more detailed description is at
https://kenskingdom.wordpress.com/2017/08/07/garbage-in-garbage-out/
https://kenskingdom.wordpress.com/2017/03/01/how-temperature-is-measured-in-australia-part-1/
https://kenskingdom.wordpress.com/2017/03/21/how-temperature-is-measured-in-australia-part-2/
This procedure is different in other countries. Therefore, other countries are not collecting temperature in a way that will match ours here. There is an error of accuracy. It is large and it needs attention. Until it is fixed, there is no point to claims of a global temperature increase of 0.8 deg C/century, or whatever the latest trendy guess is. Accuracy problems like this and others combine to put a more realistic +/- 2 deg C error bound on the global average, whatever that means.
Geoff.
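A toy simulation of the sampling issue Geoff describes, using assumed noise figures rather than BOM data, shows how a max-of-one-second-samples sits above the underlying one-minute signal:

```python
import numpy as np

rng = np.random.default_rng(7)

true_temp = 30.0    # assumed steady "signal" over one minute (°C)
noise_sd = 0.3      # assumed one-second instrument/turbulence noise (°C)

one_second = true_temp + rng.normal(0, noise_sd, size=60)   # 60 samples/minute

minute_mean = one_second.mean()   # close to the signal
minute_max = one_second.max()     # highest one-second value in the minute

print(f"1-minute mean:           {minute_mean:.2f} °C")
print(f"highest 1-second sample: {minute_max:.2f} °C")
# With 60 draws the maximum typically sits a couple of standard deviations
# above the mean, so recording the max-of-seconds rather than a one-minute
# average tends to nudge Tmax upward for this noise level.
```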

richard verney
Reply to  Geoff Sherrington
August 20, 2017 2:37 am

But think of the very different response of a LIG thermometer, which could easily miss such short-lived temperature highs.
This is why retro-fitting with the same type of equipment used in the 1930s/1940s is so important if we are to assess whether there has truly been a change in temperature since the historic highs of the 1930s/1940s.

hunter
Reply to  richard verney
August 20, 2017 4:29 am

Yes. This is a reasonable and low-cost way to test the current vs. the past instruments. It also tests the justifications of those who change the past.
I pointed this out a few months ago but got nowhere with it. If you can think of a way to push the idea forward, Godspeed.

Reply to  richard verney
August 21, 2017 5:33 am

RV,
Experienced petrologists would agree with your test scheme. The puzzle is, why was it not done before, officially. Maybe it was, I do not know. Thank you for raising it again.
As you know, it remains rather difficult to get officials to adopt such suggestions. If you can help that way, that is where effort could be well invested. Geoff

Reply to  Geoff Sherrington
August 20, 2017 5:10 am

Geoff,
I have said this on here before. The best post by far was Pat Frank’s about calibration of instruments. All that needed to be said was in that. A lot of us, including yourself, are all talking about the same idiocy. It’s nice to have it demonstrated.

Reply to  mickyhcorbett75
August 21, 2017 5:39 am

Nc75,
Where have you seen this raised before? Have you commented before on the methods different countries use to treat this signal noise problem with AWS systems? It is possible that the BOM procedure, if we read it correctly, could have raised Australian Tmax by one or two tenths of a degree C compared with USA since the mid 1990s. Geoff

Reply to  mickyhcorbett75
August 22, 2017 11:45 am

Geoff
If I recall, Pat Frank’s paper looked at the drift of electronic thermometers. It may be similar, at least in approach, to what you are talking about, but the general idea is that whatever techniques are used, they have to be seen in a broader context of repeatability and microsite characterisation: effects that appear to swamp any tenths of degrees and approach full-degree variations.

Clyde Spencer
Reply to  Geoff Sherrington
August 20, 2017 8:43 am

Geoff,
Indeed, it is done slightly differently in the US. Our ASOS system collects 1-minute average temperatures in deg F, averages the five 1-minute values to the nearest deg F, converts that to the nearest 0.1 deg C, and sends the information to the data center. http://www.nws.noaa.gov/asos/pdfs/aum-toc.pdf
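A sketch of that rounding chain as described in the comment above (not verified against the ASOS manual); asos_report is a hypothetical helper name:

```python
def asos_report(one_minute_means_f):
    """Average five 1-minute means (°F), round to the nearest whole °F,
    then convert and round to the nearest 0.1 °C, per the description above."""
    avg_f = sum(one_minute_means_f) / len(one_minute_means_f)
    rounded_f = round(avg_f)                 # nearest whole degree F
    celsius = (rounded_f - 32) * 5.0 / 9.0
    return round(celsius, 1)                 # nearest 0.1 °C

# Example: five made-up 1-minute means in °F
print(asos_report([68.2, 68.4, 68.1, 68.6, 68.3]))   # -> 20.0
```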

Reply to  Clyde Spencer
August 22, 2017 1:03 am

Clyde,
Can we please swap some email notes on this. sherro1 at optusnet dot com dot au
A project in prep is urgent if that is OK with you. Geoff

Clyde Spencer
Reply to  Clyde Spencer
August 22, 2017 9:05 pm

Kip,
You said, “BTW — The °F recorder temps are thus +/- 0.5 °F and the °C are all +/- 0.278°C — just by the method.”
Strictly speaking, 0.5 °F is equivalent to 0.3 °C, because when multiplying a constant (5/9) having infinite precision by a number with only one (1) significant figure, one is only justified in retaining the same number of significant figures as the factor with the least number of significant figures!
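The arithmetic behind the two figures being debated, as a two-line sketch:

```python
half_f = 0.5                   # ± half of the 1 °F recording resolution
exact_c = half_f * 5.0 / 9.0   # 0.2777... °C

print(f"Exact conversion:             ±{exact_c:.4f} °C")   # ±0.2778 °C
print(f"Kept to one significant figure: ±{round(exact_c, 1)} °C")   # ±0.3 °C
```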

Reply to  Kip Hansen
August 20, 2017 3:52 pm

Kip,
I merely noted that the proposition of zero change fits between properly-constructed error bounds and gave an example of a large newish error. The plea is to fix the error calculations, not to fix the state of fear about minor changes in T. Geoff

Reply to  Kip Hansen
August 21, 2017 5:48 am

Kip
I do not want to draw this out, but I see stuff-all physical symptoms of temperature rise. What are the top 3 indicators that make you think that way? Remember that in Australia there is no permanent snow or ice, no glaciers, few trees tested for dendrothermometry, no sea level rise evidence above the longer-term normal, and Antarctic territory showing next to no instrumental rise and a number of falls, so it is a good stage to conclude that the players are mainly acting fiction. Geoff

Reply to  Kip Hansen
August 22, 2017 1:00 am

Kip
I would be delighted to develop some ideas with you.
But not here. Do send me an opening email at sherro1 at optusnet dot com dot au
Geoff

crackers345
Reply to  Geoff Sherrington
August 22, 2017 9:50 pm

temp is obviously
increasing, because (macro)
ice is melting and sea
level is rising.

John Soldier
August 20, 2017 2:01 am

Off topic somewhat:
Are you, like me, continuously annoyed by the way the media (especially TV weather reporters) refer to the plural of maximum and minimum temperatures as maximums and minimums.
This shows an ignorance of the English language as any decent dictionary will confirm.
The correct terms are of course maxima and minima.
The various editors and producers should get their acts into gear and correct this usage.

August 20, 2017 2:40 am

±0.05ºC

While the atmospheric temperature profile varies over some 80 K in the troposphere alone at any given time, Gavin’s precision is high-quality entertainment:
http://rfscientific.eu/sites/default/files/imagecache/article_first_photo/articleimage/compressed_termo_untitled_cut_rot__1.jpg

Nik
August 20, 2017 2:47 am

Oh! We just love adjustments, and 2017 has just been adjusted upwards, ready for this year’s “Hottest Ever” headlines.
GISTEMP monthly anomalies for 2017 (in 0.01 °C):
21 June 2017 archive: Jan 90, Feb 108, Mar 112, Apr 88, May 88
Today: Jan 98, Feb 113, Mar 114, Apr 94, May 89, Jun 68, Jul 83
https://web.archive.org/web/20170621154326/https://data.giss.nasa.gov/gistemp/tabledata_v3/GLB.Ts+dSST.txt

DWR54
Reply to  Nik
August 20, 2017 3:28 am

See the GISS updates page: https://data.giss.nasa.gov/gistemp/updates_v3/

August 15, 2017: Starting with today’s update, the standard GISS analysis is no longer based on ERSST v4 but on the newer ERSST v5.

crackers345
Reply to  Nik
August 22, 2017 9:51 pm

adjustments lead to a lower trend
you should know
this

DWR54
August 20, 2017 2:57 am

This article raises a more fundamental issue and problem that besets the time series land based thermometer record, namely how do you calculate an anomaly when the sample set is never the same over time but instead it is constantly changing?

In the case of estimating temperature change over time, surely that’s an argument in favour of using anomalies rather than absolute temperatures?
Absolute temperatures at 2 or more stations or in a region might differ in absolute terms by, say, 2 degrees C or more, depending on elevation and exposure. That’s important if absolute temperatures are what you’re interested in (at an airport for example); but if you’re interested in how temperatures at each station differ from their respective long term averages for a given date or period, then anomalies are preferable.
Absolute temperatures might differ considerably between stations in the same region, but their anomalies are likely to be similar.
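A small synthetic example of that last point: two stations offset by 2 °C in absolute terms but sharing the same regional signal have nearly identical anomalies (made-up numbers):

```python
import numpy as np

rng = np.random.default_rng(3)

years = np.arange(2000, 2010)
regional_signal = 0.02 * (years - years[0]) + rng.normal(0, 0.2, years.size)

valley = 14.0 + regional_signal                                       # low, warm station (°C)
hilltop = 12.0 + regional_signal + rng.normal(0, 0.05, years.size)    # 2 °C cooler station

print("absolute difference:", np.round(valley - hilltop, 2))          # ~2 °C apart every year
print("anomaly difference: ",
      np.round((valley - valley.mean()) - (hilltop - hilltop.mean()), 2))   # near zero
```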

hunter
Reply to  DWR54
August 20, 2017 4:23 am

Well clearly the consensus solution is to change and discard past data that does not fit the present desired result.

Clyde Spencer
Reply to  DWR54
August 20, 2017 10:16 am

I think the point being made is that a standard baseline should be established (say 30 years before the influence of industrialization) and then that should be used as the standard by everyone, and not changed over time.

Peta of Newark
August 20, 2017 3:24 am

The BBC are telling us all about Cassini and its adventures at Saturn.
Nice.
But they (strictly the European Space Agency whose sputnik it is) have come up with this line:

“It’s expected that the heavier helium is sinking down,” he told BBC News. “Saturn radiates more energy than it’s absorbing from the Sun, meaning there’s gravitational energy which is being lost.

From here: http://www.bbc.co.uk/news/science-environment-40902774
Presumably this means Saturn is collapsing – or – possibly falling into the sun?
(Maybe the other way round innit, like how the Moon is going away as tides on Earth pull energy out of it)

crackers345
Reply to  Peta of Newark
August 22, 2017 9:52 pm

omg, no

old construction worker
August 20, 2017 4:18 am

“…the difference between the current average temperature of a station, region, nation, or the globe and its long-term, 30-year base period, average…..”
‘…the difference between this current warm period average temperature of a station, region, nation, or the globe and its long-term, 30-year base period, average….’
There, fixed.