Update by Kip Hansen
Last week I wrote about UCAR/NCAR’s very interesting discussion on “What is the average global temperature now?”.
[Adding link to previous post mentioned.]
Part of that discussion revolved around the question of why current practitioners of Climate Science insist on using Temperature Anomalies — the difference between the current average temperature of a station, region, nation, or the globe and its long-term, 30-year base period, average — instead of simply showing us a graph of the Absolute Global Average Temperature in degrees Fahrenheit or Celsius or Kelvin.
Gavin Schmidt, Director of the NASA Goddard Institute for Space Studies (GISS) in New York, and co-founder of the award winning climate science blog RealClimate, has come to our rescue to help us sort this out.
In a recent blog essay at RealClimate titled “Observations, Reanalyses and the Elusive Absolute Global Mean Temperature”, Dr. Schmidt gives us the real answer to this difficult question:
“But think about what happens when we try and estimate the absolute global mean temperature for, say, 2016. The climatology for 1981-2010 is 287.4±0.5K, and the anomaly for 2016 is (from GISTEMP w.r.t. that baseline) 0.56±0.05ºC. So our estimate for the absolute value is (using the first rule shown above) is 287.96±0.502K, and then using the second [the first and second rules have to do with estimating the uncertainties – see Gavin’s post], that reduces to 288.0±0.5K [2016]. The same approach for 2015 gives 287.8±0.5K, and for 2014 it is 287.7±0.5K. All of which appear to be the same within the uncertainty. Thus we lose the ability to judge which year was the warmest if we only look at the absolute numbers.”
You see, as Dr. Schmidt carefully explains for us non-climate-scientists, if they use Absolute Temperatures the recent years are all the same — no way to say this year is the warmest ever — and, of course, that just won’t do — not in “RealClimate Science”.
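For readers who want to check the arithmetic in the quoted passage, here is a minimal Python sketch of the error combination Schmidt appears to be using (independent uncertainties combined as the root sum of squares, then rounded). The numbers are taken from his post; the code itself is only my illustration, not anything from GISS.

```python
import math

def combine_in_quadrature(sigma_a, sigma_b):
    """Combine two independent 1-sigma uncertainties as the root sum of squares."""
    return math.sqrt(sigma_a**2 + sigma_b**2)

# Figures quoted in Schmidt's RealClimate post
climatology, clim_unc = 287.4, 0.5    # 1981-2010 baseline, K
anomaly_2016, anom_unc = 0.56, 0.05   # GISTEMP 2016 anomaly w.r.t. that baseline, K

absolute_2016 = climatology + anomaly_2016
total_unc = combine_in_quadrature(clim_unc, anom_unc)

print(f"2016 absolute estimate: {absolute_2016:.2f} +/- {total_unc:.3f} K")
# -> 287.96 +/- 0.502 K, which rounds to the 288.0 +/- 0.5 K given in the post
```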
# # # # #
Author’s Comment Policy:
Same as always — and again, this is intended just as it sounds — a little tongue-in-cheek but serious as to the point being made.
Readers not sure why I make this point might read my more general earlier post: What Are They Really Counting?
# # # # #
Why use anomalies instead of actual temperatures?
They produce swingier trends?
Another reason to use anomalies instead of temperatures is that the graph of anomalies can be centered at zero and show increments of 0.1°, which can make noise-level movements look significant. If you use temperatures, any graph should show Kelvin starting at absolute zero. Construct a graph using those parameters, and the “warming” of the past 30 years looks like noise, which is what it is. A 1 K movement is only ~0.34%, not much. It is just not clear why we should panic over a variation of that magnitude.
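A minimal matplotlib sketch of the contrast the commenter is describing, using made-up numbers rather than any real data set: the same series plotted on an anomaly axis spanning a fraction of a degree, and again on a Kelvin axis anchored at absolute zero.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical global-mean series: ~288 K with a small trend plus noise (made up)
years = np.arange(1980, 2017)
temps_k = 288.0 + 0.015 * (years - 1980) + np.random.normal(0.0, 0.1, years.size)
anomaly = temps_k - temps_k[:30].mean()   # anomaly w.r.t. the first 30 years

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

ax1.plot(years, anomaly)
ax1.set_ylabel("Anomaly (K)")   # axis spans well under one degree
ax1.set_title("Anomaly scale")

ax2.plot(years, temps_k)
ax2.set_ylim(0, 320)            # Kelvin axis anchored at absolute zero
ax2.set_ylabel("Temperature (K)")
ax2.set_title("Absolute scale from 0 K")

plt.tight_layout()
plt.show()
```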
” If you use temperatures, any graph should show Kelvin with absolute zero. ”
nonsense, you scale the graph to show the data in the clearest way with appropriately labelled axes.
“Clearest”? or “most dramatic for our sales goals?”
There is a fine line between the two.
If 0.1 degree actually made an important difference to anything at all, then maybe scales used today would be informative.
Instead they are manipulative, giving the illusion of huge change when that is not the case.
If the scale simply reflected the real range of global temperatures, the graph would represent the changes honestly and people could make informed decisions.
That is counter to the goals of the consensus.
Isn’t the real answer that if actual temperatures were used, graphical representations of temperature vs time would be nice non-scary horizontal lines?
Yep.
“Keep the fear alive” is an important tool in the climate consensus tool kit.
Alan ==> Difficult to impossible to actually judge the motivations of others — better just to ask them and look at their answers — then if it’s wrong, at least it’s on them. That’s what I have done in this mini-series. If UCAR/NCAR doesn’t like their answer when they see it here — they can change their web site. If Dr. Schmidt doesn’t like his answer, he can change his web site.
From a management perspective it always pays to hire staff that give you 10 good reasons why something CAN be done rather than 10 good reasons why something CAN’T be done.
So the question is: Why has the temperature data not been presented in BOTH formats? Anomaly AND Absolute.
it has been presented in both formats. see karl et al’s 2015 paper in Science.
But only the manipulative, fear-inducing scary scale is used in public discussions.
crackers ==> Read Dr. Schmidt’s essay at RC — link in the main essay. He says using them is “mightily confusing” for the public.
hunter – conclusions are independent of scale. obviously.
kip – link?
crackers ==> http://www.realclimate.org/index.php/archives/2017/08/observations-reanalyses-and-the-elusive-absolute-global-mean-temperature/
BigBubba ==> Well, according to one of the world’s leading Climate Scientists — the famous Dr. Gavin Schmidt — it is because when they quote the absolute temperatures, the differences between recent years are so small that they fall within the known acknowledged uncertainties — and thus should be considered “the same”.
kip – giss doesn’t quote an absolute temperature. their site has a long faq answer about why not. read it.
https://data.giss.nasa.gov/gistemp/faq/abs_temp.html
crackers ==> You don’t seem to be commenting on this essay somehow. This essay is about this issue and about a post at RealClimate that discusses this issue — in fact, almost the entire post is a quote of Gavin Schmidt speaking explicitly to this point.
This is a very interesting discussion. I’ve been thinking about this for some time. Consider the following.
The temperature anomaly for a particular year, as I understand it, is obtained by subtracting the 30-year average temperature from the temperature for that year. Assuming both temperatures have an error of +/- 0.5C, the calculated anomaly will have an error of +/- 1.0C. When adding or subtracting numbers that have associated errors, one must ADD the errors of the numbers.
So the anomaly’s “real value” is even less certain than either of the 2 numbers it’s derived from.
If you can argue that the errors are independent and uncorrelated you can use the RMS error, but yes, it is always larger than either individual uncertainty figure.
Let me correct. It is not whether you can argue that the errors are independent and uncorrelated, but rather whether they truly are independent and uncorrelated.
Yet in the climate field, it would appear that the errors are neither independent nor uncorrelated. There would appear to be systematic biases such that uncertainty is not reduced.
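A short sketch of the two propagation rules being debated in this sub-thread, using the commenter's ±0.5 C figures: straight addition (the worst case) versus the root-sum-square rule that is only justified if the errors really are independent and uncorrelated.

```python
import math

err_year = 0.5       # assumed error on the single-year value, deg C
err_baseline = 0.5   # assumed error on the 30-year baseline, deg C

# Worst case: errors simply add when one value is subtracted from the other
worst_case = err_year + err_baseline                     # 1.0 C

# If (and only if) the errors are independent and uncorrelated: add in quadrature
quadrature = math.sqrt(err_year**2 + err_baseline**2)    # ~0.71 C

print(f"anomaly uncertainty, worst case:     +/-{worst_case:.2f} C")
print(f"anomaly uncertainty, if independent: +/-{quadrature:.2f} C")
# Either way the result is larger than the +/-0.5 C of each input.
```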
+/- 0.5 C is way too high, especially with modern equipment.
The take-away, and this is to be found in other disciplines as well, is “never let the facts get in the way of a good story.” The Left and the media just love to apply it.
The desperation to find some sort of important climate signal means they are treating sample noise as data these days.
There is nothing anomalous about the global average temperature as it changes from year to year (well there wouldn’t be if such a thing as global average temperature existed). The global average temperature has always varied from year to year, without there being any anomalies.
explain the long-term trend
The alarmists [NASA, NOAA, UK MET Office] cannot even agree among themselves what ‘average global temp.’ means. Freeman Dyson has stated that it is meaningless and impossible to calculate – he suggests that a reading would be needed for every square km. Like an isohyet, is the measurement going to be reduced / increased to a given density altitude? What lapse rates – ambient or ISO?
The satellite observations are comparable because they relate to the same altitude with each measurement, but anything measuring temps near the ground is a waste of time and prohibitively expensive at one station per square km, or even per 100 square km.
why every sq km? temperature stations aren’t free. so the question is, what station density gives the desired accuracy? and your answer is? show your math
ISA not ISO.
As Dr Ball says and I agree, averages destroy accuracy of data points.
Given we need accuracy for science (no? we do), absolute temperatures would be used and need to be used. Science is numbers, actual numbers, not averaged numbers. If science worked with averages we’d never have had steam engines.
Take model runs.
100 model runs. Out of those 100 runs, one is the most accurate (no two runs are the same, so one must be), though we can’t know which one, and its accuracy is really just luck given the instability of the output.
Because we do not understand why, or which run is the accurate one, we destroy that accuracy by averaging in the other 99 runs.
Probability is useless in this context, as the averages and probabilities conceal the problem: we don’t know how accurate each run is.
This is then made worse by using multiple model ensembles, which serve to dilute the unknown accuracy even more, to the point where we have a range of 2 C to 4.5 C or above. This is not science, it is guessing; it’s not probability, it is guessing.
The only reason for using loads of model ensembles is to increase the range of “probability”, and this probability does not relate to the real physical world; it’s a logical fallacy.
The spread between different temperature anomaly data sets performs the same function as the wide cast net of model ensembles.
Now you know why they don’t use absolute temperatures: using those increases accuracy, reduces the “probabilities” and removes the averages which allow for the wide cast net of non-validated “probabilities”.
The uncertainty calculations are rubbish. We are given uncertainty from models, not the real world; the uncertainty only exists in averages and probabilities, not in climate and actual real-world temperatures.
NOAA’s instability and wildly different runs prove my point. An average of garbage is garbage.
If NOAA perform 100 runs, take the two that vary most, and that is your evidence that they have no idea.
Or at any rate, it gives an insight into the extent of error bounds.
Speaking of models and the breathtaking circularity inherent in the reasoning of much contemporary Climate Science!
The reliability of sampling error estimates (in the application of anomalies to large-scale temperature averages in the real world) is tested using temperature data from 1000-year control runs of GCMs! (Jones et al., 1997a)
And that is a real problem, because the models have the same inbuilt flaw; they only output gridded areal averages!
Thus, the tainting of raw data occurs in the initial development of the station data set, because spatial coherence is assumed for nearby series in the homogenisation techniques applied at this stage (Where many stations are adjusted and some omitted because of “anomalous” trends and/or “non climatic” jumps).
The aggregation of the “raw” data (Gridding in the final stage.) yet again fundamentally changes its distribution as well as adding further sampling errors and uncertainties. Several different methods are used to interpolate the station data to a regular grid but all assume omnidirectional spatial correlation, due to the use of anomalies.
Grids set to a preferred size and position only serve to fool people.
We need a scientifically justified distance around each data point, grounded in topography and site location conditions (Anthony’s site survey would be critical for such).
Mountains, hills and all manner of topography matter, as do local large water bodies, as well as the usual suspects of urbanisation etc.
This is a massive task, and we are better off investing everything into satellites and developing that network further to solve some temporal issues for better clarity.
Still, sats are good for anomalies if they pass the same location at the same time each day, but we should depart from anomalies because they are transient and explaining why is nigh impossible.
A 50km depth chunk of the atmosphere is infinitely better than the surface station network for more reasons than not.
Defenders of the surface data sets are harming science
With regards to surface data sets, a station with a local lake hills and town, all of that needs to be accounted for and solved. Wind speed data also needs to be incorporated to improve the data.
This is not happening. It is never going to happen.
Mark, Scott, Richard ==> Models do not have accuracy — at all — they are not reporting on something real so there can be no accuracy. Accuracy is a measurement compared to the actuality. Models are graded on how close they come to the reality they are meant to model.
Climate and Weather Model run outputs are chaotic in that output is extremely sensitive to initial conditions (starting inputs). The differences in model run outputs demonstrate this — all the runs are started with almost identical inputs but result in wildly differing outputs.
See mine Chaos and Models.
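As a minimal illustration of that sensitivity (using the classic Lorenz-63 equations rather than any actual climate model), two runs started a millionth apart in one variable end up on completely different trajectories:

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 system."""
    x, y, z = state
    dxdt = sigma * (y - x)
    dydt = x * (rho - z) - y
    dzdt = x * y - beta * z
    return state + dt * np.array([dxdt, dydt, dzdt])

run_a = np.array([1.0, 1.0, 1.0])
run_b = np.array([1.000001, 1.0, 1.0])   # perturbed by one part in a million

for step in range(1, 3001):
    run_a = lorenz_step(run_a)
    run_b = lorenz_step(run_b)
    if step % 1000 == 0:
        print(f"step {step:4d}: separation = {np.linalg.norm(run_a - run_b):.4f}")
# The separation grows from ~1e-6 to order ten: nearly identical inputs,
# wildly different trajectories.
```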
I agree Kip, that was my point about it all, models are not for accuracy, but still, out of 100 runs 1 is the most accurate and the other 99 destroy that lucky accuracy.
My point also is that they don’t want accuracy (as they see it) because what if a really good model ran cool?
That wont do
They need a wide cast net to catch a wide range of outcomes in order to stay relevant.
and to say, oh look the models predicted that.
Furthermore, NOAA’s model output is an utter joke. If, as I said, you take the difference between the 2 most different runs from an ensemble, they vary widely, which shows the model is casting such a wide net that it is hard to actually say it’s wrong (or way off the mark).
Of course, we can’t model chaos. 🙂
Giving an average of chaos is what they are doing, and it’s nonsense.
Mark – Helsinki –
>> With regards to surface data sets, a station with a local lake hills and town, all of that needs to be accounted for and solved. Wind speed data also needs to be incorporated to improve the data. <<
not if the station hasn’t moved. no one is interested in absolute T.
you can simply calculate the uncertainty for real in the 100 model runs by measuring the difference between the two most contrary runs. Given the difference in output per run at NOAA, that means real uncertainty in that respect is well in excess of 50%.
no. that’s like saying you can flip a coin 100 times, and do this 100 times, and the uncertainty is the max of the max and min counts. that’s simply not how it’s done — the standard deviation is easily calculated.
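A quick sketch of the coin-flip analogy: 100 experiments of 100 fair flips each; the spread of the head-counts is summarised by their standard deviation (about 5), while the max-minus-min gap is several times larger and grows slowly as more experiments are added.

```python
import numpy as np

rng = np.random.default_rng(0)

# 100 experiments, each of 100 fair coin flips; count the heads in each
heads = rng.binomial(n=100, p=0.5, size=100)

print("mean heads per experiment:", heads.mean())
print("standard deviation:       ", round(heads.std(ddof=1), 2))  # theory: sqrt(100*0.25) = 5
print("max minus min:            ", heads.max() - heads.min())    # typically ~20-25, much larger
```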
crackers, that is an invalid use of statistics. It is more analogous to shooting at 100 different targets with the same error in aim, not like measuring the same thing 100 times. The error remains the same, and does not even out.
no tom. shooting isn’t random; its results contain several biases. a true coin, when flipped sufficiently, does not
With both shooting and taking a temperature reading multiple times over a span of time, one is doing or measuring different things multiple times, not measuring the same thing multiple times. Coin tosses are not equivalent.
As in, take 100 runs and calculate how far the model can swing in either direction; for this you only need the two most different runs, and there is your uncertainty.
Kip ==> The following part of my comment was about data collection in the real world:
I was trying to show how the “fudge” is achieved in the collection of raw data and how circular it is to then use gridded model outputs to estimate the sampling errors of that very methodology! 😉
SWB ==> Read my comment to Geoff on auto weather station data. Data is rounded to the nearest whole degree F, then converted to the nearest tenth of a degree C…as it is being collected, even though the auto weather unit has a resolution of 0.1 degree F.
For your readers not familiar with physics: Gavin Schmidt says “…The climatology for 1981-2010 is 287.4+/-0.5K, and the anomaly for 2016 is (from GISTEMP w.r.t. that baseline) 0.56+/-0.05ºC.”
C stands for Celsius or Centigrade. A one degree C interval equals one kelvin (K), but zero degrees C = 273.15 K. (In theory no temperature can be less than zero kelvin, absolute zero.)
Why this is important for climate is that the relevant equation is the Stefan-Boltzmann Law, where temperature (T) is expressed in kelvin, in fact T to the power of 4 (T^4), or (T*T*T*T).
https://en.wikipedia.org/wiki/Stefan–Boltzmann_law
You can argue that the error can be fixed by using 14.2 degrees Celsius (287.4 minus 273.2) in the equation, because all the temperatures can be converted by adding 273.2 to the Celsius measurements.
But then you have to argue that the error in 0.56+/-0.05 is acceptable. An error of 5 parts in 56 is about one per cent. An error of one per cent in 273.3 is 2.7 degrees C or K. So it seems that Gavin Schmidt has won his argument. Using only temperature anomalies gives a more precise and accurate result.
But hold on a minute. Can Dr Schmidt really estimate the temperature anomaly with an accuracy of one per cent from pole to pole and all the way around the globe?
Richard Lindzen has addressed this question by reference to a study by Stanley Grotch published by the AMS.
You will find the reference here and in Richard Lindzen’s YouTube lecture, Global Warming, Lysenkoism, Eugenics, at the 30:37 minute mark.
Grotch’s paper claimed that the land (CRU) and ocean (COADS) datasets pass his tests of normality and freedom from bias. His presentation is reasonable.
However, his Figure 1 shows that the 26,000 datapoints range between plus and minus 2 degrees Celsius, while the signal (the mean temperature) ranges from approximately -0.2 C to +0.2 C over a period of 130 years, a rate of about 0.3 C per century. The signal is swamped by noise.
Dr Schmidt is basing his claims on spurious precision in the processing of the data.
https://geoscienceenvironment.wordpress.com/2016/06/12/temperature-anomalies-1851-1980/
link/cite to schmidt’s quote?
yeah, where is the CRU raw?
What have they done with the data in the last 20 years.
Were they not caught cooling the 40s intentionally just to reduce anomalies? Yes, they were caught removing the blip from the data, something NASA, JMA, BEST etc. have done.
The level of agreement between these data sets over 130 years shows either (1) collusion or (2) reliance on the same bad data.
Nonsense.
as you probably already know, they are using revised history to assess current data sets. As such any assessments are useless.
We need all of the pure raw data, most of which does not exist any more.
Good post tbh.
“However, his Figure 1 shows that the 26,000 datapoints range between plus and minus 2 degrees Celsius , while the signal (the mean temperature) ranges from approximately -0.2 C to +0.2 C over a period of 130 years, a rate of about 0.3 C per century. The signal is swamped by noise.
Dr Schmidt is basing his claims on spurious precision in the processing of the data.”
The logical fallacy is real world temperature anomalies vs what GISS says they are.
The certainty that GISS is accurate is actually unknown, which means uncertainty is closer to 90% than 5%.
Schmidt must keep the discussion within the confines of GISS output.
And avoid bringing in the real world at every stage, in terms of equipment accuracy and lack of coverage.
all the groups get essentially the same surface trend — giss, noaa, hadcrut, jma, best. so clearly giss is not an outlier. this isn’t rocket science.
Colbourne ==> “Dr Schmidt is basing his claims on spurious [apparent, seeming, but not real] precision in [resulting from] the processing of the data.”
Indeed, data processing. It produces GISS GAMTA and, funnily, also produces cosmic background radiation for NASA.
Possible new source for temperature data: River water quality daily sets.
Graphs of temp. data across the regions and the globe would be quite interesting.
Probably a much more reliable daily set of records…
why more reliable?
Gavin Schmidt, who was, let us not forget, hand-picked by Dr Doom to carry on his ‘good work’.
Given we simply lack the ability to take any such measurements in a scientifically meaningful way, all we have is a ‘guess’; therefore, no matter what the approach, what is being said is ‘we think it’s this but we cannot be sure’.
so why can’t temperature be measured, in your opinion?
Over the years I’ve noticed one thing about Gavin Schmidt’s explanations in RealClimate–they are excessively thorough and generally cast much darkness on the subject. If he was describing a cotter pin to you, you’d picture the engine room of the Queen Mary.
This article raises a more fundamental issue and problem that besets the time series land based thermometer record, namely how do you calculate an anomaly when the sample set is never the same over time but instead it is constantly changing?
I emphasise that the sample set used to create the anomaly in 1880 is not the same sample set used to calculate the anomaly in, say, 1900, which in turn is not the same set used in 1920, nor in 1940, 1960, 1980, 2000 or 2016.
If one is not using the same sample set, the anomaly does not represent anything of meaning.
Gavin claims that “The climatology for 1981-2010 is 287.4±0.5K”; however, the sample set (the reporting stations) in, say, 1940 is not the set of stations reporting data in the climatology period 1981 to 2010, so we have no idea whether there is any anomaly to the data coming from the stations used in 1940. We do not know whether the temperature is more or less than in 1940, since we are not measuring the same thing.
The time series land based thermometer data set needs complete re-evaluation. If one wants to know whether there may have been any change in temperature since say 1880, one should identify the stations that reported data in 1880 and then ascertain which of these have continuous records through to 2016, and then use only those stations (ie., the ones with continuous records) to assess the time series from 1880 to 2016.
If one wants to know whether there has been any change in temperature, say from 1940, one performs a similar task: identify the stations that reported data in 1940, ascertain which of these have continuous records through to 2016, and then use only those stations (ie., the ones with continuous records) to assess the time series from 1940 to 2016.
So one would end up with a series of time series, perhaps a series for every 5 year interlude. Of course, there would still be problems with such a series because of station moves, encroachment of UHI, changes in nearby land use, equipment changes etc, but at least one of the fundamental issues with the time series set would be overcome. Theoretically a valid comparison over time could be made, but error bounds would be large due to siting issues/changes in nearby land use, change of equipment, maintenance etc.
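A minimal pandas sketch of the station subsetting richard verney describes, using a hypothetical table with columns station_id, year and tmean (the names are mine, not those of any real archive): keep only stations that report in every year from a chosen start year through 2016.

```python
import pandas as pd

def continuous_stations(records, start_year, end_year=2016):
    """Keep only records from stations that report in every year of
    [start_year, end_year].  Columns assumed: station_id, year, tmean."""
    span = set(range(start_year, end_year + 1))
    window = records[records["year"].between(start_year, end_year)]
    years_by_station = window.groupby("station_id")["year"].apply(set)
    keep = years_by_station[years_by_station.apply(span.issubset)].index
    return window[window["station_id"].isin(keep)]

# usage, on a hypothetical data frame 'records':
# subset_1880 = continuous_stations(records, 1880)
# subset_1940 = continuous_stations(records, 1940)
```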
Re: richard verney (August 20, 2017 at 1:20 am)
To your charge Richard, James Hansen “doth protest too much” for my liking.
He doesn’t address the problem (In that paper) to my satisfaction, because elsewhere in the literature it is made clear that the change in number and spatial distribution of station data is a source of error larger than the reported (Or purported!) trends.
“how do you calculate an anomaly when the sample set is never the same over time”
Because you don’t calculate the anomaly using a sample set. That is basic. You calculate each station anomaly from the average (1981-2010 or whatever) for that station alone. Then you can combine in an average, which is when you first have to deal with the sample set.
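A minimal sketch of the procedure described in the reply above, using the same hypothetical station_id / year / tmean columns as in the earlier sketch: each station's anomaly is taken against that station's own base-period mean, and only then are stations combined into an average.

```python
import pandas as pd

def station_anomalies(records, base_start=1981, base_end=2010):
    """Anomaly of each station-year value relative to that station's own
    base-period mean.  Columns assumed: station_id, year, tmean."""
    in_base = records["year"].between(base_start, base_end)
    base_mean = records[in_base].groupby("station_id")["tmean"].mean()
    out = records.copy()
    out["anomaly"] = out["tmean"] - out["station_id"].map(base_mean)
    # stations with no base-period data get NaN and drop out here
    return out.dropna(subset=["anomaly"])

# Only after this per-station step are stations combined, e.g. a crude
# unweighted series:
# series = station_anomalies(records).groupby("year")["anomaly"].mean()
```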
richard verney – >> how do you calculate an anomaly when the sample set is never the same over time but instead it is constantly changing? <<
you take an average. giss uses ’51-’80, but the choice is arbitrary.
“0.56±0.05ºC”
RUBBISH. No way is the GISS error anywhere near that level.
AndyG55,
Yes, I have read the Real Climate page that Kip linked, and the links that Gavin provides to explain why anomalies are used, and nowhere do I see an explanation for how the stated uncertainty is derived or an explanation of how it can be an order of magnitude greater precision than the absolute temperatures. My suspicion is that it is an artifact of averaging, which removes the extreme values and thus makes it appear that the variance is lower than it really is.
so write GS and ask
The postulate that global temperatures have not increased in the last 100 years is easily supported after a proper error analysis is applied.
People driven by a wish to find danger in temperature almost universally fail proper error analysis. There is a deal of scattered, incomplete literature about using statistical approaches and 2 standard deviations and all that type of talk; but this addresses the precision variable more than the accuracy variable. These two variables act on the data and both have to be estimated in the search for proper confidence limits to bound the total error uncertainty.
This is not the place to discuss accuracy in the estimation of global temperature guesses because that takes pages. Instead, I will raise but one ‘new’ form of error and note the need to investigate this type of error elsewhere than here in Australia. It deals with the transition from ‘liquid in glass’ thermometry to the electronic thermocouple devices whose Aussie shorthand is ‘AWS’, for Automatic Weather Station. These largely replaced LIG in the 1990s here.
The crux is in an email from the Bureau of Meteorology to one of our little investigatory group.
“Firstly, we receive AWS data every minute. There are 3 temperature values:
1. Most recent one second measurement
2. Highest one second measurement (for the previous 60 secs)
3. Lowest one second measurement (for the previous 60 secs)
Relating this to the 30 minute observations page: For an observation taken at 0600, the values are for the one minute 0559-0600”
When data captured at one second intervals are studied, there is a lot of noise. Tmax, for example, could be a degree or so higher than the one minute value around it. They seem to be recording a (signal+noise) when the more valid variable is just ‘signal’. One effect of this method of capture is to enhance the difference between high and low temperatures from the same day, adding to the meme of ‘extreme variability’ for what that is worth.
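A toy sketch of the effect Geoff describes, with synthetic numbers rather than BOM data: the daily maximum of raw one-second samples sits above the maximum of one-minute means of the same signal, because the highest noise spike of the day is retained. The size of the gap depends entirely on the assumed noise level (0.3 C here).

```python
import numpy as np

rng = np.random.default_rng(1)
seconds_per_day = 24 * 60 * 60
t = np.arange(seconds_per_day)

# Synthetic "true" diurnal cycle peaking mid-afternoon, plus 1-second sensor noise
signal = 20.0 + 8.0 * np.sin(2 * np.pi * (t - 9 * 3600) / seconds_per_day)
one_second = signal + rng.normal(0.0, 0.3, seconds_per_day)   # assumed noise level

one_minute_means = one_second.reshape(-1, 60).mean(axis=1)

print(f"Tmax from 1-second samples: {one_second.max():.2f} C")
print(f"Tmax from 1-minute means:   {one_minute_means.max():.2f} C")
# With these assumptions the 1-second Tmax comes out roughly a degree higher:
# it records the signal plus the largest noise excursion of the day.
```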
A more detailed description is at
https://kenskingdom.wordpress.com/2017/08/07/garbage-in-garbage-out/
https://kenskingdom.wordpress.com/2017/03/01/how-temperature-is-measured-in-australia-part-1/
https://kenskingdom.wordpress.com/2017/03/21/how-temperature-is-measured-in-australia-part-2/
This procedure is different in other countries. Therefore, other countries are not collecting temperature in a way that will match ours here. There is an error of accuracy. It is large and it needs attention. Until it is fixed, there is no point to claims of global temperature increase of 0.8 deg C/century, or whatever the latest trendy guess is. Accuracy problems like this and others combine to put a more realistic +/- 2 deg C error bound on the global average, whatever that means.
Geoff.
But think of the very different response of a LIG thermometer, which could easily miss such short-lived temperature highs.
This is why retro-fitting with the same type of equipment used in the 1930s/1940s is so important if we are to assess whether there has truly been a change in temperature since the historic highs of the 1930s/1940s.
Yes. This is a reasonable and low cost way to test the current vs. the past instruments. It also tests the justifications of those who change the past.
I pointed this out a few months ago but got nowhere with it. If you can think of a way to push the idea forward, God speed.
RV,
Experienced metrologists would agree with your test scheme. The puzzle is why it was not done before, officially. Maybe it was, I do not know. Thank you for raising it again.
As you know, it remains rather difficult to get officials to adopt such suggestions. If you can help that way, that is where effort could be well invested. Geoff
Geoff,
I have said this on here before. The best post by far was Pat Frank’s about calibration of instruments. All that needed to be said was in that. A lot of us, including yourself, are all talking about the same idiocy. It’s nice to have it demonstrated.
Nc75,
Where have you seen this raised before? Have you commented before on the methods different countries use to treat this signal noise problem with AWS systems? It is possible that the BOM procedure, if we read it correctly, could have raised Australian Tmax by one or two tenths of a degree C compared with USA since the mid 1990s. Geoff
Geoff
If I recall, Pat Frank’s paper looked at drift of electronic thermometers. It may be similar, at least in approach, to what you are talking about, but the general idea is that whatever techniques are used, they have to be seen in a broader context of repeatability and microsite characterisation, effects that appear to swamp any tenths of degrees and approach full-degree variations.
Geoff,
Indeed, it is done slightly differently in the US. Our ASOS system collects 1-minute average temperatures in deg F, computes a 5-minute average from five of those 1-minute values, rounds it to the nearest deg F, converts that to the nearest 0.1 deg C, and sends that information to the data center. http://www.nws.noaa.gov/asos/pdfs/aum-toc.pdf
Clyde,
Can we please swap some email notes on this. sherro1 at optusnet dot com dot au
A project in prep is urgent if that is OK with you. Geoff
Geoff ==> At the link provided by Clyde, pdf page 19, internal page number 11.
“Once each minute the ACU calculates the 5-minute average ambient temperature and dew point temperature from the 1-minute average observations (provided at least 4 valid 1-minute averages are available). These 5-minute averages are rounded to the nearest degree Fahrenheit, converted to the nearest 0.1 degree Celsius, and reported once each minute as the 5-minute average ambient and dew point temperatures. All mid-point temperature values are rounded up (e.g., +3.5°F rounds up to +4.0°F; -3.5°F rounds up to -3.0°F; while -3.6°F rounds to -4.0°F).”
BTW — The °F recorded temps are thus +/- 0.5 °F and the °C are all +/- 0.278 °C — just by the method.
In the chart above that text, the ambient temperature specs show: range -58°F to +122°F; RMSE (root mean square error) 0.9°F; max error ±1.8°F; but a resolution of 0.1°F (which is then rounded to a whole °F per the above).
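A minimal sketch of the rounding chain in the quoted ASOS text: five 1-minute °F averages are averaged, rounded to a whole °F, then converted to the nearest 0.1 °C; the "round midpoints up" convention is taken from the quoted examples. This is only an illustration of the described steps, not NWS code.

```python
import math

def round_half_up(x, ndigits=0):
    """Round with midpoints going toward +infinity, per the quoted examples:
    +3.5 -> +4.0, -3.5 -> -3.0, -3.6 -> -4.0."""
    factor = 10 ** ndigits
    return math.floor(x * factor + 0.5) / factor

def asos_reported_celsius(one_minute_f):
    """5-minute mean of 1-minute deg F averages -> nearest whole deg F
    -> nearest 0.1 deg C, as described in the quoted manual text."""
    five_min_mean = sum(one_minute_f) / len(one_minute_f)
    whole_f = round_half_up(five_min_mean)                  # nearest whole degree F
    return round_half_up((whole_f - 32.0) * 5.0 / 9.0, 1)   # nearest 0.1 degree C

# example: five hypothetical 1-minute averages in deg F
print(asos_reported_celsius([71.3, 71.5, 71.8, 71.6, 71.4]))   # 71.52 F -> 72 F -> 22.2 C
```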
Kip,
You said, “BTW — The °F recorded temps are thus +/- 0.5 °F and the °C are all +/- 0.278°C — just by the method.”
Strictly speaking, 0.5 °F is equivalent to 0.3 °C, because when multiplying a constant (5/9) of infinite precision by a number with only one (1) significant figure, one is only justified in retaining the same number of significant figures as the factor with the least number of significant figures!
Geoff ==> Actually, we (the humans — even me) are “pretty sure” that the average air temperature has increased over the last 100-150 years — we are “pretty darned sure” that things are generally warmer now than they were during the widespread Little Ice Age.
There are a lot of other questions about GAST not answered by that statement, though, and that’s what CliSci is supposed to be answering (among other things).
Kip,
I merely noted that the proposition of zero change fits between properly-constructed error bounds and gave an example of a large newish error. The plea is to fix the error calculations, not to fix the state of fear about minor changes in T. Geoff
Geoff ==> Not arguing with you, really. Your “The postulate that global temperatures have not increased in the last 100 years is easily supported after a proper error analysis is applied.” is true-ish, but the point I make has to be noted — even if the postulate of “no change” could be supported by error analysis, we are nonetheless “pretty danged sure” that that postulate is not true. I just state the obvious — in CliSci there has been far too much relying on maths principles and statistical principles and too much ignoring of the pragmatic physical facts.
Kip
I do not want to draw this out, but I see stuff all of any physical symptoms of temperature rise. What are the top 3 indicators that make you think that way? Remember that in Australia there is no permanent snow or ice, no glaciers, few trees tested for dendrothermometry, no sea level rise evidence above longer term normal, and Antarctic territory showing next to no instrumental rise and a number of falls, so it is a good stage to conclude that the players are mainly acting fiction. Geoff.
Geoff ==> Your question raises an opportunity for an in-depth essay.
The quick answer is that the literature is full of studies supporting the idea that there was a world-wide, or at least wide-spread, Little Ice Age — more and more studies have come to the fore as certain elements in CliSci have tried to downplay the Little Ice Age.
With the Little Ice Age now well-supported, its absence now is probably the best supporting evidence for the “warmer now” concept — which, as I have said, is the major reason we are “pretty sure”.
I’ll put the idea on my list for future essays.
Kip
I would be delighted to develop some ideas with you.
But not here. Do send me an opening email at sherro1 at optusnet dot com dot au
Geoff
temp is obviously increasing, because (macro) ice is melting and sea level is rising.
Off topic somewhat:
Are you, like me, continuously annoyed by the way the media (especially TV weather reporters) refer to the plural of maximum and minimum temperatures as maximums and minimums?
This shows an ignorance of the English language as any decent dictionary will confirm.
The correct terms are of course maxima and minima.
The various editors and producers should get their acts into gear and correct this usage.
While the atmospheric temperature profile varies over a range of some 80 K within the troposphere alone at any given time:
Gavin’s precision is high quality entertainment
http://rfscientific.eu/sites/default/files/imagecache/article_first_photo/articleimage/compressed_termo_untitled_cut_rot__1.jpg
Oh! We just love adjustments, and 2017 has just been adjusted upwards ready for this year’s “Hottest Ever” headlines.
21 June 2017
2017 90 108 112 88 88
Today
2017 98 113 114 94 89 68 83
https://web.archive.org/web/20170621154326/https://data.giss.nasa.gov/gistemp/tabledata_v3/GLB.Ts+dSST.txt
See the GISS updates page: https://data.giss.nasa.gov/gistemp/updates_v3/
adjustments lead to a lower trend. you should know this.
In the case of estimating temperature change over time, surely that’s an argument in favour of using anomalies rather than absolute temperatures?
Absolute temperatures at 2 or more stations or in a region might differ in absolute terms by, say, 2 degrees C or more, depending on elevation and exposure. That’s important if absolute temperatures are what you’re interested in (at an airport for example); but if you’re interested in how temperatures at each station differ from their respective long term averages for a given date or period, then anomalies are preferable.
Absolute temperatures might differ considerably between stations in the same region, but their anomalies are likely to be similar.
Well clearly the consensus solution is to change and discard past data that does not fit the present desired result.
I think the point being made is that a standard baseline should be established (say 30 years before the influence of industrialization) and then that should be used as the standard by everyone, and not changed over time.
DWR54 ==> Give us a clear explanation of why, if we are “interested in how temperatures at each station differ from their respective long term averages for a given date or period”, anomalies are preferable. (Reasonable for a single station — but for a planet-wide average?)
The BBC are telling us all about Cassini and its adventures at Saturn.
Nice.
But they (strictly the European Space Agency whose sputnik it is) have come up with this line:
From here: http://www.bbc.co.uk/news/science-environment-40902774
Presumably this means Saturn is collapsing – or – possibly falling into the sun?
(Maybe the other way round innit, like how the Moon is going away as tides on Earth pull energy out of it)
omg, no
“…the difference between the current average temperature of a station, region, nation, or the globe and its long-term, 30-year base period, average…..”
‘…the difference between this current warm period average temperature of a station, region, nation, or the globe and its long-term, 30-year base period, average….’
There, fixed.