# How fast is the Earth warming?

This article presents a method for calculating the Earth’s rate of warming, using the existing global temperature series.

Guest essay by Sheldon Walker

It can be difficult to work out the Earth’s rate of warming. There are large variations in temperature from month to month, and different rates can be calculated depending upon the time interval and the end points chosen. A reasonable estimate can be made for long time intervals (100 years for example), but it would be useful if we could calculate the rate of warming for medium or short intervals. This would allow us to determine whether the rate of warming was increasing, decreasing, or staying the same.

The first step in calculating the Earth’s rate of warming is to reduce the large month to month variation in temperature, being careful not to lose any key information. The central moving average (CMA) is a mathematical method that will achieve this. It is important to choose an averaging interval that will meet the objectives. Calculating the average over 121 months (the month being calculated, plus 60 months on either side), gives a good reduction in the variation from month to month, without the loss of any important detail.
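
As a sketch of this first step, the 121 month CMA can be computed in a few lines. This is illustrative Python, not the author’s spreadsheet; the function name and the NumPy approach are mine.

```python
import numpy as np

def central_moving_average(anomalies, half_width=60):
    """121-month CMA: each month averaged with the 60 months on either side."""
    window = 2 * half_width + 1          # 121 months
    kernel = np.ones(window) / window
    # mode="valid" drops the first and last 60 months, where a full
    # 121-month window is not available
    return np.convolve(np.asarray(anomalies, dtype=float), kernel, mode="valid")
```

Fed a monthly anomaly series, this returns the smoothed curve, shortened by 60 months at each end.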

Graph 1 shows the GISTEMP temperature series. The blue line shows the raw temperature anomaly, and the green line shows the 121 month central moving average. The central moving average curve has little month to month variation, but clearly shows the medium and long term temperature trend.

The second step in calculating the Earth’s rate of warming is to determine the slope of the central moving average curve, for each month on the time axis. The central moving slope (CMS) is a mathematical method that will achieve this. This is similar to the central moving average, but instead of calculating an average for the points in the interval, a linear regression is done between the points in the interval and the time axis (the x-axis). This gives the slope of the central moving average curve, which is a temperature change per time interval, or rate of warming. In order to avoid dealing with small numbers, all rates of warming in this article will be given in °C per century.

It is important to choose the correct time interval to calculate the slope over. This should make the calculated slope responsive to real changes in the slope of the CMA curve, but not excessively responsive. Calculating the slope over 121 months (the month being calculated plus 60 months on either side), gives a slope with a good degree of sensitivity.
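
The CMS step can be sketched the same way. Again this is illustrative Python rather than the author’s spreadsheet, with `np.polyfit` standing in for the spreadsheet linear regression:

```python
import numpy as np

def central_moving_slope(cma, half_width=60):
    """121-month CMS: least-squares slope of the CMA curve against time,
    converted from degrees C per month to degrees C per century."""
    window = 2 * half_width + 1
    months = np.arange(window)           # the x-axis for each regression
    slopes = []
    for i in range(half_width, len(cma) - half_width):
        segment = cma[i - half_width : i + half_width + 1]
        slope_per_month = np.polyfit(months, segment, 1)[0]
        slopes.append(slope_per_month * 1200.0)   # 1200 months in a century
    return np.array(slopes)
```

Running this on the CMA series gives the rate of warming curve in °C per century, again losing 60 months at each end.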

Graph 2 shows the rate of warming curve for the GISTEMP temperature series. The blue line is the 121 month central moving slope (CMS), calculated for the central moving average curve. The y-axis shows the rate of warming in °C per century, and the x-axis shows the year. When the rate of warming curve is in the lower part of the graph (colored light blue), it shows cooling (the rate of warming is below zero). When the rate of warming curve is in the upper part of the graph (colored light orange), it shows warming (the rate of warming is above zero).

The curve shows 2 major periods of cooling since 1880. Each lasted approximately a decade (1900 to 1910, and 1942 to 1952), and reached cooling rates of about -2.0 °C per century. There is a large interval of continuous warming from 1910 to 1942 (about 32 years). This reached a maximum rate of warming of about +2.8 °C per century around 1937. 1937 is the year with the highest rate of warming since the start of the GISTEMP series in 1880 (more on that later).

There is another large interval of continuous warming from about 1967 to the present day (about 48 years). This interval has 2 peaks at about 1980 and 1998, where the rates of warming were just under +2.4 °C per century. The rate of warming has been falling steadily since the last peak in 1998. In 2015, the rate of warming is between +0.5 and +0.8 °C per century, which is about 30% of the rate in 1998. (Note that all of these rates of warming were calculated AFTER the so‑called “Pause-busting” adjustments were made. More on that later.)

It is important to check that the GISTEMP rate of warming curve is consistent with the curves from the other temperature series (including the satellite series).

Graph 3 shows the rate of warming curves for GISTEMP, NOAA, UAH, and RSS. (Note that the satellite temperature series did not exist before 1979.)

All of the rate of warming curves show good agreement with each other. Peaks and troughs line up, and the numerical values for the rates of warming are similar. Both of the satellite series appear to have a larger change in the rate of warming when compared to the surface series, but both satellite series are in good agreement with each other.

This method has several advantages:

1) There is no cherry-picking of start and end times with this method. The entire temperature series is used.

2) The rate of warming curves from different series can be directly compared with each other; no adjustment is needed for the different baseline periods. This is because the rate of warming is based on the change in temperature with time, which is the same regardless of the baseline period.

3) This method can be performed by anybody with a moderate level of skill using a spreadsheet. It only requires the ability to calculate averages, and perform linear regressions.

4) The first and last 5 years of each rate of warming curve have more uncertainty than the rest of the curve. This is due to the lack of data beyond the ends of the curve. It is important to realise that the last 5 years of the curve may change when future temperatures are added.
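
Point 2) is easy to verify numerically: shifting a series by a constant baseline offset leaves its regression slope unchanged. A minimal, illustrative check (the numbers here are made up):

```python
import numpy as np

months = np.arange(121)
series = 0.002 * months        # anomalies on one baseline, 0.002 °C/month
shifted = series - 0.45        # the same data on a different baseline

slope = np.polyfit(months, series, 1)[0] * 1200          # °C per century
slope_shifted = np.polyfit(months, shifted, 1)[0] * 1200

# The offset changes the intercept only; both slopes are identical
assert np.isclose(slope, slope_shifted)
```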

There is a lot that could be said about these curves. One topic that is “hot” at the moment, is the “Pause” or “Hiatus”.

The rate of warming curves for all 4 major temperature series show that there has been a significant drop in the rate of warming over the last 17 years. In 1998 the rate of warming was between +2.0 and +2.5 °C per century. Now, in 2015, it is between +0.5 and +0.8 °C per century. The rate now is only about 30% of what it was in 1998. Note that these rates of warming were calculated AFTER the so-called “Pause-busting” adjustments were made.

I was originally using the GISTEMP temperature series ending with May 2015, when I was developing the method described here. When I downloaded the series ending with June 2015 and graphed it, I thought that there must be something wrong with my computer program, because the rate of warming curve had changed so dramatically. I eventually traced the “problem” back to the data, and then I read that GISTEMP had adopted the “Pause-busting” adjustments that NOAA had devised.

Graph 4 shows the effect on the rate of warming curve, of the GISTEMP “Pause-busting” adjustments. The blue line shows the rates from the May 2015 data, and the red line shows the rates from the June 2015 data.

One of the strange things about the GISTEMP “Pause-busting” adjustments, is that the year with the highest rate of warming (since 1880) has changed. It used to be around 1998, with a warming rate of about +2.4 °C per century. After the adjustments, it moved to around 1937 (that’s right, 1937, back when the CO2 level was only about 300 ppm), with a warming rate of about +2.8 °C per century.

If you look at the NOAA series, they already had 1937 as the year with the highest rate of warming, so GISTEMP must have picked it up from NOAA when they switched to the new NCEI ERSST.v4 sea surface temperature reconstruction.

So, the next time that you hear somebody claiming that Global Warming is accelerating, show them a graph of the rate of warming. Some climate scientists seem to enjoy telling us that things are worse than predicted. Here is a chance to cheer them up with some good news. Somehow I don’t think that they will want to hear it.

Latitude
August 28, 2015 6:26 pm

[image]

August 28, 2015 7:01 pm

Steven Goddard’s claims are hosed – he knows as much as my cat about climate change: http://www.desmogblog.com/steven-goddard

Alan Robertson
August 28, 2015 7:28 pm

…and your cat learned everything it knows from you.

ab
August 28, 2015 7:34 pm

Desmog Blog? Really?

August 28, 2015 7:42 pm

No Alan, he learned all he knows on the subject from his cat.

RD
August 28, 2015 9:13 pm

desmog blog? ROFL

Andrew
August 28, 2015 10:28 pm

Congrats on successfully debunking one error in something he said – which he retracted 7 years ago.
Now since the data is public and the method reproducible perhaps you and any household pets could turn to the topic of THIS post?

ATheoK
August 28, 2015 11:22 pm

From the wrong end of the cat to boot.

August 29, 2015 2:35 am

Oh, gawd! Would someone please spare us this fool’s drivel?

Stephen Richards
August 29, 2015 4:37 am

AT DESMOGBLOG. You must be joking.

August 29, 2015 6:04 am

“My apologies to readers. I’ll leave it up (note altered title) as an example of what not to do when graphing trends”
http://wattsupwiththat.com/2010/07/02/ar
Anyone want to change their minds about him?

Menicholas
August 29, 2015 6:54 am

Warren, you must have been doing a Rip Van Winkle for a spell, eh?
Everyone here knows that Tony Heller was given a bad rap, and that he and Paul Homewood were actually way out in front on spotting the perverse chicanery of the government bought warmista data manipulators.
Tell you what sir, why do you not do us a favor and read up on current events before attempting to give lessons on your ignorance.

Ernest Bush
August 29, 2015 8:17 am

Umh…Tony Heller, aka Steven Goddard, programmed some of the climate models that are still being played with today. It was his job at the time. I think he learned more about climate and mathematics than you and your cat along the way. He is not an English or Biology major as so many of the vocal Warmists seem to be and is the perfect kind of individual to be investigating data changes in government data bases. He is currently programming under contract to the government.
While being a skeptic about NOAA and NASA data, he is a fierce defender of the GHE and its effect on our atmosphere and the radiative properties of CO2. Meanwhile, I am supposed to automatically assume he is hosed by a post at a blog frequented by Warmists with their agenda? Reaaallllllllly.

Latitude
August 29, 2015 8:35 am

warrenlb
August 28, 2015 at 7:01 pm
Steven Goddard claims are hosed
====
Warren, are you saying there’s no adjustments to past temperatures?
If you agree that past temperatures have been adjusted…..then the subject of this article “how fast is the earth warming” is nothing more than mental masturbation
The present rate of warming will never be known…. as long as past data is constantly changed

MarkW
August 29, 2015 8:54 am

The trolls are getting feisty this morning. They must know that the game is up.

catweazle666
August 29, 2015 10:32 am

And my cat will leave you for dead when it comes to climate science, warren.
Oh, and she’s quite good at modelling complex systems too, she’s helped me out on numerous occasions, by walking on the keyboard.
Eejit!

BFL
August 29, 2015 10:17 am

@Latitude: Very informative graphs. Also:
desmogblog = short for “the smoggy blog” & definitely and thoroughly polluted.
http://www.thefreedictionary.com/smoggy

pete j
August 30, 2015 9:27 am

I wonder, if you were to determine the CMS (plotted from the CMA) for the 3 historic GISTEMP temperature series cited by Goddard/Heller, whether you could then determine the “acceleration” in the rates of warming that can be attributed to manipulation of data by government scientist-activists over the years as the data “changed”?

DonM
August 28, 2015 6:49 pm

How about graph #5 showing the acceleration….

JimS
August 28, 2015 7:22 pm

I think you might have to ask Michael Mann for that one, DonM.

vukcevic
August 29, 2015 12:48 am

The spectrum of Gistem4 has a number of prominent components, of which the solar magnetic (Hale) cycle is the highest by a whisker.
http://www.vukcevic.talktalk.net/CT4-Spectrum.gif
For some years now I have claimed that the Earth’s warming and cooling is correlated to the magnetic cycles periodicity, and the above research shows that relationship clearly:
http://www.vukcevic.talktalk.net/SC-GTrw.gif
It is not the intensity of the cycles as such, it is the polarity which gives us a clue where to look further. This graphic:
http://www.vukcevic.talktalk.net/E1.htm

August 29, 2015 2:44 am

@ vukcevic August 29, 2015 at 12:48 am
Do you speculate that the magnetic cycles impact the cloud cover? If so, that would affect insolation reaching the ground and hence impact the surface temperature.

vukcevic
August 29, 2015 3:25 am

There are a number of events going on in parallel; it is difficult to say which, if any, may be relevant.
– NASA has shown that the Earth gets a stronger impact from CME’s during even cycles than during odd ones. CME’s do two things: they sweep the solar wind out of the way, allowing an increase in GCR penetration, cloudiness and cooling (Svensmark).
CME’s (mainly high energy protons) electrically charge the upper layers of the atmosphere where the polar vortex operates, affecting its velocity, and under the influence of the Earth’s magnetic field cause the vortex to split (vukcevic)
http://www.vukcevic.talktalk.net/NH.gif
Splitting of the polar vortex moves the Arctic jet stream from zonal to meridional circulation, which in turn is the cause of colder winters in the N. Hemisphere.
– I found that the N. American tectonic plate, on which the western half of the N. Atlantic sits (where the Gulf Stream operates), has magnetic oscillations synchronised in periodicity and phase with the solar magnetic cycles (see the discussion on the recent thread starting here:
http://wattsupwiththat.com/2015/08/26/the-cult-of-climate-change-nee-global-warming/#comment-2015603 )
It is highly unlikely that the sun is the direct cause, but there is a strong possibility that both have the same driver (see the sunspot formula parameters). If the magnetic oscillations of the tectonic plate are the result of mechanical dynamics affecting the sea floor, then it could be postulated that they may change the effectiveness of the Gulf Stream’s transport of the ocean’s heat energy northwards.

Menicholas
August 29, 2015 6:43 am

My cat has some pretty thick whiskers.

vukcevic
August 29, 2015 8:16 am

should have gone in here:
At the age of about 9 or 10 (at the time my village didn’t have electricity) I built a ‘cat’s whisker’ radio with a 30m long aerial.

Michael 2
August 29, 2015 10:39 am

I also built a radio with a cat whisker detector. Seems like it was a “Cub Scout” project. It never did work for me but we had only one weak radio station nearby. Still, it was an interesting activity for an 8 year old and put my feet on a course of technology. I think it lacked a suitable ground and the antenna probably wasn’t long enough.

rgbatduke
August 30, 2015 2:28 pm

Hi Vukcevic,
I do like your spectral decomposition of CRUTEMP4 (you mislabel it in your text) and think that this may be one of the most valuable things you’ve posted. Note that given the length of the record, your peak in the 60 year range may be an artifact and — although I note the same thing and it is apparent in the top article if one looks for it (1937 to 1998 being 61 years) — I would avoid making a big deal out of it. I don’t know how you do detrending and systematic padding at the short end of the stick, but I’m guessing that the high frequency part is pretty good from 3-4 years out to maybe 30 years, so the 22 year peak is probably genuine. I think that the 5 (and 10 year harmonic) peaks are probably real as well — again a cursory glance at the climate data suggests that the autocorrelation time of the global average temperature is roughly 2.5 years — fluctuations with a ~5 year character seem abundant (and the 10 year peak is likely a sign that the 5 year peak is real, and this is just a harmonic). Again, I would be very cautious about attribution — I suspect that this peak is related to very large scale atmospheric cycles including but not limited to El Nino and is closely tied to the “natural” local oscillation time of the atmosphere/surface ocean system but the system is too complex to be certain.
The one thing that AFAIK nobody has done — and I freely offer you the opportunity to be the first — is to take the output from as many individual runs of the CMIP5 climate models as you can stomach and do exactly the same Fourier decomposition of their fluctuation properties. I predict that they will have completely, totally different spectra. Furthermore, I predict from observations I’ve made on at least some of the series myself that their power spectrum will be completely wrong — the temperature fluctuations produced by the bulk of the models will be 2 to 3 times too large compared to the actual fluctuations produced by the actual climate. You might want to do a Laplace transform as well as Fourier to get a measure of not just the harmonics visible but a direct measure of the decay times.
The final step is to take the fluctuation-dissipation theorem and apply it to the result. If the climate models have the wrong oscillatory and relaxation spectra, they are just plain wrong, because the FDT:
https://en.wikipedia.org/wiki/Fluctuation-dissipation_theorem
basically asserts that the way an open system regresses to its local equilibrium state when perturbed is directly connected to the modes it uses to dissipate the energy added to it so that it remains in (near) detailed balance. This stuff is being studied only by a handful of mathematicians and I’m guessing most climate scientists (and a lot of the modelers) don’t even know it exists. The theory itself probably has to be modified for strongly nonlinear systems — I’ve looked at a few papers applying modified versions to e.g. the Lorenz model with at least some success — but I don’t think anyone has done a broad study of applying it to all the CMIP5 models on any basis at all, and I think such a thing would be enormously revealing on a semi-quantitative basis even without worrying about the details of using FDT in a nonlinear system. In particular, I would argue that if a climate model fails to reproduce the harmonic and exponential decay properties of the actual climate on timescales out to 20 years, this is strong evidence that they implement the wrong microdynamics, that they generate the wrong self-organized dissipative structures, that they will incorrectly predict the dissipation modes and likely temperature increase associated with increased CO2 forcing, and that their predictions not only “can’t be trusted”, they should be overtly rejected as too wrong to be of any use whatsoever, back to the drawing board.
To put this in simpler terms, if a climate model has the temperature fluctuating up and down by as much as 0.5 to 0.7 C over a single year (as some of the models in CMIP5 do) where the actual global average temperature shows nothing of the sort in any part of the well-resolved part of the anomaly by a factor of at least two, and where the actual model results being compared are not individual model runs but already average over still larger, still faster oscillations, there is a serious problem with the climate model involved. This is almost completely obscured by the spaghetti graphs that have become standard practice in the climate game. One sees only the general envelope of the tracks and the mind interprets this envelope as the “error of climate models”. It is not. There is no such thing. There is only the difference between one run of one model and the behavior of the actual climate. Sure, in a chaotic nonlinear system one cannot count on having the right long term behavior or identical traces (especially when the model runs are initialized from a state of extreme ignorance) but when the models exhibit quantitatively and qualitatively different spectra in their short period behavior, it is simply direct evidence that the climate model in question is failing to capture or even come close to the meso-scale dynamics of forcing and dissipation.
How, then, can it possibly get a long term response correct?
rgb

vukcevic
August 31, 2015 12:52 am

meant to go here
Dr. Brown
Thank you for your comment and the advice. I agree with your observation about 60 year component, just about hanging on the lowest branch of the frequency spectrum’s limit.
Your suggestions are welcome, and would be a rewarding exercise for an enthusiastic student of applied mathematics. In my case as it happens, time availability and interest in other people’s errors are transitory, so it is likely that I will decline your invitation. On the subject of ‘the most valuable’ of my contribution, I am not inclined to concur, but your opinion and views are this time as always highly appreciated.

Sheldon Walker
August 29, 2015 12:50 am

That’s a good idea, DonM.
I thought about including a graph of the acceleration of temperature in this article, but decided it would only confuse the issue of the rate of warming.
I plan to have a couple of follow up articles. One on the acceleration of temperature, and one on the relationship between CO2 level and the rate of warming.

DonM
August 29, 2015 11:51 pm

…although the moving average “representative guesses” at the ends (as the averaged data is truncated) might be even more exaggerated as you take another derivative. But, without getting too deep into it, I would guess that five years out (with full data) it would not change much. (You could chop it off at five years back to see how the estimated moving average compares with your above truncated moving average.)
And for graph # 6. We could take the third derivative (the Jerk), throw in a bunch of fluffy language and claim that it is representative of something (make up a new theory that a Jerk of a certain size will kick us into the catastrophic change of state) and go after some grant money.
I’m too sleepy to try to tie in a Mann joke.

August 28, 2015 6:51 pm

Pointless using GISS or any of the land or land/sea data for anything but a good laugh. The extent of “administrative” changes has rendered all of them useless.

Michael Wassil
August 28, 2015 6:59 pm

+10
Temperatures will be ‘adjusted’ again next month to make this the hottest August on record; adjusted the following month and each of the following months again to make Sep, Oct, Nov etc the hottest months on record ad nauseam. So the ‘rate of increase’ will remain a moving goal post, central moving averages notwithstanding. LOL.

Robert of Ottawa
August 29, 2015 7:00 am

I believe Steve Goddard has a graph of the rate of warming increase over the years due to the mysterious adjustments.

higley7
August 28, 2015 8:08 pm

Exactly my concern. Why not start with an unadulterated data set, say one based on rural temperature sites or satellite data? Starting with data that essentially wipes out the cooling in the 1970s is to play the warmists’ game and tacitly agree that the planet is actively warming. Remember, the warmists love straight lines and assume a century of future warming, based on nothing, but they do it anyhow. They need it to be.

Menicholas
August 29, 2015 6:47 am

Thank you Alan.
I was wondering why the comment section was stuck on how stupid Warren is compared to his cat, when the article uses “data” which is not data.
It is not even modified data, as if there was any such thing.
The kindest thing one could say about it is that it is a model of how the people who manufactured it think.
But the truth is that it is, at this point in time, nothing but a big fat set of lies.

Albert Paquette
August 28, 2015 7:01 pm

That’s interesting. There seems to be a peak warming year about every 21 years. Is there any reason for that?

Leonard Lane
August 28, 2015 10:12 pm

Probably the smoothing technique introducing false periodicity. See the statistical sites for info on the Slutsky–Yule effect.
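
The Slutsky–Yule effect is easy to demonstrate: smoothing pure random noise manufactures slow, quasi-periodic swings that are not in the data. A minimal, illustrative sketch (the window matches the article’s 121 month CMA):

```python
import numpy as np

rng = np.random.default_rng(0)
noise = rng.standard_normal(2000)        # white noise: no real cycles at all

# 121-point central moving average, as in the article
window = 121
cma = np.convolve(noise, np.ones(window) / window, mode="valid")

# The smoothed series has far less variance, but its power spectrum is
# now concentrated at low frequencies, so it shows slow quasi-periodic
# swings that the raw noise does not contain
spectrum = np.abs(np.fft.rfft(cma - cma.mean())) ** 2
freqs = np.fft.rfftfreq(len(cma))
peak_period = 1.0 / freqs[1:][np.argmax(spectrum[1:])]
print(f"apparent dominant period in smoothed noise: {peak_period:.0f} samples")
```

Any apparent periodicity near or above the window length therefore deserves suspicion before a physical cause is invoked.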

M Seward
August 28, 2015 11:53 pm

Or it’s a solar signal – direct or indirect. Random sea states have a similar phenomenon where the sea level above a datum is normally distributed, i.e. is random, but the successive crest-to-following-trough distance is not; it is Rayleigh distributed. Not sure if that is mathematically similar or just a red herring, but it’s all good food for the brain. Don’t know if you could fatten a climate scientist on this sort of diet though.

Menicholas
August 29, 2015 6:59 am

The Slutsky-Yule effect?
Is that a thing?
I think there may be a joke there, but I am not gonna be the one…leastways not with Christmas just around the corner.

Ian W
August 29, 2015 8:27 am

As vukcevic says above ( http://wattsupwiththat.com/2015/08/28/how-fast-is-the-earth-warming/#comment-2016891 ) this seems to be the Hale cycle. Whoever would have thought it?

Joel O'Bryan
August 28, 2015 7:08 pm

Given that the 97-98 Super El Nino stands out, was there a mid-1930’s Super Duper El Nino?
Those are of course a redistribution of heat, whereby stored equatorial West Pac OHC can get transported via currents to higher lats and convection to the stratosphere for dumping back to 3 K outer space.

Menicholas
August 29, 2015 7:06 am

The mid-thirties one was actually a triple-prodigious, cat’s meow, brobdingnagian, extra-wonderful, most ginormous, dog’s pajamas el nino.

August 29, 2015 11:48 am

@ Menicholas, Run by cats. (It just seems our two cats run our lives, so cats might as well run the climate as well; they are doing it pretty nicely around our area.)

john robertson
August 28, 2015 7:17 pm

Sorry, the Estimated Average Global Temperature is specious rubbish.
To demonstrate this, you could show the error range on your graph.
All the “mathematical manipulation” in the world will not produce data we do not have.
The arrogance of the claimed information of anomalies of 0.2 to 0.7 C is breathtaking.
In audio this signal would be the noise.
Or is the sacred data used here accurate to 0.01C as well?
Enough already with these anomalies, I am sick of this pseudo science.
In all honesty, could one even produce an accurate average global temperature from the satellite data, accurate to 0.1C?
Do we have usable data from 1979?
As in data useful for the purpose of accurately calculating an average temperature for this planet?

FAH
August 28, 2015 8:09 pm

I agree. The first problem is calling the change in the average over global temperatures a “rate of warming.” While that is intuitively pleasing, it is not correct. Many have pointed out that averaging an intensive quantity over a heterogeneous system does not yield a valid thermodynamic quantity. Thinking of other intensive variables clarifies the problem. Examples of other intensive properties are pressure, melting point, boiling point, conductivity, etc. Extensive properties would include mass, volume, length, number of particles, free energy, entropy, etc. Using, say, the example of melting point, one could average the melting point over the climate system, to include the water, land, air, etc., scrupulously accounting for differences due to mineral and soil composition, pressures, and other variables, and come up with an average global melting point. The question then becomes: what use does that number have? If it went down a small amount would the globe be getting “more melty?” It would appear in no meaningful equation describing material behavior. This is because those equations need the intensive property to apply to a local system or uniformly to the entirety of the system being described. But the average melting point would not apply to any specific local evolution, nor to any global evolution if an equation could be obtained for that. The point is that an average of an intensive quantity over a heterogeneous system does not enter into the dynamics of the system in a causal way.
On the other hand, I think a more understandable analogy for an average of an intensive quantity would be a stock market index. The price of a stock could be viewed as an intensive property, somehow describing the activity of the individual stock within supply and demand. An average over the whole of the market or some subset of it could then provide an index, such as DJ, SP etc. which can be historically related to past values and used as an indicator of market dynamics. Historical experience may be used to derive expectations under various assumptions, but the dynamics are always an issue of study. The dynamics would cause the index behavior, but not the other way around. Nevertheless, the index would be a meaningful descriptor of the market just as global average temperature can be a meaningful descriptor of a climate system. However, the index itself would not necessarily be a driver of individual purchase decisions, although that might be possible in a general behavioral approach. The other attribute a stock market index has is difficulty using it alone to predict the future. Many attempts to predict stock market performance based on indices of various types have shown it to be difficult. Further it often happens that one index, say the DJ, goes up while another, say the SP or NASDAQ, goes down or is flat. Small upticks or downticks are relatively meaningless in what they say about the overall market health. A crash or boom can come at any moment and may be unrelated to the details of the index.
Another dubious claim is the error estimate for the global average temperature. Usually these seem to be estimates of how well the individual temperatures are measured, based on the instruments used. But only the individual location measurements are characterized by those errors. The global average temperature (index) is just that, a global average. It represents the quantity characterizing the distribution of temperatures over the globe, not at individual points. The uncertainty of its estimate of the mean is estimated by the standard deviation of the temperature over the globe, not at individual point measurements. For example, I took the UAH version 6 data recently released, which goes back to 1978. It gives the global hemispheric breakdowns of land and ocean temperatures as well as the global averages. For each month given, you can calculate the average over the hemispheres, land and ocean, and obtain the mean. (The weighting may not be just right, but the principle is the same.) For each month you can also calculate the standard deviations for each month. (You could do it yearly if you wish as well). Because there are relatively large differences between hemispheres and between land and ocean, the standard deviations of the means are relatively large, ranging from a low of about 0.2 C up to as much as 1.2 C. The mean standard deviation is about 0.4. The distribution of standard deviations is skewed but the log of the standard deviations appears to be nearly normally distributed. So the accuracy with which the global average temperature represents the temperature data of the globe is more like 0.4C, not even close to 0.01C or whatever is claimed for the temperature anomaly estimates. Ticks up or down of 0.1 C are meaningless. Only if one thinks of the temperature measurements as extensive properties that can be summed does the instrumental error characterize the uncertainty of the global average. 
As in the stock market example, trends would be very hard to distinguish statistically from random walks.
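The point about spread versus instrument error can be sketched numerically. The regional anomalies below are made-up illustrative values (not actual UAH v6 data), but they show how the standard deviation across regions, not the ~0.01 C instrument precision, sets the uncertainty with which the "global average" represents the distribution:

```python
import statistics

# Hypothetical regional anomalies for one month (degrees C). These are
# invented for illustration, NOT actual UAH v6 values.
regions = {
    "NH land": 0.62,
    "NH ocean": 0.31,
    "SH land": 0.18,
    "SH ocean": 0.05,
}

values = list(regions.values())
global_mean = statistics.mean(values)   # the "global average" for the month
spread = statistics.stdev(values)       # spread across the regions

print(round(global_mean, 2))   # 0.29
print(round(spread, 2))        # 0.24 -- tenths of a degree, not hundredths
```

The spread across regions comes out in the tenths of a degree, which is the comment's point about 0.4 C being the more honest uncertainty scale.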

E.M.Smith
Editor
August 28, 2015 8:28 pm

Exactly right. One can NOT average an intensive property and preserve meaning.
ANY average of different temperatures from different places is no longer a temperature and is a bogus thing.
GIStemp does lots of temperature averaging before making a grid box anomaly, so the anomaly canard can be ignored too. And that is based on a monthly average starting number, itself based on daily min/max averages…
It is bollocks from the first input monthly average to the end.
https://chiefio.wordpress.com/2011/07/01/intrinsic-extrinsic-intensive-extensive/

Hoyt Clagwell
August 28, 2015 9:02 pm

I think any rate of warming could only be meaningful if it was limited to a small region of similar climate. Once you average everything globally it loses all meaning. Imagine doing a study of the incomes of 100 people over a year. Assuming they start at the same income, let’s say 10 people experience a 10% cut in pay, 80 people experience no change, and 10 people experience a 20% increase. It would never be correct to say that the whole group experienced an average 1% increase in pay.
Think about that while I try to mix up a batch of universally average paint color.
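For what it's worth, the income arithmetic works out exactly as stated; a minimal sketch (purely illustrative):

```python
# The income example above, worked out: 10 people at -10%, 80 unchanged,
# 10 at +20%. The arithmetic mean change is indeed +1%, yet no individual
# actually experienced a 1% raise -- the average describes nobody.
changes = [-10.0] * 10 + [0.0] * 80 + [20.0] * 10

mean_change = sum(changes) / len(changes)
print(mean_change)             # 1.0 (percent)
print(mean_change in changes)  # False: no one actually got the "average" raise
```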

Robert B
August 28, 2015 11:31 pm

A better example of an intensive property is density. It’s the quotient of two extensive properties, mass and volume. If the material is not homogeneous, you could take many samples evenly throughout to get an average that would be an estimate of the total mass divided by the total volume. How accurate it is would depend on how many samples are taken, how evenly they are spaced, and how inhomogeneous the material is.
In the case of temperature, it’s the total thermal energy divided by the heat capacity. We know that the sampling is dreadful, and only at the bottom of the atmosphere (near solid objects). The temperature at this surface changes by 10-30°C, and the variation around the globe is greater than 100°C. The specific heat capacity is not independent of temperature, air cools as it rises and warms as it falls, and then there is the latent heat in water vapour.
Somehow, labelling the fudging as “homogenization” makes it all OK, and makes the change from month to month pretty close to what you get from satellites for the lower troposphere.

Editor
August 28, 2015 11:57 pm

E.M.Smith August 28, 2015 at 8:28 pm Edit

Exactly right. One can NOT average an intensive property and preserve meaning.

Doc, always good to hear from you. However, as I mentioned about someone else recently, you’re throwing the baby out with the bathwater.
As I understand it, if I follow your logic there is no meaning to the claim that a week in the summer with an average temperature of 80°F is warmer than a week in January with an average temperature of -10°F …
I ask because you say there’s no way to average temperature and preserve meaning. So if someone says to you “Last week the weather averaged below freezing, you’d better put on a coat”, would you reply “Sorry, you’ve averaged an intensive property, that has no meaning at all, so I’m going to wear my shorts outside” …
Or suppose I’m sick, and I take my body temperature every hour. If it averages 104° (40°C) over two days, would you advise that I ignore that average because temperature is an intensive property, and so my average has no meaning?
What it seems that you are missing is that very often, what we are interested in is the difference in the averages. Take the El Nino3.4 index as an example. It is the average of sea surface temperatures over a huge expanse of the Pacific. And as you point out, that’s an average of an intensive quantity. However, what we are interested in are the changes in the El Nino index, which clearly “preserve meaning” … and for that, the intensive nature of temperature doesn’t matter.
And the same thing is true about taking our temperature with an oral thermometer. Yes, it is only an average of the temperature around the thermometer, and we don’t care much what the average value is … but if it goes up by four degrees F, you don’t have to break out the rectal thermometer to know you’ve got problems.
Next, one can indeed average an intensive quantity, and preserve meaning. Let me take the water level in a wave tank as an example. The water level is an intensive quantity.
Now, if we measure the water level at say three locations at the same time, our average won’t be very good, and won’t have much meaning at all. If we measure the water level at 20 points at the same time, our average will be better. And if we use, say, a rapidly sweeping laser that can measure the water level at 20,000 points in a tenth of a second, we will get a very, very good average.
In general, you can measure any intensive quantity to any desired accuracy, PROVIDED that it is physically possible. For example, you might be able to put a hundred simultaneous lasers over the wave tank, for a total of two million measurements per tenth of a second … and you can be dang sure that that average tank level has meaning.
It’s like polling. The opinions of the people of the US resemble an intensive quantity, in that the only way to get a true average would be to ask everyone. BUT we can determine their average opinions, to a given reasonable level of accuracy, by taking something like two or three thousand measurements …
Or take density. Again, intensive. But if we take a cubic metre of a liquid with varying density, and we measure the density of every single cubic centimetre and we average those million cubic centimetres, we will get a very accurate answer. And if that is not accurate enough, just measure every cubic millimetre and average those.
(mmm … 1E+9 cubic mm per cubic metre … assume one second to measure the density of each cubic mm using some automated method … 31.6E+6 seconds per year … that’s about thirty years to do the measurements … looks like we need a Plan B.)
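The parenthetical arithmetic above checks out; a one-line sketch using the comment's own figure of 31.6e6 seconds per year:

```python
# Back-of-envelope check: 1e9 cubic millimetres per cubic metre, one
# second per density measurement, against seconds in a year.
measurements = 1_000_000_000      # mm^3 in one m^3
seconds_per_year = 31.6e6         # figure used in the comment above

years = measurements / seconds_per_year
print(round(years, 1))            # roughly 31.6 -- "thirty years"
```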
Finally, for some kinds of intensive properties there is a Plan B. Sometimes, we can take some kind of a sneaky end-run around the problem, and directly calculate a very accurate average. For example, if we take a cubic metre of a liquid with varying density as in the previous example and we simply weigh it, we can figure the average density to arbitrary precision with a single measurement … despite the fact that density is an intensive property.
And you may recall the story of Archimedes, who around 250 BC was asked by the king to determine if his crown was real gold. As the story goes, when Archy sat down in his bathtub, he saw the water level go up, and he ran through the streets of Syracuse shouting “EUREKA”, which means “I’ve found it!” in Greek.
He’d discovered an accurate way to calculate the average of an intensive quantity, specific gravity. And he obviously knew how important that discovery was.
To summarize:
• While it is widely believed that there is no meaning in the averages of intensive quantities, in fact we routinely both calculate and employ such averages in a variety of useful ways.
• Extensive properties (mass, length, etc.) can generally be measured accurately with one measurement. Take out a tape measure, measure the length, done. Toss the object on a scale, write down the weight, done.
• With intensive properties, on the other hand, the more measurements that we take at different locations, the more accurate (and meaningful) our averages will be.
• As Archimedes discovered, the averages of some intensive properties, such as the specific gravity of a king’s gold crown, can be accurately measured with a single measurement. The value, meaning, and utility of averages of some intensive properties have been understood for over two millennia.
• The changes in averages of intensive properties, such as the El Nino 3.4 index, can contain valuable information provided that the measurements are repeated in the same locations, times, and manners.
Regards,
w.

FAH
August 29, 2015 12:53 am

Willis, let’s look at the water level. While it may be considered an intensive quantity, if we are considering an isolated system, with a fixed volume and with other variables (such as pressure, chemical composition of the water, etc.) constant, the water level and volume of water (or mass of water) are simply related. In fact, proportional. Hence for a fixed, isolated system, the water level is proportional to an extensive quantity. One could make the same argument for the temperature of a fixed volume of water, i.e. the temperature is proportional to the total energy. (Or perhaps in a larger volume of water assumed to be in convective and thermal equilibrium, such as some fixed expanse of sea.)
However, if one has a heterogeneous collection of containers of various shapes and volumes, each would have its own relationship between the extensive property (mass or volume of water, assuming constant temperatures and pressures) and the intensive property, water level. The volumes of the collection of containers could vary from very small to very large, and the shapes could vary so that the relationship between volumes and levels was quite complicated. Then the average over the water levels of all the containers would not be simply related to an extensive quantity such as volume. The level in a small column container could go up dramatically with small additions of water, while the level of a large reservoir would increase only slightly with added water. One would have to convert each level via the appropriate subsystem dynamic to get an extensive quantity, the average of which would then have a ready interpretation.
The statistics of the average water level would be dominated by the variations in the volumes and shapes of the containers, not the amount of water. It doesn’t make any difference how accurately one measures the level in all the various containers; the variation is still dominated by the variation over the containers.
The “meaning” of small changes in the water level is only physically clear if the average is able to incorporate the heterogeneity of the systems being measured.
In the oral thermometer, sea surface, and density examples, the implicit assumption is that the characteristics of the system being measured are well approximated by some known underlying function, such that the intensive quantity being measured is essentially the same across the whole of that system or is related to the whole system by some previously determined analysis. In those cases, small changes do indicate something about the system because the proportionality of the fixed system extensive properties and the intensive properties is valid, i.e. the underlying dynamics are known. This is equivalent to saying that the distributional nature of the numbers for which the average is being calculated is known. But that relationship has to be positively established before uncertainties of the average can be claimed. The relationship between oral and rectal temperatures across body types has been characterized extensively. If the underlying dynamics are not known, then differences in the intensive property have reduced meaning and estimates of uncertainties are, well, uncertain.
The polling example is similar. The utility of a poll is a strong function of the sample composition. An underlying assumption of a poll is that there is some average behavior across the sample to be measured. But if one has a sample comprised of two classes, one strongly against some stated position and one strongly for it, and the pollster simply selects at random from the mixture of both populations, the average result will be that the population is ambivalent to the question. However, if one breaks the system down into components, one can find that subpopulations differ, i.e. the intensive quantity applies to the subsystems to which it is related. This is the equivalent of breaking down a heterogeneous system into systems described by the quantity of interest. One could then talk about the behavior of the subsystems adequately described by their subsystem averages.
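The two-camp polling scenario described above is easy to simulate. A minimal sketch; the +/-0.8 opinion scores, the 0.1 spread, and the sample sizes are invented for illustration:

```python
import random
import statistics

# Two subpopulations with strongly opposed opinions. Sampling at random
# from the mixture makes the overall average look "ambivalent" (near 0),
# while each subpopulation's own mean is far from zero.
random.seed(0)
pro = [random.gauss(+0.8, 0.1) for _ in range(500)]   # strongly for
con = [random.gauss(-0.8, 0.1) for _ in range(500)]   # strongly against

overall = statistics.mean(pro + con)   # near zero: the "average" opinion
print(round(overall, 2))
print(round(statistics.mean(pro), 2), round(statistics.mean(con), 2))
```

The overall mean says the population is indifferent; only breaking the sample into its subpopulations recovers what is actually going on.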

mobihci
August 29, 2015 3:10 am

error margin? what’s that? all you gotta do is fudge, smudge and ‘homogenise’ the crap out of every station you find, and then you have an ‘average’. of course a lot of the raw data may need to be ignored, and you may have to change cooling into warming on many stations to achieve an ‘average’, but in the end, it is worth it. I mean, just think of how many millions that next grant is..
http://jennifermarohasy.com/2015/08/bureau-just-makes-stuff-up-deniliquin-remodelled-then-rutherglen-homogenized/

mkelly
August 29, 2015 5:26 am

E.M. Smith says: “ANY average of different temperatures from different places is no longer a temperature and is a bogus thing.”
——–
Willis, I think ignoring this sentence is in error. You cannot average temperatures from the Yukon and Belize and get any meaning.

Menicholas
August 29, 2015 7:17 am

“Willis I think ignoring this sentence is in error. You cannot average temperatures from the Yukon and Belize and get any meaning.”
I am tending to agree with the skeptics here.
Although sweeping statements can often be parsed and examined to find an example of when the statement may not be true, in general there is a good point to be made here.
As for the Yukon and Belize…which is a fine example…how about if we consider the relevance of averaging the water temperature over the vast open ocean and the interior of the Antarctic continent, and the frozen center of the Arctic ocean, in all of their highly homogenized and guess-worked glory, and throwing those values into a big blender with the surface records?
If one could even get accurate readings from all these places, and then correctly work out in sufficient detail the thermodynamics of the heat content of the various layers and humidity levels…what does it really compare?

MarkW
August 29, 2015 9:14 am

While it is true that trying to average the melting points of many different materials is a meaningless exercise, all the temperature measurements being made are being done using air. (Yeah, I know that they have attempted to merge the sea surface and air temperature readings, and that is pure nonsense.)
If I have a room with 10 sensors reading air temperature, their average is a meaningful number, so long as you properly calculate the error bars.

MarkW
August 29, 2015 9:22 am

mkelly, while averaging the temperature between the Yukon and Belize will tell you nothing about the temperature at either the Yukon or Belize, it is useful if you are trying to determine the temperature of the entire earth. To go back to my earlier example of a room with 10 sensors: each sensor is perfectly accurate for the spot in which it is located, and averaging the temperature of the 10 sensors will add nothing to your knowledge of what the temperature is at the location of each specific sensor. However, if you change your focus to the room as a whole, the average over all of the sensors will inform you as to whether energy is being added to or subtracted from the room as a whole. The trick is to have enough sensors to adequately cover the entire room. More sensors mean the error bars on your average decrease. You start out with the accuracy of each sensor, and then add to the error bars based on the fact that you have less than perfect coverage.
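The "more sensors, smaller error bars" point is the usual 1/sqrt(n) behavior of the standard error of the mean, which a short simulation can illustrate. The 20 C room temperature and 0.5 C sensor-to-sensor scatter below are assumed values, not from the thread:

```python
import random
import statistics

random.seed(1)

def sem_of_room_average(n_sensors, trials=2000):
    """Std. dev. of the room-average estimate across repeated trials.

    Each trial draws n_sensors independent readings around a true room
    temperature of 20.0 C with 0.5 C scatter, and averages them.
    """
    averages = [
        statistics.mean(random.gauss(20.0, 0.5) for _ in range(n_sensors))
        for _ in range(trials)
    ]
    return statistics.stdev(averages)

# The spread of the estimated room average shrinks roughly as 1/sqrt(n).
for n in (1, 10, 100):
    print(n, round(sem_of_room_average(n), 3))
```

With one sensor the room average is uncertain to the full 0.5 C scatter; with 100 sensors it is about ten times tighter, which is the sense in which more coverage shrinks the error bars.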

Editor
August 29, 2015 11:24 am

FAH August 29, 2015 at 12:53 am

Willis, let’s look at the water level. While it may be considered an intensive quantity, if we are considering an isolated system, with a fixed volume and with other variables (such as pressure, chemical composition of the water, etc.) constant, the water level and volume of water (or mass of water) are simply related. In fact proportional. Hence for a fixed, isolated system, the water level is proportional to an extensive quantity. One could make the same argument for the temperature of a fixed volume of water, i.e. the temperature is proportional to the total energy. (Or perhaps in a larger volume of water assumed to be in convective and thermal equilibrium, such as some fixed expanse of sea.) However, if one has a heterogeneous collection of containers of various shapes and volumes, each would have its own relationship between the extensive property – mass or volume of water (assuming constant temperatures and pressures) and the intensive property – water level.

Thanks, FAH. You are operating under the assumption that I was measuring the water level (an intensive quantity) in order to get an estimate of the volume (an extensive quantity).
I said no such thing. I was only interested in getting an accurate estimate of the water level. Here’s an example. Remember that we are discussing Doc Smith’s claim that averages of intensive quantities have no meaning.
Suppose I’m interested in building near the ocean. So I install a tide gauge, to measure an intensive property, the water level. Note that unlike your example, I’m NOT trying to figure out the ocean’s volume.
So I keep the tide station there, and after some years I can average and analyze the measurements and tell you the average water level at my station. It’s an intensive quantity, to be sure … but I hold strongly that that average water level assuredly has meaning, as it allows me to make decisions regarding how high above the (intensive) water level I should build.
You go on to say:

In the oral thermometer, sea surface, and density examples, the implicit assumption is that the characteristics of the system being measured are well approximated by some known underlying function, such that the intensive quantity being measured is essentially the same across the whole of that system or is related to the whole system by some previously determined analysis.

Again, you seem to be under the misapprehension that “the intensive quantity is the same … or related to the whole system”. Not true for body temperature: temperature varies all over my body. Not true for sea surface: the sea surface level varies everywhere.
And regarding density (or specific gravity), so what if the intensive quantity is related to the whole? The same is true of temperature, with the relationship being the heat content of the substance. But that’s not what Doc Smith said. He said that any average of an intensive quantity is meaningless … but Archimedes’ “Eureka” falsifies that claim.
Someone also said I should have commented on Doc Smith’s assertion that:

ANY average of different temperatures from different places is no longer a temperature and is a bogus thing.

Again, I would use the average of the NINO3.4 index. It is an index which is most certainly an “average of different temperatures from different places” … but are y’all really claiming that Doc is right and that the Nino3.4 index is a “bogus thing” with no meaning???
Regards,
w.

FAH
August 29, 2015 11:49 am

Mark, there are two reasons the general notion of using an average of air temperatures needs careful thought, one physical (intensive versus extensive) the other statistical (the meaning of an arithmetic average). Sorry if this discussion is a little long.
When folks think of the physical issue and have comfort averaging over temperatures, they invariably have made the tacit assumption that one is trying to describe a thermodynamically isolated, relatively homogeneous system, such as the room you mention. The tacit assumption is that the room’s temperatures are relatively close together at different locations and the air is at the same pressure and humidity and at equilibrium. Under that tacit assumption, indeed the heat content of the total air is closely approximated as proportional to the temperature and averages over the temperature in the room approximate the narrow distribution of air in the room. Notions of “warming” or “cooling” based on the average temperature have some support. The uncertainty in the average temperature (or heat) is closely estimated by the standard deviation of the temperatures measured. This is essentially morphing the intensive temperature into a good approximation of an extensive property for the particular assumed restricted homogeneous system. But let’s look at what happens when the system under consideration consists of a collection of systems (such as the poles, the tropics, hemispheric lands, and the ocean).
Let’s think of the poles, tropics, hemispheric lands, and ocean as different rooms and we want to know something about their thermodynamics. The air temperatures in each of those locations depend on a variety of thermodynamic quantities including humidity, albedo, convective mixing, aerosols, etc. Call the poles Room P, the tropics Room T, the lands Room H, and the ocean Room O, or just P, T, H and O for short, and think of the air temperatures measured there (or anomalies, it doesn’t make a difference, it just modifies the statistics a little). Let’s say the measured temperature at P is -2, at T +2, at H +1 and at O it is 0.0, all degrees C (these are just hypothetical example numbers, but they make the point). Assume these numbers are individually measured repeatedly with thermometers to a precision of 0.01 C. Now the average temperature is the average of -2, +2, 1, and 0, which is 0.25 C. The standard deviation (more about that later) of these numbers is 1.7 C. The uncertainty with which the average represents the distribution of the numbers (whatever they “mean”) is 1.7 C, not 0.01 C. The reason the averaged numbers do not have a small standard deviation such as the individual temperature numbers is that they have not been individually related to the underlying extensive properties. If the average goes up or down one cannot say what it means thermodynamically unless one knows what the underlying distribution is doing. Hence the average number 0.25 C does not describe any thermodynamic system in any causal way, and the uncertainty is much more than the measurement error of the thermometers. It does not describe the differential relationship between total system entropy and any of energy, volume, pressure, etc. So if the tacit assumption is not true (and it is not for the globe) then averages of air temperatures over the globe do not represent a thermodynamic quantity.
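A quick check of the four-"room" arithmetic: the mean of -2, +2, +1 and 0 is 0.25 C, and the sample standard deviation is about 1.7 C, dwarfing the 0.01 C instrument precision:

```python
import statistics

# The four hypothetical "rooms" from the comment above: poles, tropics,
# lands, ocean, in degrees C. The spread across rooms, not the 0.01 C
# instrument precision, sets how well the average represents the data.
rooms = {"P (poles)": -2.0, "T (tropics)": 2.0, "H (lands)": 1.0, "O (ocean)": 0.0}

temps = list(rooms.values())
print(statistics.mean(temps))             # 0.25
print(round(statistics.stdev(temps), 1))  # 1.7
```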
Now let’s address the statistics issue, i.e. the notion of an average. What is commonly called an average is usually an arithmetic average, i.e. the sum of a subset of some set of numbers, possibly all, divided by the number of measurements. There are other averages, such as a geometric average (the nth root of the product) or a logarithmic average. All of these are simply prescriptions for calculating a number from some set of numbers. Now these prescriptions are, in statistics, generally used as estimates of parameters that characterize a presumed underlying distribution obeyed by the numbers. Arithmetic averages are usually used to estimate the mean of such assumed underlying distributions. Other prescriptions yield estimates of other parameters such as modes, quantiles, variances, etc.
The prescription for calculating an estimate can be followed for any set of numbers. However, the utility of the number lies in its ability to estimate a parameter of the underlying distribution. The nature of that distribution is linked intrinsically to the utility of the estimate. Let’s use an illustrative example. If one calculates an average and standard deviation of the numbers 1,1,1,1,10,10 one gets an average of 4 and a standard deviation of 4.6. The prescription for calculating a standard deviation is built from the deviations of each data point from the mean (the square root of the average of their squared deviations, with an n-1 correction for samples). If the assumption of an underlying Gaussian distribution is true to some extent, then this prescription yields a statistically “good” estimate of the standard deviation of the Gaussian whose mean was estimated by the average. (There is a slight nuance concerning sample size and whether we are thinking of the uncertainty of the estimator versus the underlying distribution parameter, but it is unimportant here.) For the example numbers above, it is obvious that these estimates don’t themselves give a clear picture of the underlying distribution (this is because in the back of our mind we are thinking of a Gaussian distribution). We could not draw the distribution of the numbers using just the average and standard deviation. So the utility of a number from the prescription depends on whether the underlying numbers at least somewhat obey the distribution we have in mind. (Another problem with this little example occurs a fair amount in climate science, such as in averages of precipitation amounts or the number of storms: the numbers are all positive, but that is another story. The usual Gaussian distribution goes from minus infinity to plus infinity. If the numbers are by definition positive, then some other distribution is needed (e.g. binomial, lognormal, etc.)
and estimates of parameters describing those distributions are different.) Inferences drawn (e.g. trends) based on arithmetic averages of such quantities require a careful attention and are usually wrong if done using Gaussian based tools without assuring the underlying distribution is Gaussian on its face or by transformation. If the numbers are far enough away from zero, sometimes a Gaussian is a fair approximation, but otherwise not. Macroscopic temperatures are fairly far away from zero in the natural thermodynamic units of degrees K.
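The 1,1,1,1,10,10 example can be checked directly; the prescriptions run fine, but the resulting mean and standard deviation describe the bimodal data poorly:

```python
import statistics

# The comment's example: the prescriptions always produce numbers, but the
# mean and standard deviation of bimodal data describe no Gaussian anyone
# would recognize in the data itself.
data = [1, 1, 1, 1, 10, 10]

mean = statistics.mean(data)              # 4
sd = statistics.stdev(data)               # about 4.6
print(mean, round(sd, 1))

# No data point is anywhere near the mean: the closest is 3 away.
print(min(abs(x - mean) for x in data))
```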
So to use an average wisely, such as global average temperature or temperature anomaly, requires examining the underlying distribution of the numbers. In my earlier post I looked at constant time global temperature distributions from UAH as an example. The statistical utility of the number is based on how well it estimates the distribution of temperatures. (Note that this is related to, but distinct from its physics utility.)

FAH
August 29, 2015 12:01 pm

Willis, I did not mean to imply that an index such as water level has no utility. The analogy of the stock market indices is an example of an index that is very useful to a number of people. As with the tide gauge data, one can build up a historical set of data that one can empirically and heuristically use as a basis for behavior, such as where to build. Oral temperatures of individuals of various ages have been historically determined to be a good indicator of the human or animal body in its relatively well understood dynamical attempt to maintain equilibrium at a particular ideal temperature. Nothing wrong with that. The difficulty comes when the number is used to infer something about the dynamics of the whole, when the dynamics as a whole is complicated, not in equilibrium, and not well understood in a predictive sense.
For example, one can use the tide gauges at a beach location over 100 years to make a fairly informed decision about what to build locally and where and how to mitigate against the outlying inundations that have occurred. The effects of global sea level rise, local subsidence, erosion, etc. would all be incorporated in the local measurement and one does not really care what happens globally. As we all know all real estate, like politics, is local. But extending that local tide gauge measurement to the globe as a whole needs a lot more effort.

August 29, 2015 1:07 pm

I’m afraid I’m stuck on simple. I’ve been accused of taking simple to the extreme, and that definitely might be said to be the design goal of my 4th.CoSy language.
As Willis recently commented, http://wattsupwiththat.com/2015/08/24/lags-and-leads/#comment-2015825 :

Me, I’m a great fan of very simple models, what I call “Tinkertoy” models. I like them because I can use them to see where the earth does and doesn’t act like the model predicts. …

The thorough quantitative understanding of models significantly simpler than Tinkertoys is the classical method of physics. In http://cosy.com/y14/CoSyNL201410.html , I paraphrased from one of the classics I bought after it was mentioned in a blog:

Goody makes the interesting observation, …, that there are two approaches to understanding planetary temperature, one starting from the measured phenomena and working to explain it. That appears to be the ubiquitous method. But without the other, the traditional analytical understanding of abstracted systems simple enough to be quantitatively confirmed by experiment, one never can be said to understand.

David Appell in particular has spammed my explication of calculating the temperature of a radiantly heated colored ball, observing that a planet is much more complicated than a simple colored ball.
But if you don’t know how to calculate the temperature of a radiantly heated colored ball (which can be easily experimentally tested), and I venture to say many career “climate scientists” don’t, you are blowing smoke to claim you understand anything more complicated.
Averaging temperatures clearly has quantitative meaning. To claim otherwise would imply that it’s just dumb luck that our estimate of the observed average global temperature is within 3% of the temperature calculated by simply summing the energy impinging on a point in our orbit.
As has been pointed out, many intensive properties like temperature and density are intensive because they are ratios with volume or mass already in their definitions. That does not make doing arithmetic with them meaningless. I recently had an exchange with someone so confused by the intensive distinction that he could not understand the purpose of the Kelvin scale, or that it is meaningful to say that one temperature is twice another. Such arguments reduce (Lord Chris, what’s the Latin?) to the impossibility of constructing any temperature scale at all.
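The "within 3%" claim can be checked with the textbook zero-albedo gray-body calculation. This is a sketch of the standard calculation (standard values for the solar constant and the Stefan-Boltzmann constant), not necessarily the commenter's own method:

```python
# Effective radiating temperature of a uniform, zero-albedo sphere at
# Earth's orbit, versus the observed ~288 K global mean surface temperature.
SOLAR_CONSTANT = 1361.0     # W/m^2 at Earth's orbit (standard value)
SIGMA = 5.670374419e-8      # Stefan-Boltzmann constant, W/m^2/K^4

# Intercepted power (cross-section pi*R^2) spread over the sphere's full
# surface area (4*pi*R^2): divide by 4.
t_effective = (SOLAR_CONSTANT / (4 * SIGMA)) ** 0.25
print(round(t_effective, 1))     # about 278 K

observed = 288.0
percent_diff = 100 * (observed - t_effective) / t_effective
print(round(percent_diff, 1))    # a few percent
```

The simple energy-balance number lands within a few percent of the observed mean, which is the commenter's point that the average is not a physically arbitrary quantity.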

FAH
August 29, 2015 1:29 pm

Bob, agree wholeheartedly.
Maybe the issue of locality is useful. In thermodynamics, intensive properties are the local, differential coefficients on the manifold of thermodynamic states, well defined if the system is in equilibrium, more difficult if it is not. As a local differential property defined at a point on an assumed locally continuous manifold, it is always possible to find a small enough region around the point such that the quantity varies little enough to make some approximations. If the system is in equilibrium the region is easier to define (the space is “flatter”) than if the system is not in equilibrium (not “flat”). It may be that over some useful small region, the intensive quantity is nearly constant, in which case an average may be very close to the value at the point. It may be possible to find some well-behaved function that describes the behavior of the differential over the region. In either case, the estimate of the differential (and its variation) over the region may be a good approximation of the thermodynamic intensive property within that region. This may be the case with the Nino index. I have never looked at that index or its calculation so I have no informed opinion on it one way or the other.
So there is no (thermodynamic) problem using an average of an intensive quantity over a region of the thermodynamic manifold sufficiently restricted that the intensive quantity varies little in some sense. In this case the average of the numbers is approximately a thermodynamic intensive quantity. The problem is in trying to obtain a thermodynamically meaningful quantity from values of the local differential intensive properties over a larger region of the manifold, or when the manifold is not in equilibrium. In that case the average of numbers from different parts of the manifold is not guaranteed to be a thermodynamically valid number. In other words the average does not represent the local differential behavior of any part of the manifold, or of the manifold as a whole. It may be that a method can be found to relate the values by integration, transport along paths, or symmetry arguments, but that has to be demonstrated before the averaged quantity can be used as thermodynamically meaningful. That means directly useful thermodynamically. It often is the case that we build up a history of index behavior and seem to see correlations between the behavior and other phenomena (such as model outputs). Nothing erroneous about thinking there may be something there. It only means it is a good topic for study, not “settled science.”
Bottom line is that there is nothing intrinsically “wrong” in using averages over intensive properties. It is only that one needs to be thoughtful about the utility of the number, its relationship to thermodynamic notions, the details of the system under consideration, and what uncertainties in the number represent.

FAH
August 29, 2015 3:32 pm

As an aside, Axel Kleidon has been doing some very interesting work using rigorous thermodynamics to explore the planetary (climate) system. One paper from 2012 focuses on free energy within the planetary system (an extensive quantity). It is titled “How does the Earth system generate and maintain thermodynamic disequilibrium and what does it imply for the future of the planet?” It was in Phil. Trans. Roy. Soc. A but is available online in pdf at
http://rsta.royalsocietypublishing.org/content/roypta/370/1962/1012.full.pdf .
He published another interesting paper in 2011 on wind energy, “Estimating maximum global land surface wind power extractability and associated climatic consequences,” available online at
http://pubman.mpdl.mpg.de/pubman/item/escidoc:1693451/component/escidoc:1693450/BGC1565.pdf
Both are interesting, accessible reads and firmly grounded in thermodynamics. His work is neither pro-AGW nor anti-AGW. It is refreshing to think about the planetary system without the usual baggage of “hottest year evah!” … “no its not!” …. “yes it is!” etc. etc.

Robert B
August 29, 2015 4:21 pm

Each sensor is perfectly accurate for the spot in which it is located,

I find it strange that the mean temperature at a station is expected to remain the same if the temperature in the region is constant. I looked at three stations in my city that are 6-8 km apart on a plain: the airport (AP), the old site in parklands (WT), and the new site (KT), which has a glass building reflecting sunlight onto it in the afternoon. The plot is the difference between each station and the average for the day, with a 5-day moving mean applied to smooth it.
The KT site is usually warmer than the WT site by about a degree in summer, but the three are pretty close to each other in winter. Even if the station doesn’t move, if the area becomes built up or the screen gets repainted, how good is the mean of maximum and minimum temperatures as an indicator of the change in energy of the pocket of air above the station?
Then we rely on homogenised stations to pretend that they never moved, that the area was never built up, or that the screen was never repainted.

August 29, 2015 5:48 pm

This is a good example to test what I’m doing, if you don’t mind.
Calculate yesterday’s rising temperature:
Tmax(d-1) − Tmin(d-1) = Trise
Falling (overnight) temperature:
Tmax(d-1) − Tmin(d0) = Tfall
Day-to-day minimum difference:
Tmin(d-1) − Tmin(d0) = MnDiff
Day-to-day maximum difference:
Tmax(d-1) − Tmax(d0) = MxDiff
I would expect the two day-to-day difference values to be very close, but you might see differences between the rising and falling values (representing UHI).
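If I read the shorthand above correctly, the four quantities can be computed from daily max/min series like this; a hypothetical sketch only, with the field names following the comment:

```python
# Hypothetical sketch of the scheme above: from daily Tmax/Tmin records
# (oldest first), compare yesterday's daytime rise, the overnight fall,
# and the day-to-day drift of the extremes.
def daily_diagnostics(tmax, tmin):
    out = []
    for d in range(1, len(tmax)):
        out.append({
            "Trise":  tmax[d - 1] - tmin[d - 1],  # yesterday's rising temp
            "Tfall":  tmax[d - 1] - tmin[d],      # overnight falling temp
            "MnDiff": tmin[d - 1] - tmin[d],      # day-to-day minimum change
            "MxDiff": tmax[d - 1] - tmax[d],      # day-to-day maximum change
        })
    return out
```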

MarkW
August 29, 2015 7:50 pm

FAH, with enough sensors, the atmosphere can be measured, just as the room is. The same problems you cite in regards to the atmosphere also exist in the room model. We have fans and AC exhausts and returns that stir up the air as well as add or remove energy from the system. We have windows that have different thermal qualities from walls, even with walls you have inside walls and outside walls. During the day the sun shines through the window, unless the blinds are pulled. At night it doesn’t. Then you can have people and electrical equipment that put out heat.
The solution is to either have sufficient sensors, or to calculate the error bars appropriately. The same is true with the atmosphere. With enough sensors you could make a definitive statement regarding the absolute energy content of the atmosphere and how this energy content is changing over time.
Without enough sensors, you need to calculate the appropriate error bars to compensate for the data you do not have.
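The trade-off above between sensor count and error bars can be illustrated with a toy simulation; the “room” here is just synthetic random data, not a physical model:

```python
import numpy as np

rng = np.random.default_rng(2)
# an invented "room": 100,000 spot temperatures with some spread
field = rng.normal(20.0, 3.0, size=100_000)
true_mean = field.mean()

def typical_error(n, trials=2000):
    """Average absolute error of the mean estimated from n sensors."""
    samples = rng.choice(field, size=(trials, n))
    return np.abs(samples.mean(axis=1) - true_mean).mean()

# the error of the estimated mean shrinks roughly as 1/sqrt(n):
# ~10x more sensors buys ~3.2x smaller error bars
errors = {n: typical_error(n) for n in (10, 100, 1000)}
```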

FAH
August 29, 2015 9:14 pm

MarkW, absolutely, with enough sensors the atmospheric energy content can be measured. The question is how many would one need, could all the sensors be temperature sensors, and would the energy content be expressible as a simple sum of the temperatures.
For the case of a habitable room, HVAC systems are designed to hold the relevant quantities of temperature and humidity within comfortable bounds and to mix the air sufficiently so that the distribution of heat is as uniform as possible, despite the vagaries of day/night, amount of window area, inside/outside walls etc. So the HVAC controlled room air is designed to be a small region in a thermodynamic space. The control point is to keep the temperature (and humidity and mixing) within a small set of bounds. Under these conditions, a few temperature measurements (such as done by a few thermostats around the house or room) can fairly adequately capture the energy content. Given that the HVAC system keeps the room at some equilibrium state, the average temperature is a good proxy for the energy content. The uncertainty on the measurement would be the uncertainty on the control system bounds and the uncertainties of the temperature measurements. The temperature measurements could be made as precise as one wished, but the uncertainty of the energy estimate would generally be dominated by the variation allowed by the control system.
For the atmosphere, there is no human designed control system maintaining the system in global or local equilibria. Recall that temperature appears in a partial derivative in general and only in restricted systems in a total derivative. There are a variety of other thermodynamic variables necessary to characterize the energy content of a system. Some of these are humidity, pressure, convection, turbulence, work being done by or on the air mass, etc.
For example, the specific heat of air varies with humidity by about 5 percent or so over atmospheric ranges. Atmospheric pressure and density vary exponentially with altitude, so we need to consider the size of the volume for which we want to measure the energy, and measure the temperature as a function of the altitude desired. The work done on and by flowing air masses varies with the surface roughness, wind speeds, and characteristics determining the turbulence, such as the height of the planetary boundary layer, which essentially sets the Reynolds number of the local flow. Convective flow itself comprises a significant amount of the energy.
So if we want to measure the energy content of some fraction of the atmosphere, we need to measure a lot of things, not just temperatures. Further, although we may be able to measure individual temperatures very precisely, say 0.001 degree C, the uncertainty of the energy estimate will be dominated by the variations in the other thermodynamic variables. A simple average of temperatures alone has large uncertainties as a measure of an extensive quantity like energy content over the globe. Given these uncertainties, there is no way to distinguish between variation in the temperature measurements due to the above-stated factors and the simple spatial variation over the globe, and the strict statistical estimate of the uncertainty is quite large.
I suspect this underlies the penchant for using even local anomalies, although that introduces another statistical quirk since anomalies are actually deviations. The issue of the distribution of temperatures about the calculated average and the import of that for inferences drawn is a part of another long conversation.
This is not to say that a simple average of temperatures is useless, just that it alone is not a thermodynamic quantity. (Unless the system is being artificially maintained in some narrow ranging equilibrium state like a habitable room.) Like the tide gauge measurements a historical record of temperature average can be used as an index and observed and searched for relationships to other things, under the assumptions that the underlying system is either cyclic, static, evolving, or whatever one wants to consider. Correlations to other variables can be examined. It is just how stock market players calculate the big price indexes, measures of debt/earnings, volatility indices, etc. and search the behavior over time for patterns relating them to consumer confidence, durable goods purchases, inventory levels or whatever.
The main problem with the simple global temperature index occurs when it, along with its statistical uncertainty, is thought of as a thermodynamic quantity and erroneous inferences are made.
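The point that temperature alone does not fix energy content can be made concrete with standard textbook constants for moist air; the two parcels below are invented purely for illustration:

```python
# Specific enthalpy of moist air in kJ per kg of dry air, using common
# textbook constants; t_c in deg C, w = water-vapour mixing ratio (kg/kg).
CP_DRY = 1.005    # kJ/(kg*K), dry air
CP_VAP = 1.86     # kJ/(kg*K), water vapour
L_VAP = 2501.0    # kJ/kg, latent heat of vaporisation near 0 C

def moist_enthalpy(t_c, w):
    return CP_DRY * t_c + w * (L_VAP + CP_VAP * t_c)

dry = moist_enthalpy(25.0, 0.005)    # fairly dry parcel at 25 C
humid = moist_enthalpy(25.0, 0.020)  # humid parcel, same thermometer reading
# same temperature, but roughly twice the energy content per kg of dry air
```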

E.M.Smith
Editor
August 30, 2015 12:48 pm

@Willis:
The way an intensive property can be averaged and be useful is to find those things that make it into an extensive property and account for them. So to turn ‘temperature change’ into ‘heat change’ takes more data.
With calorimetry, that is the mass, the specific heat, the heat of fusion and the heat of vaporization (if those phase changes happen), and any changes of composition (as they reflect into specific heat) along with any heat leakage paths.
Do all that, and then you are measuring A THING that is well defined, and the temperature can have meaning after averaging (as you are not really averaging just a temperature anymore; you are averaging a quantity of heat, ‘by proxy’ via the temperature of that specific thing, once qualified for mass, specific heat, etc.).
So go back and look at your examples:
Taking your body temperature: Mass approximately constant. Specific heat approximately constant.
Taking density of water: You specified a specific mass (cubic mile or whatever) and with constant composition.
That’s the whole point. You MUST account for those other factors for any hope of utility to the intensive property average, after using them to find the extensive property that you can average.
A counter example:
Take two cups of water. One is 0 C the other is 40 C. Mix them. What is the final temperature?
The average of the temperatures is 20 C. How useful was that math?
Well, was that first cup frozen water at 0 C or liquid? (water can be in either phase at that temperature).
Were both cups full, or was one 2 g and the other 2000g?
Was one salt water and the other fresh? Or was one D2O?
See the problem?
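A rough calorimetric sketch of the two-cup example (textbook constants, heat losses ignored) shows how far the naive average can be from the physical answer:

```python
C_WATER = 4.18    # J/(g*K), specific heat of liquid water
L_FUSION = 334.0  # J/g, latent heat of fusion of ice

def mix(m1_g, t1, m2_g, t2, cup1_is_ice=False):
    """Final temperature of mixing two water samples (losses ignored).
    If cup 1 is ice, it is assumed to sit at 0 C."""
    if not cup1_is_ice:
        # mass-weighted enthalpy balance for two liquids
        return (m1_g * t1 + m2_g * t2) / (m1_g + m2_g)
    # ice at 0 C: the warm water must first supply the heat of fusion
    heat_available = m2_g * C_WATER * t2   # J released cooling cup 2 to 0 C
    heat_to_melt = m1_g * L_FUSION
    if heat_available <= heat_to_melt:
        return 0.0  # not all the ice melts; the mixture sits at 0 C
    return (heat_available - heat_to_melt) / ((m1_g + m2_g) * C_WATER)

equal_liquid = mix(200, 0, 200, 40)                 # 20 C, the "average" answer
with_ice = mix(200, 0, 200, 40, cup1_is_ice=True)   # 0 C: the average is wrong
lopsided = mix(2, 0, 2000, 40)                      # ~39.96 C, nowhere near 20
```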
That is why just averaging an intensive property gives useless, meaningless results: in every way you could use it for benefit, you must account for the rest of the properties of the things measured, to find (and/or control closely enough) the extensive property you are really averaging.
Now for a single thing you can assume those properties acceptably constant (though even there it isn’t always a safe assumption… if you have ONE cup of water at 0 C warmed to 40 C, how much energy did it take?… you don’t know…), so you can assume that for ONE thermometer outside my front door, if it is averaging 100 F in the afternoon, it’s a darned hot week. But if you have one reading 100 F (somewhere) and another reading 20 F (somewhere else), you cannot say a thing about whether the day is a nice day or not. And the average of 60 F is not very useful.
The problem with averaging temperatures is exactly that. We have a constantly varying set of instruments, in varying places, with varying methods (LIG, electronics, aspirated, latex vs whitewash) measuring a constantly varying air body (density, composition (humidity), specific heat, mass as rain leaves it ) in a constantly varying environment (snow, rain, dry, smoke) and then making as our very first step a monthly min-max average that then gets averaged in with a load of other crappy things.
Nothing is being standardized to make the intensive property useful. There is no real way to calculate the extensive property of heat gain / loss from that basket of heterogeneous intensive measurements and averaging them makes it worse, not better.
What is required is exactly what is required in EVERY real discipline of science and what was thoroughly drummed into me in Chemistry class in high school: Don’t screw around with the thermometer. MEASURE PRECISELY the mass of the material being measured. Know the specific heat, heat of fusion and heat of vaporization. ACCOUNT for ALL mass, phase, and composition changes. Then, maybe, you might be able to do some kind of mediocre calorimetry; but don’t count on it unless it is all in a Dewar flask. (And even then double check the seals and all).
So it isn’t that temperatures are useless, or that you can’t average readings for one thing and one thing only measured with the same instrument over time; it is, as was stated in the comment to which I was responding, they can NOT be heterogeneous. And every single air sample is different from the prior one as “you can never cross the same river twice”.
So if my Max thermometer records 100 F, 99 F, 110 F, 95 F, I can say that the days were generally warm and all of them were at or above 95 F, but I can NOT say the average temperature was 101 F, as that is a meaningless number. It is a statistic about temperatures and not a temperature. So the only right thing you can say is that the mean of the values was 101.0 (note: no F) and that at some time it ranged from 95 to 110, but you don’t know for how long, or how hot it was most of the time (was it a 2-minute peak, or 100 for 12 hours?). And absolutely NOTHING can be said about heat gain / loss, as the key values of mass, composition, specific heat, phase change and the associated heats, etc. are all missing.
Add in a MIN reading and it gets worse. Is the average of a 0 C / 40 C day and a 20 C day “the same”? If 3 feet of snow fell at that 0C time, is it ‘the same’?
Simply averaging heterogeneous temperatures is useless.
(And if you think you have homogeneous temperatures from just using one thermometer at different times, remember that you are not relieved of the necessity to account for variations of mass, specific heat, etc. etc. So I might measure 100 F twice in a row, but one in dry air and the next in rain are two very different things… and one in the shade with one in the sun just as bad… and… Then calling that a proxy for ‘heating’ is just daft.)
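As a footnote to the min/max point: the midrange (Tmax + Tmin)/2 need not equal the true time-average temperature when the daily cycle is asymmetric. The cycle shape below is made up purely for illustration:

```python
import math

# One synthetic day sampled at 15-minute steps: a warm sine bump during
# daylight hours (6 to 18) and a flat cool night.  Entirely invented.
hours = [h / 4 for h in range(96)]
temps = [15 + 10 * math.sin(math.pi * (h - 6) / 12) if 6 <= h <= 18 else 12
         for h in hours]

true_mean = sum(temps) / len(temps)       # the actual time-average, ~16.7
midrange = (max(temps) + min(temps)) / 2  # the familiar (Tmax+Tmin)/2, 18.5
# the min/max midrange overstates this day's average by almost 2 degrees
```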

Editor
August 30, 2015 9:55 pm

E.M., this kind of pitching and tossing of your whole comment is just changing the goal posts. YOU SAID:

Exactly right. One can NOT average an intensive property and preserve meaning.

I gave you several examples where averages of an intensive property have lots of meaning, including averaging ENSO3.4 temperatures, using a laser to measure the surface of a tank of water (or the surface of the ocean for that matter), someone saying “the average temperature last week was -20” and whether that preserves enough meaning to encourage you to put on a coat before going outside, and Archimedes’ discovery of how to do an exact calculation of the average density of the king’s crown, an intensive property.
In response, you come back with this eye-popping opening …
E.M.Smith August 30, 2015 at 12:48 pm
@Willis:

The way an intensive property can be averaged and be useful is …

BZZZT! Wrong answer! You already claimed in capital letters that an intensive property can NOT be averaged and preserve meaning … and now you come back and you want to lecture me on how an intensive property CAN be averaged and preserve meaning?
My friend, that’s what I just got done explaining to you, exactly how an intensive property CAN be averaged and still have meaning, complete with examples. As I explained to you, to get a meaningful average of an intensive property, you need one of two things—either 1) lots of measurements, the more the better, to reduce your error estimate, or 2) some way to take an end run, like Archimedes did.
And you now are repeating those same two ways back to me in different words … save your breath.
Sorry to be so brusque, Doc, because I generally do like your work, comments, and ideas, but I don’t know how to describe you moving the goalposts, and then repeating my words back to me as if it was new information, in any but a negative manner …
w.

FAH
August 31, 2015 2:43 pm

Willis and E.M et. al. in this discussion thread: I think the discussion is needlessly contentious and we all actually agree as long as we are precise in our terms and understanding of what each of us is saying.
First, it is not the case that an average of intensive quantities is meaningless, or useless, certainly not in general. Second, it is also not the case that an average of intensive quantities over a heterogeneous or non-equilibrium system or set of systems is IN GENERAL a well-defined THERMODYNAMIC variable, even though it may retain its utility. Third, it can be the case that for a restricted or well-defined SPECIFIC system or subsystem, an average over intensive quantities is a good measure of the intensive property relevant to that specifically defined system.
To the first point, Willis has given a number of examples in which an average over intensive quantities is useful and indicative of some integrated behavior of the whole system. Water levels are a good example. The historical record of high and low tides over a variety of meteorological conditions at a specific location is very useful as a guide to planning future construction, tidal-surge amelioration infrastructure, and the like. It doesn’t matter whether it is or is not a rigorously defined thermodynamic quantity. We take meaning from the historical record as an indicator of likely future behavior. Even the much-maligned global average surface temperature is useful and not without meaning. If it is calculated consistently and as accurately as possible, it serves as a historical record of itself and provides a useful target for modelers attempting to reproduce the heterogeneous non-equilibrium climate. As long as one keeps in mind, when assessing the uncertainty of the average, to distinguish the measurement uncertainties of the instruments from the uncertainty with which the average approximates a thermodynamic quantity, there is no problem.
To the second point, even though an average of intensive quantities may be accurately calculated and we have historical records indicating its utility as an indicator of the evolution of the system, it is not in general (but see the third point) a thermodynamic quantity. There is no shame in not being a thermodynamic quantity. It is still useful and meaningful. It can still indicate some amalgamated underlying behavior. Not being thermodynamic simply means that the average itself may not appear in a thermodynamic equation based in physics. Alas we do not have equations to represent everything we want, else we would not be having this discussion.
To the third point, if the system under consideration is sufficiently restricted that the intensive quantity is constant enough over the system (meaning its variations over the system are small compared to the underlying dynamics), or the dynamics are represented in some kind of weighting system, then an average of an intensive quantity can be a good approximation of the intensive quantity relevant to that subsystem. Water level in a fixed tank is a good example. It may be that the Nino index is another, in the sense that variations over the sea surface considered are small compared to the dynamics of the system – I honestly know nothing about that quantity. It doesn’t make any difference except for how we want to use it: as a term in a thermodynamic equation, or as an indicator to use for prediction through correlation or through underlying detailed models. It is useful and meaningful in either case.
In summary, depending on the specific case an average over intensive quantities may represent a thermodynamic average intensive quantity or it may not represent an average thermodynamic intensive quantity. In either case, the number can still have immense utility and meaning. In a complicated world, not all numbers can be well defined thermodynamic quantities, but they are still useful. It remains for us to use the numbers wisely and try to avoid arguments when we all agree, at least when we understand what we each are saying.
To emphasize the point, the statement that “An average over intensive quantities has NO meaning” is incorrect. Also, the statement that “An average over intensive quantities is ALWAYS a thermodynamic variable” is incorrect.

Robert B
August 31, 2015 4:28 pm

Nicely written, EM. One suggestion is to use as an example of an intensive property the salinity of sea water. For most of the oceans it is between 3.1% and 3.8% and according to UCSB Science Line, the average is 3.47% or 34.7 parts per mil.
While you can be fairly confident that taking a 1 mL sample will give you a good estimate of the salinity for miles around (provided it’s not right at the surface of still seas, raining, or near the bilge-pump outlet), the land temperature record is worse: it is like taking salinity measurements just at the surface and near coastlines, and then saying that the salinity has gone up to 3.48% and that this is making sea snails less amorous. The data is an indicator, but how much worth does it have?
The temperature record is only an indicator of change but it really should be separate max and min temperatures. The mean of this is meaningless.

richardscourtney
August 28, 2015 9:52 pm

john robertson and FAH:
Yes! And it has been said many times in many places. But few want to know.
Please see this, especially its Appendix B.
Richard

David A
August 31, 2015 5:49 am

I wish to highly recommend EM Smith’s comment here… http://wattsupwiththat.com/2015/08/28/how-fast-is-the-earth-warming/#comment-2017711 which puts in layman’s terms the entire discussion about intensive and extensive properties. It takes clear understanding of a subject to communicate it so effectively.
I disagree with the Willis rebuttal because this is a science discussion, and for Willis to take such a literal and complete interpretation of a subject that by its very nature is always relative is not logical. Clearly E M was referring to the fact that the way the global average surface record is currently calculated does not turn the intensive T readings into an extensive property, and a GMT derived by averaging an intensive property in that way is meaningless. Also, the fact that the examples Willis used turn on, and apply, extensive as well as intensive properties is cogent to what EM Smith initially stated.
The EM Smith post clearly explains the difference between extensive and intensive properties, and that, my friends, is highly valuable.
Thanks for the post.

Editor
August 31, 2015 10:29 am

David A August 31, 2015 at 5:49 am

I wish to highly recommend EM Smith’s comment here… http://wattsupwiththat.com/2015/08/28/how-fast-is-the-earth-warming/#comment-2017711 which puts in layman’s terms the entire discussion about intensive and extensive properties. It takes clear understanding of a subject to communicate it so effectively.

And I wish to highly recommend my reply to E. M. Smith here, which points out that Doc is just parroting back to me what I said to him, and that his “explanation” is in total contradiction to his original statement.
Originally he said:

Exactly right. One can NOT average an intensive property and preserve meaning.

After I pointed out, with a number of examples including Aristotle, that he was wrong and that one CAN average an intensive property and preserve meaning, he replied starting with …

The way an intensive property can be averaged and be useful is …

David, that’s not an “effective communication”. It is a failed attempt to get people to ignore his first statement, and it is merely a restatement of what I’d just said. It seems that that impresses you, so it appears he was at least partially successful in his attempt at misdirection.
w.

August 28, 2015 11:16 pm

If you don’t like anomalies, add a constant.
You realize that all you do when you calculate anomalies is subtract the baseline mean.
Ya.. subtract a constant.
Guess what: do the algebra. Subtracting a constant doesn’t change anything.
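The algebra point above is easy to verify numerically: subtracting any constant baseline leaves a fitted trend unchanged. The series below is synthetic, with invented numbers:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(240)  # 20 years of monthly time steps
# made-up temperature series: small trend plus noise, in deg C
temps = 14.0 + 0.002 * t + rng.normal(0, 0.2, t.size)

baseline = temps[:120].mean()  # a fixed reference-period mean
anoms = temps - baseline       # the "anomaly" series

slope_raw = np.polyfit(t, temps, 1)[0]
slope_anom = np.polyfit(t, anoms, 1)[0]
# the slopes are identical: a constant shift moves the intercept, not the trend
```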

Stephen Richards
August 29, 2015 4:40 am

Steven its not the anomalies that are the problem but the derivation of average temps calculated from manipulated data.

Jim G1
August 29, 2015 7:21 am

If the intent is to propagandize by causing alarm, anomalies, by re-scaling, sure do a great job. When one considers the precision of the equipment, the sampling, and the adjustments to the data, along with some of the statistical gyrations to which they are sometimes subjected, there is little meaning in the results. Charts of actual temperature over time are not nearly as alarming, particularly when we consider that we have been in an interglacial for about 12,000 years.

john robertson
August 29, 2015 9:00 am

Steven Mosher
And this improves the quality of the information available how?
When your constant has an error range of 2 degrees C to infinity, what meaning does a 0.1 C variation in it have?
When your average is a meaningless construct, what do precisely defined deviations from it mean?
What information is being provided?
One of the features of this nonsense masquerading as science, is the deliberate refusal to define the terms.
Climatology is babble by choice, prattling on about the number of angels that can dance upon the end of a pin, does nothing to confirm the existence of said angels.
The false precision of these variations in the estimated global average temperature reminds me of the fabric of the Emperor’s New Clothes.

Editor
August 30, 2015 10:21 pm

Stephen Richards August 29, 2015 at 4:40 am

Steven its not the anomalies that are the problem but the derivation of average temps calculated from manipulated data.

Thanks for the comment, Stephen. The problem with your claim is that almost all climate data used by competent scientists is manipulated in some fashion.
For example, Steve McIntyre discovered a lovely example in (from memory) the GHCN temperature data. It seems that in some African country, for a couple of years, they took their temperature data in °F rather than in °C … and the GHCN didn’t catch it.
Now, when you see data like that, which is the norm rather than the exception in climate science, what do you do? Throw it away? We don’t have enough data to do that, and besides, it was obvious what was wrong. So you convert the °F to °C, and Bob’s your grandma’s illegitimate son … but then, of course, you are using MANIPULATED DATA, oh, no, that’s evil, can’t have that …
Or suppose you’ve changed out your mercury thermometer for a new one in your temperature station. A few years later, after you’ve collected enough daily data, you notice that all your readings seem a bit high. So you check the new thermometer against a standard, and it reads 2° high. You check it again and again over time; it’s always 2° high, some kind of manufacturing error perhaps.
So … would you throw out all that data?
Or would you just subtract 2°C from it and use it? Me, if I’d spent several years getting out there every day and collecting the data, I know I’d vote for the latter … which means that I’d manipulate the data.
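The two fixes just described (unit conversion and a constant bias correction) can be sketched in a few lines; this is a hypothetical toy, not any archive’s actual QC code:

```python
def f_to_c(temp_f):
    """Convert a Fahrenheit reading to Celsius."""
    return (temp_f - 32.0) * 5.0 / 9.0

def correct_record(readings, unit="C", known_bias=0.0):
    """Convert mis-reported Fahrenheit values to Celsius and remove a
    calibrated instrument bias.  A toy sketch of 'data manipulation'."""
    values = [f_to_c(r) if unit == "F" else r for r in readings]
    return [v - known_bias for v in values]

fixed_units = correct_record([212.0, 32.0], unit="F")      # [100.0, 0.0]
fixed_bias = correct_record([25.0, 26.0], known_bias=2.0)  # [23.0, 24.0]
```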
In climate science, the correct question is almost never “was the data manipulated”? Of course it was, at a minimum it will have undergone quality control, which may include things like breakpoint identification, checks for heterogeneity, outlier detection, decisions on how to deal with missing data, the list is long and very data-dependent … but regardless, any scientist would be a fool to just take raw data as is and use it without doing at least that amount of “data manipulation”.
Since there is almost always some type of data manipulation, the relevant question to ask is “Exactly HOW was the data manipulated?” Once you know that, you can discuss whether or not that particular change was justified or not.
In other words, both data manipulations and climate data itself are like spaghetti westerns—some are good, some are bad, and some are ugly. As a result, the oft-repeated mantra of “raw data good, ‘manipulated’ data bad” is a very misleading oversimplification of a much more complex and nuanced situation.
Best regards,
w.

MarkW
August 29, 2015 9:08 am

In 1880, and for many years afterward, thermometers were read to the “nearest” degree C, using the mark-one eyeball. The absolute best error bars you could get with such a reading are +/- 0.5 C. Add in the problems with site maintenance, missed readings, undocumented equipment and location changes, and coverage of less than 5% of the earth’s surface.
The idea that we could tease a signal of just a few tenths of a degree C out of that mess is absolute nonsense.

Mike M. (period)
August 28, 2015 7:19 pm

Sheldon Walker,
You wrote: “there has been a significant drop in the rate of warming over the last 17 years”
But you present no actual evidence to support that claim. You cannot decide whether something has changed without evaluating the uncertainties in the analysis. That you have failed to do, likely because you know nothing about statistics.
The first rule of time series analysis is that you do not average, smooth, or filter data before doing the analysis. You don’t merely violate that rule, you average over the same time period as you fit, pretty much guaranteeing an invalid result.
p.s. I think there may be an exception to the above rule: removing a periodic signal the origin of which is well enough understood that one can be sure it is not relevant to the subsequent analysis. But some people have conniptions over that.
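The warning above about smoothing before fitting can be demonstrated on pure noise: a 121-month centred moving average (the window used in the article) turns independent values into a highly autocorrelated series, so naive regression error bars computed from it are far too small. Synthetic data only:

```python
import numpy as np

rng = np.random.default_rng(1)
noise = rng.normal(size=5000)  # independent month-to-month "anomalies"

k = 121  # the article's 121-month centred window
smooth = np.convolve(noise, np.ones(k) / k, mode="valid")

def lag1_autocorr(x):
    """Sample lag-1 autocorrelation."""
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

raw_r = lag1_autocorr(noise)      # near zero: the raw months are independent
smooth_r = lag1_autocorr(smooth)  # near (k-1)/k ~ 0.99 after smoothing
```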

E.M.Smith
Editor
August 28, 2015 9:55 pm

Since all the surface temperature series start with daily min max averages that are then averaged into monthly averages that then get turned into regional and even a global average, doesn’t that also mean they can not speak to trend either…

Mike M. (period)
August 29, 2015 8:01 am

E.M. Smith,
“Since all the surface temperature series start with daily min max averages that are then averaged into monthly averages that then get turned into regional and even a global average, doesn’t that also mean they can not speak to trend either”
Maybe, maybe not. I do not have the expertise to make that judgement with any confidence, although I do have enough expertise to see the flaws in Walker’s analysis. Science is like that; it is not so much a matter of getting things right as getting rid of things that are wrong.
I think it likely that monthly averages can be justified on the grounds of time scale separation, especially since variance on time scales of up to a week or so is very different from variance on time scales of a few months and longer. The removal of seasonal variation by using anomalies likely falls under the exception I mentioned above; we certainly understand the cause of those variations. We also understand the origin of diurnal variation, but the correct average in that case is unclear. Daytime T’s are usually representative of a boundary layer that can be up to 2 or 3 km deep; nighttime T’s often only represent the nocturnal inversion layer that might be no more than 100 meters deep. So averaging max and min is averaging measurements of two different things. There are many such issues that arise with spatial averaging.
Given the importance that the mainstream climate science community seems to attach to these numbers, it is important that issues such as the above be addressed. It frustrates me that the climate scientists do not appear to have made that effort. I find myself wondering if the numbers are as important as claimed, or if they are just convenient for propaganda.

richardscourtney
August 28, 2015 10:01 pm

Mike M. (period):
I agree. Any data can be processed to show anything.
The climate data are all processed prior to presentation. This provides severe doubt to any conclusions drawn from climate data.
Additional processing of the presented climate data increases the doubts concerning any conclusions – e.g.”rate of warming” – drawn from the additionally processed data.
None of the data indicate anything in the absence of justified estimation of its inherent error range(s).
Richard

firetoice2014
August 29, 2015 5:13 am

The month of December, 2014 is an interesting case study regarding these anomaly products. At the end of November, 2014 the earth had a single global mean surface temperature. At the end of December, 2014 the earth also had a single global mean surface temperature. The difference between the global mean surface temperature at the end of November, 2014 and the global mean surface temperature at the end of December, 2014 is a unique value. However, the change in the global mean surface temperature anomaly, and thus the change in the global mean surface temperature, reported by the three primary producers of global mean surface temperature anomaly products for December, 2014 is not a unique value. GISS reported an anomaly increase of precisely 0.06°C; NCDC reported an anomaly increase of precisely 0.12°C; and, HadCRUT reported an anomaly increase of precisely 0.15°C.
It is possible that the global mean surface temperature anomaly change in December, 2014 was precisely 0.06°C, or precisely 0.12°C, or precisely 0.15°C. However, it is clearly not possible that the anomaly change, and thus the temperature change, was precisely 0.06°C and precisely 0.12°C and precisely 0.15°C. It is certainly possible that the anomaly change was somewhere within the range of values reported by the three primary anomaly product producers. It is also possible that the actual anomaly change was not within that range. It is interesting, however, that each of these disparate anomaly change estimates was just large enough to permit the producer to claim that 2014 was the warmest year on record, even if with less than 50% certainty.

Menicholas
August 29, 2015 7:25 am

“I agree. Any data can be processed to show anything”
The real crime is the number of people who are absolutely fooled into thinking we actually are living in the warmest year EVAH, and are sure of it to a dead certainty, to the point of thinking shutting down our economic lifeblood…our energy infrastructure…is a good idea.
Worse yet…to the point of wanting to tell children in schoolrooms how doomed and f**ked they are.
This is the real crime.

Mike M. (period)
August 29, 2015 8:05 am

firetoice2014 wrote: “GISS reported an anomaly increase of precisely 0.06°C; NCDC reported an anomaly increase of precisely 0.12°C; and, HadCRUT reported an anomaly increase of precisely 0.15°C.”
That is incorrect. The anomaly values are not precise, they are approximate. I am not sure what the error estimates are, but it would not surprise me if they are large enough for the various numbers to be consistent with each other.

David A
August 29, 2015 8:21 am

Yes, Mike, consistent with each other, but entirely inconsistent with UHA and RSS, and nothing in the physics of the IPCC climate models can explain that.

Menicholas
August 29, 2015 8:58 am

“I am not sure what the error estimates are”
Indeed you do not.
No one does.
Because, unlike the scientific method, the climate science method most commonly makes no mention of error bars or certainty levels.
How on Earth would you, or anyone else, know something which is absent from the story?

Brandon Gates
August 29, 2015 5:19 pm

Menicholas,

Because, unlike the scientific method, the climate science method most commonly makes no mention of error bars or certainty levels.

Here’s Hansen, et al. (2010) which describes the entire GISTEMP process, which includes a lengthy discussion of error and uncertainty estimates: http://pubs.giss.nasa.gov/docs/2010/2010_Hansen_etal_1.pdf
Both papers are cited in the FAQ pages for their respective temperature product.

David A
August 29, 2015 10:03 pm

Brandon, the divergence from the satellites has become an overwhelming issue, impossible for the physics contained in the IPCC models to overcome. They could run their models from here to eternity, and they will not have the surface rapidly warming while the troposphere is flat or cooling.

richardscourtney
August 30, 2015 1:13 am

Brandon Gates:
I wrote

None of the data indicate anything in the absence of justified estimation of its inherent error range(s).

In the subsequent sub-thread Menicholas wrote

unlike the scientific method, the climate science method most commonly makes no mention of error bars or certainty levels.
How on Earth would you, or anyone else, know something which is absent from the story?

Menicholas is right because as he says the climate science method most commonly makes no mention of error bars or certainty levels. But you have replied to him saying

Here’s Hansen, et al. (2010) which describes the entire GISTEMP process, which includes a lengthy discussion of error and uncertainty estimates: http://pubs.giss.nasa.gov/docs/2010/2010_Hansen_etal_1.pdf
Both papers are cited in the FAQ pages for their respective temperature product.

And your reply ignores the word “justified” in my original statement that “None of the data indicate anything in the absence of justified estimation of its inherent error range(s).”
The teams you cite each provide 95% confidence limits for their results. However, the results of the teams differ by more than double those limits in several years, and the data sets provided by the teams have different trends. That is a clear discrepancy because these data sets are compiled from the same available source data (i.e. the measurements mostly made at weather stations using thermometers), and purport to be the same metric. This discrepancy demonstrates that the 95% confidence limits of at least one of these data sets are wrong.
So, we know the error estimates for these data sets are wrong but we do not know how wrong they are. And, therefore, the data sets are not meaningful.
Richard

Brandon Gates
August 30, 2015 3:24 pm

richardscourtney,

Menicholas is right because as he says the climate science method most commonly makes no mention of error bars or certainty levels.

Once again: discussion of error and uncertainty is found on the documentation pages for GISTEMP LOTI: http://data.giss.nasa.gov/gistemp/FAQ.html
Both groups published papers in literature describing how they account for and estimate errors and uncertainties in their respective products:
GISTEMP: http://pubs.giss.nasa.gov/docs/2010/2010_Hansen_etal_1.pdf

And your reply ignores the word “justified” in my original statement that “None of the data indicate anything in the absence of justified estimation of its inherent error range(s).”

You repeat a false claim after it has been shown to be wrong. Multiple sources of error and uncertainty are detailed at length in the papers I have twice now linked to.

The teams you cite each provide 95% confidence limits for their results.

More or less yes … for GISTEMP it’s a 2-sigma error bound. The 95% confidence level is 1.96 sigma, which is what HADCRUT4 publishes. And yet you still maintain that this is “most commonly” not done in climate science. Bizarre.

However, the results of the teams differ by more than double those limits in several years, and the data sets provided by the teams have different trends.

One wonders how to do percentage calculations on the difference between two ANOMALY time series with arbitrary baselines.
From 1880-2015, the trends (in C/decade) are 0.065 for HADCRUT4, 0.070 for GISTEMP, difference of 0.005.
For HADCRUT4 the mean annual CI from 1880 to 2015 is +/- 0.108 C, for GISTEMP it is +/- 0.054. Using a 1981-2010 baseline, there are 7 years from 1880 to 2015 where the error envelopes do not intersect: 1932, 1938, 1948, 1949, 1951, 1952 and 1953. In the other 95% of the records, the CIs overlap, which makes a lot of sense if you think about it. The mean annual difference between the two products using the same base period over the same interval is 0.058, the Pearson correlation coefficient is 0.98, and the two-tailed P statistic is 2.5 * 10^-20 … or effectively zero.
In other words, the two datasets are not statistically significantly different from each other at the 95% confidence level.
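[Editor's illustration] The overlap-and-correlation comparison described above can be sketched as follows. The confidence half-widths (0.108 and 0.054) come from the comment, but the anomaly series here are synthetic stand-ins, not the actual HADCRUT4/GISTEMP values, so the counts will differ from those quoted.

```python
# Sketch: given two annual anomaly series on a common baseline, with
# per-year 95% CI half-widths, count the years whose error envelopes fail
# to overlap and compute the Pearson correlation between the series.
import numpy as np

def compare_series(a, b, ci_a, ci_b):
    """a, b: anomaly arrays; ci_a, ci_b: 95% CI half-widths."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    # Envelopes [a-ci_a, a+ci_a] and [b-ci_b, b+ci_b] are disjoint exactly
    # when the gap between central estimates exceeds the summed half-widths.
    disjoint = np.abs(a - b) > (ci_a + ci_b)
    r = np.corrcoef(a, b)[0, 1]          # Pearson correlation coefficient
    return disjoint.sum(), r

rng = np.random.default_rng(0)
truth = np.cumsum(rng.normal(0.007, 0.1, 136))   # 136 "years", 1880-2015
s1 = truth + rng.normal(0, 0.05, truth.size)     # product 1 = truth + noise
s2 = truth + rng.normal(0, 0.05, truth.size)     # product 2 = truth + noise
n_disjoint, r = compare_series(s1, s2, ci_a=0.108, ci_b=0.054)
print(n_disjoint, round(r, 3))
```

Because both series track the same underlying signal with noise much smaller than the signal's variance, the correlation is high and only a small fraction of years have disjoint envelopes, mirroring the pattern the comment reports.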

That is a clear discrepancy because these data sets are compiled from the same available source data (i.e. the measurements mostly made at weather stations using thermometers) …

GISTEMP LOTI uses ERSST.v4, whilst HADCRUT4 uses ICOADS for sea surface temperatures. HADCRUT4 uses some land surface station data not included in GHCN, therefore those stations are not represented in GISTEMP. HADCRUT4 also discards some stations in GHCN entirely due to excessive gaps, whereas GISTEMP infills gaps by interpolation based on neighboring stations.

… and purport to be the same metric.

Both organizations characterize their products as estimates of mean global temperature trends subject to uncertainty and error. They also both routinely compare their own product to others, by final results as well as by methodological differences.

This discrepancy demonstrates that the 95% confidence limits of at least one of these data sets are wrong.

The fact that confidence intervals are being calculated in the first place implicitly tells anyone with the barest inkling of common scientific practice that the results of both products are wrong regardless of whether they’re using the same source data and methods (which they are not). The question of wrongness then becomes one of degree, estimates of which are in fact published, easy to find, and readily accessible. One need only bother to go looking for the details … or at the very least review them once citations have been provided — TWICE no less.

richardscourtney
August 31, 2015 12:25 am

Brandon Gates:
I do not know if your reply to me is obtuse or idiotic. Perhaps you can clarify?
I wrote

The teams you cite each provide 95% confidence limits for their results. However, the results of the teams differ by more than double those limits in several years, and the data sets provided by the teams have different trends. That is a clear discrepancy because these data sets are compiled from the same available source data (i.e. the measurements mostly made at weather stations using thermometers), and purport to be the same metric. This discrepancy demonstrates that the 95% confidence limits of at least one of these data sets are wrong.
So, we know the error estimates for these data sets are wrong but we do not know how wrong they are. And, therefore, the data sets are not meaningful.

You have replied with much irrelevant waffle and conclude saying

The fact that confidence intervals are being calculated in the first place implicitly tells anyone with the barest inkling of common scientific practice that the results of both products are wrong regardless of whether they’re using the same source data and methods (which they are not). The question of wrongness then becomes one of degree, estimates of which are in fact published, easy to find, and readily accessible. One need only bother to go looking for the details … or at the very least review them once citations have been provided — TWICE no less.

NO! The provision of confidence intervals tells everybody except you that the right values are claimed to be probably within the stated error range.
The fact that
“The teams you cite each provide 95% confidence limits for their results. However, the results of the teams differ by more than double those limits in several years, and the data sets provided by the teams have different trends.”
shows that the right values are NOT within the stated error range for at least one of the data sets; i.e. the data are wrong.

And this observation that the data are wrong is not overcome by publication of how the incorrect estimates of the error ranges of the data sets were calculated.
Richard

Menicholas
August 31, 2015 7:19 pm

Mr. Gates,
You seem to be interpreting my use of the phrase “climate science” very narrowly, if you think citing a few examples refutes my point.
I would wager that every skeptical voice on this blog could cite dozens of RECENT examples of press releases and articles which make all manner of claims and never mention uncertainty or error bars.

Brandon Gates
September 1, 2015 12:05 pm

Menicholas,
That your previous statements were ambiguous, and therefore subject to “incorrect” interpretation, is not my responsibility.
This skeptic wagers (nay: observes) that press releases from other scientific disciplines are similarly lax when it comes to mentioning uncertainty or error estimates. Which does not make it right, and in fact I deem it lamentable.

Menicholas
September 1, 2015 2:21 pm

OK, It’s MY fault. It is and has all been my fault, always.
I am not even sure what you just said.
I mean, I can read the words, but I am not grasping the thought you are intending to impart.
In any case, I was wondering…is it just my imagination, or were we deprived of your wonderful company here for some length of time, Mr. Gates?
You mentioned a book…is it a new one?

Lance Wallace
August 28, 2015 7:20 pm

“Calculating the slope over 121 months (the month being calculated plus 60 months on either side),”
You have points all the way to 2015. How did you locate the temperatures for the 60 months after 2015 to include in your regression?
You also show values for 1979 in the satellite series. How did you average in the 60 months of satellite temperatures before the satellites were up?

Menicholas
August 29, 2015 7:36 am

There are three kinds of lies.

Peter Sable
August 29, 2015 7:55 am

You have points all the way to 2015. How did you locate the temperatures for the 60 months after 2015 to include in your regression?

The first and last 5 years of each rate of warming curve has more uncertainty than the rest of the curve. This is due to the lack of data beyond the ends of the curve. It is important to realise that the last 5 years of the curve may change when future temperatures are added.

I think he answered that. He reduces the size of the kernel at the ends. It’s one of the “how to deal with endpoints” methods I’m evaluating.
I don’t like his choice of kernels though. Why a square? Because it’s convenient in Excel? That’s not a good choice. A Gaussian kernel or one of the kernels Loess uses would be more appropriate.
I have all this plus data going back to GISS 2005 release. Curious to see how this analysis changes over time. Will post later.
Peter

Peter Sable
August 29, 2015 7:57 am

Here’s a set of kernels in common use by statisticians. Signal processing folks use different ones, which I always find bizarre:
https://en.wikipedia.org/wiki/Kernel_%28statistics%29#Kernel_functions_in_common_use
Peter

MarkW
August 29, 2015 9:32 am

The problem in my mind is that once you change your method of processing the data, you are no longer comparing like to like. If you can prove that changing the kernel has no impact on the data being presented, they why not use that new kernel to process all the data, since it’s just as good?

MarkW
August 29, 2015 9:30 am

He did say that the graph is less accurate after 2010 due to the fact that we don’t have data points in the future yet.
I agree that the graph should have been truncated since the dropping accuracy means we are no longer comparing apples to apples.

Rbabcock
August 28, 2015 7:20 pm

Great read.. thanks for the effort.
Up next, how about a rate of rate of warming?

August 28, 2015 7:39 pm

2.5 C per century. That’s, let me check, century = 100 years, aka .025 C per year. Bogus!!! Where do you locate daily/annual measurements of that precision? Nothing but statistical hallucinations, lost in a cloud of +/- instrument error/uncertainty. And nothing about these temperature curve fits ties back to CO2 concentrations.
IPCC has no idea what percentage of the CO2 increase between 1750 & 2011 is due to industrialized mankind because there are no reliable measures of natural sources and sinks. Could be anywhere between 4% and 196%.
The 2 W/m^2 additional RF due to that added CO2 per IPCC is lost in the 340 W/m^2 ToA +/- 10 W/m^2 incoming and the -20 W/m^2 of clouds.
IPCC AR5 Text Box 9.2 admits their models didn’t capture the pause/lull/hiatus/stasis and have no idea how to fix them.

Menicholas
August 29, 2015 7:40 am

“Could be anywhere between 4% and 196%.”
I think you need to extend the percent range down to zero percent (and perhaps into negative territory: since the warmist hypothesis is obviously incorrect, it opens up the possibility that those who posit a net cooling may, in fact, be right). What do we have to indicate that at least 4% of anything is directly attributable to CO2 increases?

Louis Hunt
August 28, 2015 8:23 pm

When you said that “The entire temperature series is used,” I expected you to calculate an overall rate of warming for the entire period. I didn’t see any such calculation. I’m curious what the rate of warming would be when calculated or averaged over the entire time period from 1880 to 2015.

TonyL
August 29, 2015 12:54 am

Here is what I get.
http://i58.tinypic.com/2q36tkg.png
Intercept = -0.6961 deg. C.
Just exactly 1.00 deg/century. This sounds a bit low, but we remember that the GW narrative is ~0.8 deg C over the time from 1880 to 2000, or so. So maybe it is OK, after all.
ALL: Just because I plotted this up and calculated a trend does not imply that I think this means anything at all. Particularly in regards to planet Earth.

TonyL
August 29, 2015 6:52 am

I did not notice this when I first posted the graph.
(It is a big graph, click to embiggen)
The fitted line crosses some grid lines at two neat points. First is -0.50, 1900, nice and neat (within a pixel or so). The second is +0.50, 2000, nice and neat. 1.00 degree from -0.50 to +0.50 and 100 years from 1900 to 2000. All nice and neat and tidy.
This is 1627 data points with lots of noise, disparate data sets, adjustments and corrections. And it just happened that way.
I get the feeling I am being laughed at. I think somebody is mocking us.

Menicholas
August 29, 2015 7:44 am

Is it embiggen, or enbiggen?
Sophisticated minds need to know!
Maybe I had wax in my ears the night that episode of The Simpsons aired.
http://www.kottke.org/07/06/embiggen-cromulent
Just goes to show…you can learn something new EVERY damn day!

Menicholas
August 29, 2015 7:45 am
BFL
August 29, 2015 10:34 am

@TonyL: Just eyeballing the graph, one could discern about 0.5 deg from 1880 to 1940, then flat to about 1980, then another ~0.5 to 2015, or about 1 deg over 135 yrs. Why can’t the models do this? Perhaps no understanding of what’s going on?

TonyL
August 29, 2015 10:59 am

@ BFL
You are right, but I think people know altogether too well what is really going on. One has to go back in time to the pre-internet era to get a clear idea of what was going on. Back in those ancient days, information was recorded by the use of indelible stains applied to a material produced from dead trees. Information was transmitted by physically transferring the processed material from person to person. This system, as primitive as it was, had the great advantage of permanence within reason. Information thus recorded, was quite resistant to change, and attempts at edits would leave obvious tell-tale signs. Some of those records do survive today, and they tell quite a different story. Here is one of those records, from WUWT:
There are many other similar records, and they all show the same thing. There was a strong cooling trend for four decades, from the mid 1930s to the mid 1970s.
Original post here:
http://wattsupwiththat.com/2010/03/18/more-on-the-national-geographic-decline/
H/T Willis.
It seems Winston Smith, working at the Ministry Of Truth, has edited the past here.

September 1, 2015 12:13 pm

GISS’ “data” are a fabrication.
In reality, the world warmed in the late 19th century a fraction of a degree, then cooled. It warmed in the early 20th century (c. 1920-40) a fraction of a degree, then cooled from about 1945-77 so notably that scientists feared the next ice age was coming. Then it warmed perhaps a fraction of a degree from c. 1977-96, perhaps aided by clearer skies from anti-pollution efforts. It’s now cooling again.
The total warming since the end of the LIA in the mid-19th century might be as much 0.8 degrees C, of which most occurred before CO2 really took off after WWII. Recall that the response of the planet to that rise was over three decades of chilly cooling.

September 1, 2015 12:32 pm

The warming since c. 1850 could also be quite a bit less than 0.8 degrees C. No one can know, since it’s effectively impossible to measure the average temperature of the surface of the earth even now, let alone in the past, when seawater was collected in buckets.
My own guess is around 0.7 degrees C, of which 0.4 or 0.5 occurred before WWII. The late 19th and early 20th century warmings were two steps forward (warmer), followed in both cases by one or more steps back (cooler). The mid-20th century cooling was so pronounced that it almost cancelled out all the strong warming during the 1920s and ’30s.
Odds are good that the coming cool spell will similarly cancel out most of whatever warming might actually have occurred in the late 20th century.

SAMURAI
August 28, 2015 9:17 pm

Once the CAGW hypothesis is laughed and eye-rolled onto the trash heap of failed ideas in about 5~7 years, real scientists without agendas will have to dig through GISTEMP and HADCRUT raw data and fix all the contrived “adjustments” that have been made.
Until that difficult process is completed, it’s almost senseless to use GISTEMP and HADCRUT temp data for anything other than objects of ridicule…

Chris Hanley
August 28, 2015 9:56 pm

I was going to say that.
The next five or so years will be conclusive.
If the satellite data trend shows no sign of the exponential increase, so necessary to the narrative, the show’s over.
Plaintiff lawyers should be sharpening their pencils.

MarkW
August 29, 2015 9:35 am

It is my personal belief that we will see actual cooling over the next 10 years.

Menicholas
August 31, 2015 7:26 pm

Ditto me on that cooling trend Mark.
And I think the eyeroll is the perfect response to the inanity of the warmistas.
Toss in a good eyelash flutter for emphasis too.
The problem, or course, is that the inanity is the least of it.
This BS is serious shite nowadays.
The US economy and much of our energy infrastructure is being regulated out of business.

August 28, 2015 10:00 pm
Dawtgtomis
August 29, 2015 8:13 am

Excellent, Will.
In these mass-ignorant, pessimist times,
The truth is more poignant but fun when it rhymes!

E.M.Smith
Editor
August 28, 2015 10:01 pm

GIStemp is not temperature data. It uses GHCN and some USHCN to make averages in grid boxes via data averaging and smearing and adjusting, then makes anomalies between those grid boxes, most of which contain no actual thermometer at all…
It fabricates fantasy numbers in boxes, not temperatures.

SAMURAI
August 28, 2015 11:50 pm

E.M.Smith– Everyone knows UAH, RSS, HADCRUT, GISTEMP, etc., are temperature “anomaly” data sets.
Few people go through the tedious task of writing out “anomalies” when referencing them, as it’s understood.
I’m just sayin’….

firetoice2014
August 29, 2015 8:16 am

SAMURAI,
GIStemp and HADCRUT are anomaly calculations, but they are not DATA sets, even though they began with data sets. Once the data are “adjusted”, they cease to be data, but are merely estimates of what the data might have been, had they been collected in a timely manner from properly selected, calibrated, sited, installed and maintained measurement instruments.

E.M.Smith
Editor
August 30, 2015 1:40 pm

:
GIStemp presents temperature graphs for locations. It presents temperatures for zonal and hemispheric and global temperatures, and it carries the data as temperatures through multiple stages of adjusting, averaging, and worse to make the final processed data-food-product. (I’ve ported the code, back about 2009, to Linux, and run it, and read all of it…). Only in the very last stages does it make an anomaly of ‘grid boxes’ and not of temperatures from thermometers.
Since, last I looked, they had upped the number of boxes to about 16,000 but there were at MAX 6000 thermometers and in many decades less than 1200: By Definition the bulk of their ‘anomalies’ are calculated between two “grid boxes” that contain no actual thermometer and no actual temperature.
To call that a ‘temperature anomaly’ is as much a farce as calling it a temperature.
It is, at best, an anomaly of two fictional values created by a questionable means from prior frequent averaging of intensive properties based on a non-Nyquist sample, that have themselves already been reduced to a monthly average of intensive properties. I.e. IMHO garbage. And not at all a temperature OR a ‘temperature anomaly’ as temperatures ‘left the building’ with the first monthly average making GHCN and were completely assassinated by the time two physically empty ‘grid boxes’ have an anomaly created between their fictional values.
But you can call that a ‘temperature’ if you like…. I’m sure “it’s understood”…
I call that the “anomaly canard”, as folks put it forth as though the very first step were turning the temperatures into anomalies (at which time the complaints about averaging temperatures would evaporate), but they do not in GIStemp. They carry temperatures AS temperatures through to the very end, doing all their math on them AS temperatures. THAT is not an “anomaly” based process, is not using anomalies, and is not producing “anomalies”. It produces fictional temperatures in grid boxes. Then makes an “anomaly” between the fictions… that does not produce a ‘temperature anomaly data set’. It produces a distorted averaged temperature fiction used to make an anomaly of voids.

TonyL
August 28, 2015 10:08 pm

Same question as Lance Wallace, above.
Using a 121 month filter, you run out of data at July, 2010. To go beyond that point, some special technique must be used. This might be extrapolating a trend to 2020, or collapsing the filter down from 121 to 11 months, month by month, or some other method.
How did you handle the “end point problem”?

MarkW
August 29, 2015 9:38 am

From the comments in the article, I’m assuming that he just truncates the leading edge of his 121 month filter. In other words, he continues to use 60 months into the past, but just 59 months into the future, then 58 months, then 57 months, and so on until he gets to the present and is using 0 months into the future. Hence his comment about decreasing accuracy after 2010 and possible changes to the curve as new data points are added. If you want to compare like to like, you have to stop your analysis at 5 years from the beginning and ending of your data.
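[Editor's illustration] The shrinking-window reading described above can be sketched in a few lines; this is one interpretation of the post's endpoint handling, not the author's actual code.

```python
# Centered moving average over up to 121 months (60 on each side) that
# shrinks the window wherever the series runs out of data, rather than
# truncating the output. Interior points get the full 121-month window;
# the very last point averages only the final 61 months.
def shrinking_cma(values, half=60):
    n = len(values)
    out = []
    for i in range(n):
        lo = max(0, i - half)          # up to `half` points in the past
        hi = min(n, i + half + 1)      # fewer future points near the end
        window = values[lo:hi]
        out.append(sum(window) / len(window))
    return out

# Toy monthly "anomalies": a repeating 12-month cycle, 20 years long.
series = [float(m % 12) for m in range(240)]
smoothed = shrinking_cma(series)
```

This makes MarkW's caveat concrete: the first and last 60 output points are averages over progressively smaller windows, so they are noisier and will shift as new data arrive.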

August 28, 2015 10:58 pm

This is what I calculate from the day to Dat difference between yesterday’s rising temp minus last night’s falling temp
These are the 72 million surface records for stations with 360 samples per year.

4TimesAYear
August 29, 2015 12:00 am

Hey….I really like this chart!

August 29, 2015 4:30 am

If you take the day-to-day change and plot it out for the northern extratropics, it’s a sine wave; if you take the slope at the two zero crossings, it’s the rate of seasonal temperature change.
The Northern Hemisphere is better sampled.
This shows the rate it’s warming and cooling over the years.

Menicholas
August 29, 2015 7:56 am

I like day to Dat better.
It says it all.

MarkW
August 29, 2015 9:39 am

Dis, dat and the udder thing.

Mike
August 29, 2015 12:25 am

This may be interesting and valuable, Mike, but without some explanation of _exactly_ what you are plotting it is meaningless to me. I am unable to interpret what this is showing and what it may mean about climate.
What does “last night’s falling temp” actually mean? Is this an average rate of change? Over what period, how many readings? Is it a linear regression, mean, or what?
You also totally fail to say what “temperature” you are talking about or what your data source is.

August 29, 2015 4:44 am

My chart is data from the NCDC Global Summary of the Day data set. For each day, by station, I calculate yesterday’s rising temp as Tmax − Tmin = rising, then falling as Tmax minus the following morning’s Tmin. Then difference = rising − falling. Each station included has greater than 360 samples per year, so that if there’s no trend it returns to zero. This is 72 million samples from around the world.
When you do this by continent, there are large swings at different times. Rising − falling is the same as yesterday’s min − today’s min. If you do the same with Tmax, yesterday’s max − today’s max, it is very flat.

Mike
August 29, 2015 7:16 am

I calculate yesterday’s rising temp as Tmax -Tmin = rising, then Tmax – Tmin from the following morning =falling. Then difference = rising – falling.

Thanks for the reply. So if I follow your verbal explication correctly and use more precise mathematical notation, I get:
rising = Tmax(n) − Tmin(n)
falling = Tmax(n) − Tmin(n+1)
rising − falling = (Tmax(n) − Tmin(n)) − (Tmax(n) − Tmin(n+1)) = Tmin(n+1) − Tmin(n)
i.e. you are in effect calculating the daily change in Tmin.
If that is wrong, could you please give a clear mathematical description of your calculation; words are not clear.
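[Editor's illustration] The identity Mike derives can be checked numerically with toy values; the Tmin/Tmax numbers below are made up.

```python
# With rising(n) = Tmax(n) - Tmin(n) and falling(n) = Tmax(n) - Tmin(n+1),
# the difference rising - falling collapses to Tmin(n+1) - Tmin(n):
# Tmax cancels, leaving only the day-to-day change in the minimum.
tmin = [14.2, 13.8, 15.1, 16.0]   # toy daily minima, deg C
tmax = [25.0, 24.5, 26.3, 27.1]   # toy daily maxima, deg C

for n in range(len(tmin) - 1):
    rising = tmax[n] - tmin[n]
    falling = tmax[n] - tmin[n + 1]
    # The identity holds regardless of the Tmax values chosen.
    assert abs((rising - falling) - (tmin[n + 1] - tmin[n])) < 1e-12
```

The cancellation of Tmax is the whole point: whatever the daytime peak was, the "rising minus falling" quantity only measures how the overnight minimum moved.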

August 29, 2015 7:59 am

You have it right.
It’s also equal to
Tmin(n) − Tmin(n+1)
It’s all calculated for every station one day at a time; it’s an anomaly using the specific station’s prior day, so it is as accurate as possible, and since it’s also a rate of change (delta F/day) you can use it that way as well.
The chart averages to slight cooling, 50 of the 74 years are cooling, 30 of the last 34 years are cooling.
Since the measurements are +/-0.1 F, if you use that as the uncertainty the average for the last 74 years is 0.0 +/-0.1F
And the same operations done with Tmax (Tmax(n) − Tmax(n+1)) is much closer to zero.
This does not mean summers are not warmer, just that any extra heat is lost in the fall. And it seems to me that that is the signature of CO2 warming, and it is not evident in the surface record.

Menicholas
August 29, 2015 7:59 am

If I am interpreting this correctly, what it means is that if you wave your arms around in a big windmill fashion while talking real fast and using some big words and colorful charts…it is very impressive.

August 29, 2015 8:09 am

Right, 4th grade math is fancy arm waving with big words.
Let’s compare that to all the arm waving and interpolating data 1,000 km away.

Menicholas
August 29, 2015 8:54 am

Honestly micro, I thought you were goofing him…my bad.

August 29, 2015 9:12 am

No problem, I’m protective I guess. I think it’s the best use of the surface data, and it shows that at the actual stations nothing catastrophic is going on; it’s all going on in the post-processing of the data.

Steve Garcia
August 28, 2015 11:05 pm

I am pretty sure that the two warm periods correlate well with the Pacific Decadal Oscillation cool phases and that the cool periods of this/these curve(s) correlate well with the PDO warm phases – including the less-warming period beginning around 2000. This is damned near exactly what was predicted back in the very early 2000s – that the phase change (regime change) in the PDO would cause the warming to slow and probably reverse and had already begun to change.
I LIKE this method.
It is VERY significant that the high point of warming/century was in 1937. We’ve been beating them over the head about the 1930s for over a decade now, and they keep pretending that the 1930s didn’t happen.
Pretend science isn’t science. When you have to pretend that inconvenient data is not there, you are not an f-ing scientist; you are a priest of The Church of Preaching to the Choir.

4TimesAYear
August 29, 2015 12:01 am

Someone told me the 30’s hotter temps were just for the U.S. Apparently not.

Menicholas
August 29, 2015 8:03 am

Funny how, with jet stream winds blowing air masses all around the globe a couple of times every week, all year long, we could nonetheless have one continent be warmer for an entire decade without affecting the rest of the globe.
Very strange indeed.
And so coincidental, that this just happens to be the place with the most complete and comprehensive temperature records in the whole world.
Rolling the eyes hardly captures the disbelievability of this level of sophistry.

August 29, 2015 8:16 am

It’s quite possible if you consider that ocean surface temps could induce an alternate jet stream path, changing the boundary between tropical air and polar air.
You can see this over the US: most of this summer the Midwest has been under Canadian air, as opposed to tropical air.
That’s a 10 or 15 °F swing in temps.

Menicholas
August 29, 2015 9:12 am

The heat during the thirties was virtually coast to coast for not one or two, but many, years.

MarkW
August 29, 2015 9:41 am

A shifting jet stream would heat some areas and cool other areas for as long as the jet stream remained “shifted”.

August 29, 2015 10:42 am

Only if the area doesn’t change, and it seems like it could easily do both.
I live just south of Lake Erie. The 60’s and early 70’s were cool; most homes and cars didn’t have air conditioning. That could easily have been because the jet stream was south of us, and the hot 80’s and 90’s could easily be explained by the jet stream being north of us.
Then, with all of the post-processing of surface temps, that could be all the warming in general was.
There does seem to be a regional difference in min temperature at that time between the US and Eurasia.
And the ocean cycles could be a source of forcing on the path of the jet stream. In fact, we know the PDO/El Niños change where the jet stream hits land coming off the Pacific, and its track over the western half of the continent.

Dt, not just he USavid A
August 29, 2015 10:07 pm

1930s and early 40s NH and global T

Dt, not just he USavid A
August 29, 2015 10:31 pm

The 1930s early 40s were globally warmer according to NOAA
From Tony Heller post…. Note only using NOAA’s own charts…
In 2001, NASA reported 0.5C warming from 1880 to 2000, with an error bar of less than 0.2C.
Now they show 1.3C warming during that same time interval, an increase of 0.8C, and nearly tripling the amount of warming. The changes they have made are 400% of the size of their 2001 error bars – a smoking gun of fraud.
But it is worse than it seems. NASA had already done huge amounts of data tampering by 2001, erasing the 1940 spike shown in the NH AND globally.
From: Tom Wigley
To: Phil Jones Subject: 1940s
Date: Sun, 27 Sep 2009 23:25:38 -0600
Cc: Ben Santer
It would be good to remove at least part of the 1940s blip, but we are still left with “why the blip”.
That adjusted record eventually became the record this post used, claiming record warmth at the surface while the satellites (verified by thousands of weather balloon reports) are .3 degrees (a huge margin) below 1998, which 100 percent of all climate models run from here to eternity could never duplicate.

Gloria Swansong
August 30, 2015 2:49 pm

Dt,
It’s not just a trillion dollar, megadeath fraud, but a global criminal conspiracy.

Menicholas
August 31, 2015 6:56 pm

Thanks Dt, not just he USavid A , (what up wit’ dat handle, meng?)
I was going to get around to posting this blog post by Tony Heller. I do not really have time to monitor all the different places I am commenting. This site is only one of many, and climate only one of the topics I am given to commenting on. Right now the stock market takes up a lot of my time (as does my full time job and keeping my cats bathed).
I wanted to post it further up-thread as part of a counterpoint argument to one Mr. Gates, who, incredibly, disputes the notion that climate science is not so scrupulous about minding error bars and certainty levels.
Maybe he (and Mr. Courtney?) will notice here and respond.
What the hell good are error bars or confidence intervals if subsequent revisions ignore them?

August 28, 2015 11:47 pm

Does this mean they lied when they said “the science is settled” ?

M Seward
August 28, 2015 11:58 pm

No, “the science is settled” is just shorthand for “la la la la la la la la la…”

Gary P Smith
August 29, 2015 6:34 am

If “the science is settled”, shouldn’t they stop asking for more grant money to study the science of climate?

Menicholas
August 29, 2015 8:06 am

OK, Mr. Smith, stop trying to be all logic-y about our money and our science!

Patrick
August 28, 2015 11:57 pm

Following the logic of warmists, I am left wondering if it’s all my fault. I was born in 1937 and retired in 1998!

August 29, 2015 2:53 am

You rotten bastard! I intend to start a class action suit against you to recover all damages due to global warming. (by the way, just how did you do it all by yourself? Are you one of those X-men or something?)
🙂

Menicholas
August 29, 2015 8:07 am

LMAO!

MarkW
August 29, 2015 9:42 am

Let me guess, his super hero name is Thermos, the man of heat.

toorightmate
August 29, 2015 5:22 am

You young blokes still have a lot to learn.
Is a 150 year or 70 year time span climate? Or is it simply weather?
Isn’t a 150 year time frame nothing more than a single point on a climate chart?

August 29, 2015 6:12 am

IPCC AR5 glossary defines climate as weather averaged over thirty years. For what that’s worth.

Menicholas
August 29, 2015 8:09 am

150 is a tiny point on a geology chart, it is true.
For climate, thirty years is the defining period, as others have noted.

MarkW
August 29, 2015 9:43 am

Have they ever justified why they use 30 years? Or was it just picked out of a hat?

Menicholas
August 31, 2015 7:39 pm

I am guessing that at some point someone needed to pick an interval which was long enough to smooth out a lot of the noise of erratic weather, but make it short enough that existing records could be used to do systematic classification of the world’s climatic zones.
It may have been Koppen, or one of his contemporaries.

Dt, not just he USavid A
August 29, 2015 10:34 pm

Patrick drove a Lincoln, before he bought a Suburban, which he traded in for a Hummer.

Menicholas
Reply to  Dt, not just he USavid A
August 31, 2015 7:42 pm

Why do I have the feeling that a dirty joke just went right over my head?

August 29, 2015 12:00 am

This all assumes of course that past rates of warming or cooling have a relationship with future warming or cooling. As this has been totally disproved with CO2 and claims for solar cycles and other theories remain unproven, the assumption that you can calculate how much it is currently warming (or even make a call on whether it actually is warming) is a giant leap of faith with no basis in science whatsoever!

Mike
August 29, 2015 12:43 am

Firstly, moving averages are crap as a low-pass filter, which is basically what you are trying to do here. Please read and understand the following:
https://climategrog.wordpress.com/2013/05/19/triple-running-mean-filters/

4) The first and last 5 years of each rate of warming curve has more uncertainty than the rest of the curve. This is due to the lack of data beyond the ends of the curve. It is important to realise that the last 5 years of the curve may change when future temperatures are added.

No, the first and last 5 years are invalid. This is not much better than M.E. Mann’s padding. If you don’t have data to do the average you stop. Period.

3) This method can be performed by anybody with a moderate level of skill using a spreadsheet. It only requires the ability to calculate averages, and perform linear regressions.

A pretty lousy argument for applying bad mathematical processing. If you don’t know how to do anything beyond a mean and trend in Excel, maybe you should learn before writing. You could adopt a triple running mean as suggested in the above article.
The biggest error in all this is that you cannot meaningfully take the average of sea and land temperatures, so most of the datasets you have chosen are bunk to start with. To accept them and use them as the basis for analysis is just playing along with the alarmists who are producing falsified warming.
UAH and RSS are atmospheric measurements and not a bastard mix, so they are not directly comparable, but you don’t even point this out in the article.
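For reference, the triple running mean from the linked climategrog article cascades three running means with windows each smaller by a factor of about 1.3371, which largely cancels the negative side-lobes that make a single running mean a poor low-pass filter. A minimal sketch of my reading of that article (not Sheldon’s method):

```python
import numpy as np

def running_mean(x, w):
    """Centered running mean over a window of w samples.
    The output is shorter than the input by w - 1 points."""
    return np.convolve(x, np.ones(w) / w, mode="valid")

def triple_running_mean(x, w):
    """Cascade three running means, each window smaller by ~1.3371x,
    suppressing the side-lobes of a single running mean."""
    w2 = max(1, round(w / 1.3371))
    w3 = max(1, round(w / 1.3371 ** 2))
    return running_mean(running_mean(running_mean(x, w), w2), w3)
```

Note that the result is honestly shorter than the input: no values are invented at the ends, which is exactly the complaint about running a filter past the end of the data.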

August 29, 2015 3:19 am

When I was doing post grad work in mathematics (decades ago now), I had a professor tell me that we should be teaching all students, from high school on, statistics and logic. He predicted that having computer programs do all the work of statistics for us would lead to fools thinking they were “doing statistics” when they had no clue what they were doing and no clue what could be logically done. But if the answer came out of a computer, then they would think it must be so! I think that my late professor made a very good prediction. (He was an applied math guy who knew computers and was not biased against their use, either.)
Also consider that many of the main alarmist “scientists”, who are third-rate intellects at best, do all kinds of statistical analysis without ever consulting a statistician. How the hell does a paper relying on statistics to make extraordinary claims make it past peer review if there is no statistician on the team that is trying to publish? How is that possible?
And how come there is no logician anywhere who will speak out about the utter lack of logic in many of the ideas and predictions of the warmist camp? (Heck, and many of the luke-warmers to boot.)
Where is Mr. Spock when you need him? Or Karl Popper, even.

j ferguson
August 29, 2015 1:19 pm

Mark:

How the hell does a paper relying on statistics to make extraordinary claims make it past peer review if there is no statistician on the team that is trying to publish? How is that possible?

Isn’t that exactly how it’s possible?

jferguson
August 29, 2015 8:59 pm

Oops. .. because there is no statistician on the review team either?

August 30, 2015 2:43 am

I don’t think one needs to be a statistician to realize that if extraordinary results are asserted via statistical means that one needs statisticians to review the claim, methods, and data collection. So, even non-statistician editors should be able to see when a statistician is needed and crucial.

Mike
August 29, 2015 12:46 am

This all assumes of course that past rates of warming or cooling have a relationship with future warming or cooling.

No it doesn’t, it looks at the available data. If someone is projecting this into the future it is in your own head.

August 29, 2015 5:40 am

+1
Statistical analyses have no predictive, or attributive, capabilities. Those belong to the statistician.

Mike
August 29, 2015 12:53 am

If you want to use a meaningless mix of land and sea temps, why don’t you use the British HadCRUT temp data? It’s now the only one that is not applying a BS pause-busting adjustment.
I would suggest looking at SST and land averages separately, using a better filter with a shorter period. That would give a similar degree of visual “smoothness”.

Pa
August 29, 2015 1:16 am

Graph 1 shows that the global temperature is increasing. How do you get this information from Graph 2?

hunter
August 29, 2015 1:34 am

How fast is the Earth warming? Try “trivially”, “insignificantly”, “marginally”, “barely”, “illusorily”.

PeterF
August 29, 2015 1:36 am

How come the moving average series is just as long as the data series?
It should be shorter by 60 months on both ends, or 120 months on one end. Does not instill confidence.

Menicholas
August 31, 2015 7:48 pm

I am 97% sure that they have 95% confidence in their intervals.

Terry
August 29, 2015 1:59 am

The earth is not warming, as both satellite temperature data sets confirm. All three terrestrial data sets (HadCRUT4, GISS and NCDC) have been manipulated to give the impression of warming. The earth is heading for an ice age; UK scientists at Reading, Southampton and Northumbria universities confirm this is the case. Also Abdussamatov at Pulkovo Observatory, St Petersburg, and the Mahatma Gandhi Institute of Astronomy and Technology. Terri Jackson

Chris Thixton
August 29, 2015 2:07 am

Shouldn’t the question be “How fast is the climate changing?”. Answer: Nobody actually knows.

richard verney.
August 29, 2015 6:19 am

The climate is not changing (at any rate, not so far to date), since climate is the mix of a number of different parameters, each constantly changing over a wide band, the width of which is set by natural variation.
Temperature is just one of the many parameters, and the change of 1/3 to 1 deg C is well within the bounds of natural variation.
As soon as one accepts that climate is dynamic and constantly changes, then mere change alone is not in itself evidence of climate change. It is just what climate does.
Climate is regional. So, for example, is the climate in the US materially different to that seen in the 1930s? Where is the evidence that it is? I have not seen any produced.
As far as I am aware, in my lifetime not a single country has changed its Koppen classification, and those countries which were on the cusp of two climatic zones when the list was first produced are still on the cusp and have not crossed the boundary into a new climate zone.

August 29, 2015 8:48 am

Richard: do you have reference to work relating to Koppen changes over time? This would indeed be interesting, especially in locations near the original boundaries.

richard
August 29, 2015 5:24 am

The WMO gives urban data a zero for quality. 3% of land is urbanized, yet 27% of the temp stations are in urban areas. So straight off, 27% of the temp data is of zero quality.
Africa is one fifth of the world’s land mass. The majority of the African temp data is from urban areas. So that’s Africa out of the loop.
Add in the vast areas of the world where there are no temp stations.

Basil
Editor
August 29, 2015 5:25 am

Several observations.
1) I’ve always thought the rate of change in temperature was a more significant parameter than temperature itself. I would approach this more directly, before applying any smoothing (like CMA here), by using 12 month first differences.
2) Almost any smoothing method will provoke controversy. I like Hodrick-Prescott, but I get much the same pattern with a 36 month centered moving average. The longer CMA used here is going to smooth out some significant shorter periodicities, which are seen clearly in Vukcevic’s spectrum analysis above (at 12:48). In defense, finding periodicity was not the objective here. But it does lead to the next observation.
3) Once the temperature data is stated in terms of rate of change, I’ve always been intrigued by the apparent homeostasis in the data. Obviously there are physical processes constraining rates of change in temperature: when the rate of change gets too high, it falls, and vice versa. How well do we understand the physical processes that account for this?
4) There is still an obvious upward trend/slope in the rate of change. How much of this is real, and how much of it is imagined? By “imagined” I mean here the result of constant data massaging that may be motivated by a desire to demonstrate a particular conclusion (like “there is no pause”). Some of the real upward trend undoubtedly owes to warming from natural causes. Can we really extract an anthropogenic cause after allowing for that?
5) As to the final conclusion –“So, the next time that you hear somebody claiming that Global Warming is accelerating, show them a graph of the rate of warming.”– this post hasn’t disputed that. (See Point #4.)
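The 12-month first differences mentioned in point 1 are simple to compute; a sketch of my own (not Basil’s code), applied to a raw monthly anomaly series:

```python
import numpy as np

def twelve_month_first_difference(anomaly):
    """12-month first differences of a monthly anomaly series:
    d[i] = T[i] - T[i-12], a year-on-year change in deg C per year,
    computed directly on the raw data before any smoothing."""
    a = np.asarray(anomaly, dtype=float)
    return a[12:] - a[:-12]
```

Multiplying the result by 100 converts deg C per year into the deg C per century units used in the article.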

co2islife
August 29, 2015 6:34 am

Once the temperature data is stated in terms of rate of change, I’ve always been intrigued by the apparent homeostasis in the data. Obviously there are physical processes constraining rates of change in temperature: when the rate of change gets too high, it falls, and vice versa. How well do we understand the physical processes that account for this?

Yep, Nature has built-in safety valves. H2O is the main moderator. H2O evaporates, absorbing heat; it rises and condenses, releasing heat to the upper atmosphere. More heat, more H2O, more clouds, less sunlight reaching earth to warm it. O3 also traps heat and alters the jet stream. Etc., etc.

lgl
August 29, 2015 5:47 am

“So, the next time that you hear somebody claiming that Global Warming is accelerating, show them a graph of the rate of warming.”
Why bother? Like the author they will probably not understand that the graph shows that Global Warming is accelerating. Somewhere around 1 C/century^2.

MarkW
August 29, 2015 9:55 am

Considering the fact that we only have 30 years worth of usable data, how can you be so confident of the long term rate of warming, especially considering all we have learned in recent years regarding decade and century long trends in climate data?

Dt, not just he USavid A
August 29, 2015 10:38 pm

The entire troposphere (except for the corrupted surface) is .3 degrees cooler than 1998.

August 29, 2015 5:56 am

I am waiting for peer-reviewed research that shows the optimum climate for our biosphere. The first question that would naturally flow from it is where our current climate and trend stand in relation to this finding.
Strangely, nobody seems interested in this vital comparison. Not so strangely, the solutions that are frequently demanded in the most urgent voice, all converge on a socialist worldview: statism, bigger government, higher taxes, less personal liberty, even fewer people. That bigger picture tells me all that I need to know about “climate science”.

richard verney.
August 29, 2015 6:27 am

Even Phil Jones (in an interview for the BBC) accepted that there was no statistical difference in the rate of warming between the early 20th century warming period of 1920 and 1940, and the modern era/late 20th century warming period between late 1970s and ending in 1998.
Accordingly, it is common ground, even with warmists, that the rate of warming has not accelerated between the time when CO2 is said to have driven most of the observed warming (i.e., late 1970s to about 1998) and the time when manmade emissions of CO2 were too modest to have driven the warming (1920 to 1940).
I cannot recall, but Phil Jones might have accepted that the late 19th century warming had a statistically similar rate as that seen in the warming periods of 1920 to 1940, and the period late 1970s to 1998.
The fact that the rate of warming in these 3 warming periods is similar is strong evidence that CO2 is not significantly driving temperatures.

August 29, 2015 9:21 am

Lindzen frequently made the same point. See essay C?agw. Cuts to the heart of the attribution issue.

MarkW
August 29, 2015 9:57 am

I’ve talked with a number of warmists who proclaim that it doesn’t matter what caused the 1920 to 1940 warming, because we know that the current warming is being caused by CO2, the models prove that.

Menicholas
August 31, 2015 8:53 pm

Imagine if courts of law used such sophistry as evidence of crimes?
Anybody could be convicted of anything they were accused of, as being accused is proof of guilt in and of itself.

Gloria Swansong
August 29, 2015 11:15 am

Further evidence is the fact that earth cooled during the first 32 years of the postwar surge in CO2, as it again has during the continued rise since c. 1996.

co2islife
August 29, 2015 6:30 am

Why this is so damning:
1) CO2 has a relatively linear rate of change (ROC). The rate of change of temperature is highly non-linear. The same analysis can be applied to sea level and the results will be the same.
2) CO2 has its greatest impact at the lower CO2 levels. As the concentration of CO2 increases, its W/m²/ppm decreases. CO2 would show a much greater impact on the ROC of temperature when it increased from 180 to 250 than from 250 to 400. The 1900 to 1940 ROC seems about the same as 1945 to 1980.
3) If this analysis is applied to Vostok ice core data which has steps of 100 years, you will see that the ROC variation between 1880 and 2015 is nothing abnormal, in fact it will likely fall at the low range of the scale. Even if you just use the Holocene it still won’t fall outside the norm.
4) CO2 in no way can explain the rapid decreases, negatives, or pauses in the ROC. The defined GHG effect driven by CO2 is a doomsday model. CO2’s increase is linear, man’s production of CO2 is not linear, yet temperature has to be linear under the GHG effect as defined by the warmists.
BTW, where did the data come from between 1830 and 1880? The 1880 shows a 100yr ROC of -0.5°C. Where did the data come from to get that number? Is the 100 years for the 1880 number 1830 to 1930, or is it 1780 to 1880? If it is 1830 to 1930, how is the 2015 value calculated?
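The diminishing per-ppm impact in point 2 can be illustrated with the commonly cited simplified forcing expression F = 5.35·ln(C/C0) W/m² (an assumption on my part; the comment does not specify a formula):

```python
import math

def co2_forcing(c_ppm, c0_ppm=280.0):
    """Simplified CO2 radiative forcing: F = 5.35 * ln(C/C0) W/m^2."""
    return 5.35 * math.log(c_ppm / c0_ppm)

# Forcing gained per additional ppm shrinks as concentration rises:
per_ppm_at_250 = co2_forcing(251.0) - co2_forcing(250.0)  # ~0.021 W/m^2
per_ppm_at_400 = co2_forcing(401.0) - co2_forcing(400.0)  # ~0.013 W/m^2
```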

Robert of Ottawa
August 29, 2015 6:53 am

This is starting from a corrupted data set. Also, UAH and RSS data sets are too short to perform meaningful analysis.

firetoice2014
August 29, 2015 11:32 am

Actually, it is starting from a corrupted temperature anomaly set, since data ain’t data post-“adjustment”, but only estimates.

Menicholas
August 31, 2015 8:57 pm

Post adjustments, the only thing that is estimated by the data sets is how much the warmista data manipulators estimate they can get away with…so far.

Mike
August 29, 2015 8:03 am

Here is the rate of change of HadCRUT4 , mixed land+sea “average” anomaly, using a couple of well-behaved filters.
Firstly, we can note that the apparent trough around 1988 in Sheldon’s fig 2 is a figment of the imagination, due to using a crappy running average as a low-pass filter. Once again please note, folks: RUNNING MEANS MUST DIE.
Secondly, the downward tendency at the end has stopped by 2008, and we don’t have enough data to run the filter any further. The continued trend in Sheldon’s graph is a meaningless artefact of running a crap filter beyond the end of the data.
Unless you wish to get laughed at, it would be best not to show his Graph 2 to anyone, except as an example of the kind of distortion and false conclusions that can happen with bad data processing.
Finally, please note that taking the “average” of sea temperatures and land near-surface air temperatures has no physical meaning at all. This was just a less rigged dataset than the new GISS and NOAA offerings. You can’t ‘average’ the physical properties of air and water.

Mike
August 29, 2015 8:11 am

On the other hand, what the above rate of change graph does show is that the accelerating warming (steady upward trend in rate of change) that had everyone in a panic in 1999 has clearly not continued since. The link to ever-increasing atmospheric CO2, and the suggestion of “runaway” warming and tipping points, are clearly also mistaken.

Peter Sable
August 29, 2015 3:56 pm

Sheldon’s fig 2 is figment of the imagination

Or it is a figment of him using GISS and you using HadCRUT. Try changing one variable at a time…
In general I support your criticism, but I’d rather see the argument done correctly…
Peter

Gloria Swansong
August 29, 2015 4:01 pm

Both are ludicrous fictions in the service of a criminal conspiracy.

August 29, 2015 8:41 am

Again, cherry picking the data, because if one goes back to the Holocene Optimum the question is: how fast is the earth cooling?
Since the Holocene Optimum 8000 years ago, the earth has been in a gradual overall cooling trend which has continued up to today, punctuated by spikes of warmth such as the Roman, Medieval and Modern warm periods.
The main drivers of this are Milankovitch Cycles, which were more favorable for warmer conditions 8000 years ago in contrast to today, with prolonged periods of active and minimum solar activity superimposed upon this slow gradual cooling trend, giving the spikes of warmth I referred to above and also periods of cold such as the Little Ice Age.
Further refinement to the climate comes from ENSO, volcanic activity, and the phase of the PDO/AMO, but these are temporary earth-intrinsic climatic factors superimposed upon the general broader climatic trend.
All the warming the article refers to, which has happened since the end of the Little Ice Age, is just a spike of relative warmth within the still overall cooling trend, due to the big pick up in solar activity from the period 1840-2005 versus the period 1275-1840.
Post 2005, solar activity has returned to minimum conditions, and I suspect the overall cooling global temperature trend which has been in progress for the past 8000 years will exert itself once again.
We will be finding this out in the near future due to the prolonged minimum solar activity that is now in progress post 2005.

MarkW
August 29, 2015 8:58 am

I’d really like to see error bars put on those graphs.
The idea that we knew what the earth’s temperature was, within a few tenths of a degree C, back in 1880 is utterly ludicrous. Given the data quality problems, equipment quality problems and the egregious lack of coverage, the error bars are more likely in the range of 5 to 10 C. The error bars have improved somewhat in recent decades, but they have at best been halved.
When the signal you are claiming is 1/2 to 1/5th of your error bars, you are doing pseudoscience. And that’s being generous.

MarkW
August 29, 2015 9:02 am

Heck, the “adjustments” to the data are greater than the signal they claim to have found.
Junk from top to bottom.

Peter Sable
August 29, 2015 4:01 pm

I’d really like to see error bars put on those graphs.

Because the analysis transform is somewhat complex, you’d have to do that in the form of a Monte Carlo simulation. I doubt you can do that in Excel. You need a real tool, e.g. Matlab, R, etc.
Even that is difficult, because you’d have to know what size and distribution the errors should be. They may not be Gaussian, as the underlying distributions of temperature measurements (in space and time) are highly autocorrelated.
Peter
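A minimal sketch of the kind of Monte Carlo Peter describes, propagating an assumed independent Gaussian per-point error through a trend fit (illustrative only; as he notes, real errors are autocorrelated and likely non-Gaussian, which this deliberately ignores):

```python
import numpy as np

rng = np.random.default_rng(0)

def trend_with_error_bars(t, y, sigma, n_sim=1000):
    """Monte Carlo error bars for a least-squares trend: perturb every
    point with noise of standard deviation sigma, re-fit, and report
    the mean and spread of the simulated slopes."""
    slopes = np.empty(n_sim)
    for i in range(n_sim):
        slopes[i] = np.polyfit(t, y + rng.normal(0.0, sigma, y.shape), 1)[0]
    return slopes.mean(), slopes.std()
```

Swapping the Gaussian draw for correlated noise is exactly where the hard part, and the bigger error bars, come in.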

Brandon Gates
August 29, 2015 6:08 pm

Peter Sable,
… references this paper: Mears, C.A., F.J. Wentz, P. Thorne and D. Bernie (2011). Assessing uncertainty in estimates of atmospheric temperature changes from MSU and AMSU using a Monte-Carlo estimation technique, Journal of Geophysical Research, 116, D08112, doi:10.1029/2010JD014954
… along with the calculated uncertainty and error estimates on a GRIDDED basis by MONTH if you’re feeling especially masochistic.

MarkW
August 29, 2015 7:56 pm

If you don’t know the accuracy of the data you are using, then you aren’t doing science.

Peter Sable
August 30, 2015 8:28 am

Assessing uncertainty in estimates of atmospheric temperature changes from MSU and AMSU using a Monte-Carlo estimation technique,

Nice, thanks, I’ll have to track this down.
Here’s another one for you that’s potentially Yet Another Big Hole in CAGW. This paper (Torrence and Compo) uses an assumption of red noise (alpha = 0.72) to test whether fluctuations in SST are random at any particular frequency (the null hypothesis is that they are random, and they test against that). They manage to find an ENSO signal using this method, but reject all other signals from the SST record.
Now take this same idea from Torrence and Compo and see if you can find a warming trend that exceeds the 95% confidence interval of alpha = 0.72 red noise. Here’s a preview hint: the confidence interval goes through the roof the lower the frequency of the data, which means, since trend is the lowest-frequency component, all temperature signals are far below this confidence interval. In my early, unpublished replication, the only two signals in GISS that I can find above the 95% confidence interval are 2.8 years, which is roughly “once in a blue moon”, and the 1 year seasonal cycle. Which means all the “warming” going on is just random fluctuation: the null hypothesis.
Peter
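Generating the AR(1) “red noise” null series that Torrence and Compo test against is straightforward; a sketch of my own, with the alpha = 0.72 mentioned above:

```python
import numpy as np

rng = np.random.default_rng(1)

def red_noise(n, alpha=0.72, sigma=1.0):
    """AR(1) 'red noise' null series: x[t] = alpha * x[t-1] + white noise.
    Its lag-1 autocorrelation converges to alpha."""
    x = np.zeros(n)
    eps = rng.normal(0.0, sigma, n)
    for t in range(1, n):
        x[t] = alpha * x[t - 1] + eps[t]
    return x
```

Comparing the spectrum of a real series against an ensemble of such draws is the essence of the significance test being described.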

Peter Sable
August 30, 2015 9:26 am

As the model of measurement and sampling error used here for land stations
has no temporal or spatial correlation structure,

Ugh, Morice, Kennedy et al. assume way too much normal distribution with no correlation.
There’s lots of spatial correlation, even across 5 degree grids as well as inside grids. My Monte Carlo experiments indicate that the std error is 2.4x that of a Gaussian distribution when a surface is autocorrelated. The distribution is also slightly skewed, to the high end…
They also assume that adjustments have a Poisson distribution, are not autocorrelated, and have a zero mean. They might well be correlated, both with themselves and with the surrounding grid. I think it’s been clearly shown that adjustments do not have a zero mean…
They should validate their “not correlated” and “zero mean” assumptions. There are well-known techniques for doing this, but they didn’t use them in the paper.
GIGO.
Peter

Brandon Gates
August 30, 2015 9:28 pm

Peter Sable,

Nice, thanks, I’ll have to track this down.

You’re welcome.

[Torrence and Compo (1998)] manage to find an ENSO signal using this method, but reject all other signals from the SST record.

I skimmed it, don’t see where they reject all other signals.

In my early, unpublished replication the only two signals in GISS that I can find above 95% confidence interval is 2.8 years, which is roughly “once in a blue moon”, as well as the 1 year seasonal cycle. Which means all the “warming” going on is just random fluctuation, the null hypothesis.

1) Why have you not published?
2) As this level of math is well above my pay grade, please explain to me how a wavelet analysis is feasible, or even desirable, when the hypothesized driving signal isn’t periodic… and even if it were, has not completed a full cycle?

As the model of measurement and sampling error used here for land stations has no temporal or spatial correlation structure,

Ugh, Morice Kennedy et. al. assume way too much normal distribution with no correlation. There’s lots of spatial correlation even across 5 degree grids as well as inside grids.

The way I’m reading that, they’re saying that the error model has no temporal or spatial correlation, not that the data themselves lack it. From other readings, literature is chock full of discussing spatial correlations in the observational data — it’s my understanding that GISS’ homogenization and infilling algorithms rely on it.
The section you quote cites Brohan (2006): http://onlinelibrary.wiley.com/doi/10.1029/2005JD006548/pdf
… and Jones (1997): http://journals.ametsoc.org/doi/pdf/10.1175/1520-0442%281997%29010%3C2548%3AESEILS%3E2.0.CO%3B2
Perhaps those will clear it up for you … but I’m beginning to suspect that you’ll only find more GIGO … 🙂

They also assume that adjustments have a poisson distribution and are not autocorrelated and have a zero mean.

This one I’m certain you misread: To generate an ensemble member, a series of possible errors in the homogenization process was created by first selecting a set of randomly chosen step change points in the station record, with each point indicating a time at which the value of the homogenization adjustment error changes. These change points are drawn from a Poisson distribution with a 40 year repeat rate.

I think it’s been clearly shown that adjustments do not have a zero mean…

Clearly not, I’m quite sure they’re aware of that, and they’re certainly not claiming they do.

They should validate their “not correlated” assumptions and “zero mean” assumptions. There are well known techniques for doing this but they didn’t use them in the paper.

In both cases I think you’re conflating characteristics of observational data with things that are not.

August 29, 2015 10:34 am

GISTEMP is entirely a work of science fiction, useless for any actual scientific purpose. It’s designed as a political polemical tool, not a real data series based upon observation.

Curious
August 29, 2015 12:15 pm

Why is the GISTEMP construction used instead of just the RSS and UAH numbers? I can understand why a reconstruction would be used for pre-1979 data, but what sense does it make to claim July was the hottest on record when the RSS/UAH data say that July was pretty average?

Brandon Gates
August 29, 2015 1:00 pm

Mr. Walker,

One of the strange things about the GISTEMP “Pause-busting” adjustments, is that the year with the highest rate of warming (since 1880) has changed. It used to be around 1998, with a warming rate of about +2.4 °C per century. After the adjustments, it moved to around 1937 (that’s right, 1937, back when the CO2 level was only about 300 ppm), with a warming rate of about +2.8 °C per century.

Comparing rate of temperature change to an absolute CO2 level at a point in time is not very meaningful. Comparing rate to rate would be better, but even then, change in temperature is responsive to change in forcing — which for CO2 is a function of the natural log of concentration. Regressing GISTEMP against the natural log of CO2 (120-month moving averages for both) gives a coefficient of 3.4.
The common rule of thumb is ΔT = 3.7 · ln(C/C₀) · 0.8, i.e. a coefficient of 2.96, which is within striking distance of my calculated 3.4 but not very satisfying. When I add a solar irradiance time series (120 MMA again, in Wm^-2) to the regression, lo and behold, the ln(C/C₀) regression coefficient drops to 3.0, in line with expectations — about as good as an amateur researcher using simple spreadsheet functions could hope for.
In sum, ignore other significant and well-known climate factors at your peril.
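The two-factor regression just described can be sketched as follows. Synthetic series stand in for the real GISTEMP, CO2, and TSI data, and the "true" ln(C/C₀) coefficient of 3.0 is an illustrative assumption, so the point is the method, not the numbers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for the annual series (shapes are assumptions):
years = np.arange(1880, 2016)
co2 = 290.0 + 110.0 * ((years - 1880) / 135.0) ** 2           # ppm, ~290 -> ~400
tsi = 1361.0 + 0.3 * np.sin(2 * np.pi * (years - 1880) / 11)  # W m^-2, 11-yr cycle

# Build a fake temperature anomaly with a known ln(C/C0) coefficient of 3.0
ln_ratio = np.log(co2 / co2[0])
temp = 3.0 * ln_ratio + 0.2 * (tsi - tsi.mean()) + rng.normal(0, 0.05, years.size)

# Multiple regression: T ~ ln(C/C0) + TSI anomaly + intercept
X = np.column_stack([ln_ratio, tsi - tsi.mean(), np.ones_like(ln_ratio)])
coef, *_ = np.linalg.lstsq(X, temp, rcond=None)
print(f"ln(C/C0) coefficient: {coef[0]:.2f}")  # recovers ~3.0
```

Leaving the TSI column out of `X` is exactly the single-factor regression criticized above: the solar signal then leaks into the CO2 coefficient.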

If you look at the NOAA series, they already had 1937 as the year with the highest rate of warming, so GISTEMP must have picked it up from NOAA when they switched to the new NCEI ERSST.v4 sea surface temperature reconstruction.

Yes, that follows. Here’s a comparison between ERSST.v3b and v4 from NCEI itself:
Just eyeballing the thing, it’s easy to see that the period between 1930 and 1942 was more steeply adjusted upward than 2002-2015. The source page for that image is here: https://www.ncdc.noaa.gov/news/extended-reconstructed-sea-surface-temperature-version-4
… wherein they explain:
One of the most significant improvements involves corrections to account for the rapid increase in the number of ocean buoys in the mid-1970s. Prior to that, ships took most sea surface temperature observations. Several studies have examined the differences between buoy- and ship-based data, noting that buoy measurements are systematically cooler than ship measurements of sea surface temperature. This is particularly important because both observing systems now sample much of the sea surface, and surface-drifting and moored buoys have increased the overall global coverage of observations by up to 15%. In ERSST v4, a new correction accounts for ship-buoy differences thereby compensating for the cool bias to make them compatible with historical ship observations.
This does NOT explain the changes in the ’30s and ’40s, which is annoying. Further, it’s much discussed elsewhere that during the war years, more temperature readings were taken from engine coolant intakes than via the bucket method relative to pre-war years. This would tend to create a warming bias in the raw data warranting a downward adjustment. Instead, the v4 product goes the other way, which is confusing … and also quite annoying.

So, the next time that you hear somebody claiming that Global Warming is accelerating, show them a graph of the rate of warming.

… and again relying on my eyeballs, it’s easy to see that this rate graph has a positive slope with respect to time over the entire interval. A positive second derivative is positive acceleration, yes?
Next time a working climatologist says that Global Warming is accelerating, ask them, “Over what interval of time?” and use that interval in the rate analysis … because chances are they’re talking about something rather greater than a decade, and I find it’s best to compare apples to apples.
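The article's CMS construction, and the "rate curve with a positive slope" reading above, can be sketched on a synthetic accelerating series. The quadratic trend and the noise level here are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic accelerating anomaly: quadratic trend plus monthly noise
# (both the trend and the noise level are illustrative assumptions).
months = np.arange(1632)                        # 136 years of monthly data
anomaly = 5e-7 * months**2 + rng.normal(0, 0.1, months.size)

half = 60  # 121-month window: the month itself plus 60 on either side

def central_moving_slope(y, half_width):
    """Slope (deg C per month) of a least-squares line fitted over a
    centred window, i.e. the CMS described in the article."""
    x = np.arange(-half_width, half_width + 1)
    slopes = np.full(y.size, np.nan)
    for i in range(half_width, y.size - half_width):
        slopes[i] = np.polyfit(x, y[i - half_width : i + half_width + 1], 1)[0]
    return slopes

cms = central_moving_slope(anomaly, half)
rate = cms * 1200                               # deg C/month -> deg C/century
valid = rate[half:-half]

# For a genuinely accelerating series, the rate curve itself trends upward
# (a positive second derivative of temperature):
trend_of_rate = np.polyfit(np.arange(valid.size), valid, 1)[0]
print(trend_of_rate > 0)
```

Note the first and last 60 months of the rate curve are undefined, which is why end-point arguments over short intervals are so fraught.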


Dr. Bogus Pachysandra
August 29, 2015 1:46 pm

“The average temperature increase will be so much higher than the previous record, set in 2014, that it should melt away any remaining arguments about the so-called “pause” in global warming, which many climate sceptics have promoted as an argument against action on climate change.”
http://www.independent.co.uk/environment/climate-change/climate-change-2015-will-be-the-hottest-year-on-record-by-a-mile-experts-say-10477138.html

Brandon Gates
August 29, 2015 3:08 pm

I always find it somewhat morbidly amusing when someone predicts that a certain event or piece of evidence will end “any remaining arguments”. In this particular case, the rebuttal has been in place since the tail end of last year: it’s ENSO whut diddit.

Professor Jen Uine Amorphophallus
August 29, 2015 6:08 pm

I think the problem here may have something to do with using units of distance (miles) to measure energy content of air.

MarkW
August 29, 2015 7:57 pm

Their belief system isn’t founded on evidence in the first place. Therefore nothing as trivial as evidence will shake their belief system.

dp
August 29, 2015 2:04 pm

Why begin a temperature trend at the end of the well-known “Little Ice Age”? The result is always going to be warming because that is what happens at the end of a protracted cold period. People condemn Michael Mann for hiding the LIA – posts like this one are in the same camp. The current trend is lacking a critical context and if this is all we have then I’d have to agree with the wackiest nutters out there that the world is on track to smouldering ruin. Stop doing that – it isn’t helping.

Brandon Gates
August 29, 2015 2:55 pm

dp,

Why begin a temperature trend at the end of the well-known “Little Ice Age”?

Almost certainly due to the relative dearth of thermometers and daily record keeping in the 17th century. Of course, when climatologists DO splice together proxy estimates of temperature trends with estimates obtained from the instrumental record, a great hue and cry of protest goes up from these quarters.

The result is always going to be warming because that is what happens at the end of a protracted cold period.

Sorry, but the planet does not just decide, “well, it’s been cold for a spell, time to warm up now because that’s what’s supposed to happen.” Physical systems do things for a physical reason. In this case, a good starting point is the Sun:
http://climexp.knmi.nl/data/itsi_wls_ann.png

dp
August 29, 2015 5:15 pm

You are going to have to describe what your rationale is for a world that does anything but warm after an LIA event. Warming is the only option. Nothing else is logical.

Menicholas
August 29, 2015 6:02 pm

*gasp*
The sun!
Talk about a hue and cry!
“HUE AND CRY…
a : a loud outcry formerly used in the pursuit of one who is suspected of a crime
b : the pursuit of a suspect or a written proclamation for the capture of a suspect “

Brandon Gates
August 29, 2015 5:46 pm

dp,

You are going to have to describe what your rationale is for a world that does anything but warm after an LIA event.

Warming is the only option. Nothing else is logical.

As I mentioned elsewhere in this thread, the last glacial maximum was 6 degrees cooler than the Holocene average. Based on precedent alone, logically the LIA could have been much cooler for a much longer period of time. However, logic works best when it considers as much available evidence as is possible. Looking at solar fluctuations since the 1600s is only the barest beginning of that exercise … but it IS a good place to start.

Menicholas
August 29, 2015 9:21 pm

Indeed.
It is no shock at all to me.
That the big shiny hot thing in the sky is responsible not only for the temperature of the Earth, but also for variations in it, is only logical to my way of thinking.
Powerful evidence that it could not possibly have any effect would need to be presented to even begin to rule it out, IMO.
I have never seen evidence to rule out the sun.
In fact we know it to be at least somewhat variable in its output.
And we know these variations in output are only part of the story, and that variances in the solar wind and magnetic fields exert powerful influence on the incoming cosmic rays.
There are also questions regarding the direct effects of the shifting magnetic and electric fields on the atmosphere and also on the interior of the Earth.
It would not surprise me in the slightest to find that we have incomplete knowledge of the amount that it can vary, and the number of ways these variations can affect the Earth.
I also wonder about ocean stratification and overturning, specifically in the Arctic region.
Many in the warmista camp have argued for many years that the sun can be disregarded.
And have specifically said as much in regard to the solar cycles, and any effects associated with these cycles.
That refusal to consider the sun as a source of climatic variation is a glaring blind spot in what passes for climate science these days.
IMO.

David A
August 29, 2015 10:42 pm

The entire troposphere, except for the maladjusted surface, is 0.3 degrees cooler than in 1998.

Brandon Gates
August 29, 2015 11:29 pm

Menicholas,

Powerful evidence that it could not possibly have any effect would need to be presented to even begin to rule it out, IMO.

You DO realize that you’re preaching to the choir with me on this point.

I have never seen evidence to rule out the sun.

Neither have I. My own back-of-envelope calcs put it at 0.2 °C per 1 Wm^-2 change in TSI (a factor of 0.25 to account for spherical geometry, times a 0.8 °C per Wm^-2 climate sensitivity parameter). That works out to about a +0.1 °C contribution to the global temperature trend from 1880 to present, or about 1/6 of the total increase. By way of comparison to the literature, in GISS Model E the net change in solar forcing works out to about 1/9 of the total forcing change since 1880.
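Spelling that back-of-envelope arithmetic out (the ~0.5 Wm^-2 TSI rise since 1880 is inferred from the stated 0.2 °C per Wm^-2 rate and the ~0.1 °C total, and is an assumption for illustration):

```python
# Back-of-envelope solar contribution, as described above.
geometric_factor = 0.25   # a sphere intercepts 1/4 of TSI per unit area
sensitivity = 0.8         # deg C per W m^-2 of forcing (assumed parameter)
per_watt_tsi = geometric_factor * sensitivity   # 0.2 C per 1 W m^-2 of TSI

delta_tsi = 0.5           # assumed TSI change since 1880, W m^-2
solar_contribution = per_watt_tsi * delta_tsi
print(f"{solar_contribution:.2f} C")  # ~0.1 C, roughly 1/6 of the trend
```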

It would not surprise me in the slightest to find that we have incomplete knowledge of the amount that it can vary, and the number of ways these variations can affect the Earth.

In a system this complex there’s always going to be something we don’t know. But CO2’s effects are obvious to me, backed by long-established and well-documented physics, and in my book all but beyond dispute. The main challenge, and everything I’ve read suggests it is a challenge, is constraining how much of an effect it has relative to other factors. Even so, I seriously doubt that it’s not the dominant contribution to the trend since 1950.

I also wonder about ocean stratification and overturning, specifically in the Arctic region.

Depending on which proxy reconstruction one consults for estimating the magnitude of the MWP/LIA transition, it’s pretty tough to explain that swing on the basis of solar fluctuations alone … which would only give about a tenth of a degree difference globally according to my above math, whereas, say, Moberg (2005) suggests ~0.8 degrees net change in NH temps.

Many in the warmista camp have argued for many years that the sun can be disregarded.

Well then, those in the warmista camp saying such things haven’t done their homework, are bonkers, and/or simply lying: literature by researchers I consider credible says otherwise.

dp