How fast is the Earth warming?

This article presents a method for calculating the Earth’s rate of warming, using the existing global temperature series.

Guest essay by Sheldon Walker

It can be difficult to work out the Earth’s rate of warming. There are large variations in temperature from month to month, and different rates can be calculated depending upon the time interval and the end points chosen. A reasonable estimate can be made for long time intervals (100 years for example), but it would be useful if we could calculate the rate of warming for medium or short intervals. This would allow us to determine whether the rate of warming was increasing, decreasing, or staying the same.

The first step in calculating the Earth’s rate of warming is to reduce the large month-to-month variation in temperature, being careful not to lose any key information. The central moving average (CMA) is a mathematical method that achieves this. It is important to choose an averaging interval that meets the objectives. Calculating the average over 121 months (the month being calculated, plus 60 months on either side) gives a good reduction in the month-to-month variation, without the loss of any important detail.
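For readers who prefer code to a spreadsheet, here is a minimal sketch of the 121-month CMA step in Python, assuming the monthly anomaly series has already been loaded as a pandas Series (the function and variable names are illustrative, not part of the original method description):

import pandas as pd

def central_moving_average(anomaly: pd.Series, window: int = 121) -> pd.Series:
    """121-month central moving average: each month is averaged with the
    60 months on either side of it. Months closer than 60 months to either
    end of the series have no complete window and are left as NaN."""
    return anomaly.rolling(window=window, center=True, min_periods=window).mean()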

Graph 1 shows the GISTEMP temperature series. The blue line shows the raw temperature anomaly, and the green line shows the 121 month central moving average. The central moving average curve has little month-to-month variation, but clearly shows the medium- and long-term temperature trend.

Graph 1

The second step in calculating the Earth’s rate of warming is to determine the slope of the central moving average curve, for each month on the time axis. The central moving slope (CMS) is a mathematical method that will achieve this. This is similar to the central moving average, but instead of calculating an average for the points in the interval, a linear regression is done between the points in the interval and the time axis (the x-axis). This gives the slope of the central moving average curve, which is a temperature change per time interval, or rate of warming. In order to avoid dealing with small numbers, all rates of warming in this article will be given in °C per century.

It is important to choose the correct time interval to calculate the slope over. This should make the calculated slope responsive to real changes in the slope of the CMA curve, but not excessively responsive. Calculating the slope over 121 months (the month being calculated, plus 60 months on either side) gives a slope with a good degree of sensitivity.
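A corresponding sketch of the CMS step: for each month, an ordinary least-squares line is fitted through the 121 CMA values centred on that month, and its slope is converted from °C per month to °C per century. This is only a rough illustration and assumes the CMA function sketched earlier; the names are again illustrative:

import numpy as np
import pandas as pd

def central_moving_slope(cma: pd.Series, window: int = 121) -> pd.Series:
    """Slope of the CMA curve at each month, expressed in °C per century."""
    half = window // 2
    x = np.arange(window)                        # month index within the window
    slopes = pd.Series(np.nan, index=cma.index)
    for i in range(half, len(cma) - half):
        y = cma.iloc[i - half:i + half + 1].to_numpy()
        if np.isnan(y).any():                    # window overlaps the NaN edges of the CMA
            continue
        slope_per_month = np.polyfit(x, y, 1)[0]
        slopes.iloc[i] = slope_per_month * 1200  # 12 months x 100 years
    return slopes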

Graph 2 shows the rate of warming curve for the GISTEMP temperature series. The blue line is the 121 month central moving slope (CMS), calculated from the central moving average curve. The y-axis shows the rate of warming in °C per century, and the x-axis shows the year. When the rate of warming curve is in the lower part of the graph (colored light blue), it shows cooling (the rate of warming is below zero). When the curve is in the upper part of the graph (colored light orange), it shows warming (the rate of warming is above zero).

Graph 2

The curve shows 2 major periods of cooling since 1880. Each lasted approximately a decade (1900 to 1910, and 1942 to 1952), and reached cooling rates of about -2.0 °C per century. There is a large interval of continuous warming from 1910 to 1942 (about 32 years), which reached a maximum rate of warming of about +2.8 °C per century around 1937. That makes 1937 the year with the highest rate of warming since the start of the GISTEMP series in 1880 (more on that later).

There is another large interval of continuous warming from about 1967 to the present day (about 48 years). This interval has 2 peaks at about 1980 and 1998, where the rates of warming were just under +2.4 °C per century. The rate of warming has been falling steadily since the last peak in 1998. In 2015, the rate of warming is between +0.5 and +0.8 °C per century, which is about 30% of the rate in 1998. (Note that all of these rates of warming were calculated AFTER the so‑called “Pause-busting” adjustments were made. More on that later.)

It is important to check that the GISTEMP rate of warming curve is consistent with the curves from the other temperature series (including the satellite series).

Graph 3 shows the rate of warming curves for GISTEMP, NOAA, UAH, and RSS. (Note that the satellite temperature series did not exist before 1979.)

Graph 3

All of the rate of warming curves show good agreement with each other. Peaks and troughs line up, and the numerical values for the rates of warming are similar. The two satellite series show larger swings in the rate of warming than the surface series, but they are in good agreement with each other.

Some points about this method:

1) There is no cherry-picking of start and end times with this method. The entire temperature series is used.

2) The rate of warming curves from different series can be directly compared with each other; no adjustment is needed for the different baseline periods. This is because the rate of warming is based on the change in temperature with time, which is the same regardless of the baseline period.

3) This method can be performed by anybody with a moderate level of skill using a spreadsheet. It only requires the ability to calculate averages and perform linear regressions (a rough code equivalent is sketched after this list).

4) The first and last 5 years of each rate of warming curve have more uncertainty than the rest of the curve. This is due to the lack of data beyond the ends of the curve. It is important to realise that the last 5 years of the curve may change when future temperatures are added.
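As mentioned in point 3, the whole procedure is no more than an average followed by a linear regression. Here is a rough end-to-end sketch in Python, reusing the two functions shown above; the file name and column names are assumptions for illustration, and any monthly anomaly series will do:

import pandas as pd

# Hypothetical CSV with columns "date" (YYYY-MM) and "anomaly" (°C)
data = pd.read_csv("gistemp_monthly.csv", parse_dates=["date"], index_col="date")

cma = central_moving_average(data["anomaly"])   # step 1: smooth the raw anomalies
rate = central_moving_slope(cma)                # step 2: rate of warming, °C per century

print(rate.dropna().tail())                     # the most recent well-defined rates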

There is a lot that could be said about these curves. One topic that is “hot” at the moment, is the “Pause” or “Hiatus”.

The rate of warming curves for all 4 major temperature series show that there has been a significant drop in the rate of warming over the last 17 years. In 1998 the rate of warming was between +2.0 and +2.5 °C per century. Now, in 2015, it is between +0.5 and +0.8 °C per century. The rate now is only about 30% of what it was in 1998. Note that these rates of warming were calculated AFTER the so-called “Pause-busting” adjustments were made.

I was originally using the GISTEMP temperature series ending with May 2015, when I was developing the method described here. When I downloaded the series ending with June 2015 and graphed it, I thought that there must be something wrong with my computer program, because the rate of warming curve had changed so dramatically. I eventually traced the “problem” back to the data, and then I read that GISTEMP had adopted the “Pause-busting” adjustments that NOAA had devised.

Graph 4 shows the effect of the GISTEMP “Pause-busting” adjustments on the rate of warming curve. The blue line shows the rates from the May 2015 data, and the red line shows the rates from the June 2015 data.

Graph 4

One of the strange things about the GISTEMP “Pause-busting” adjustments is that the year with the highest rate of warming (since 1880) has changed. It used to be around 1998, with a warming rate of about +2.4 °C per century. After the adjustments, it moved to around 1937 (that’s right, 1937, back when the CO2 level was only about 300 ppm), with a warming rate of about +2.8 °C per century.

If you look at the NOAA series, they already had 1937 as the year with the highest rate of warming, so GISTEMP must have picked it up from NOAA when they switched to the new NCEI ERSST.v4 sea surface temperature reconstruction.

So, the next time that you hear somebody claiming that Global Warming is accelerating, show them a graph of the rate of warming. Some climate scientists seem to enjoy telling us that things are worse than predicted. Here is a chance to cheer them up with some good news. Somehow I don’t think that they will want to hear it.

307 Comments

Latitude
August 28, 2015 6:26 pm

[comment image]

Reply to  Latitude
August 28, 2015 7:01 pm

Steven Goddard claims are hosed –he knows as much as my cat about climate change: http://www.desmogblog.com/steven-goddard

Alan Robertson
Reply to  warrenlb
August 28, 2015 7:28 pm

…and your cat learned everything it knows from you.

ab
Reply to  warrenlb
August 28, 2015 7:34 pm

Desmog Blog? Really?

Reply to  warrenlb
August 28, 2015 7:42 pm

No Allen, he learned all he knows on the subject from his cat.

RD
Reply to  warrenlb
August 28, 2015 9:13 pm

desmog blog? ROFL

Andrew
Reply to  warrenlb
August 28, 2015 10:28 pm

Congrats on successfully debunking one error in something he said – which he retracted 7 years ago.
Now since the data is public and the method reproducible perhaps you and any household pets could turn to the topic of THIS post?

Reply to  warrenlb
August 28, 2015 11:22 pm

From the wrong end of the cat to boot.

Reply to  warrenlb
August 29, 2015 2:35 am

Oh, gawd! Would someone please spare us this fool’s drivel?

Stephen Richards
Reply to  warrenlb
August 29, 2015 4:37 am

AT DESMOGBLOG. You must be joking.

Reply to  warrenlb
August 29, 2015 6:04 am

Here is what Anthony Watts had to say about Goddard:
“My apologies to readers. I’ll leave it up (note altered title) as an example of what not to do when graphing trends”
http://wattsupwiththat.com/2010/07/02/ar
Anyone want to change their minds about him?

Reply to  warrenlb
August 29, 2015 6:54 am

Warren, you must have been doing a Rip Van Winkle for a spell, eh?
Everyone here knows that Tony Heller was given a bad rap, and that he and Paul Homewood were actually way out in front on spotting the perverse chicanery of the government bought warmista data manipulators.
Perhaps you missed this article by Dr. Brown:
http://wattsupwiththat.com/2015/08/14/problematic-adjustments-and-divergences-now-includes-june-data/
Tell you what sir, why do you not do us a favor and read up on current events before attempting to give lessons on your ignorance.

Ernest Bush
Reply to  warrenlb
August 29, 2015 8:17 am

Umh…Tony Heller, aka Steven Goddard, programmed some of the climate models that are still being played with today. It was his job at the time. I think he learned more about climate and mathematics than you and your cat along the way. He is not an English or Biology major as so many of the vocal Warmists seem to be and is the perfect kind of individual to be investigating data changes in government data bases. He is currently programming under contract to the government.
While being a skeptic about NOAA and NASA data, he is a fierce defender of the GHE and its effect on our atmosphere and the radiative properties of CO2. Meanwhile, I am supposed to automatically assume he is hosed by a post at a blog frequented by Warmists with their agenda? Reaaallllllllly.

Latitude
Reply to  warrenlb
August 29, 2015 8:35 am

warrenlb
August 28, 2015 at 7:01 pm
Steven Goddard claims are hosed
====
Warren, are you saying there’s no adjustments to past temperatures?
If you agree that past temperatures have been adjusted…..then the subject of this article “how fast is the earth warming” is nothing more than mental masturbation
The present rate of warming will never be known…. as long as past data is constantly changed

MarkW
Reply to  warrenlb
August 29, 2015 8:54 am

The trolls are getting feisty this morning. They must know that the game is up.

catweazle666
Reply to  warrenlb
August 29, 2015 10:32 am

And my cat will leave you for dead when it comes to climate science, warren.
Oh, and she’s quite good at modelling complex systems too, she’s helped me out on numerous occasions, by walking on the keyboard.
Eejit!

BFL
Reply to  Latitude
August 29, 2015 10:17 am

@Latitude: Very informative graphs. Also:
desmogblog = short for “the smoggy blog” & definitely and thoroughly polluted.
http://www.thefreedictionary.com/smoggy

pete j
Reply to  Latitude
August 30, 2015 9:27 am

I wonder, if you were to determine the CMS (plotted from the CMA) for the 3 historic GISTEMP temperature series cited by Goddard/Heller, whether you could then determine the “acceleration” in the rates of warming that can be attributed to manipulation of data by government scientist-activists over the years as the data “changed”?

August 28, 2015 6:49 pm

How about graph #5 showing the acceleration….

JimS
Reply to  DonM
August 28, 2015 7:22 pm

I think you might have to ask Michael Mann for that one, DonM.

Reply to  JimS
August 29, 2015 8:15 am

At the age of about 9 or 10 (at the time my village didn’t have electricity) I built a ‘cat’s whisker’ radio with a 30m long aerial.

Reply to  JimS
August 29, 2015 8:17 am

sorry, went in wrong place, addressed to ‘Menicholas August 29, 2015 at 6:43 am’

Reply to  JimS
August 31, 2015 12:50 am

Dr. Brown
Thank you for your comment and the advice. I agree with your observation about 60 year component, just about hanging on the lowest branch of the frequency spectrum’s limit.
Your suggestions are welcome, and would be a rewarding exercise for an enthusiastic student of applied mathematics. In my case as it happens, time availability and interest in other people’s errors are transitory, so it is likely that I will decline your invitation. On the subject of ‘the most valuable’ of my contribution, I am not inclined to concur, but your opinion and views are this time as always highly appreciated.

Reply to  DonM
August 29, 2015 12:48 am

Spectrum of Gistem4 has number of prominent components, of which the solar magnetic (Hale) cycles is the highest by a whisker.
http://www.vukcevic.talktalk.net/CT4-Spectrum.gif
For some years now I have claimed that the Earth’s warming and cooling is correlated to the magnetic cycles periodicity, and the above research shows that relationship clearly:
http://www.vukcevic.talktalk.net/SC-GTrw.gif
It is not intensity of the cycles as such it is the polarity which give us clue where to look further. This graphic
http://www.vukcevic.talktalk.net/E1.htm
give a bit more info

Reply to  vukcevic
August 29, 2015 12:53 am

typos : Spectrum of Crutem4 & gives a bit more info.

Reply to  vukcevic
August 29, 2015 2:44 am

@ vukcevic August 29, 2015 at 12:48 am
Do you speculate that the magnetic cycles impact the cloud cover? If so, that would affect insolation reaching the ground and hence impact the surface temperature.

Reply to  vukcevic
August 29, 2015 3:25 am

There are a number of events going on in parallel; it is difficult to say which, if any, may be relevant.
– NASA has shown that the Earth gets a stronger impact from CMEs during even than odd cycles. CMEs do two things: sweep solar wind out of the way, allowing an increase in GCR penetration, cloudiness and cooling (Svensmark).
CMEs (mainly high energy protons) electrically charge the upper layers of the atmosphere where the polar vortex operates, affecting its velocity and, under the influence of the Earth’s magnetic field, causing the vortex to split (vukcevic)
http://www.vukcevic.talktalk.net/NH.gif
Splitting of the polar vortex moves the Arctic jet stream from zonal to meridional circulation, which in turn is the cause of colder winters in the N. Hemisphere.
– I found that the N. American tectonic plate, on which sits the western half of the N. Atlantic (where the Gulf stream operates), has magnetic oscillations synchronised in periodicity and phase with the solar magnetic cycles (see discussion on the recent thread starting here:
http://wattsupwiththat.com/2015/08/26/the-cult-of-climate-change-nee-global-warming/#comment-2015603 )
It is highly unlikely that the sun is the direct cause, but there is a strong possibility that both have the same driver (see sunspot formula parameters). If the magnetic oscillations of the tectonic plate are the result of mechanical dynamics affecting the sea floor, then it could be postulated that it may change the effectiveness of the Gulf stream’s transport of the ocean’s heat energy northwards.

Reply to  vukcevic
August 29, 2015 6:43 am

My cat has some pretty thick whiskers.

Reply to  vukcevic
August 29, 2015 8:16 am

should have gone in here:
At the age of about 9 or 10 (at the time my village didn’t have electricity) I built a ‘cat’s whisker’ radio with a 30m long aerial.

Michael 2
Reply to  vukcevic
August 29, 2015 10:39 am

I also built a radio with a cat whisker detector. Seems like it was a “Cub Scout” project. It never did work for me but we had only one weak radio station nearby. Still, it was an interesting activity for an 8 year old and put my feet on a course of technology. I think it lacked a suitable ground and the antenna probably wasn’t long enough.

rgbatduke
Reply to  vukcevic
August 30, 2015 2:28 pm

Hi Vukcevic,
I do like your spectral decomposition of CRUTEMP4 (you mislabel it in your text) and think that this may be one of the most valuable things you’ve posted. Note that given the length of the record, your peak in the 60 year range may be an artifact and — although I note the same thing and it is apparent in the top article if one looks for it (1937 to 1998 being 61 years) — I would avoid making a big deal out of it. I don’t know how you do detrending and systematic padding at the short end of the stick, but I’m guessing that the high frequency part is pretty good from 3-4 years out to maybe 30 years, so the 22 year peak is probably genuine. I think that the 5 (and 10 year harmonic) peaks are probably real as well — again a cursory glance at the climate data suggests that the autocorrelation time of the global average temperature is roughly 2.5 years — fluctuations with a ~5 year character seem abundant (and the 10 year peak is likely a sign that the 5 year peak is real, and this is just a harmonic). Again, I would be very cautious about attribution — I suspect that this peak is related to very large scale atmospheric cycles including but not limited to El Nino and is closely tied to the “natural” local oscillation time of the atmosphere/surface ocean system but the system is too complex to be certain.
The one thing that AFAIK nobody has done — and I freely offer you the opportunity to be the first — is to take the output from as many individual runs of the CMIP5 climate models as you can stomach and do exactly the same Fourier decomposition of their fluctuation properties. I predict that they will have completely, totally different spectra. Furthermore, I predict from observations I’ve made on at least some of the series myself that their power spectrum will be completely wrong — the temperature fluctuations produced by the bulk of the models will be 2 to 3 times too large compared to the actual fluctuations produced by the actual climate. You might want to do a Laplace transform as well as Fourier to get a measure of not just the harmonics visible but a direct measure of the decay times.
The final step is to take the fluctuation-dissipation theorem and apply it to the result. If the climate models have the wrong oscillatory and relaxation spectra, they are just plain wrong, because the FDT:
https://en.wikipedia.org/wiki/Fluctuation-dissipation_theorem
basically asserts that the way an open system regresses to its local equilibrium state when perturbed is directly connected to the modes it uses to dissipate the energy added to it so that it remains in (near) detailed balance. This stuff is being studied only by a handful of mathematicians and I’m guessing most climate scientists (and a lot of the modelers) don’t even know it exists. The theory itself probably has to be modified for strongly nonlinear systems — I’ve looked at a few papers applying modified versions to e.g. the Lorenz model with at least some success — but I don’t think anyone has done a broad study of applying it to all the CMIP5 models on any basis at all, and I think such a thing would be enormously revealing on a semi-quantitative basis even without worrying about the details of using FDT in a nonlinear system. In particular, I would argue that if a climate model fails to reproduce the harmonic and exponential decay properties of the actual climate on timescales out to 20 years, this is strong evidence that they implement the wrong microdynamics, that they generate the wrong self-organized dissipative structures, that they will incorrectly predict the dissipation modes and likely temperature increase associated with increased CO2 forcing, and that their predictions not only “can’t be trusted”, they should be overtly rejected as too wrong to be of any use whatsoever, back to the drawing board.
To put this in simpler terms, if a climate model has the temperature fluctuating up and down by as much as 0.5 to 0.7 C over a single year (as some of the models in CMIP5 do) where the actual global average temperature shows nothing of the sort in any part of the well-resolved part of the anomaly by a factor of at least two, and where the actual model results being compared are not individual model runs but already average over still larger, still faster oscillations, there is a serious problem with the climate model involved. This is almost completely obscured by the spaghetti graphs that have become standard practice in the climate game. One sees only the general envelope of the tracks and the mind interprets this envelope as the “error of climate models”. It is not. There is no such thing. There is only the difference between one run of one model and the behavior of the actual climate. Sure, in a chaotic nonlinear system one cannot count on having the right long term behavior or identical traces (especially when the model runs are initialized from a state of extreme ignorance) but when the models exhibit quantitative and qualitatively different spectra in their short period behavior, it is simply direct evidence that the climate model in question is failing to capture or even come close to the meso-scale dynamics of forcing and dissipation.
How, then, can it possibly get a long term response correct?
Just something to think about.
rgb

Reply to  vukcevic
August 31, 2015 12:52 am

meant to go here
Dr. Brown
Thank you for your comment and the advice. I agree with your observation about 60 year component, just about hanging on the lowest branch of the frequency spectrum’s limit.
Your suggestions are welcome, and would be a rewarding exercise for an enthusiastic student of applied mathematics. In my case as it happens, time availability and interest in other people’s errors are transitory, so it is likely that I will decline your invitation. On the subject of ‘the most valuable’ of my contribution, I am not inclined to concur, but your opinion and views are this time as always highly appreciated.

Sheldon Walker
Reply to  DonM
August 29, 2015 12:50 am

That’s a good idea, DonM.
I thought about about including a graph of the acceleration of temperature in this article, but decided it would only confuse the issue of the rate of warming.
I plan to have a couple of follow up articles. One on the acceleration of temperature, and one on the relationship between CO2 level and the rate of warming.

Reply to  Sheldon Walker
August 29, 2015 11:51 pm

…although the moving average “representative guesses” at the ends (as the averaged data is truncated) might be even more exaggerated as you take another derivative. But, without getting too deep into it, I would guess that five years out (with full data) it would not change much. (You could chop it off at five years back to see how the estimated moving average compares with your above truncated moving average.)
And for graph # 6, we could take the third derivative (the Jerk), throw in a bunch of fluffy language and claim that it is representative of something (make up a new theory that a Jerk of a certain size will kick us into the catastrophic change of state) and go after some grant money.
I’m too sleepy to try to tie in a Mann joke.

August 28, 2015 6:51 pm

Pointless using GISS or any of the land or land/sea data for anything but a good laugh. The extent of “administrative” changes have rendered all of them useless.

Michael Wassil
Reply to  Alan Poirier
August 28, 2015 6:59 pm

+10
Temperatures will be ‘adjusted’ again next month to make this the hottest August on record; adjusted the following month and each of the following months again to make Sep, Oct, Nov etc the hottest months on record ad nauseam. So the ‘rate of increase’ will remain a moving goal post, central moving averages notwithstanding. LOL.

Robert of Ottawa
Reply to  Michael Wassil
August 29, 2015 7:00 am

I believe Steve Goddard has a graph of the rate of warming increase over the years due to the mysterious adjustments.

Reply to  Alan Poirier
August 28, 2015 8:08 pm

Exactly my concern. Why not start with an unadulterated data set, say one based on rural temperature sites or satellite data? Starting with data that essentially wipes out the cooling in the 1970s is to play the warmists’ game and tacitly agree that the planet is actively warming. Remember, the warmists love straight lines and assume a century of future warming, based on nothing, but they do it anyhow. They need it to be.

Reply to  Alan Poirier
August 29, 2015 6:47 am

Thank you Alan.
I was wondering why the comment section was stuck on how stupid Warren is compared to his cat, when the article uses “data” which is not data.
It is not even modified data, as if there were any such thing.
The kindest thing one could say about it is that it is a model of how the people who manufactured it think.
But the truth is that it is, at this point in time, nothing but a big fat set of lies.

Albert Paquette
August 28, 2015 7:01 pm

That’s interesting. There seems to be a peak warming year about every 21 years. Is there any reason for that?

Leonard Lane
Reply to  Albert Paquette
August 28, 2015 10:12 pm

Probably the smoothing technique introducing false periodicity. See the statistical sites for info on the Slutsky-Yule effect.

M Seward
Reply to  Leonard Lane
August 28, 2015 11:53 pm

Or it’s a solar signal – direct or indirect. Random sea states have a similar phenomenon where the sea level above a datum is normally distributed, i.e. is random, but the successive crest to following trough distance is not, it is Gaussian distributed. Not sure if that is mathematically similar or just a red herring but it’s all good food for the brain. Don’t know if you could fatten a climate scientist on this sort of diet though.

Reply to  Leonard Lane
August 29, 2015 6:59 am

The Slutsky-Yule effect?
Is that a thing?
I think there may be a joke there, but I am not gonna be the one…leastways not with Christmas just around the corner.

Ian W
Reply to  Albert Paquette
August 29, 2015 8:27 am

As vukcevic says above ( http://wattsupwiththat.com/2015/08/28/how-fast-is-the-earth-warming/#comment-2016891 ) this seems to be the Hale cycle. Whoever would have thought it?

August 28, 2015 7:08 pm

Given that the 97-98 Super El Nino stands out, was there a mid-1930s Super Duper El Nino?
Those are of course a redistribution of heat, whereby stored equatorial West Pac OHC can get transported via currents to higher lats and by convection to the stratosphere for dumping back to 3 K outer space.

Reply to  Joel O'Bryan
August 29, 2015 7:06 am

The mid-thirties one was actually a triple-prodigious, cat’s meow, brobdingnagian, extra-wonderful, most ginormous, dog’s pajamas el nino.
Since you asked.

asybot
Reply to  Menicholas
August 29, 2015 11:48 am

@ Menicholas, Run by cats. (It just seems our two cats run our lives, so cats might as well run the climate as well; they are doing it pretty nicely around our area.)

john robertson
August 28, 2015 7:17 pm

Sorry the Estimated Average Global Temperature is specious rubbish.
To demonstrate this, you could show the error range on your graph.
All the “mathematical manipulation” in the world will not produce data we do not have.
The arrogance of the claimed information of anomalies of 0.2 to 0.7 C is breathtaking.
In audio this signal would be the noise.
Or is the sacred data used here accurate to 0.01C as well?
Enough already with these anomalies, I am sick of this pseudo science.
In all honesty, could one even produce an accurate average global temperature from the satellite data, accurate to 0.1 C?
Do we have usable data from 1979?
As in data useful for the purpose of accurately calculating an average temperature for this planet?

FAH
Reply to  john robertson
August 28, 2015 8:09 pm

I agree. The first problem is calling the change in the average over global temperatures a “rate of warming.” While that is intuitively pleasing it is not correct. Many have pointed out that averaging over an intensive quantity over a heterogeneous system does not yield a valid thermodynamic quantity. Thinking of other intensive variables clarifies the problem. Examples of other intensive properties are pressure, melting point, boiling point, conductivity, etc. Extensive properties would include mass, volume, length, number of particles, free energy, entropy, etc. Using, say, the example of melting point, one could average the melting point over the climate system, to include the water, land, air, etc. scrupulously accounting for differences due to mineral and soil composition, pressures, and other variables and come up with an average global melting point. The question then becomes: what use does that number have? If it went down a small amount would the globe be getting “more melty?” It would appear in no meaningful equation describing material behavior. This is because those equations need the intensive property to apply to a local system or uniformly to the entirety of the system being described. But the average melting point would not apply to any specific local evolution, nor to any global evolution if an equation could be obtained for that. The point is that an average of an intensive quantity over a heterogeneous system does not enter into the dynamics of the system in a causal way.
On the other hand, I think a more understandable analogy for an average of an intensive quantity would be a stock market index. The price of a stock could be viewed as an intensive property, somehow describing the activity of the individual stock within supply and demand. An average over the whole of the market or some subset of it could then provide an index, such as DJ, SP etc. which can be historically related to past values and used as an indicator of market dynamics. Historical experience may be used to derive expectations under various assumptions, but the dynamics are always an issue of study. The dynamics would cause the index behavior, but not the other way around. Nevertheless, the index would be a meaningful descriptor of the market just as global average temperature can be a meaningful descriptor of a climate system. However, the index itself would not necessarily be a driver of individual purchase decisions, although that might be possible in a general behavioral approach. The other attribute a stock market index has is difficulty using it alone to predict the future. Many attempts to predict stock market performance based on indices of various types have shown it to be difficult. Further it often happens that one index, say the DJ, goes up while another, say the SP or NASDAQ, goes down or is flat. Small upticks or downticks are relatively meaningless in what they say about the overall market health. A crash or boom can come at any moment and may be unrelated to the details of the index.
Another dubious claim is the error estimate for the global average temperature. Usually these seem to be estimates of how well the individual temperatures are measured, based on the instruments used. But only the individual location measurements are characterized by those errors. The global average temperature (index) is just that, a global average. It represents the quantity characterizing the distribution of temperatures over the globe, not at individual points. The uncertainty of its estimate of the mean is estimated by the standard deviation of the temperature over the globe, not at individual point measurements. For example, I took the UAH version 6 data recently released, which goes back to 1978. It gives the global hemispheric breakdowns of land and ocean temperatures as well as the global averages. For each month given, you can calculate the average over the hemispheres, land and ocean, and obtain the mean. (The weighting may not be just right, but the principle is the same.) For each month you can also calculate the standard deviations for each month. (You could do it yearly if you wish as well). Because there are relatively large differences between hemispheres and between land and ocean, the standard deviations of the means are relatively large, ranging from a low of about 0.2 C up to as much as 1.2 C. The mean standard deviation is about 0.4. The distribution of standard deviations is skewed but the log of the standard deviations appears to be nearly normally distributed. So the accuracy with which the global average temperature represents the temperature data of the globe is more like 0.4C, not even close to 0.01C or whatever is claimed for the temperature anomaly estimates. Ticks up or down of 0.1 C are meaningless. Only if one thinks of the temperature measurements as extensive properties that can be summed does the instrumental error characterize the uncertainty of the global average. As in the stock market example, trends would be very hard to distinguish statistically from random walks.

E.M.Smith
Editor
Reply to  FAH
August 28, 2015 8:28 pm

Exactly right. One can NOT average an intensive property and preserve meaning.
ANY average of different temperatures from different places is no longer a temperature and is a bogus thing.
GIStemp does lots of temperature averaging before making a grid box anomaly, so the anomaly canard can be ignored too. And that is based on a Monthly Average starting number based on daily min max averages…
It is bollocks from the first input monthly average to the end.
https://chiefio.wordpress.com/2011/07/01/intrinsic-extrinsic-intensive-extensive/

Reply to  FAH
August 28, 2015 9:02 pm

I think any rate of warming could only be meaningful if it was limited to a small region of similar climate. Once you average everything globally it loses all meaning. Imagine doing a study of the incomes of 100 people over a year. Assuming they start at the same income let’s say 10 people experience a 10% cut in pay, 80 people experience no change, and 10 people experience a 20 percent increase. It would never be correct to say that the whole group experienced an average 1% increase in pay.
Think about that while I try to mix up a batch of universally average paint color.

Robert B
Reply to  FAH
August 28, 2015 11:31 pm

A better example of an intensive property is density. It’s the quotient of two extensive properties, mass and volume. You could take many samples evenly throughout, if the sample was not homogeneous, to get an average that would be an estimate of the total mass divided by the volume. How accurate it is would depend on how many samples, how evenly it is sampled and how inhomogeneous it is.
In the case of temperature, it’s the total thermal energy/heat capacity. We know that the sampling is dreadful, and only at the bottom of the atmosphere (near solid objects). This surface changes by 10-30°C and the variation around the globe is greater than 100°C. The specific heat capacity is not independent of temp, air cools as it rises and warms as it falls, and then there is the latent heat in water vapour.
Somehow, labelling the fudging as “homogenization” makes it all OK and the change from month to month pretty close to what you get from satellites for the lower troposphere.

Reply to  FAH
August 28, 2015 11:57 pm

E.M.Smith August 28, 2015 at 8:28 pm

Exactly right. One can NOT average an intensive property and preserve meaning.

Doc, always good to hear from you. However, as I mentioned about someone else recently, you’re throwing the baby out with the bathwater.
As I understand it, if I follow your logic there is no meaning to the claim that a week in the summer with an average temperature of 80°F is warmer than a week in January with an average temperature of -10°F …
I ask because you say there’s no way to average temperature and preserve meaning. So if someone says to you “Last week the weather averaged below freezing, you’d better put on a coat”, would you reply “Sorry, you’ve averaged an intensive property, that has no meaning at all, so I’m going to wear my shorts outside” …
Or suppose I’m sick, and I take my body temperature every hour. If it averages 104° (40°C) over two days, would you advise that I ignore that average because temperature is an intensive property, and so my average has no meaning?
What it seems that you are missing is that very often, what we are interested in is the difference in the averages. Take the El Nino3.4 index as an example. It is the average of sea surface temperatures over a huge expanse of the Pacific. And as you point out, that’s an average of an intensive quantity. However, what we are interested in are the changes in the El Nino index, which clearly “preserve meaning” … and for that, the intensive nature of temperature doesn’t matter.
And the same thing is true about taking our temperature with an oral thermometer. Yes, it is only an average of the temperature around the thermometer, and we don’t care much what the average value is … but if it goes up by four degrees F, you don’t have to break out the rectal thermometer to know you’ve got problems.
Next, one can indeed average an intensive quantity, and preserve meaning. Let me take the water level in a wave tank as an example. The water level is an intensive quantity.
Now, if we measure the water level at say three location at the same time, our average won’t be very good, and won’t have much meaning at all. If we measure water level at 20 points at the same time, our average will be better. And if we use say a rapidly sweeping laser that can measure the water level at 20,000 points in a tenth of a second, we will get a very, very good average.
In general, you can measure any intensive quantity to any desired accuracy, PROVIDED that it is physically possible. For example, you might be able to put a hundred simultaneous lasers over the wave tank, for a total of two million measurements per tenth of a second … and you can be dang sure that that average tank level has meaning.
It’s like polling. The opinions of the people of the US resemble an intensive quantity, in that the only way to get a true average would be to ask everyone. BUT we can determine their average opinions, to a given reasonable level of accuracy, by taking something like two or three thousand measurements …
Or take density. Again, intensive. But if we take a cubic metre of a liquid with varying density, and we measure the density of every single cubic centimetre and we average those million cubic centimetres, we will get a very accurate answer. And if that is not accurate enough, just measure every cubic millimetre and average those.
(mmm … 1E+9 cubic mm per cubic metre … assume one second to measure the density of each cubic mm using some automated method … 31.6E+6 seconds per year … that’s thirty years to do the measurements … looks like we need a Plan B.)
Finally, for some kinds of intensive properties there is a Plan B. Sometimes, we can take some kind of a sneaky end-run around the problem, and directly calculate a very accurate average. For example, if we take a cubic metre of a liquid with varying density as in the previous example and we simply weigh it, we can figure the average density to arbitrary precision with a single measurement … despite the fact that density is an intensive property.
And you may recall the story of Archimedes, who around 250 BC was asked by the king to determine if his crown was real gold. As the story goes, when Archy sat down in his bathtub, he saw the water level go up, and he ran through the streets of Sicily shouting “EUREKA”, which means “I’ve found it!” in Greek.
What had he discovered?
He’d discovered an accurate way to calculate the average of an intensive quantity, specific gravity. And he obviously knew how important that discovery was.
To summarize:
• While it is widely believed that there is no meaning in the averages of intensive quantities, in fact we routinely both calculate and employ such averages in a variety of useful ways.
• Extensive properties (mass, length, etc.) can generally be measured accurately with one measurement. Take out a tape measure, measure the length, done. Toss the object on a scale, write down the weight, done.
• With intensive properties, on the other hand, the more measurements that we take at different locations, the more accurate (and meaningful) our averages will be.
• As Archimedes discovered, the averages of some intensive properties, such as the specific gravity of a king’s gold crown, can be accurately measured with a single measurement. The value, meaning, and utility of averages of some intensive properties have been understood for over two millennia.
• The changes in averages of intensive properties, such as the El Nino 3.4 index, can contain valuable information provided that the measurements are repeated in the same locations, times, and manners.
Regards,
w.

FAH
Reply to  FAH
August 29, 2015 12:53 am

Willis, let’s look at the water level. While it may be considered an intensive quantity, if we are considering an isolated system, with a fixed volume and with other variables (such as pressure, chemical composition of the water, etc.) constant, the water level and volume of water ( or mass of water) are simply related. In fact proportional. Hence for a fixed, isolated system, the water level is proportional to an extensive quantity. One could make the same argument for the temperature of a fixed volume of water, i.e. the temperature is proportional to the total energy. (Or perhaps in a larger volume of water assumed to be in convective and thermal equilibrium, such as some fixed expanse of sea.) However, if one has a heterogeneous collection of containers of various shapes and volumes, each would have its own relationship between the extensive property – mass or volume of water (assuming constant temperatures and pressures) and the intensive property – water level. The volumes of the collection of containers could vary from very small to very large, and the shapes could vary so that the relationship between volumes and levels was quite complicated. Then the average over the water levels of all the containers would not be simply related to an extensive quantity such as volume. The level in a small column container could go up dramatically with small additions of water and the level of a large reservoir would increase only slightly with added water. One would have to convert each level to the appropriate subsystem dynamic to get an extensive quantity, the average of which would then have ready interpretation. The statistics of the average water level would be dominated by the variations in the volumes and shapes of the containers, not the amount of water. It doesn’t make any difference how accurately one measures the level in all the various containers, the variation is still dominated by the variation over the containers. The “meaning” of small changes in the water level is only physically clear if the average is able to incorporate the heterogeneity of the systems being measured.
In the oral thermometer, sea surface, and density examples, the implicit assumption is that the characteristics of the system being measured are well approximated by some known underlying function, such that the intensive quantity being measured is essentially the same across the whole of that system or is related to the whole system by some previously determined analysis. In those cases, small changes do indicate something about the system because the proportionality of the fixed system extensive properties and the intensive properties is valid, i.e. the underlying dynamics are known. This is equivalent to saying that the distributional nature of the numbers for which the average is being calculated is known. But that relationship has to be positively established before uncertainties of the average can be claimed. The relationship between oral and rectal temperatures across body types has been characterized extensively. If the underlying dynamics are not known, then differences in the intensive property have reduced meaning and estimates of uncertainties are, well, uncertain.
The polling example is similar. The utility of a poll is a strong function of the sample composition. An underlying assumption of a poll is that there is some average behavior across the sample to be measured. But if one has a sample comprised of two classes, one strongly against some stated position and one strongly for it, and the pollster simply selects at random from the mixture of both populations, the average result will be that the population is ambivalent to the question. However, if one breaks the system down into components, one can find that subpopulations differ, i.e. the intensive quantity applies to the subsystems to which it is related. This is the equivalent of breaking down a heterogeneous system into systems described by the quantity of interest. One could then talk about the behavior of the subsystems adequately described by their subsystem averages.

mobihci
Reply to  FAH
August 29, 2015 3:10 am

error margin? whats that? all you gotta do is fudge, smudge and ‘homogenise’ the crap out of every station you find, and then you have an ‘average’. of course a lot of the raw data may need to be ignored, and you may have to change cooling into warming on many stations to achieve an ‘average’, but in the end, it is worth it. i mean just think of how many millions that next grant is..
http://jennifermarohasy.com/2015/08/bureau-just-makes-stuff-up-deniliquin-remodelled-then-rutherglen-homogenized/

Reply to  FAH
August 29, 2015 5:26 am

E.M. Smith says: “ANY average of different temperatures from different places is no longer a temperature and is a bogus thing.”
——–
Willis I think ignoring this sentence is in error. You cannot average temperatures from the Yukon and Belize and get any meaning.

Reply to  FAH
August 29, 2015 7:17 am

“Willis I think ignoring this sentence is in error. You cannot average temperatures from the Yukon and Belize and get any meaning.”
I am tending to agree with the skeptics here.
Although sweeping statements can often be parsed and examined to find an example of when the statement may not be true, in general there is a good point to be made here.
As for the Yukon and Belize…which is a fine example…how about if we consider the relevance of averaging the water temperature over the vast open ocean and the interior of the Antarctic continent, and the frozen center of the Arctic ocean, in all of their highly homogenized and guess-worked glory, and throwing those values into a big blender with the surface records?
If one could even get accurate readings from all these places, and then correctly work out in sufficient detail the thermodynamics of the heat content of the various layers and humidity levels…what does it really compare?

MarkW
Reply to  FAH
August 29, 2015 9:14 am

While it is true that trying to average the melting point of many different materials is a meaningless exercise, all the temperature measurements being made are being done using air. (Yea, I know that they have attempted to merge the sea surface and air temperature readings, and that is pure nonsense.)
If I have a room with 10 sensors reading air temperature, that is a meaningful number, so long as you properly calculate the error bars.

MarkW
Reply to  FAH
August 29, 2015 9:22 am

emkelly, while averaging the temperature between Yukon and Belize will tell you nothing about the temperature at either Yukon or Belize, it is useful if you are trying to determine the temperature of the entire earth. To go back to my earlier example of a room with 10 sensors. Each sensor is perfectly accurate for the spot in which it is located, and averaging the temperature of the 10 sensors will add nothing to your knowledge of what the temperature at the location of each specific sensor is. However, if you change your focus to the room as a whole, the average between all of the sensors will inform you as to whether energy is being added to or subtracted from the room as a whole. The trick is to have enough sensors to adequately cover the entire room. More sensors mean the error bars on your average decrease. You start out with the accuracy of each sensor, and then add to the error bars based on the fact that you have less than perfect coverage.

Reply to  FAH
August 29, 2015 11:24 am

FAH August 29, 2015 at 12:53 am

Willis, let’s look at the water level. While it may be considered an intensive quantity, if we are considering an isolated system, with a fixed volume and with other variables (such as pressure, chemical composition of the water, etc.) constant, the water level and volume of water ( or mass of water) are simply related. In fact proportional. Hence for a fixed, isolated system, the water level is proportional to an extensive quantity.One could make the same argument for the temperature of a fixed volume of water, i.e. the temperature is proportional to the total energy. (Or perhaps in a larger volume of water assumed to be in convective and thermal equilibrium, such as some fixed expanse of sea.) However, if one has a heterogeneous collection of containers of various shapes and volumes, each would have its own relationship between the extensive property – mass or volume of water (assuming constant temperatures and pressures) and the intensive property – water level.

Thanks, Fan. You are operating under the assumption that I was measuring the water level (an intensive quantity) in order to get an estimate of the volume (an extensive quantity).
I said no such thing. I was only interested in getting an accurate estimate of the water level. Here’s an example. Remember that we are discussing Doc Smith’s claim that averages of intensive quantities have no meaning.
Suppose I’m interested in building near the ocean. So I install a tide gauge, to measure an intensive property, the water level. Note that unlike your example, I’m NOT trying to figure out the ocean’s volume.
So I keep the tide station there, and after some years I can average and analyze the measurements and tell you the average water level at my station. It’s an intensive quantity, to be sure … but I hold strongly that that average water level assuredly has meaning, as it allows me to make decisions regarding how high above the (intensive) water level I should build.
You go on to say:

In the oral thermometer, sea surface, and density examples, the implicit assumption is that the characteristics of the system being measured are well approximated by some known underlying function, such that the intensive quantity being measured is essentially the same across the whole of that system or is related to the whole system by some previously determined analysis.

Again, you seem to be under the misapprehension that “the intensive quantity is the same … or related to the whole system”. Not true for body temperature, temperature varies all over my body. Not true for sea surface, the sea surface level varies everywhere.
And regarding density (or specific gravity), so what if the intensive quantity is related to the whole? The same is true of temperature, with the relationship being the heat content of the substance. But that’s not what Doc Smith said. He said that any average of an intensive quantity is meaningless … but Archimedes’ “Eureka” falsifies that claim.
Someone also said I should have commented on Doc Smith’s assertion that:

ANY average of different temperatures from different places is no longer a temperature and is a bogus thing.

Again, I would use the average of the NINO3.4 index. It is an index which is most certainly an “average of different temperatures from different places” … but are y’all really claiming that Doc is right and that the Nino3.4 index is a “bogus thing” with no meaning???
Regards,
w.

FAH
Reply to  FAH
August 29, 2015 11:49 am

Mark, there are two reasons the general notion of using an average of air temperatures needs careful thought, one physical (intensive versus extensive) the other statistical (the meaning of an arithmetic average). Sorry if this discussion is a little long.
When folks think of the physical issue and have comfort averaging over temperatures, they invariably have made the tacit assumption that one is trying to describe a thermodynamically isolated, relatively homogeneous system, such as the room you mention. The tacit assumption is that the room’s temperatures are relatively close together at different locations and the air is at the same pressure and humidity and at equilibrium. Under that tacit assumption, indeed the heat content of the total air is closely approximated as proportional to the temperature and averages over the temperature in the room approximate the narrow distribution of air in the room. Notions of “warming” or “cooling” based on the average temperature have some support. The uncertainty in the average temperature (or heat) is closely estimated by the standard deviation of the temperatures measured. This is essentially morphing the intensive temperature into a good approximation of an extensive property for the particular assumed restricted homogeneous system. But let’s look at what happens when the system under consideration consists of a collection of systems (such as the poles, the tropics, hemispheric lands, and the ocean).
Let’s think of the poles, tropics, hemispheric lands, and ocean as different rooms and we want to know something about their thermodynamics. The air temperatures in each of those locations depend on a variety of thermodynamic quantities including humidity, albedo, convective mixing, aerosols, etc. etc. Call the poles Room P, tropics Room T, lands Room H, and ocean Room O, or just P, T, H and O for short and think of the air temperatures measured there (or anomalies, it doesn’t make a difference just modifies the statistics a little). Let’s say the measured temperature at P is -2, at T +2, at H +1 and at O it is 0.0, all degrees C (these are just hypothetical example numbers but they make the point). Assume these numbers are individually measured repeatedly with thermometers to precision 0.01 C. Now the average temperature is the average of -2, +2, 1, and 0, which is 0.25 C. The standard deviation (more about that later) of these numbers is 1.7 C. The uncertainty with which the average represents the distribution of the numbers (whatever they “mean”) is 1.7 C not 0.01 C. The reason the averaged numbers do not have a small standard deviation such as the individual temperature numbers is because they have not been individually related to the underlying extensive properties. If the average goes up or down one cannot say what it means thermodynamically unless one knows what the underlying distribution is doing. Hence the average number 0.25 C does not describe any thermodynamic system in any causal way and the uncertainty is much more than the measurement error of the thermometers. It does not describe the differential relationship between total system entropy and any of energy, volume, pressure, etc. So if the tacit assumption is not true (and it is not for the globe) then averages of air temperatures over the globe do not represent a thermodynamic quantity.
Now let’s address the statistics issue, i.e the notion of an average. What is commonly called an average is usually meant as an arithmetic average, i.e. the sum of a subset of some set of numbers, possibly all, divided by the number of measurements. There are other averages, such as a geometric average (the product) or a logarithmic average. All of these are simply prescriptions for calculating a number from some set of numbers. Now these prescriptions are, in statistics, generally used as estimates of parameters that characterize a presumed underlying distribution obeyed by the numbers. Arithmetic averages are usually used to estimate the mean of such assumed underlying distributions. Other prescriptions yield estimates of other parameters such as modes, quantiles, variances etc.
The prescription for calculating an estimate can be followed for any set of numbers. However, the utility of the number lies in its ability to estimate a parameter of the underlying distribution. The nature of that distribution is linked intrinsically to the utility of the estimate. Let’s use an illustrative example. If one calculates an average and standard deviation of the numbers 1,1,1,1,10,10 one gets an average of 4 and a standard deviation of 4.6. The prescription for calculating a standard deviation is the average over the deviations of the mean from each of the data points (actually the square root of the sum of their squares). If the assumption of an underlying Gaussian distribution is true to some extent, then this prescription yields a statistically “good” estimate of the standard deviation of the Gaussian whose mean was estimated by the average. (There is a slight nuance concerning sample size and whether we are thinking of the uncertainty of the estimator versus the underlying distribution parameter, but it is unimportant here.) For the example numbers above, it is obvious that these estimates don’t themselves give a clear picture of the underlying distribution (this is because in the back of our mind we are thinking of a Gaussian distribution). We could not draw the distribution of the numbers using just the average and standard deviation. So the utility of a number from the prescription depends on whether the underlying numbers at least somewhat obey the distribution we have in mind. (Another problem with this little example occurs a fair amount in climate science, such as in averages of precipitation amounts or the number of storms, namely the numbers are all positive, but that is another story. The usual Gaussian distribution goes from minus infinity to plus infinity. If the numbers are by definition positive, then some other distribution is needed (e.g. binomial, lognormal, etc.) and estimates of parameters describing those distributions are different.) Inferences drawn (e.g. trends) based on arithmetic averages of such quantities require a careful attention and are usually wrong if done using Gaussian based tools without assuring the underlying distribution is Gaussian on its face or by transformation. If the numbers are far enough away from zero, sometimes a Gaussian is a fair approximation, but otherwise not. Macroscopic temperatures are fairly far away from zero in the natural thermodynamic units of degrees K.
So to use an average wisely, such as global average temperature or temperature anomaly, requires examining the underlying distribution of the numbers. In my earlier post I looked at constant time global temperature distributions from UAH as an example. The statistical utility of the number is based on how well it estimates the distribution of temperatures. (Note that this is related to, but distinct from its physics utility.)

FAH
Reply to  FAH
August 29, 2015 12:01 pm

Willis, I did not mean to imply that an index such as water level has no utility. The analogy of the stock market indices is an example of an index that is very useful to a number of people. As with the tide gauge data, one can build up a historical set of data that one can use, empirically and heuristically, as a basis for behavior, such as where to build. Oral temperatures of individuals of various ages have historically been found to be a good indicator of the state of the human or animal body in its relatively well understood dynamical attempt to maintain equilibrium at a particular ideal temperature. Nothing wrong with that. The difficulty comes when the number is used to infer something about the dynamics of the whole, when the dynamics as a whole is complicated, not in equilibrium, and not well understood in a predictive sense.
For example, one can use the tide gauges at a beach location over 100 years to make a fairly informed decision about what to build locally and where and how to mitigate against the outlying inundations that have occurred. The effects of global sea level rise, local subsidence, erosion, etc. would all be incorporated in the local measurement and one does not really care what happens globally. As we all know all real estate, like politics, is local. But extending that local tide gauge measurement to the globe as a whole needs a lot more effort.

Reply to  FAH
August 29, 2015 1:07 pm

I’m afraid I’m stuck on simple . I’ve been accused of taking simple to the extreme and that definitely might be said to be the design goal of my 4th.CoSy language .
As Willis recently commented , http://wattsupwiththat.com/2015/08/24/lags-and-leads/#comment-2015825 :

Me, I’m a great fan of very simple models, what I call “Tinkertoy” models. I like them because i can use them to see where the earth does and doesn’t act like the model predicts. …

The thorough quantitative understanding of models significantly simpler than Tinkertoys is the classical method of physics . In http://cosy.com/y14/CoSyNL201410.html , I paraphrased from one of the classics I’ve bought after it was mentioned in a blog :

Goody makes the interesting observation , … , that there are two approaches to understanding planetary temperature , one starting from the measured phenomena and working to explain it . That appears to be the ubiquitous method . But without the other , the traditional analytical understanding of abstracted systems simple enough to be quantitatively confirmed by experiment , one never can be said to understand .

David Appell in particular has spammed my explication of calculating the temperature of a radiantly heated colored ball observing that a planet is much more complicated than a simple colored ball .
But , if you don’t know how to calculate the temperature of a radiantly heated colored ball — which can be easily experimentally tested — but which I venture to say many career “climate scientists” don’t , you are blowing smoke to claim you understand anything more complicated .
Averaging temperatures clearly has quantitative meaning. To claim otherwise would imply that it is just dumb luck that our estimate of the observed average global temperature is within 3% of the temperature calculated by simply summing the energy impinging on a point in our orbit.
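For readers who want to check the kind of calculation being referred to, here is a minimal sketch of the gray-ball equilibrium temperature. The constants (a solar constant of 1361 W/m^2 and a comparison figure of ~288 K for the mean surface temperature) are standard textbook values supplied here, not numbers taken from this thread:

```python
# Equilibrium temperature of a uniform gray ball (absorptivity = emissivity) at Earth's orbit.
SOLAR_CONSTANT = 1361.0   # W/m^2 at 1 AU
SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W/(m^2 K^4)

# The ball intercepts sunlight over pi*r^2 but radiates from 4*pi*r^2, hence the factor of 4.
t_gray = (SOLAR_CONSTANT / (4.0 * SIGMA)) ** 0.25
print(t_gray)                      # about 278 K
print(288.0 / t_gray - 1.0)        # about 0.035, i.e. within a few percent of ~288 K
```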
As has been pointed out , many intensive properties like temperature and density are intensive because they are ratios with volume or mass already in their definitions . That does not make doing arithmetic with them meaningless . I recently had an exchange with someone so confused by the intensive distinction that he could not understand the purpose of the Kelvin scale or that it was meaningful to say that one temperature was twice another . Such arguments reduce ( Lord Chris , what’s the Latin ) to the impossibility of constructing any temperature scale at all .

FAH
Reply to  FAH
August 29, 2015 1:29 pm

Bob, agree wholeheartedly.
Maybe the issue of locality is useful. In thermodynamics, intensive properties are the local, differential coefficients on the manifold of thermodynamic states, well defined if the system is in equilibrium, more difficult if it is not. As a local differential property defined at a point on an assumed locally continuous manifold, it is always possible to find a small enough region around the point such that the quantity varies little enough to make some approximations. If the system is in equilibrium the region is easier to define (the space is “flatter”) than if the system is not in equilibrium (not “flat”). It may be that over some useful small region, the intensive quantity is nearly constant, in which case an average may be very close to the value at the point. It may be possible to find some well-behaved function that describes the behavior of the differential over the region. In either case, the estimate of the differential (and its variation) over the region may be a good approximation of the thermodynamic intensive property within that region. This may be the case with the Nino index. I have never looked at that index or its calculation so I have no informed opinion on it one way or the other.
So there is no (thermodynamic) problem using an average of an intensive quantity over a sufficiently restricted region in the thermodynamic manifold that the intensive quantity varies little in some sense. In this case the approximate value of the average of the numbers is approximately a thermodynamic intensive quantity. The problem is in trying to obtain a thermodynamically meaningful quantity from values of the local differential intensive properties over a larger region of the manifold or if the manifold is not in equilibrium. In that case the average of numbers from different parts of the manifold are not guaranteed to be thermodynamically valid numbers. In other words the average does not represent the local differential behavior of any part of the manifold or the manifold as a whole. It may be a method can be obtained to relate the values by integration, transport along paths or symmetry arguments, but that has to be demonstrated before the averaged quantity can be used as thermodynamically meaningful. But this means directly useful thermodynamically. It often is the case that we build up a history of index behavior and seem to see correlations between the behavior and other phenomena (such as model outputs). Nothing erroneous about thinking there may be something there. It only means it is a good topic for study, not “settled science.”
Bottom line is that there is nothing intrinsically “wrong” in using averages over intensive properties. It is only that one needs to be thoughtful about the utility of the number, its relationship to thermodynamic notions, the details of the system under consideration, and what uncertainties in the number represent.

FAH
Reply to  FAH
August 29, 2015 3:32 pm

As an aside, Axel Kleidon has been doing some very interesting work using rigorous thermodynamics to explore the planetary (climate) system. One paper from 2012 focuses on free energy within the planetary system (an extensive quantity). It is titled “How does the Earth system generate and maintain thermodynamic disequilibrium and what does it imply for the future of the planet?” It was in Proc. Roy. Soc. but is available online in pdf at
http://rsta.royalsocietypublishing.org/content/roypta/370/1962/1012.full.pdf .
He published another interesting paper in 2011 on wind energy, “Estimating maximum global land surface wind power extractability and associated climatic consequences,” available online at
http://pubman.mpdl.mpg.de/pubman/item/escidoc:1693451/component/escidoc:1693450/BGC1565.pdf
Both are interesting, accessible reads and firmly grounded in thermodynamics. His work is neither pro-AGW nor anti-AGW. It is refreshing to think about the planetary system without the usual baggage of “hottest year evah!” … “no it’s not!” … “yes it is!” etc. etc.

Robert B
Reply to  FAH
August 29, 2015 4:21 pm

Each sensor is perfectly accurate for the spot in which it is located,

I find it strange that the mean temperature at a station is expected to remain the same if the temperature in a region is constant. I looked at three stations in my city that are 6-8 km apart on a plain: the airport (AP), the old site in parklands (WT), and the new site (KT), which has a glass building reflecting sunlight onto it in the afternoon. The plot is the difference between each station and the average for the day, with a 5-day moving mean to smooth out the plot.
http://s5.postimg.org/tbbiq6wdj/3_sites_Adelaide.jpg
The KT site is usually warmer than the WT site by about a degree in summer, but the three are pretty close to each other in winter. Even if the station doesn’t move, the area becomes built up or the screen gets repainted, so how good an indicator is the mean of maximum and minimum temperatures of the change in energy of the pocket of air above the station?
Then we rely on homogenised stations to pretend that they never moved, that the area was never built up, and that the screen was never repainted.

Reply to  Robert B
August 29, 2015 5:48 pm

This is a good example to test what I’m doing, if you don’t mind.
Calculate yesterday’s rise:
Tmax(d-1) – Tmin(d-1) = Trise
Falling temp:
Tmax(d-1) – Tmin(d0) = Tfall
Min difference:
Tmin(d-1) – Tmin(d0) = MnDiff
Max difference:
Tmax(d-1) – Tmax(d0) = MxDiff
I would expect the two difference values to be very close, but you might see differences between the rising and falling values (representing UHI).
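A minimal sketch of those four daily quantities, using made-up max/min values; the variable names just follow the comment, with d0 as “today” and d-1 as “yesterday”:

```python
# Hypothetical daily maxima/minima in degrees C; index d is "today", d-1 is "yesterday".
tmax = [31.2, 30.8, 33.0, 29.5]
tmin = [16.1, 15.7, 17.4, 14.9]

for d in range(1, len(tmax)):
    trise  = tmax[d - 1] - tmin[d - 1]   # yesterday's rise
    tfall  = tmax[d - 1] - tmin[d]       # fall from yesterday's max to today's min
    mndiff = tmin[d - 1] - tmin[d]       # day-to-day change in the minima
    mxdiff = tmax[d - 1] - tmax[d]       # day-to-day change in the maxima
    print(d, round(trise, 1), round(tfall, 1), round(mndiff, 1), round(mxdiff, 1))
```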

MarkW
Reply to  FAH
August 29, 2015 7:50 pm

FAH, with enough sensors, the atmosphere can be measured, just as the room is. The same problems you cite in regards to the atmosphere also exist in the room model. We have fans and AC exhausts and returns that stir up the air as well as add or remove energy from the system. We have windows that have different thermal qualities from walls, even with walls you have inside walls and outside walls. During the day the sun shines through the window, unless the blinds are pulled. At night it doesn’t. Then you can have people and electrical equipment that put out heat.
The solution is to either have sufficient sensors, or to calculate the error bars appropriately. The same is true with the atmosphere. With enough sensors you could make a definitive statement regarding the absolute energy content of the atmosphere and how this energy content is changing over time.
Without enough sensors, you need to calculate the appropriate error bars to compensate for the data you do not have.

FAH
Reply to  FAH
August 29, 2015 9:14 pm

MarkW, absolutely, with enough sensors the atmospheric energy content can be measured. The question is how many would one need, could all the sensors be temperature sensors, and would the energy content be expressible as a simple sum of the temperatures.
For the case of a habitable room, HVAC systems are designed to hold the relevant quantities of temperature and humidity within comfortable bounds and to mix the air sufficiently so that the distribution of heat is as uniform as possible, despite the vagaries of day/night, amount of window area, inside/outside walls etc. So the HVAC controlled room air is designed to be a small region in a thermodynamic space. The control point is to keep the temperature (and humidity and mixing) within a small set of bounds. Under these conditions, a few temperature measurements (such as done by a few thermostats around the house or room) can fairly adequately capture the energy content. Given that the HVAC system keeps the room at some equilibrium state, the average temperature is a good proxy for the energy content. The uncertainty on the measurement would be the uncertainty on the control system bounds and the uncertainties of the temperature measurements. The temperature measurements could be made as precise as one wished, but the uncertainty of the energy estimate would generally be dominated by the variation allowed by the control system.
For the atmosphere, there is no human-designed control system maintaining the system in global or local equilibria. Recall that temperature appears in a partial derivative in general, and only in restricted systems in a total derivative. There are a variety of other thermodynamic variables necessary to characterize the energy content of a system. Some of these are humidity, pressure, convection, turbulence, work being done by or on the air mass, etc. For example, the specific heat of air varies with humidity by about 5 percent or so over atmospheric ranges. Atmospheric pressure and density vary with altitude exponentially, so we need to consider the size of the volume for which we want to measure the energy, and measure the temperature as a function of the altitude desired. The work done on and by flowing air masses varies with the surface roughness, wind speeds, and the characteristics determining the turbulence, such as the height of the planetary boundary layer, which essentially sets the Reynolds number of the local flow. Convective flow itself comprises a significant amount of the energy. So if we want to measure the energy content of some fraction of the atmosphere, we need to measure a lot of things, not just temperatures. Further, although we may be able to measure individual temperatures very precisely, say to 0.001 degrees C, the uncertainty of the energy estimate will be dominated by the variations in the other thermodynamic variables. A simple average of temperatures alone has large uncertainties as a measure of an extensive quantity like energy content over the globe. Given these uncertainties, there is no way to distinguish between variation in the temperature measurements due to the above-stated factors and the simple spatial variation over the globe, and the strict statistical estimate of the uncertainty is quite large. I suspect this underlies the penchant for using even local anomalies, although that introduces another statistical quirk since anomalies are actually deviations. The issue of the distribution of temperatures about the calculated average, and the import of that for inferences drawn, is part of another long conversation.
This is not to say that a simple average of temperatures is useless, just that it alone is not a thermodynamic quantity. (Unless the system is being artificially maintained in some narrow ranging equilibrium state like a habitable room.) Like the tide gauge measurements a historical record of temperature average can be used as an index and observed and searched for relationships to other things, under the assumptions that the underlying system is either cyclic, static, evolving, or whatever one wants to consider. Correlations to other variables can be examined. It is just how stock market players calculate the big price indexes, measures of debt/earnings, volatility indices, etc. and search the behavior over time for patterns relating them to consumer confidence, durable goods purchases, inventory levels or whatever.
The main problem with the simple global temperature index occurs when it, along with its statistical uncertainty, is thought of as a thermodynamic quantity and erroneous inferences are made.

Reply to  FAH
August 29, 2015 11:03 pm

@ Bob Armstrong:
reductionem ad impossibilitatem?

E.M.Smith
Editor
Reply to  FAH
August 30, 2015 12:48 pm

@Willis:
The way an intensive property can be averaged and be useful is to find those things that make it into an extensive property and account for them. So to turn ‘temperature change’ into ‘heat change’ takes more data.
With calorimetry, that is the mass, the specific heat, the heat of fusion and the heat of vaporization (if those phase changes happen), and any changes of composition (as they reflect into specific heat) along with any heat leakage paths.
Do all that, and then you are measuring A THING that is well defined, and temperature can have meaning after averaging (as you are not really just averaging a temperature anymore, you are averaging a quantity of heat ‘by proxy’ via the temperature of that specific thing, once qualified for mass, specific heat, etc.).
So go back and look at your examples:
Taking your body temperature: Mass approximately constant. Specific heat approximately constant.
Taking density of water: You specified a specific mass (cubic mile or whatever) and with constant composition.
That’s the whole point. You MUST account for those other factors for any hope of utility to the intensive property average, after using them to find the extensive property that you can average.
A counter example:
Take two cups of water. One is 0 C the other is 40 C. Mix them. What is the final temperature?
The average of the temperatures is 20 C. How useful was that math?
Well, was that first cup frozen water at 0 C or liquid? (water can be in either phase at that temperature).
Were both cups full, or was one 2 g and the other 2000g?
Was one salt water and the other fresh? Or was one D2O?
See the problem?
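A minimal calorimetry sketch of the two-cup example, with hypothetical masses and standard textbook constants, showing how far the simple 20 C average can sit from the real mixing temperature once mass and phase are accounted for:

```python
# Ignores heat losses and the temperature dependence of the constants.
CP_WATER = 4186.0     # J/(kg K), liquid water
L_FUSION = 334000.0   # J/kg, latent heat of melting ice

def mix(m1, t1, m2, t2, first_is_ice=False):
    """Final temperature (C) after mixing sample 1 with sample 2.
    If first_is_ice, sample 1 is ice at 0 C and must melt before it can warm."""
    if not first_is_ice:
        return (m1 * t1 + m2 * t2) / (m1 + m2)      # mass-weighted average, both liquid
    heat_from_2 = m2 * CP_WATER * t2                # heat sample 2 can give down to 0 C
    heat_to_melt = m1 * L_FUSION
    if heat_from_2 <= heat_to_melt:
        return 0.0                                  # not all the ice melts; mixture sits at 0 C
    return (heat_from_2 - heat_to_melt) / ((m1 + m2) * CP_WATER)

print(mix(0.25, 0.0, 0.25, 40.0))                    # equal liquid cups: 20.0 C
print(mix(0.25, 0.0, 0.25, 40.0, first_is_ice=True)) # one cup is ice: 0.0 C, ice left over
print(mix(0.002, 0.0, 2.0, 40.0))                    # 2 g vs 2000 g: about 39.96 C
```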
That is why just averaging an intensive property gives useless, meaningless results: in all the ways you could use it for benefit, you must account for the rest of the properties of the things measured, and find or control those other properties enough to have an extensive property that you are really averaging.
Now for a single thing you can assume those properties acceptably constant (though even there it isn’t always a safe assumption… if you have ONE cup of water at 0 C warmed to 40 C, how much energy did it take?… you don’t know…), so you can assume that for ONE thermometer outside my front door, if it is averaging 100 F in the afternoon, it’s a darned hot week. But if you have one reading 100 F (somewhere) and another reading 20 F (somewhere else) you can not say a thing about whether the day is a nice day or not. And the average of 60 F is not very useful.
The problem with averaging temperatures is exactly that. We have a constantly varying set of instruments, in varying places, with varying methods (LIG, electronics, aspirated, latex vs whitewash) measuring a constantly varying air body (density, composition (humidity), specific heat, mass as rain leaves it ) in a constantly varying environment (snow, rain, dry, smoke) and then making as our very first step a monthly min-max average that then gets averaged in with a load of other crappy things.
Nothing is being standardized to make the intensive property useful. There is no real way to calculate the extensive property of heat gain / loss from that basket of heterogeneous intensive measurements and averaging them makes it worse, not better.
What is required is exactly what is required in EVERY real discipline of science and what was thoroughly drummed into me in Chemistry class in high school: Don’t screw around with the thermometer. MEASURE PRECISELY the mass of the material being measured. Know the specific heat, heat of fusion and heat of vaporization. ACCOUNT for ALL mass, phase, and composition changes. Then, maybe, you might be able to do some kind of mediocre calorimetry; but don’t count on it unless it is all in a Dewar flask. (And even then double check the seals and all).
So it isn’t that temperatures are useless, or that you can’t average readings for one thing and one thing only, measured with the same instrument over time; it is that, as was stated in the comment to which I was responding, they can NOT be heterogeneous. And every single air sample is different from the prior one, as “you can never cross the same river twice”.
So if my Max thermometer records 100 F, 99 F, 110 F, 95 F I can say that the days were generally warm and all of them were above 95 F, but I can NOT say the average temperature was 101 F, as that is a meaningless number. It is a statistic about temperatures and not a temperature. So the only right thing you can say is that the mean of the values was 101.0 (note: no F) and that at some time it ranged from 95 to 110, but you don’t know for how long, or how hot it was most of the time (was it a 2 minute peak, or 100 for 12 hours). And absolutely NOTHING can be said about heat gain / loss, as the key values of mass, composition, specific heat, phase change and the associated heats, etc. are all missing.
Add in a MIN reading and it gets worse. Is the average of a 0 C / 40 C day and a 20 C day “the same”? If 3 feet of snow fell at that 0C time, is it ‘the same’?
Simply averaging heterogeneous temperatures is useless.
(And if you think you have homogeneous temperatures from just using one thermometer at different times, remember that you are not relieved of the necessity to account for variations of mass, specific heat, etc. etc. So I might measure 100 F twice in a row, but one in dry air and the next in rain are two very different things… and one in the shade with one in the sun just as bad… and… Then calling that a proxy for ‘heating’ is just daft.)

Reply to  FAH
August 30, 2015 9:55 pm

E.M., this kind of pitching and tossing of your whole comment is just changing the goal posts. YOU SAID:

Exactly right. One can NOT average an intensive property and preserve meaning.

I gave you several examples where averages of an intensive property have lots of meaning, including averaging ENSO3.4 temperatures, using a laser to measure the surface of a tank of water (or the surface of the ocean for that matter), someone saying “the average temperature last week was -20” and whether that preserves enough meaning to encourage you to put on a coat before going outside, and Archimedes’ discovery of how to do an exact calculation of the average density of the king’s crown, an intensive property.
In response, you come back with this eye-popping opening …
E.M.Smith August 30, 2015 at 12:48 pm
@Willis:

The way an intensive property can be averaged and be useful is …

BZZZT! Wrong answer! You already claimed in capital letters that an intensive property can NOT be averaged and preserve meaning … and now you come back and you want to lecture me on how an intensive property CAN be averaged and preserve meaning?
My friend, that’s what I just got done explaining to you, exactly how an intensive property CAN be averaged and still have meaning, complete with examples. As I explained to you, to get a meaningful average of an intensive property, you need one of two things—either 1) lots of measurements, the more the better, to reduce your error estimate, or 2) some way to take an end run, like Archimedes did.
And you now are repeating those same two ways back to me in different words … save your breath.
Sorry to be so brusque, Doc, because I generally do like your work, comments, and ideas, but I don’t know how to describe you moving the goalposts, and then repeating my words back to me as if it was new information, in any but a negative manner …
w.

FAH
Reply to  FAH
August 31, 2015 2:43 pm

Willis and E.M et. al. in this discussion thread: I think the discussion is needlessly contentious and we all actually agree as long as we are precise in our terms and understanding of what each of us is saying.
First, it is not the case that an average of intensive quantities is meaningless, or useless, certainly not in general. Second, it is also not the case that an average of intensive quantities over a heterogeneous or non-equilibrium system or set of systems is IN GENERAL a well-defined THERMODYNAMIC variable, even while maintaining its utility. Third, it can be the case that for a restricted or well defined SPECIFIC system or subsystem an average over intensive quantities is a good measure of the intensive property relevant to the specifically defined system.
To the first point, Willis has given a number of examples in which an average over intensive quantities is useful and indicative of some integrated behavior of the whole system. Water levels are a good example. The historical record of high and low tides over a variety of meteorological conditions at a specific location is very useful as a guide to how to plan future construction, tidal surge amelioration infrastructure and the like. It doesn’t matter whether it is or is not a rigorously defined thermodynamic quantity. We take meaning from the historical record as an indicator of the likely future behavior. Even the much maligned global average surface temperature is useful and not without meaning. If it is calculated consistently and as accurately as possible it serves as a historical record of itself and provides a useful target for modelers attempting to model the heterogeneous non-equilibrium climate. As long as one keeps in mind, when assessing the uncertainty of the average, to differentiate between the measurement uncertainties of the instruments and the uncertainty with which the average approximates a thermodynamic quantity, there is no problem.
To the second point, even though an average of intensive quantities may be accurately calculated and we have historical records indicating its utility as an indicator of the evolution of the system, it is not in general (but see the third point) a thermodynamic quantity. There is no shame in not being a thermodynamic quantity. It is still useful and meaningful. It can still indicate some amalgamated underlying behavior. Not being thermodynamic simply means that the average itself may not appear in a thermodynamic equation based in physics. Alas we do not have equations to represent everything we want, else we would not be having this discussion.
To the third point, if the system under consideration is sufficiently restricted that the intensive quantity is constant enough over the system (meaning its variations over the system are small compared to the underlying dynamics) or the dynamics are represented in some kind of weighting system, then an average of an intensive quantity can be a good approximation of the intensive quantity relevant to that subsystem. Water level in a fixed tank is a good example. It may be the Nino index is another, in the sense that variations over the sea surface considered are small compared to the dynamics of the system – I honestly know nothing about that quantity. It doesn’t make any difference except for how we want to use it, as a term in a thermodynamic equation, or as an indicator to use to predict through correlation or through use of underlying detailed models. It is useful and meaningful in either case.
In summary, depending on the specific case an average over intensive quantities may represent a thermodynamic average intensive quantity or it may not represent an average thermodynamic intensive quantity. In either case, the number can still have immense utility and meaning. In a complicated world, not all numbers can be well defined thermodynamic quantities, but they are still useful. It remains for us to use the numbers wisely and try to avoid arguments when we all agree, at least when we understand what we each are saying.
To emphasize the point, the statement that “An average over intensive quantities has NO meaning” is incorrect. Also, the statement that “An average over intensive quantities is ALWAYS a thermodynamic variable” is incorrect.

Robert B
Reply to  FAH
August 31, 2015 4:28 pm

Nicely written, EM. One suggestion is to use as an example of an intensive property the salinity of sea water. For most of the oceans it is between 3.1% and 3.8% and according to UCSB Science Line, the average is 3.47% or 34.7 parts per mil.
You can be fairly confident that taking a 1 mL sample will give you a good estimate of the salinity for miles around, provided it is not taken right at the surface of still seas, while it is raining, or near the bilge pump outlet. The land temperature record is worse than taking such measurements only at the surface and near coastlines, and then saying that the salinity has gone up to 3.48% and that this is making sea snails less amorous. The data is an indicator, but how much worth does it have?
The temperature record is only an indicator of change, and it really should be kept as separate max and min temperatures; the mean of the two is meaningless.

richardscourtney
Reply to  john robertson
August 28, 2015 9:52 pm

john robertson and FAH:
Yes! And it has been said many times in many places. But few want to know.
Please see this, especially its Appendix B.
Richard

Reply to  richardscourtney
August 31, 2015 5:49 am

I wish to highly recommend EM Smiths comment here… http://wattsupwiththat.com/2015/08/28/how-fast-is-the-earth-warming/#comment-2017711 which puts in laymen terms the entire discussion about intensive and extensive properties. It takes clear understanding of a subject to so effectively communicate.
I disagree with the Willis rebuttal because this is a science discussion, and for Willis to take such a literal and complete interpretation of a subject that by its very nature is always relative is not logical. Clearly E.M. was referring to the fact that the way the global average surface record is currently calculated does not turn the intensive temperature readings into an extensive property, and a GMT derived by averaging an intensive property in that way is meaningless. Also, the fact that the examples Willis used apply extensively as well as intensively is cogent to what E.M. Smith initially stated.
The EM Smith post clearly explains the difference between extensive and intensive properties , and that my friends is highly valuable.
Thanks for the post.

Reply to  richardscourtney
August 31, 2015 10:29 am

David A August 31, 2015 at 5:49 am

I wish to highly recommend EM Smiths comment here… http://wattsupwiththat.com/2015/08/28/how-fast-is-the-earth-warming/#comment-2017711 which puts in laymen terms the entire discussion about intensive and extensive properties. It takes clear understanding of a subject to so effectively communicate.

And I wish to highly recommend my reply to E. M. Smith here, which points out that Doc is just parroting back to me what I said to him, and that his “explanation” is in total contradiction to his original statement.
Originally he said:

Exactly right. One can NOT average an intensive property and preserve meaning.

After I pointed out, with a number of examples including Aristotle, that he was wrong and that one CAN average an intensive property and preserve meaning, he replied starting with …

The way an intensive property can be averaged and be useful is …

David, that’s not an “effective communication”. It is a failed attempt to get people to ignore his first statement, and it is merely a restatement of what I’d just said. It seems that that impresses you, so it appears he was at least partially successful in his attempt at misdirection.
w.

Reply to  john robertson
August 28, 2015 11:16 pm

If you don’t like anomalies, add a constant.
You realize that all you do when you calculate anomalies is shift by the mean.
Ya… subtract a constant.
Guess what: do the algebra. Subtracting a constant doesn’t change anything.
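A minimal sketch of the algebra point, on synthetic numbers rather than any particular dataset: subtracting a constant baseline (which is all an anomaly calculation does here) changes the intercept but leaves the fitted trend unchanged.

```python
# Subtracting a constant from a series does not change the slope of a linear fit.
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(1200) / 12.0                                       # 100 "years", monthly steps
absolute = 14.0 + 0.01 * t + rng.normal(scale=0.2, size=t.size)  # synthetic absolute temps
anomaly = absolute - absolute[:360].mean()                       # relative to a 30-year baseline

print(np.polyfit(t, absolute, 1)[0])   # slope in C per year
print(np.polyfit(t, anomaly, 1)[0])    # identical slope, to rounding
```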

Stephen Richards
Reply to  Steven Mosher
August 29, 2015 4:40 am

Steven its not the anomalies that are the problem but the derivation of average temps calculated from manipulated data.

Jim G1
Reply to  Steven Mosher
August 29, 2015 7:21 am

If the intent is to propagandize by causing alarm, then anomalies, by rescaling, sure do a great job. When one considers the precision of the equipment, the sampling, and the adjustments to the data, along with some of the statistical gyrations to which it is sometimes subjected, there is little meaning in the results. Charts of actual temperature over time are not nearly as alarming, particularly when we consider that we have been in an interglacial for about 12,000 years.

john robertson
Reply to  Steven Mosher
August 29, 2015 9:00 am

Steven Mosher
And this improves the quality of the information available how?
When your constant has an error range of 2 to infinity degrees C, what meaning does a 0.1 C variation in it have?
When your average is a meaningless construct, what do precisely defined deviations from it mean?
What information is being provided?
One of the features of this nonsense masquerading as science is the deliberate refusal to define the terms.
Climatology is babble by choice; prattling on about the number of angels that can dance upon the head of a pin does nothing to confirm the existence of said angels.
The false precision of these variations in the estimated global average temperature reminds me of the fabric of the Emperor’s New Clothes.

Solomon Green
Reply to  Steven Mosher
August 30, 2015 5:23 am

What about adding 16.0 C? To put the anomalies into perspective?

Reply to  Steven Mosher
August 30, 2015 10:21 pm

Stephen Richards August 29, 2015 at 4:40 am

Steven its not the anomalies that are the problem but the derivation of average temps calculated from manipulated data.

Thanks for the comment, Stephen. The problem with your claim is that almost all climate data used by competent scientists is manipulated in some fashion.
For example, Steve McIntyre discovered a lovely example in (from memory) the GHCN temperature data. It seems there was some African country where, for a couple of years, they took their temperature data in °F rather than in °C … and the GHCN didn’t catch it.
Now, when you see data like that, which is the norm rather than the exception in climate science, what do you do? Throw it away? We don’t have enough data to do that, and besides, it was obvious what was wrong. So you convert the °F to °C, and Bob’s your grandma’s illegitimate son … but then, of course you are using MANIPULATED DATA, oh, no, that’s evil, can’t have that …
Or suppose you’ve changed out your mercury thermometer for a new one in your temperature station. A few years later, after you’ve collected enough daily data, you notice that all your readings seem a bit high. So you check the new thermometer against a standard, and it reads 2° high. You check it again and again over time, and it’s always 2° high; some kind of manufacturing error perhaps.
So … would you throw out all that data?
Or would you just subtract 2°C from it and use it? Me, if I’d spent several years getting out there every day and collecting the data, I know I’d vote for the latter … which means that I’d manipulate the data.
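As a toy illustration of the two kinds of “manipulation” described here (made-up readings, not real station data):

```python
# Two routine corrections: converting readings logged in Fahrenheit, and removing a
# known +2 C instrument bias found by checking against a standard.
def f_to_c(temp_f):
    return (temp_f - 32.0) * 5.0 / 9.0

mislogged_f = [68.4, 70.1, 66.9]                      # values accidentally recorded in deg F
converted = [round(f_to_c(t), 2) for t in mislogged_f]

biased = [17.3, 18.0, 16.5]                           # thermometer known to read 2 C high
debiased = [round(t - 2.0, 1) for t in biased]

print(converted)   # [20.22, 21.17, 19.39]
print(debiased)    # [15.3, 16.0, 14.5]
```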
In climate science, the correct question is almost never “was the data manipulated”? Of course it was, at a minimum it will have undergone quality control, which may include things like breakpoint identification, checks for heterogeneity, outlier detection, decisions on how to deal with missing data, the list is long and very data-dependent … but regardless, any scientist would be a fool to just take raw data as is and use it without doing at least that amount of “data manipulation”.
Since there is almost always some type of data manipulation, the relevant question to ask is “Exactly HOW was the data manipulated?” Once you know that, you can discuss whether or not that particular change was justified or not.
In other words, both data manipulations and climate data itself are like spaghetti westerns—some are good, some are bad, and some are ugly. As a result, the oft-repeated mantra of “raw data good, ‘manipulated’ data bad” is a very misleading oversimplification of a much more complex and nuanced situation.
Best regards,
w.

MarkW
Reply to  john robertson
August 29, 2015 9:08 am

In 1880, and for many years afterward, thermometers were read to the “nearest” degree C, using the Mark I eyeball. The absolute best error bars you could get with such a reading are +/- 0.5 C. Add in the problems with site maintenance, missed readings, undocumented equipment and location changes, and coverage of less than 5% of the earth’s surface.
The idea that we could tease a signal of just a few tenths of a degree C out of that mess is absolute nonsense.

Mike M. (period)
August 28, 2015 7:19 pm

Sheldon Walker,
You wrote: “there has been a significant drop in the rate of warming over the last 17 years”
But you present no actual evidence to support that claim. You can not decide whether something has changed without evaluating the uncertainties in the analysis. That you have failed to do, likely because you know nothing about statistics.
The first rule of time series analysis is that you do not average, smooth, or filter data before doing the analysis. You don’t merely violate that rule, you average over the same time period as you fit, pretty much guaranteeing an invalid result.
p.s. I think there may be an exception to the above rule: removing a periodic signal the origin of which is well enough understood that one can be sure it is not relevant to the subsequent analysis. But some people have conniptions over that.
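A minimal sketch, on pure synthetic noise, of why that rule exists: a long moving average (here a 121-month boxcar, matching the window used in the post) turns independent data into strongly autocorrelated data, so ordinary regression uncertainties computed afterwards are far too optimistic.

```python
# Lag-1 autocorrelation of raw independent noise versus the same noise after smoothing.
import numpy as np

rng = np.random.default_rng(0)
raw = rng.normal(size=2000)                                    # independent "monthly" noise
smooth = np.convolve(raw, np.ones(121) / 121.0, mode="valid")  # 121-point moving average

def lag1(x):
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

print(lag1(raw))     # near 0
print(lag1(smooth))  # near 0.99: naive error bars on a fit to this would be badly understated
```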

E.M.Smith
Editor
Reply to  Mike M. (period)
August 28, 2015 9:55 pm

Since all the surface temperature series start with daily min max averages that are then averaged into monthly averages that then get turned into regional and even a global average, doesn’t that also mean they can not speak to trend either…

Mike M. (period)
Reply to  E.M.Smith
August 29, 2015 8:01 am

E.M. Smith,
“Since all the surface temperature series start with daily min max averages that are then averaged into monthly averages that then get turned into regional and even a global average, doesn’t that also mean they can not speak to trend either”
Maybe, maybe not. I do not have the expertise to make that judgement with any confidence, although I do have enough expertise to see the flaws in Walker’s analysis. Science is like that; it is not so much a matter of getting things right as getting rid of things that are wrong.
I think it likely that monthly averages can be justified on the grounds of time scale separation, especially since variance on time scales of up to a week or so is very different from variance on time scales of a few months and longer. The removal of seasonal variation by using anomalies likely falls under the exception I mentioned above; we certainly understand the cause of those variations. We also understand the origin of diurnal variation, but the correct average in that case is unclear. Daytime T’s are usually representative of a boundary layer that can be up to 2 or 3 km deep; nighttime T’s often only represent the nocturnal inversion layer that might be no more than 100 meters deep. So averaging max and min is averaging measurements of two different things. There are many such issues that arise with spatial averaging.
Given the importance that the mainstream climate science community seems to attach to these numbers, it is important that issues such as the above be addressed. It frustrates me that the climate scientists do not appear to have made that effort. I find myself wondering if the numbers are as important as claimed, or if they are just convenient for propaganda.

richardscourtney
Reply to  Mike M. (period)
August 28, 2015 10:01 pm

Mike M. (period):
I agree. Any data can be processed to show anything.
The climate data are all processed prior to presentation. This casts severe doubt on any conclusions drawn from climate data.
Additional processing of the presented climate data increases the doubts concerning any conclusions – e.g. “rate of warming” – drawn from the additionally processed data.
None of the data indicate anything in the absence of justified estimation of its inherent error range(s).
Richard

Reply to  richardscourtney
August 29, 2015 5:13 am

The month of December, 2014 is an interesting case study regarding these anomaly products. At the end of November, 2014 the earth had a single global mean surface temperature. At the end of December, 2014 the earth also had a single global mean surface temperature. The difference between the global mean surface temperature at the end of November, 2014 and the global mean surface temperature at the end of December, 2014 is a unique value. However, the change in the global mean surface temperature anomaly, and thus the change in the global mean surface temperature, reported by the three primary producers of global mean surface temperature anomaly products for December, 2014 is not a unique value. GISS reported an anomaly increase of precisely 0.06 °C; NCDC reported an anomaly increase of precisely 0.12 °C; and, HadCRUT reported an anomaly increase of precisely 0.15 °C.
It is possible that the global mean surface temperature anomaly change in December, 2014 was precisely 0.06 °C, or precisely 0.12 °C, or precisely 0.15 °C. However, it is clearly not possible that the anomaly change, and thus the temperature change, was precisely 0.06 °C and precisely 0.12 °C and precisely 0.15 °C. It is certainly possible that the anomaly change was somewhere within the range of values reported by the three primary anomaly product producers. It is also possible that the actual anomaly change was not within that range. It is interesting, however, that each of these disparate anomaly change estimates was just large enough to permit the producer to claim that 2014 was the warmest year on record, even if with less than 50% certainty.

Reply to  richardscourtney
August 29, 2015 7:25 am

“I agree. Any data can be processed to show anything”
The real crime is the number of people who are absolutely fooled into thinking we actually are living in the warmest year EVAH, and are sure of it to a dead certainty, to the point of thinking shutting down our economic lifeblood…our energy infrastructure…is a good idea.
Worse yet…to the point of wanting to tell children in schoolrooms how doomed and f**ked they are.
This is the real crime.

Mike M. (period)
Reply to  richardscourtney
August 29, 2015 8:05 am

firetoice2014 wrote: “GISS reported an anomaly increase of precisely 0.06 °C; NCDC reported an anomaly increase of precisely 0.12 °C; and, HadCRUT reported an anomaly increase of precisely 0.15 °C.”
That is incorrect. The anomaly values are not precise, they are approximate. I am not sure what the error estimates are, but it would not surprise me if they are large enough for the various numbers to be consistent with each other.

Reply to  richardscourtney
August 29, 2015 8:21 am

Yes, Mike, consistent with each other, but entirely inconsistent with UAH and RSS, and nothing in the physics of the IPCC climate models can explain that.

Reply to  richardscourtney
August 29, 2015 8:58 am

“I am not sure what the error estimates are”
Indeed you do not.
No one does.
Because, unlike the scientific method, the climate science method most commonly makes no mention of error bars or certainty levels.
How on Earth would you, or anyone else, know something which is absent from the story?

Brandon Gates
Reply to  richardscourtney
August 29, 2015 5:19 pm

Menicholas,

Because, unlike the scientific method, the climate science method most commonly makes no mention of error bars or certainty levels.

Bullcrap. Here is just one example of an entire paper devoted to error and uncertainty in HADCRUT4: http://www.metoffice.gov.uk/hadobs/hadcrut4/HadCRUT4_accepted.pdf
Here’s Hansen et al. (2010), which describes the entire GISTEMP process and includes a lengthy discussion of error and uncertainty estimates: http://pubs.giss.nasa.gov/docs/2010/2010_Hansen_etal_1.pdf
Both papers are cited in the FAQ pages for their respective temperature product.

Reply to  richardscourtney
August 29, 2015 10:03 pm

Brandon, the divergence from the satellites has become an overwhelming issue, impossible for the physics contained in the IPCC models to overcome. They could run their models from here to eternity, and they will not have the surface rapidly warming while the troposphere is flat or cooling.

richardscourtney
Reply to  richardscourtney
August 30, 2015 1:13 am

Brandon Gates:
I wrote

None of the data indicate anything in the absence of justified estimation of its inherent error range(s).

In the subsequent sub-thread Menicholas wrote

unlike the scientific method, the climate science method most commonly makes no mention of error bars or certainty levels.
How on Earth would you, or anyone else, know something which is absent from the story?

Menicholas is right because as he says the climate science method most commonly makes no mention of error bars or certainty levels. But you have replied to him saying

Bullcrap. Here is just one example of an entire paper devoted to error and uncertainty in HADCRUT4: http://www.metoffice.gov.uk/hadobs/hadcrut4/HadCRUT4_accepted.pdf
Here’s Hansen, et al. (2010) which describes the entire GISSTEMP process, which includes a lengthy discussion of error and uncertainty estimates: http://pubs.giss.nasa.gov/docs/2010/2010_Hansen_etal_1.pdf
Both papers are cited in the FAQ pages for their respective temperature product.

And your reply ignores the word “justified” in my original statement that “None of the data indicate anything in the absence of justified estimation of its inherent error range(s).”
The teams you cite each provide 95% confidence limits for their results. However, the results of the teams differ by more than double those limits in several years, and the data sets provided by the teams have different trends. That is a clear discrepancy because these data sets are compiled from the same available source data (i.e. the measurements mostly made at weather stations using thermometers), and purport to be the same metric. This discrepancy demonstrates that the 95% confidence limits of at least one of these data sets are wrong.
So, we know the error estimates for these data sets are wrong but we do not know how wrong they are. And, therefore, the data sets are not meaningful.
Richard

Brandon Gates
Reply to  richardscourtney
August 30, 2015 3:24 pm

richardscourtney,

Menicholas is right because as he says the climate science method most commonly makes no mention of error bars or certainty levels.

Once again: discussion of error and uncertainty is found on the documentation pages for GISTEMP LOTI: http://data.giss.nasa.gov/gistemp/FAQ.html
… and HADCRUT4: http://www.metoffice.gov.uk/hadobs/hadcrut4/data/current/download.html
Both groups published papers in literature describing how they account for and estimate errors and uncertainties in their respective products:
GISTEMP: http://pubs.giss.nasa.gov/docs/2010/2010_Hansen_etal_1.pdf
HADCRUT4: http://www.metoffice.gov.uk/hadobs/hadcrut4/HadCRUT4_accepted.pdf

And your reply ignores the word “justified” in my original statement that “None of the data indicate anything in the absence of justified estimation of its inherent error range(s).”

You repeat a false claim after it has been shown to be wrong. Multiple sources of error and uncertainty are detailed at length in the papers I have twice now linked to.

The teams you cite each provide 95% confidence limits for their results.

More or less, yes … for GISTEMP it’s a 2-sigma error bound. The 95% confidence level is 1.96 sigma, which is what HADCRUT4 publishes. And still you maintain that this is “most commonly” not done in climate science. Bizarre.

However, the results of the teams differ by more than double those limits in several years, and the data sets provided by the teams have different trends.

One wonders how to do percentage calculations on the difference between two ANOMALY time series with arbitrary baselines.
From 1880-2015, the trends (in C/decade) are 0.065 for HADCRUT4, 0.070 for GISTEMP, difference of 0.005.
For HADCRUT4 the mean annual CI from 1880 to 2015 is +/- 0.108 C, for GISTEMP it is +/- 0.054. Using a 1981-2010 baseline, there are 7 years from 1880 to 2015 where the error envelopes do not intersect: 1932, 1938, 1948, 1949, 1951, 1952 and 1953. In the other 95% of the record, the CIs overlap, which makes a lot of sense if you think about it. The mean annual difference between the two products using the same base period over the same interval is 0.058, the Pearson correlation coefficient is 0.98, and the two-tailed P statistic is 2.5 * 10^-20 … or effectively zero.
In other words, the two datasets are not statistically significantly different from each other at the 95% confidence level.
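For what it’s worth, the overlap test described here is simple to sketch; the numbers below are made up for illustration and are not taken from either product.

```python
# Two estimates of the same annual anomaly are treated as consistent here if their
# confidence intervals intersect.
def intervals_overlap(value_a, half_width_a, value_b, half_width_b):
    return abs(value_a - value_b) <= (half_width_a + half_width_b)

# (anomaly, +/- CI half-width) for one hypothetical year
series_a = (0.52, 0.108)
series_b = (0.58, 0.054)
print(intervals_overlap(*series_a, *series_b))   # True: the envelopes intersect
```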

That is a clear discrepancy because these data sets are compiled from the same available source data (i.e. the measurements mostly made at weather stations using thermometers) …

GISTEMP LOTI uses ERSST.v4, whilst HADCRUT4 uses ICOADS for sea surface temperatures. HADCRUT4 uses some land surface station data not included in GHCN, therefore those stations are not represented in GISTEMP. HADCRUT4 also discards some stations in GHCN entirely due to excessive gaps, whereas GISTEMP infills gaps by interpolation based on neighboring stations.

… and purport to be the same metric.

Both organizations characterize their products as estimates of mean global temperature trends subject to uncertainty and error. They also both routinely compare their own product to others, by final results as well as by methodological differences.

This discrepancy demonstrates that the 95% confidence limits of at least one of these data sets are wrong.

The fact that confidence intervals are being calculated in the first place implicitly tells anyone with the barest inkling of common scientific practice that the results of both products are wrong regardless of whether they’re using the same source data and methods (which they are not). The question of wrongness then becomes one of degree, estimates of which are in fact published, easy to find, and readily accessible. One need only bother to go looking for the details … or at the very least review them once citations have been provided — TWICE no less.

richardscourtney
Reply to  richardscourtney
August 31, 2015 12:25 am

Brandon Gates:
I do not know if your reply to me is obtuse or idiotic. Perhaps you can clarify?
I wrote

The teams you cite each provide 95% confidence limits for their results. However, the results of the teams differ by more than double those limits in several years, and the data sets provided by the teams have different trends. That is a clear discrepancy because these data sets are compiled from the same available source data (i.e. the measurements mostly made at weather stations using thermometers), and purport to be the same metric. This discrepancy demonstrates that the 95% confidence limits of at least one of these data sets are wrong.
So, we know the error estimates for these data sets are wrong but we do not know how wrong they are. And, therefore, the data sets are not meaningful.

You have replied with much irrelevant waffle and conclude saying

The fact that confidence intervals are being calculated in the first place implicitly tells anyone with the barest inkling of common scientific practice that the results of both products are wrong regardless of whether they’re using the same source data and methods (which they are not). The question of wrongness then becomes one of degree, estimates of which are in fact published, easy to find, and readily accessible. One need only bother to go looking for the details … or at the very least review them once citations have been provided — TWICE no less.

NO! The provision of confidence intervals tells everybody except you that the right values are claimed to be probably within the stated error range.
The fact that
“The teams you cite each provide 95% confidence limits for their results. However, the results of the teams differ by more than double those limits in several years, and the data sets provided by the teams have different trends.”
shows that the right values are NOT within the stated error range for at least one of the data sets; i.e. the data are wrong.

And this observation that the data are wrong is not overcome by publication of how the incorrect estimates of the error ranges of the data sets were calculated.
Richard

Reply to  richardscourtney
August 31, 2015 7:19 pm

Mr. Gates,
You seem to be interpreting my use of the phrase “climate science” very narrowly, if you think citing a few examples refutes my point.
I would wager that every skeptical voice on this blog could cite dozens of RECENT examples of press releases and articles which make all manner of claims and never mention uncertainty or error bars.

Brandon Gates
Reply to  richardscourtney
September 1, 2015 12:05 pm

Menicholas,
That your previous statements were ambiguous, and therefore subject to “incorrect” interpretation, is not my responsibility.
This skeptic wagers (nay: observes) that press releases from other scientific disciplines are similarly lax when it comes to mentioning uncertainty or error estimates. Which does not make it right, and in fact I deem it lamentable.

Reply to  richardscourtney
September 1, 2015 2:21 pm

OK, It’s MY fault. It is and has all been my fault, always.
I am not even sure what you just said.
I mean, I can read the words, but I am not grasping the thought you are intending to impart.
In any case, I was wondering…is it just my imagination, or were we deprived of your wonderful company here for some length of time, Mr. Gates?
You mentioned a book…is it a new one?

Lance Wallace
August 28, 2015 7:20 pm

“Calculating the slope over 121 months (the month being calculated plus 60 months on either side),”
You have points all the way to 2015. How did you locate the temperatures for the 60 months after 2015 to include in your regression?
You also show values for 1979 in the satellite series. How did you average in the 60 months of satellite temperatures before the satellites were up?

Reply to  Lance Wallace
August 29, 2015 7:36 am

There are three kinds of lies.

Peter Sable
Reply to  Lance Wallace
August 29, 2015 7:55 am

You have points all the way to 2015. How did you locate the temperatures for the 60 months after 2015 to include in your regression?

The first and last 5 years of each rate of warming curve has more uncertainty than the rest of the curve. This is due to the lack of data beyond the ends of the curve. It is important to realise that the last 5 years of the curve may change when future temperatures are added.

I think he answered that. He reduces the size of the kernel at the ends. It’s one of the “how to deal with endpoints” methods I’m evaluating.
I don’t like his choice of kernels though. Why a square? Because it’s convenient in Excel? That’s not a good choice. A Gaussian kernel or one of the kernels Loess uses would be more appropriate.
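For anyone who wants to try the comparison, here is a minimal sketch on synthetic data of the two choices: a boxcar (square) kernel like the one used in the post versus a Gaussian kernel of comparable width. The half-width of 60 months and the sigma of 30 months are my own illustrative choices, not the author’s code.

```python
import numpy as np

def kernel_smooth(y, kernel):
    kernel = kernel / kernel.sum()               # normalise so the weights sum to 1
    return np.convolve(y, kernel, mode="valid")

half = 60                                        # 60 months either side, 121 points total
boxcar = np.ones(2 * half + 1)
t = np.arange(-half, half + 1)
gauss = np.exp(-0.5 * (t / 30.0) ** 2)           # sigma chosen for a roughly comparable width

rng = np.random.default_rng(1)
months = np.arange(1632)                         # 136 years of monthly values
series = 0.0008 * months / 12.0 + rng.normal(scale=0.1, size=months.size)

print(kernel_smooth(series, boxcar)[:3])
print(kernel_smooth(series, gauss)[:3])
```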
I have all this plus data going back to GISS 2005 release. Curious to see how this analysis changes over time. Will post later.
Peter

Peter Sable
Reply to  Peter Sable
August 29, 2015 7:57 am

Here’s a set of kernels in common use by statisticians. Signal processing folks use different ones, which I always find bizarre:
https://en.wikipedia.org/wiki/Kernel_%28statistics%29#Kernel_functions_in_common_use
Peter

MarkW
Reply to  Peter Sable
August 29, 2015 9:32 am

The problem in my mind is that once you change your method of processing the data, you are no longer comparing like to like. If you can prove that changing the kernel has no impact on the data being presented, then why not use that new kernel to process all the data, since it’s just as good?

MarkW
Reply to  Lance Wallace
August 29, 2015 9:30 am

He did say that the graph is less accurate after 2010 due to the fact that we don’t have data points in the future yet.
I agree that the graph should have been truncated since the dropping accuracy means we are no longer comparing apples to apples.

Rbabcock
August 28, 2015 7:20 pm

Great read.. thanks for the effort.
Up next, how about a rate of rate of warming?

August 28, 2015 7:39 pm

2.5 C per century. That’s, let me check, century = 100 years, aka .025 C per year. Bogus!!! Where do you locate daily/annual measurements of that precision? Nothing but statistical hallucinations, lost in a cloud of +/- instrument error/uncertainty. And nothing about these temperature curve fits ties back to CO2 concentrations.
IPCC has no idea what percentage of the CO2 increase between 1750 & 2011 is due to industrialized mankind because there are no reliable measures of natural sources and sinks. Could be anywhere between 4% and 196%.
The 2 W/m^2 additional RF due to that added CO2 per IPCC is lost in the 340 W/m^2 ToA +/- 10 W/m^2 incoming and the -20 W/m^2 of clouds.
IPCC AR5 Text Box 9.2 admits their models didn’t capture the pause/lull/hiatus/stasis and have no idea how to fix them.

Reply to  Nicholas Schroeder
August 29, 2015 7:40 am

“Could be anywhere between 4% and 196%.”
I think you need to extend the percent range down to zero percent (and perhaps into negative territory, since the warmist hypothesis is obviously incorrect, which opens up the possibility that those who posit a net cooling may, in fact, be right). What do we have to indicate that at least 4% of anything is directly attributable to CO2 increases?

Louis Hunt
August 28, 2015 8:23 pm

When you said that “The entire temperature series is used,” I expected you to calculate an overall rate of warming for the entire period. I didn’t see any such calculation. I’m curious what the rate of warming would be when calculated or averaged over the entire time period from 1880 to 2015.

TonyL
Reply to  Louis Hunt
August 29, 2015 12:54 am

Here is what I get.
http://i58.tinypic.com/2q36tkg.png
Intercept = -0.6961 deg. C.
slope = 0.1002 deg/decade
Just exactly 1.00 deg/century. This sounds a bit low, but we remember that the GW narrative is ~0.8 deg C over the time from 1880 to 2000, or so. So maybe it is OK, after all.
ALL: Just because I plotted this up and calculated a trend does not imply that I think this means anything at all. Particularly in regards to planet Earth.

TonyL
Reply to  TonyL
August 29, 2015 6:52 am

I did not notice this when I first posted the graph.
(It is a big graph, click to embiggen)
The fitted line crosses some grid lines at two neat points. First is -0.50, 1900, nice and neat (within a pixel or so). The second is +0.50, 2000, nice and neat. 1.00 degree from -0.50 to +0.50 and 100 years from 1900 to 2000. All nice and neat and tidy.
This is 1627 data points with lots of noise, disparate data sets, adjustments and corrections. And it just happened that way.
I get the feeling I am being laughed at. I think somebody is mocking us.

Reply to  TonyL
August 29, 2015 7:44 am

Is it embiggen, or enbiggen?
Sophisticated minds need to know!
Ok, I got my answer.
Maybe I had wax in my ears the night that episode of The Simpsons aired.
http://www.kottke.org/07/06/embiggen-cromulent
Just goes to show…you can learn something new EVERY damn day!

BFL
Reply to  TonyL
August 29, 2015 10:34 am

@TonyL: Just eyeballing the graph, one could discern about 0.5 deg from 1880 to 1940, then flat to about 1980, then another ~0.5 to 2015, or about 1 deg over 135 yrs. Why can’t the models do this? Perhaps there is no understanding of what’s going on?

TonyL
Reply to  TonyL
August 29, 2015 10:59 am

@ BFL
You are right, but I think people know altogether too well what is really going on. One has to go back in time to the pre-internet era to get a clear idea of what was going on. Back in those ancient days, information was recorded by the use of indelible stains applied to a material produced from dead trees. Information was transmitted by physically transferring the processed material from person to person. This system, as primitive as it was, had the great advantage of permanence within reason. Information thus recorded, was quite resistant to change, and attempts at edits would leave obvious tell-tale signs. Some of those records do survive today, and they tell quite a different story. Here is one of those records, from WUWT: [image]
There are many other similar records, and they all show the same thing. There was a strong cooling trend for four decades, from the mid 1930s to the mid 1970s.
Original post here:
http://wattsupwiththat.com/2010/03/18/more-on-the-national-geographic-decline/
H/T Willis.
It seems Winston Smith, working at the Ministry Of Truth, has edited the past here.

Reply to  TonyL
September 1, 2015 12:13 pm

GISS’ “data” are a fabrication.
In reality, the world warmed in the late 19th century a fraction of a degree, then cooled. It warmed in the early 20th century (c. 1920-40) a fraction of a degree, then cooled from about 1945-77 so notably that scientists feared the next ice age was coming. Then it warmed perhaps a fraction of a degree from c. 1977-96, perhaps aided by clearer skies from anti-pollution efforts. It’s now cooling again.
The total warming since the end of the LIA in the mid-19th century might be as much as 0.8 degrees C, of which most occurred before CO2 really took off after WWII. Recall that the response of the planet to that rise was over three decades of chilly cooling.

Reply to  TonyL
September 1, 2015 12:32 pm

The warming since c. 1850 could also be quite a bit less than 0.8 degrees C. No one can know, since it’s effectively impossible to measure the average temperature of the surface of the earth even now, let alone in the past, when seawater was collected in buckets.
My own guess is around 0.7 degrees C, of which 0.4 or 0.5 occurred before WWII. The late 19th and early 20th century warmings were two steps forward (warmer), followed in both cases by one or more steps back (cooler). The mid-20th century cooling was so pronounced that it almost cancelled out all the strong warming during the 1920s and ’30s.
Odds are good that the coming cool spell will similarly cancel out most of whatever warming might actually have occurred in the late 20th century.

SAMURAI
August 28, 2015 9:17 pm

Once the CAGW hypothesis is laughed and eye-rolled onto the trash heap of failed ideas in about 5~7 years, real scientists without agendas will have to dig through GISTEMP and HADCRUT raw data and fix all the contrived “adjustments” that have been made.
Until that difficult process is completed, it’s almost senseless to use GISTEMP and HADCRUT temp data for anything other than objects of ridicule…

Chris Hanley
Reply to  SAMURAI
August 28, 2015 9:56 pm

I was going to say that.
The next five or so years will be conclusive.
If the satellite data trend shows no sign of the exponential increase, so necessary to the narrative, the show’s over.
Plaintiff lawyers should be sharpening their pencils.

MarkW
Reply to  Chris Hanley
August 29, 2015 9:35 am

It is my personal belief that we will see actual cooling over the next 10 years.

Reply to  Chris Hanley
August 31, 2015 7:26 pm

Ditto me on that cooling trend Mark.
And I think the eyeroll is the perfect response to the inanity of the warmistas.
Toss in a good eyelash flutter for emphasis too.
The problem, of course, is that the inanity is the least of it.
This BS is serious shite nowadays.
The US economy and much of our energy infrastructure are being regulated out of business.

Dawtgtomis
Reply to  rhymeafterrhyme
August 29, 2015 8:13 am

Excellent, Will.
In these mass-ignorant, pessimist times,
The truth is more poignant but fun when it rhymes!

E.M.Smith
Editor
Reply to  SAMURAI
August 28, 2015 10:01 pm

GIStemp is not temperature data. It uses GHCN and some USHCN to make averages in grid boxes via data averaging and smearing and adjusting, then makes anomalies between those grid boxes, most of which contain no actual thermometer at all…
It fabricates fantasy numbers in boxes, not temperatures.

SAMURAI
Reply to  E.M.Smith
August 28, 2015 11:50 pm

E.M.Smith– Everyone knows UAH, RSS, HADCRUT, GISTEMP, etc., are temperature “anomaly” data sets.
Few people go through the tedious task of writing out “anomalies” when referencing them, as it’s understood.
I’m just sayin’….

Reply to  E.M.Smith
August 29, 2015 8:16 am

SAMURAI,
GIStemp and HADCRUT are anomaly calculations, but they are not DATA sets, even though they began with data sets. Once the data are “adjusted”, they cease to be data, but are merely estimates of what the data might have been, had they been collected timely from properly selected, calibrated, sited, installed and maintained measurement instruments.

E.M.Smith
Editor
Reply to  E.M.Smith
August 30, 2015 1:40 pm

:
GIStemp presents temperature graphs for locations. It presents zonal and hemispheric and global temperatures, and it carries the data as temperatures through multiple stages of adjusting, averaging, and worse to make the final processed data-food-product. (I’ve ported the code, back about 2009, to Linux, and run it, and read all of it…). Only in the very last stages does it make an anomaly of ‘grid boxes’ and not of temperatures from thermometers.
Since, last I looked, they had upped the number of boxes to about 16,000 but there were at MAX 6000 thermometers and in many decades less than 1200: By Definition the bulk of their ‘anomalies’ are calculated between two “grid boxes” that contain no actual thermometer and no actual temperature.
To call that a ‘temperature anomaly’ is as much a farce as calling it a temperature.
It is, at best, an anomaly of two fictional values created by a questionable means from prior frequent averaging of intensive properties based on a non-Nyquist sample, that have themselves already been reduced to a monthly average of intensive properties. I.e. IMHO garbage. And not at all a temperature OR a ‘temperature anomaly’ as temperatures ‘left the building’ with the first monthly average making GHCN and were completely assassinated by the time two physically empty ‘grid boxes’ have an anomaly created between their fictional values.
But you can call that a ‘temperature’ if you like…. I’m sure “it’s understood”…
I call that the “anomaly canard”, as folks put it forth as though the very first step were turning the temperatures into anomalies (at which time the complaints about averaging temperatures would evaporate), but they do not in GIStemp. They carry temperatures AS temperatures through to the very end, doing all their math on them AS temperatures. THAT is not an “anomaly” based process, is not using anomalies, and is not producing “anomalies”. It produces fictional temperatures in grid boxes. Then it makes an “anomaly” between the fictions… that does not produce a ‘temperature anomaly data set’. It produces a distorted averaged temperature fiction used to make an anomaly of voids.

TonyL
August 28, 2015 10:08 pm

Same question as Lance Wallis, above.
Using a 121 month filter, you run out of data at July, 2010. To go beyond that point, some special technique must be used. This might be extrapolating a trend to 2020, or collapsing the filter down from 121 to 11 months, month by month, or some other method.
How did you handle the “end point problem”?

MarkW
Reply to  TonyL
August 29, 2015 9:38 am

From the comments in the article, I’m assuming that he just truncates the leading edge of his 121 month filter. In other words, he continues to use 60 months into the past, but just 59 months into the future, then 58 months, then 57 months, and so on until he gets to the present and is using 0 months into the future. Hence his comment about decreasing accuracy after 2010 and possible changes to the curve as new data points are added. If you want to compare like to like, you have to stop your analysis at 5 years from the beginning and ending of your data.
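For concreteness, a minimal sketch (Python, assuming the anomalies are a plain 1-D array of monthly values) of that kind of shrinking-window central moving average, where the window is simply clipped to whatever data exist on each side:

import numpy as np

def central_moving_average(y, half_window=60):
    """121-month centred moving average that clips the window at the ends,
    so endpoint values are built from fewer points (and are less certain)."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    out = np.empty(n)
    for i in range(n):
        lo = max(0, i - half_window)
        hi = min(n, i + half_window + 1)
        out[i] = y[lo:hi].mean()
    return out

The last five years then rest on progressively fewer future months, which is why those values can shift as new data arrive.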

August 28, 2015 10:58 pm

This is what I calculate from the day to Dat difference between yesterday’s rising temp minus last night’s falling temp: [image]
These are the 72 million surface records for stations with 360 samples per year.

4TimesAYear
Reply to  micro6500
August 29, 2015 12:00 am

Hey….I really like this chart!

Reply to  4TimesAYear
August 29, 2015 4:30 am

If you take the day to day change and plot it out for the northern extratropics it’s a sine wave; if you take the slope at the 2 zero crossings, it’s the rate of seasonal temperature change. [image]
The Northern hemisphere is better sampled.
This shows the rate it’s warming and cooling over the years.

Reply to  4TimesAYear
August 29, 2015 7:56 am

I like day to Dat better.
It says it all.

MarkW
Reply to  4TimesAYear
August 29, 2015 9:39 am

Dis, dat and the udder thing.

Mike
Reply to  micro6500
August 29, 2015 12:25 am

This may be interesting and valuable Mike, but without some explanation of _exactly_ what you are plotting it is meaningless to me. I am unable to interpret what this is showing and what it may mean about climate.
What does “last night’s falling temp” actually mean? Is this an average rate of change? Over what period, and from how many readings? Is it a linear regression, a mean, or what?
You also totally fail to say what “temperature” you are talking about or what your data source is.

Reply to  Mike
August 29, 2015 4:44 am

My chart is data from the NCDC Global Summary of the Day data set. For each day, by station, I calculate yesterday’s rising temp as Tmax - Tmin = rising, then Tmax minus the following morning’s Tmin = falling. Then difference = rising - falling. Each station included has greater than 360 samples per year, so that if there’s no trend it returns to zero. This is 72 million samples from around the world.
When you do this by continent, there are large swings at different times. Rising - falling is the same as yesterday’s min - today’s min. If you do the same with Tmax, yesterday’s max - today’s max, it is very flat.
What it shows is that the average isn’t changing; pools of warm water are moving around, changing surface temps regionally, but overall there are a couple of blips with no apparent loss of nightly cooling.

Mike
Reply to  Mike
August 29, 2015 7:16 am

I calculate yesterday’s rising temp as Tmax -Tmin = rising, then Tmax – Tmin from the following morning =falling. Then difference = rising – falling.

Thanks for the reply. So if I follow your verbal explication correctly and use more precise mathematical notation, I get:
rising = Tmax_n - Tmin_n
falling = Tmax_n - Tmin_(n+1)
rising - falling = (Tmax_n - Tmin_n) - (Tmax_n - Tmin_(n+1)) = Tmin_(n+1) - Tmin_n
i.e. you are in effect calculating the daily change in Tmin.
If that is wrong, could you please give a clear mathematical description of your calculation; words are not clear.

Reply to  Mike
August 29, 2015 7:59 am

You have it right.
It’s also equal to
Tmin_n - Tmin_(n+1)
It’s all calculated for every station, one day at a time; it’s an anomaly using the specific station’s prior day, so it is as accurate as possible, and since it’s also a rate of change (delta F/day) you can use it that way as well.
The chart averages to slight cooling: 50 of the 74 years are cooling, and 30 of the last 34 years are cooling.
Since the measurements are +/-0.1 F, if you use that as the uncertainty, the average for the last 74 years is 0.0 +/-0.1 F.
And the same operations done with Tmax (Tmax_n - Tmax_(n+1)) give a result much closer to zero.
This does not mean summers are not warmer, just that any extra heat is lost in the fall. And it seems to me that that is the signature of CO2 warming, and it is not evident in the surface record.
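For reference, a minimal sketch (Python/pandas, with hypothetical column names) of that calculation as formalised above, the day-to-day change in Tmin per station, then averaged by year:

import pandas as pd

def daily_min_change(obs):
    """rising - falling = (Tmax_n - Tmin_n) - (Tmax_n - Tmin_(n+1))
                        = Tmin_(n+1) - Tmin_n
    `obs` is assumed to have one row per station per day, with columns
    station, date and tmin (degrees F)."""
    obs = obs.sort_values(["station", "date"])
    obs["dTmin"] = obs.groupby("station")["tmin"].diff()
    return obs

# res = daily_min_change(obs)
# annual = res.groupby(res["date"].dt.year)["dTmin"].mean()  # deg F/day, by year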

Reply to  Mike
August 29, 2015 7:59 am

If I am interpreting this correctly, what it means is that if you wave your arms around in a big windmill fashion while talking real fast and using some big words and colorful charts…it is very impressive.

Reply to  Menicholas
August 29, 2015 8:09 am

Right, 4th grade math is fancy arm waving with big words.
Let’s compare that to all the arm waving and interpolating of data from 1,000 km away.

Reply to  Mike
August 29, 2015 8:54 am

Honestly micro, I thought you were goofing him…my bad.

Reply to  Menicholas
August 29, 2015 9:12 am

No problem, I’m protective I guess. I think it’s the best use of the surface data, and it shows that at the actual stations nothing catastrophic is going on; it’s all going on in the post processing of the data.

Steve Garcia
August 28, 2015 11:05 pm

I am pretty sure that the two warm periods correlate well with the Pacific Decadal Oscillation cool phases and that the cool periods of this/these curve(s) correlate well with the PDO warm phases – including the less-warming period beginning around 2000. This is damned near exactly what was predicted back in the very early 2000s: that the phase change (regime change) in the PDO would cause the warming to slow and probably reverse, and that the change had already begun.
I LIKE this method.
It is VERY significant that the high point of warming/century was in 1937. We’ve been beating them over the head about the 1930s for over a decade now, and they keep pretending that the 1930s didn’t happen.
Pretend science isn’t science. When you have to pretend that inconvenient data is not there, you are not an f-ing scientist; you are a priest of The Church of Preaching to the Choir.

4TimesAYear
Reply to  Steve Garcia
August 29, 2015 12:01 am

Someone told me the 30’s hotter temps were just for the U.S. Apparently not.

Reply to  4TimesAYear
August 29, 2015 8:03 am

Funny how with jet stream winds blowing air masses all around the globe a couple of times every week, all year long, we could nonetheless have one continent be warmer for an entire decade while not affecting the rest of the globe.
Very strange indeed.
And so coincidental, that this just happens to be the place with the most complete and comprehensive temperature records in the whole world.
Rolling the eyes hardly captures the disbelievability of this level of sophistry.

Reply to  Menicholas
August 29, 2015 8:16 am

It’s quite possible if you consider that ocean surface temps could induce an alternate path, changing where the transition between tropical air and polar air occurs.
You can see this over the US; most of this summer the Midwest has been under Canadian air, as opposed to tropical air.
That’s a 10 or 15 F swing in temps.

Reply to  4TimesAYear
August 29, 2015 9:12 am

The heat during the thirties was virtually coast to coast for not one or two, but many, years.

MarkW
Reply to  4TimesAYear
August 29, 2015 9:41 am

A shifting jet stream would heat some areas and cool other areas for as long as the jet stream remained “shifted”.

Reply to  MarkW
August 29, 2015 10:42 am

If the area doesn’t change, yes; but it seems like it could easily do both.
I live just south of Lake Erie; the 60’s and early 70’s were cool, and most homes and cars didn’t have air conditioning. That could easily have been because the jet stream was south of us, and the hot 80’s and 90’s could easily be explained by the jet stream being north of us.
Then, with all of the post processing of surface temps, that could be all the warming in general was.
There do seem to be regional differences in min temperature at that time between the US and Eurasia.
And the ocean cycles could be a source of forcing on the path of the jet stream. In fact we know the PDO/El Niños change where the jet stream hits land coming off the Pacific, and its track over the western half of the continent.

Dt, not just he USavid A
Reply to  4TimesAYear
August 29, 2015 10:07 pm

1930s and early 40 NH and global T (hin

Dt, not just he USavid A
Reply to  4TimesAYear
August 29, 2015 10:31 pm

The 1930s early 40s were globally warmer according to NOAA
From Tony Heller post…. Note only using NOAA’s own charts…
In 2001, NASA reported 0.5C warming from 1880 to 2000, with an error bar of less than 0.2C.
http://realclimatescience.com/wp-content/uploads/2015/08/Fig.A_20010501.gif
Now they show 1.3C warming during that same time interval, an increase of 0.8C, and nearly tripling the amount of warming. The changes they have made are 400% of the size of their 2001 error bars – a smoking gun of fraud. [image]
But it is worse than it seems. NASA had already done huge amounts of data tampering by 2001, erasing the 1940 spike shown in the NH AND globally.
From: Tom Wigley
To: Phil Jones
Subject: 1940s
Date: Sun, 27 Sep 2009 23:25:38 -0600
Cc: Ben Santer
It would be good to remove at least part of the 1940s blip, but we are still left with “why the blip”.
So what was this… [images]
eventually became the record this post used, claiming record warmth at the surface while the satellites (verified by thousands of weather balloon reports) are .3 degrees (a huge margin) below 1998, which 100 percent of all climate models run from here to eternity could never duplicate.

Gloria Swansong
Reply to  4TimesAYear
August 30, 2015 2:49 pm

Dt,
It’s not just a trillion dollar, megadeath fraud, but a global criminal conspiracy.

Reply to  4TimesAYear
August 31, 2015 6:56 pm

Thanks Dt, not just he USavid A , (what up wit’ dat handle, meng?)
I was going to get around to posting this blog post by Tony Heller. I do not really have time to monitor all the different places I am commenting. This site is only one of many, and climate only one of the topics I am given to commenting on. Right now the stock market takes up a lot of my time (as does my full time job and keeping my cats bathed).
I wanted to post it further up-thread as part of a counterpoint argument to one Mr.Gates, who, incredibly, disputes the notion that climate science is not so scrupulous about minding error bars and certainty levels.
Maybe he (and Mr. Courtney?) will notice here and respond.
What the hell good are error bars or confidence intervals if subsequent revisions ignore them?

August 28, 2015 11:47 pm

Does this mean they lied when they said “the science is settled” ?

M Seward
Reply to  jon2009
August 28, 2015 11:58 pm

No, “the science is settled” is just shorthand for “la la la la la la la la la…”

Gary P Smith
Reply to  jon2009
August 29, 2015 6:34 am

If “the science is settled”, shouldn’t they stop asking for more grant money to study the science of climate?

Reply to  Gary P Smith
August 29, 2015 8:06 am

OK, Mr. Smith, stop trying to be all logic-y about our money and our science!

Patrick
August 28, 2015 11:57 pm

Following the logic of warmists, I am left wondering if it’s all my fault. I was born in 1937 and retired in 1998!

Reply to  Patrick
August 29, 2015 2:53 am

You rotten bastard! I intend to start a class action suit against you to recover all damages due to global warming. (by the way, just how did you do it all by yourself? Are you one of those X-men or something?)
🙂

Reply to  markstoval
August 29, 2015 8:07 am

LMAO!

MarkW
Reply to  markstoval
August 29, 2015 9:42 am

Let me guess, his super hero name is Thermos, the man of heat.

toorightmate
Reply to  Patrick
August 29, 2015 5:22 am

You young blokes still have a lot to learn.
Is a 150 year or 70 year time span climate? Or is it simply weather?
Isn’t a 150 year time frame nothing more than a single point on a climate chart?

Reply to  toorightmate
August 29, 2015 6:12 am

IPCC AR5 glossary defines climate as weather averaged over thirty years. For what that’s worth.

Reply to  toorightmate
August 29, 2015 8:09 am

150 is a tiny point on a geology chart, it is true.
For climate, thirty years is the defining period, as others have noted.

MarkW
Reply to  toorightmate
August 29, 2015 9:43 am

Have they ever justified why they use 30 years? Or was it just picked out of a hat?

Reply to  toorightmate
August 31, 2015 7:39 pm

I am guessing that at some point someone needed to pick an interval which was long enough to smooth out a lot of the noise of erratic weather, but make it short enough that existing records could be used to do systematic classification of the world’s climatic zones.
It may have been Koppen, or one of his contemporaries.

Dt, not just he USavid A
Reply to  Patrick
August 29, 2015 10:34 pm

Patrick drove a Lincoln, before he bought a Suburban, which he traded in for a Hummer.

Reply to  Dt, not just he USavid A
August 31, 2015 7:42 pm

Why do I have the feeling that a dirty joke just went right over my head?

August 29, 2015 12:00 am

This all assumes of course that past rates of warming or cooling have a relationship with future warming or cooling. As this has been totally disproved with CO2 and claims for solar cycles and other theories remain unproven, the assumption that you can calculate how much it is currently warming (or even make a call on whether it actually is warming) is a giant leap of faith with no basis in science whatsoever!

Mike
August 29, 2015 12:43 am

OMG, where to start with the bad maths presented here.
Firstly, moving averages are crap as a low-pass filter, which is basically what you are trying to do. Please read and understand the following:
https://climategrog.wordpress.com/2013/05/19/triple-running-mean-filters/

4) The first and last 5 years of each rate of warming curve has more uncertainty than the rest of the curve. This is due to the lack of data beyond the ends of the curve. It is important to realise that the last 5 years of the curve may change when future temperatures are added.

No, the first and last 5 years are invalid. This is not much better than M.E. Mann’s padding. If you don’t have data to do the average you stop. Period.

3) This method can be performed by anybody with a moderate level of skill using a spreadsheet. It only requires the ability to calculate averages, and perform linear regressions.

A pretty lousy argument for applying bad mathematical processing. If you don’t know how to do anything beyond a mean and trend in Excel maybe you should learn before writing. You could adopt a triple running average as suggested in the above article.
The biggest error in all this is that you cannot meaningfully take the average of sea and land temperatures, and most of the datasets you have chosen are bunk to start with. To accept them and use them as the basis for analysis is just playing along with the alarmists that are producing falsified warming.
UAH and RSS are atmospheric measurements and not a bastard mix, so they are not directly comparable, but you don’t even point this out in the article.
One positive point about this article is that it is looking at rate of change directly. Since this is essentially what everyone is worrying about, it makes a lot of sense to examine it directly. There should be more of this.
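For anyone who wants to try that suggestion, a minimal sketch (Python) of a triple running mean along the lines of the linked article: three successive centred boxcar passes with shrinking windows (the divisor of roughly 1.3371 is the one the article discusses), which suppresses the negative side lobes of a single running mean and simply returns NaN where the data run out rather than padding:

import numpy as np
import pandas as pd

def triple_running_mean(y, window=121, ratio=1.3371):
    """Three successive centred boxcar passes with shrinking windows."""
    s = pd.Series(np.asarray(y, dtype=float))
    for w in (window, round(window / ratio), round(window / ratio ** 2)):
        s = s.rolling(int(w), center=True, min_periods=int(w)).mean()
    return s.to_numpy()

# smoothed = triple_running_mean(anomalies, window=121)
# NaNs at both ends mark where there is not enough data: no padding, no guessing.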

Reply to  Mike
August 29, 2015 3:19 am

When I was doing post grad work in mathematics (decades ago now), I had a Professor tell me that we should be teaching all students, from high school on, the subjects of statistics and logic. He predicted that having computer programs that would do all the work of statistics for us would lead to fools thinking they were “doing statistics” when they had no clue what they were doing and no clue what could be logically done. But if the answer came out of a computer then they would think it must be so! I think that my late professor made a very good prediction. (He was an applied math guy who knew computers and was not biased against their use either.)
Also consider that many of the main alarmist “scientists” who are third rate intellects at best, do all kinds of statistical analysis without ever consulting a statistician. How the hell does a paper relying on statistics to make extraordinary claims make it past peer review if there is no statistician on the team that is trying to publish? How is that possible?
And how come there is no logician anywhere who will speak out about the utter lack of logic in many of the ideas and predictions of the warmist camp. (heck, and many of the luke-warmers to boot)
Where is Dr. Spock when you need him? Or Karl Popper even.

j ferguson
Reply to  markstoval
August 29, 2015 1:19 pm

Mark:

How the hell does a paper relying on statistics to make extraordinary claims make it past peer review if there is no statistician on the team that is trying to publish? How is that possible?

Isn’t that exactly how it’s possible?

jferguson
Reply to  markstoval
August 29, 2015 8:59 pm

Oops. .. because there is no statistician on the review team either?

Reply to  markstoval
August 30, 2015 2:43 am

I don’t think one needs to be a statistician to realize that if extraordinary results are asserted via statistical means that one needs statisticians to review the claim, methods, and data collection. So, even non-statistician editors should be able to see when a statistician is needed and crucial.

Mike
August 29, 2015 12:46 am

This all assumes of course that past rates of warming or cooling have a relationship with future warming or cooling.

No it doesn’t, it looks at the available data. If someone is projecting this into the future it is in your own head.

Reply to  Mike
August 29, 2015 5:40 am

+1
Statistical analyses have no predictive, or attributive, capabilities. Those belong to the statistician.

Mike
August 29, 2015 12:53 am

If you want to use a meaningless mix of land and sea temps, why don’t you use the British HadCRUT temp data? It’s now the only one that is not applying a BS pause busting adjustment.
I would suggest looking at SST and land averages separately and using a better filter with a shorter period. That would give a similar degree of visual “smoothness”.

Pa
August 29, 2015 1:16 am

Graph 1 is showing that the Global temperature is increasing. How do you get this information from Graph 2?

hunter
August 29, 2015 1:34 am

How fast is the Earth warming? Try “trivially”, “insignificantly”, “marginally”, “barely”, “illusorily”.

PeterF
August 29, 2015 1:36 am

How come the length of the moving average series is just as long as the data series?
They should be shorter by 60 months on both ends, or 120 months on one end. Does not instill confidence.

Reply to  PeterF
August 31, 2015 7:48 pm

I am 97% sure that they have 95% confidence in their intervals.

Terry
August 29, 2015 1:59 am

The earth is not warming, as both satellite temperature data sets confirm. All three terrestrial data sets, HadCRUT4, GISS and NCDC, have been manipulated to give the impression of warming. The earth is heading for an ice age; UK scientists at Reading, Southampton and Northumbria universities confirm this is the case. Also Abdussamatov at Pulkovo Observatory, St Petersburg, and the Mahatma Gandhi Institute of Astronomy and Technology. Terri Jackson

Chris Thixton
August 29, 2015 2:07 am

Shouldn’t the question be “How fast is the climate changing?”. Answer: Nobody actually knows.

richard verney.
Reply to  Chris Thixton
August 29, 2015 6:19 am

The climate is not changing (at any rate, not so far to date), since climate is the mix of a number of different parameters, each parameter constantly changing over a wide band, the width of which is set by natural variation.
Temperature is just one of the many parameters, and the change of 1/3 to 1 deg C is well within the bounds of natural variation.
As soon as one accepts that climate is dynamic and constantly changes, then mere change alone is not in itself evidence of climate change. It is just what climate does.
Climate is regional, so for example, is the climate in the US materially different to that seen in the 1930s? Where is the evidence that it is? I have not seen any produced.
As far as I am aware, in my life time, not a single country has changed its Koppen classification, and those countries which were on the cusp of two climatic zones when the list was first produced, are still on the cusp and have not crossed the boundary into a new climate zone.

Reply to  richard verney.
August 29, 2015 8:48 am

Richard: do you have reference to work relating to Koppen changes over time? This would indeed be interesting, especially in locations near the original boundaries.

richard
August 29, 2015 5:24 am

WMO gives urban data a zero for quality. 3% of land is urbanized and 27% of the temp stations are in urban areas. So, straight off, 27% of the temp data is of zero quality.
Africa is one fifth of the world’s land mass. The majority of the African temp data is from urban areas. So that’s Africa out of the loop.
Add in the vast areas of the world where there are no temp stations.

Basil
Editor
August 29, 2015 5:25 am

Several observations.
1) I’ve always thought the rate of change in temperature was a more significant parameter than temperature itself. I would approach this more directly, before applying any smoothing (like CMA here), by using 12 month first differences (a small sketch follows below).
2) Almost any smoothing method will provoke controversy. I like Hodrick-Prescott, but I get much the same pattern with a 36 month centered moving average. The longer CMA used here is going to smooth out some significant shorter periodicities, which are seen clearly in Vukcevic’s spectrum analysis above (at 12:48). In defense, finding periodicity was not the objective here. But it does lead to the next observation.
3) Once the temperature data is stated in terms of rate of change, I’ve always been intrigued by the apparent homeostasis in the data. Obviously there are physical processes constraining rates of change in temperature: when the rate of change gets too high, it falls, and vice versa. How well do we understand the physical processes that account for this?
4) There is still an obvious upward trend/slope in the rate of change. How much of this is real, and how much of it is imagined? By “imagined” I mean here the result of constant data massaging that may be motivated by a desire to demonstrate a particular conclusion (like “there is no pause”). Some of the real upward trend undoubtedly owes to warming from natural causes. Can we really extract an anthropogenic cause after allowing for that?
5) As to the final conclusion –“So, the next time that you hear somebody claiming that Global Warming is accelerating, show them a graph of the rate of warming.”– this post hasn’t disputed that. (See Point #4.)
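Regarding point 1, a minimal sketch (Python, assuming a plain 1-D array of monthly anomalies) of the 12-month first-difference approach:

import numpy as np

def twelve_month_first_difference(monthly_anomaly):
    """Difference between each month and the same month one year earlier,
    i.e. a rate of change in degrees C per year with no smoothing applied."""
    y = np.asarray(monthly_anomaly, dtype=float)
    return y[12:] - y[:-12]

# rate_per_century = 100 * twelve_month_first_difference(anomalies)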

co2islife
Reply to  Basil
August 29, 2015 6:34 am

Once the temperature data is stated in terms of rate of change, I’ve always been intrigued by the apparent homeostasis in the data. Obviously there are physical processes constraining rates of change in temperature: when the rate of change gets too high, it falls, and vice versa. How well do we understand the physical processes that account for this?

Yep, Nature has built-in safety valves. H2O is the main moderator. H2O evaporates, absorbing heat; it rises and condenses, releasing heat to the upper atmosphere. More heat, more H2O, more clouds, less sunlight reaching earth to warm it. O3 also traps heat and alters the jet stream. Etc etc etc.

lgl
August 29, 2015 5:47 am

“So, the next time that you hear somebody claiming that Global Warming is accelerating, show them a graph of the rate of warming.”
Why bother? Like the author they will probably not understand that the graph shows that Global Warming is accelerating. Somewhere around 1 C/century^2.

MarkW
Reply to  lgl
August 29, 2015 9:55 am

Considering the fact that we only have 30 years worth of usable data, how can you be so confident of the long term rate of warming, especially considering all we have learned in recent years regarding decade and century long trends in climate data?

Dt, not just he USavid A
Reply to  lgl
August 29, 2015 10:38 pm

The entire troposphere (except for the corrupted surface) is .3 degrees cooler than in 1998.

August 29, 2015 5:56 am

I am waiting for peer-reviewed research that shows the optimum climate for our biosphere. The first question that would naturally flow from that would be where our current climate and trend sit in relation to this finding.
Strangely, nobody seems interested in this vital comparison. Not so strangely, the solutions that are frequently demanded in the most urgent voice, all converge on a socialist worldview: statism, bigger government, higher taxes, less personal liberty, even fewer people. That bigger picture tells me all that I need to know about “climate science”.

richard verney.
August 29, 2015 6:27 am

Even Phil Jones (in an interview for the BBC) accepted that there was no statistical difference in the rate of warming between the early 20th century warming period of 1920 and 1940, and the modern era/late 20th century warming period between late 1970s and ending in 1998.
Accordingly, it is common ground, even with warmists, that the rate of warming has not accelerated between the time when CO2 is said to have driven most of the observed warming (i.e., late 1970s to about 1998) and the time when manmade emissions of CO2 were too modest to have driven the warming (1920 to 1940).
I cannot recall, but Phil Jones might have accepted that the late 19th century warming had a statistically similar rate as that seen in the warming periods of 1920 to 1940, and the period late 1970s to 1998.
The fact that the rate of warming in these 3 warming periods is similar is strong evidence that CO2 is not significantly driving temperatures.

Reply to  richard verney.
August 29, 2015 9:21 am

Lindzen frequently made the same point. See essay C?agw. Cuts to the heart of the attribution issue.

MarkW
Reply to  richard verney.
August 29, 2015 9:57 am

I’ve talked with a number of warmists who proclaim that it doesn’t matter what caused the 1920 to 1940 warming, because we know that the current warming is being caused by CO2, the models prove that.

Reply to  MarkW
August 31, 2015 8:53 pm

Imagine if courts of law used such sophistry as evidence of crimes?
Anybody could be convicted of anything they were accused of, as being accused is proof of guilt in and of itself.

Gloria Swansong
Reply to  richard verney.
August 29, 2015 11:15 am

Further evidence is the fact that earth cooled during the first 32 years of the postwar surge in CO2, as it again has during the continued rise since c. 1996.

co2islife
August 29, 2015 6:30 am

Why this is so damning:
1) CO2 has a relatively linear rate of change (ROC). The rate of change of temperature is highly non-linear. The same analysis can be applied to sea level and the results will be the same.
2) CO2 has its greatest impact at the lower CO2 levels. As the concentration of CO2 increases, its W/m^2 per ppm decreases (a small sketch follows below). CO2 would show a much greater impact on the ROC of temperature when it increased from 180 to 250 than from 250 to 400. The 1900 to 1940 ROC seems about the same as 1945 to 1980.
3) If this analysis is applied to Vostok ice core data which has steps of 100 years, you will see that the ROC variation between 1880 and 2015 is nothing abnormal, in fact it will likely fall at the low range of the scale. Even if you just use the Holocene it still won’t fall outside the norm.
4) CO2 in no way can explain the rapid decreases, negative values, or pauses in the ROC. The defined GHG effect driven by CO2 is a doomsday model. CO2’s increase is linear, man’s production of CO2 is not linear, and temperature has to be linear under the GHG effect as defined by the warmists.
BTW, where did the data come from between 1830 and 1880? The 1880 shows a 100yr ROC of -0.5°C. Where did the data come from to get that number? Is the 100 years for the 1880 number 1830 to 1930, or is it 1780 to 1880? If it is 1830 to 1930, how is the 2015 value calculated?
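On point 2, a small sketch (Python) using the common simplified expression F ≈ 5.35·ln(C/C₀) W/m² for CO2 forcing (a standard approximation, not something taken from the comment above); its derivative, 5.35/C, is the extra forcing per additional ppm, and it shrinks as the concentration rises:

import numpy as np

def marginal_forcing_per_ppm(c_ppm):
    """d/dC of 5.35*ln(C/C0): extra forcing (W/m^2) per extra ppm of CO2."""
    return 5.35 / np.asarray(c_ppm, dtype=float)

for c in (180, 250, 300, 400):
    print(f"{c} ppm: {marginal_forcing_per_ppm(c):.4f} W/m^2 per ppm")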

Robert of Ottawa
August 29, 2015 6:53 am

This is starting from a corrupted data set. Also, UAH and RSS data sets are too short to perform meaningful analysis.

Reply to  Robert of Ottawa
August 29, 2015 11:32 am

Actually, it is starting from a corrupted temperature anomaly set, since data ain’t data post “adjustment”, but only estimates.

Reply to  firetoice2014
August 31, 2015 8:57 pm

Post adjustments, the only thing that is estimated by the data sets is how much the warmista data manipulators estimate they can get away with…so far.

Mike
August 29, 2015 8:03 am

[image]
Here is the rate of change of HadCRUT4 , mixed land+sea “average” anomaly, using a couple of well-behaved filters.
Firstly we can note that the apparent trough around 1988 in Sheldon’s fig 2 is a figment of the imagination, due to using a crappy running average as a low-pass filter. Once again please note, folks: RUNNING MEANS MUST DIE.
Secondly, the downward tendency at the end has stopped by 2008 and we don’t have enough data to run the filter any further. The continued trend in Sheldon’s graph is a meaningless artefact of running a crap filter beyond the end of the data.
Unless you wish to get laughed at, it would be best not to show his Graph 2 to anyone, except as an example of the kind of distortion and false conclusions that can happen with bad data processing.
Finally, please note that taking the “average” of sea temperatures and land near surface air temperatures has no physical meaning at all. This was just a less rigged dataset than the new GISS and NOAA offerings. You can’t ‘average’ the physical properties of air and water.

Mike
Reply to  Mike
August 29, 2015 8:11 am

On the other hand, what the above rate of change graph does show is that the accelerating warming (steady upward trend in rate of change) that had everyone in a panic in 1999 has clearly not continued since. The link to ever increasing atmospheric CO2 and the suggestion of “run-away” warming and tipping points are clearly also mistaken.

Peter Sable
Reply to  Mike
August 29, 2015 3:56 pm

Sheldon’s fig 2 is a figment of the imagination

Or it is a figment of him using GISS and you using HADCRUT. Try changing one variable at a time…
In general I support your criticism but I’d rather see the argument done correctly…
Peter

Gloria Swansong
Reply to  Peter Sable
August 29, 2015 4:01 pm

Both are ludicrous fictions in the service of a criminal conspiracy.

Salvatore Del Prete
August 29, 2015 8:41 am

Again cherry picking the data because if one goes back to the Holocene Optimum the question is how fast is the earth cooling?
Since the Holocene Optimum 8000 years ago the earth has been in a gradual overall cooling trend which has continued up to today punctuated by spikes of warmth such as the Roman ,Medieval and Modern warm periods.
The main drivers of this are Milankovitch Cycles, which were more favorable for warmer conditions 8000 years ago in contrast to today, with prolonged periods of active and minimum solar activity superimposed upon this slow gradual cooling trend, giving the spikes of warmth I referred to in the above and also periods of cold such as the Little Ice Age.
Further refinement to the climate comes from ENSO, volcanic activity, and the phase of the PDO/AMO, but these are temporary earth-intrinsic climatic factors superimposed upon the general broader climatic trend.
All the warming the article refers to, which has happened since the end of the Little Ice Age, is just a spike of relative warmth within the still overall cooling trend, due to the big pick up in solar activity from the period 1840-2005 versus the period 1275-1840.
Post 2005, solar activity has returned to minimum conditions, and I suspect the overall cooling global temperature trend which has been in progress for the past 8000 years will exert itself once again.
We will be finding this out in the near future due to the prolonged minimum solar activity that is now in progress post 2005.

MarkW
August 29, 2015 8:58 am

I’d really like to see error bars put on those graphs.
The idea that we knew what the earth’s temperature was, within a few tenths of a degree C, back in 1880 is utterly ludicrous. Given the data quality problems, equipment quality problems and the egregious lack of coverage, the error bars are more than likely in the range of 5 to 10 C. The error bars have improved somewhat in recent decades, but they have at best been halved.
When the signal you are claiming is 1/2 to 1/5th of your error bars, you are doing pseudo science. And that’s being generous.

MarkW
Reply to  MarkW
August 29, 2015 9:02 am

Heck, the “adjustments” to the data are greater than the signal they claim to have found.
Junk from top to bottom.

Peter Sable
Reply to  MarkW
August 29, 2015 4:01 pm

I’d really like to see error bars put on those graphs.

Because the analysis transform is somewhat complex you’d have to do that in the form of a Monte Carlo simulation. I doubt you can do that in Excel. You need a Real tool, e.g. matlab, R, etc…
Even that is difficult because you’d have to know what size and distribution the errors should be. They may not be Gaussian, as the underlying distributions of temperature measurements (in space and time) are highly autocorrelated.
Peter
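As a rough illustration of that approach, a sketch only (Python, with an assumed AR(1) error model standing in for the real, unknown error structure): perturb the series many times, push each realisation through whatever smoothing-and-slope pipeline is being used, and read the error band off the spread of the results.

import numpy as np

def ar1_noise(n, sigma=0.1, phi=0.7, rng=None):
    """One realisation of autocorrelated (AR(1)) measurement error."""
    rng = np.random.default_rng() if rng is None else rng
    e = np.empty(n)
    e[0] = rng.normal(0, sigma)
    for i in range(1, n):
        e[i] = phi * e[i - 1] + rng.normal(0, sigma * np.sqrt(1 - phi ** 2))
    return e

def monte_carlo_band(anomalies, pipeline, n_runs=1000, **noise_kw):
    """2.5%/97.5% band of pipeline(anomalies + noise) over many noise draws.
    `pipeline` is whatever transform you want error bars on, e.g. the
    121-month CMA followed by the 121-month slope."""
    anomalies = np.asarray(anomalies, dtype=float)
    runs = np.array([pipeline(anomalies + ar1_noise(len(anomalies), **noise_kw))
                     for _ in range(n_runs)])
    return np.percentile(runs, [2.5, 97.5], axis=0)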

Brandon Gates
Reply to  Peter Sable
August 29, 2015 6:08 pm

Peter Sable,
http://www.metoffice.gov.uk/hadobs/hadcrut4/HadCRUT4_accepted.pdf
… references this paper: Mears, C.A., F.J. Wentz, P. Thorne and D. Bernie (2011). Assessing uncertainty in estimates of atmospheric temperature changes from MSU and AMSU using a Monte-Carlo estimation technique, Journal of Geophysical Research, 116, D08112, doi:10.1029/2010JD014954
You can get the output of each realization here: http://www.metoffice.gov.uk/hadobs/hadcrut4/data/current/download.html
… along with the calculated uncertainty and error estimates on a GRIDDED basis by MONTH if you’re feeling especially masochistic.

MarkW
Reply to  Peter Sable
August 29, 2015 7:56 pm

If you don’t know the accuracy of the data you are using, then you aren’t doing science.

Peter Sable
Reply to  Peter Sable
August 30, 2015 8:28 am

Assessing uncertainty in estimates of atmospheric temperature changes from MSU and AMSU using a Monte-Carlo estimation technique,

Nice, thanks, I’ll have to track this down.
Here’s another for you that’s potentially Yet Another Big Hole in CAGW: this paper (Torrence and Compo) uses an assumption of red noise (alpha = 0.72) to see whether fluctuations in SST temperature are random in nature or not at any particular frequency (the null hypothesis is that they are random, and they test against that). They manage to find an ENSO signal using this method, but reject all other signals from the SST record.
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.28.1738&rep=rep1&type=pdf
Now take this same idea in Torrence and Compo and see if you can find a warming trend that exceeds the 95% confidence interval of alpha=0.75 red noise. Here’s a preview hint: the confidence interval goes through the roof the lower the frequency of the data… which means, since trend is the lowest frequency component, all temperature signals are far below this confidence interval. In my early, unpublished replication, the only two signals in GISS that I can find above the 95% confidence interval are at 2.8 years, which is roughly “once in a blue moon”, and the 1 year seasonal cycle. Which means all the “warming” going on is just random fluctuation, the null hypothesis.
Peter
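For the mechanics of that kind of test (the alpha values and the conclusion above are the commenter’s own), a minimal sketch in Python: fit the lag-1 autocorrelation, generate AR(1) surrogates with matching variance, and check whether the observed trend falls outside the 95% spread of trends that red noise alone produces.

import numpy as np

def trend(y):
    """Least-squares slope per time step."""
    t = np.arange(len(y))
    return np.polyfit(t, y, 1)[0]

def red_noise_trend_test(y, n_surrogates=2000, rng=None):
    """Observed trend versus the 2.5-97.5 percentile range of trends from
    AR(1) surrogates sharing the series' lag-1 autocorrelation and variance."""
    rng = np.random.default_rng() if rng is None else rng
    y = np.asarray(y, dtype=float)
    y = y - y.mean()
    alpha = np.corrcoef(y[:-1], y[1:])[0, 1]     # lag-1 autocorrelation
    sigma = y.std() * np.sqrt(1 - alpha ** 2)    # innovation standard deviation
    n = len(y)
    slopes = np.empty(n_surrogates)
    for k in range(n_surrogates):
        e = np.empty(n)
        e[0] = rng.normal(0, y.std())
        for i in range(1, n):
            e[i] = alpha * e[i - 1] + rng.normal(0, sigma)
        slopes[k] = trend(e)
    lo, hi = np.percentile(slopes, [2.5, 97.5])
    return trend(y), (lo, hi)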

Peter Sable
Reply to  Peter Sable
August 30, 2015 9:26 am

As the model of measurement and sampling error used here for land stations
has no temporal or spatial correlation structure,

from http://www.metoffice.gov.uk/hadobs/hadcrut4/HadCRUT4_accepted.pdf
Ugh, Morice, Kennedy et al. assume way too much normal distribution with no correlation.
There’s lots of spatial correlation even across 5 degree grids as well as inside grids. My Monte Carlo experiments indicate that the std error is 2.4x that of a Gaussian distribution when a surface is auto correlated. The distribution is also slightly skewed – to the high end…
They also assume that adjustments have a Poisson distribution, are not autocorrelated and have a zero mean. They might be correlated, both with themselves and with the surrounding grid as well. I think it’s been clearly shown that adjustments do not have a zero mean…
They should validate their “not correlated” assumptions and “zero mean” assumptions. There are well known techniques for doing this but they didn’t use them in the paper.
GIGO.
Peter

Brandon Gates
Reply to  Peter Sable
August 30, 2015 9:28 pm

Peter Sable,

Nice, thanks, I’ll have to track this down.

You’re welcome.

[Torrence and Compo (1998)] manage to find an ENSO signal using this method, but reject all other signals from the SST record.

I skimmed it, don’t see where they reject all other signals.

In my early, unpublished replication the only two signals in GISS that I can find above 95% confidence interval is 2.8 years, which is roughly “once in a blue moon”, as well as the 1 year seasonal cycle. Which means all the “warming” going on is just random fluctuation, the null hypothesis.

1) Why have you not published?
2) As this level of math is well above my paygrade, please explain to me how a wavelet analysis is feasible — or even desirable — when the hypothesized driving signal isn’t periodic … and even if it was, has not completed a complete cycle?

As the model of measurement and sampling error used here for land stations has no temporal or spatial correlation structure,

from http://www.metoffice.gov.uk/hadobs/hadcrut4/HadCRUT4_accepted.pdf
Ugh, Morice, Kennedy et al. assume way too much normal distribution with no correlation. There’s lots of spatial correlation even across 5 degree grids as well as inside grids.

The way I’m reading that, they’re saying that the error model has no temporal or spatial correlation, not that the data themselves lack it. From other readings, literature is chock full of discussing spatial correlations in the observational data — it’s my understanding that GISS’ homogenization and infilling algorithms rely on it.
The section you quote cites Brohan (2006): http://onlinelibrary.wiley.com/doi/10.1029/2005JD006548/pdf
… and Jones (1997): http://journals.ametsoc.org/doi/pdf/10.1175/1520-0442%281997%29010%3C2548%3AESEILS%3E2.0.CO%3B2
Perhaps those will clear it up for you … but I’m beginning to suspect that you’ll only find more GIGO … 🙂

They also assume that adjustments have a poisson distribution and are not autocorrelated and have a zero mean.

This one I’m certain you misread: To generate an ensemble member, a series of possible errors in the homogenization process was created by first selecting a set of randomly chosen step change points in the station record, with each point indicating a time at which the value of the homogenization adjustment error changes. These change points are drawn from a Poisson distribution with a 40 year repeat rate.

I think it’s been clearly shown that adjustments do not have a zero mean…

Clearly not, I’m quite sure they’re aware of that, and they’re certainly not claiming they do.

They should validate their “not correlated” assumptions and “zero mean” assumptions. There are well known techniques for doing this but they didn’t use them in the paper.

In both cases I think you’re conflating characteristics of observational data with things that are not.

Lady Gaiagaia
August 29, 2015 10:34 am

GISTEMP is entirely a work of science fiction, useless for any actual scientific purpose. It’s designed as a political polemical tool, not a real data series based upon observation.

Curious
August 29, 2015 12:15 pm

Why is the GISTEMP construction used instead of just the RSS and UAH numbers? I can understand why a reconstruction would be used for pre-1979 data, but what sense does it make to claim July was the hottest on record when the RSS/UAH data say that July was pretty average?

Brandon Gates
August 29, 2015 1:00 pm

Mr. Walker,

One of the strange things about the GISTEMP “Pause-busting” adjustments, is that the year with the highest rate of warming (since 1880) has changed. It used to be around 1998, with a warming rate of about +2.4 °C per century. After the adjustments, it moved to around 1937 (that’s right, 1937, back when the CO2 level was only about 300 ppm), with a warming rate of about +2.8 °C per century.

Comparing rate of temperature change to an absolute CO2 level at a point in time is not very meaningful. Comparing rate to rate would be better, but even then, change in temperature is responsive to change in forcing — which for CO2 is a function of the natural log of concentration. Regressing GISTEMP against the natural log of CO2 (120-month moving averages for both) gives a coefficient of 3.4.
The common rule of thumb is: ΔT = 3.7 * ln(C/C₀) * 0.8 = 2.96, which is within striking distance of my calculated 3.4 but not very satisfying. When I add a solar irradiance time series (120 MMA again, in Wm^-2) to the regression, lo and behold, the ln(C/C₀) regression coefficient drops to 3.0, in line with expectations — about as good as an amateur researcher using simple spreadsheet functions could hope to expect.
In sum, ignore other significant and well-known climate factors at your peril.
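A minimal sketch (Python) of that sort of spreadsheet regression, with hypothetical input arrays (monthly temperature anomaly, CO2 concentration in ppm, and total solar irradiance) standing in for the actual series used, and a pre-industrial C₀ of 280 ppm assumed for illustration:

import numpy as np
import pandas as pd

def smooth(x, window=120):
    """120-month centred moving average, as described in the comment."""
    return pd.Series(np.asarray(x, dtype=float)).rolling(window, center=True).mean()

def regress_temperature(anomaly, co2_ppm, tsi=None, c0=280.0):
    """OLS of smoothed anomaly on ln(C/C0), optionally plus smoothed TSI.
    Returns the coefficients, intercept first."""
    y = smooth(anomaly).to_numpy()
    cols = [np.log(smooth(co2_ppm).to_numpy() / c0)]
    if tsi is not None:
        cols.append(smooth(tsi).to_numpy())
    X = np.column_stack([np.ones(len(y))] + cols)
    mask = np.isfinite(y) & np.all(np.isfinite(X), axis=1)
    coef, *_ = np.linalg.lstsq(X[mask], y[mask], rcond=None)
    return coef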

If you look at the NOAA series, they already had 1937 as the year with the highest rate of warming, so GISTEMP must have picked it up from NOAA when they switched to the new NCEI ERSST.v4 sea surface temperature reconstruction.

Yes, that follows. Here’s a comparison between ERSST.v3b and v4 from NCEI itself: [image]
Just eyeballing the thing, it’s easy to see that the period between 1930 and 1942 was more steeply adjusted upward than 2002-2015. The source page for that image is here: https://www.ncdc.noaa.gov/news/extended-reconstructed-sea-surface-temperature-version-4
… wherein they explain:
One of the most significant improvements involves corrections to account for the rapid increase in the number of ocean buoys in the mid-1970s. Prior to that, ships took most sea surface temperature observations. Several studies have examined the differences between buoy- and ship-based data, noting that buoy measurements are systematically cooler than ship measurements of sea surface temperature. This is particularly important because both observing systems now sample much of the sea surface, and surface-drifting and moored buoys have increased the overall global coverage of observations by up to 15%. In ERSST v4, a new correction accounts for ship-buoy differences thereby compensating for the cool bias to make them compatible with historical ship observations.
This does NOT explain the changes in the ’30s and ’40s, which is annoying. Further, it’s much discussed elsewhere that during the war years, more temperature readings were taken from engine coolant intakes than via the bucket method relative to pre-war years. This would tend to create a warming bias in the raw data warranting a downward adjustment. Instead, the v4 product goes the other way, which is confusing … and also quite annoying.

So, the next time that you hear somebody claiming that Global Warming is accelerating, show them a graph of the rate of warming.

Revisiting your Graph 4… [image]
… and again relying on my eyeballs, it’s easy to see that this rate graph has a positive slope with respect to time over the entire interval. Positive value of a 2nd derivative is positive acceleration, yes?
Next time a working climatologist says that Global Warming is accelerating, ask them, “Over what interval of time?” and use that interval in the rate analysis … because chances are they’re talking about something rather greater than a decade, and I find it’s best to compare apples to apples.

Brandon Gates
Reply to  Brandon Gates
August 29, 2015 1:02 pm

MOD: oi, I missed a closing italics tag after “In ERSST v4, a new correction accounts for ship-buoy differences thereby compensating for the cool bias to make them compatible with historical ship observations.” Please fix.

Reply to  Brandon Gates
August 29, 2015 11:51 pm

Fixed.
w.

Brandon Gates
Reply to  Brandon Gates
August 30, 2015 12:12 am

Thanks.

Dr. Bogus Pachysandra
August 29, 2015 1:46 pm

“The average temperature increase will be so much higher than the previous record, set in 2014, that it should melt away any remaining arguments about the so-called “pause” in global warming, which many climate sceptics have promoted as an argument against action on climate change.”
http://www.independent.co.uk/environment/climate-change/climate-change-2015-will-be-the-hottest-year-on-record-by-a-mile-experts-say-10477138.html

Brandon Gates
Reply to  Dr. Bogus Pachysandra
August 29, 2015 3:08 pm

I always find it somewhat morbidly amusing when someone predicts that a certain event or piece of evidence will end “any remaining arguments”. In this particular case, the rebuttal has been in place since the tail end of last year: it’s ENSO whut diddit.

Reply to  Dr. Bogus Pachysandra
August 29, 2015 6:08 pm

I think the problem here may have something to do with using units of distance (miles) to measure energy content of air.

MarkW
Reply to  Dr. Bogus Pachysandra
August 29, 2015 7:57 pm

Their belief system isn’t founded on evidence in the first place. Therefore nothing as trivial as evidence will shake their belief system.

dp
August 29, 2015 2:04 pm

Why begin a temperature trend at the end of the well-known “Little Ice Age”? The result is always going to be warming because that is what happens at the end of a protracted cold period. People condemn Michael Mann for hiding the LIA – posts like this one are in the same camp. The current trend is lacking a critical context and if this is all we have then I’d have to agree with the wackiest nutters out there that the world is on track to smouldering ruin. Stop doing that – it isn’t helping.

Brandon Gates
Reply to  dp
August 29, 2015 2:55 pm

dp,

Why begin a temperature trend at the end of the well-known “Little Ice Age”?

Almost certainly due to the relative dearth of thermometers and daily record keeping in the 17th century. Of course, when climatologists DO splice together proxy estimates of temperature trends with estimates obtained from the instrumental record, a great hue and cry of protest goes up from these quarters.

The result is always going to be warming because that is what happens at the end of a protracted cold period.

Sorry, but the planet does not just decide, “well, it’s been cold for a spell, time to warm up now because that’s what’s supposed to happen.” Physical systems do things for a physical reason. In this case, a good starting point is the Sun:
http://climexp.knmi.nl/data/itsi_wls_ann.png

dp
Reply to  Brandon Gates
August 29, 2015 5:15 pm

You are going to have to describe what your rationale is for a world that does anything but warm after an LIA event. Warming is the only option. Nothing else is logical.

Reply to  Brandon Gates
August 29, 2015 6:02 pm

*gasp*
The sun!
Talk about a hue and cry!
“HUE AND CRY…
a : a loud outcry formerly used in the pursuit of one who is suspected of a crime
b : the pursuit of a suspect or a written proclamation for the capture of a suspect “

Brandon Gates
Reply to  dp
August 29, 2015 5:46 pm

dp,

You are going to have to describe what your rationale is for a world that does anything but warm after an LIA event.

Again:
http://climexp.knmi.nl/data/itsi_wls_ann.png

Warming is the only option. Nothing else is logical.

As I mentioned elsewhere in this thread, the last glacial maximum was 6 degrees cooler than the Holocene average. Based on precedent alone, logically the LIA could have been much cooler for a much longer period of time. However, logic works best when it considers as much available evidence as is possible. Looking at solar fluctuations since the 1600s is only the barest beginning of that exercise … but it IS a good place to start.

Reply to  Brandon Gates
August 29, 2015 9:21 pm

Indeed.
It is no shock at all to me.
That the big shiny hot thing in the sky is responsible not only for the temperature of the Earth, but also for variations in it, is only logical to my way of thinking.
Powerful evidence that it could not possibly have any effect would need to be presented to even begin to rule it out, IMO.
I have never seen evidence to rule out the sun.
In fact we know it to be at least somewhat variable in its output.
And we know these variations in output are only part of the story, and that variances in the solar wind and magnetic fields exert powerful influence on the incoming cosmic rays.
There are also questions regarding the direct effects of the shifting magnetic and electric fields on the atmosphere and also on the interior of the Earth.
It would not surprise me in the slightest to find that we have incomplete knowledge of the amount that it can vary, and the number of ways these variations can affect the Earth.
I also wonder about ocean stratification and overturning, specifically in the Arctic region.
Many in the warmista camp have argued for many years that the sun can be disregarded.
And have specifically said as much in regard to the solar cycles, and any effects associated with these cycles.
That refusal to consider the sun as a source of climatic variation is a glaring blind spot in what passes for climate science these days.
IMO.

Dt, not just he USavid A
Reply to  Brandon Gates
August 29, 2015 10:42 pm

The entire troposphere, except for the maladjusted surface, is .3 degrees cooler than in 1998.

Brandon Gates
Reply to  Brandon Gates
August 29, 2015 11:29 pm

Menicholas,

Powerful evidence that it could not possibly have any effect would need to be presented to even begin to rule it out, IMO.

You DO realize that you’re preaching to the choir with me on this point.

I have never seen evidence to rule out the sun.

Neither have I. My own back of envelope calcs put it at 0.2 C per 1 Wm^-2 change in TSI (a factor of 0.25 to account for spherical geometry * 0.8 C / Wm^-2 climate sensitivity parameter). That works out to about + 0.1 C contribution to the global temperature trend from 1880 to present, or about 1/6 the total increase. By way of comparison to literature, in GISS model E the net change in solar forcing works out to about 1/9 of the total forcing change since 1880.
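That arithmetic can be reproduced in a few lines. A sketch, with the TSI change since 1880 treated as an assumed input rather than a figure taken from this comment:

GEOMETRY_FACTOR = 0.25   # a sphere intercepts pi*r^2 of sunlight but radiates over 4*pi*r^2
SENSITIVITY = 0.8        # degrees C per W/m^2, the climate sensitivity parameter quoted above

def warming_from_tsi(delta_tsi_wm2):
    # Back-of-envelope surface warming attributable to a change in TSI.
    return delta_tsi_wm2 * GEOMETRY_FACTOR * SENSITIVITY

print(warming_from_tsi(1.0))   # 0.2 C per 1 W/m^2 of TSI change
print(warming_from_tsi(0.5))   # ~0.1 C, matching the quoted 1880-to-present contribution
                               # if the TSI change over that period were about 0.5 W/m^2 (an assumption)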

It would not surprise me in the slightest to find that we have incomplete knowledge of the amount that it can vary, and the number of ways these variations can affect the Earth.

In a system this complex there’s always going to be something we don’t know. But CO2’s effects are obvious to me, backed by long-established and well-documented physics, and in my book all but beyond dispute. The main challenge, and everything I’ve read suggests it is a challenge, is constraining how much of an effect it has relative to other factors. Even so, I seriously doubt that it’s not the dominant contribution to the trend since 1950.

I also wonder about ocean stratification and overturning, specifically in the Arctic region.

Depending on which proxy reconstruction one consults for estimating the magnitude of the MWP/LIA transition, it’s pretty tough to explain that swing on the basis of solar fluctuations alone … which would only give about a tenth of a degree difference globally according to my above math, whereas, say, Moberg (2005) suggests ~0.8 degrees net change in NH temps.

Many in the warmista camp have argued for many years that the sun can be disregarded.

Well then, those in the warmista camp saying such things haven’t done their homework, are bonkers, and/or simply lying: literature by researchers I consider credible says otherwise.

dp
Reply to  Brandon Gates
August 31, 2015 9:14 am

Do you not understand that the end of a cold period unavoidably implies a warm period ensues? If this simple and self-evident process does not happen then the cold period cannot be said to have ended. The LIA did not merely stop getting colder, nor did it simply stop being cold: the LIA ended, the world warmed, and that warming has continued since the end of the LIA. And nobody knows why. The best brains in the moronosphere blame humans for the warming. They may be morons but they at least acknowledge it has warmed since the end of the LIA. They also like to use the end of the LIA as a starting point to exaggerate the rate of warming, and they do so without giving the context of that starting point. That is cherry picking.

Brandon Gates
Reply to  dp
August 29, 2015 6:20 pm

Yes, the Sun. That really shouldn’t be a shocker.

a : a loud outcry formerly used in the pursuit of one who is suspected of a crime

I was going more for loud outcry, but the sense of pursuing a criminal fits. An already condemned one, at that.

Brandon Gates
Reply to  Brandon Gates
August 29, 2015 6:20 pm

above is for Menicholas August 29, 2015 at 6:02 pm

Reply to  Brandon Gates
August 29, 2015 9:24 pm

BTW, I agree that certain crimes have been committed.
This was why I thought it apropos to include that etymology of the phrase.
I am less certain that we agree on just what these crimes are, and who the criminals are.

Reply to  Brandon Gates
August 29, 2015 9:26 pm

Seems my first reply got attached in the wrong place.
Apologies.

Brandon Gates
Reply to  Brandon Gates
August 29, 2015 10:40 pm

Menicholas,
Re: threading — I can’t tell which of us screwed up the threading, not worried about it.
I understand that it’s popular on my side of the fence to consider AGW contrarians criminal. I could, if pressed, rattle off some particularly egregious suspected offenders, but would rather not go there. Certainly neither our host nor the vast majority of participants here would qualify.

Gloria Swansong
August 29, 2015 2:18 pm

The earth is not warming on any meaningful time scale. Quite the opposite.
It is warmer now than 320 years ago, during the depths of the LIA and Maunder Minimum. It is warmer than 160 years ago, at the end of the LIA. It is however probably not warmer than 80 years ago, during the early 20th century warming. It is cooler now than 20 years ago, during the late 20th century warming, too.
But most importantly, it is colder now than during the Holocene Optimum, c. 5000 years ago, than the Minoan Warm Period, c. 3000 years ago, than the Roman Warm Period, c. 2000 years ago, and than the Medieval Warm Period, c. 1000 years ago. The planet is in an at least 3000-year, long-term cold trend.
This trend is worrisome.

Richard Greene
August 29, 2015 2:25 pm

The first major error was your title:
“How fast is the Earth warming?”
I’m afraid you have fallen into the climate doomsayers “trap” of debating climate minutia.
Many other people here love to debate how much the Earth is warming, based on surface data handed to them by dubious, biased, highly political sources.
First of all, there is no scientific proof an average temperature statistic is important to know.
And there is no scientific proof that warming is bad news.
And there is no common sense in believing an average temperature change of less than one degree C. is important to anyone, and much sense in believing a few tenths of a degree C. change in either direction are nothing more than meaningless random variations.
I say average temperature data are so inaccurate:
— IT IS IMPOSSIBLE TO BE SURE that there was ANY global warming since 1880.
( using a reasonable margin of error — I’d say at least +/- 1 degree C. )
.
If you want to assume average temperature is a meaningful statistic, then you have to admit average temperature data are inaccurate.
— Especially the limited data from the 1800s, when thermometers were few, non-global, and consistently read low.
— And the data collection methodology was, and still is in different ways, very haphazard (such as sailors with thermometers throwing wood buckets over the sides of ships, almost always in Northern hemisphere shipping lanes, and then several significant changes in ocean temperature measurement methodology).
— The huge reduction in the number of land weather stations in use between the 1960’s and 2000’s, especially the reduction of cold weather USSR, other high latitude, and rural stations, which are now “in-filled” = a huge opportunity for smarmy bureaucrats to “cook the books”.
— And the owners of the data so frequently create “warming” out of thin air with “adjustments”, “re-adjustments”, and “re-re-re- adjustments”.
Even today, I doubt if more than 25% of our planet’s surface is covered by surface thermometers providing daily readings … and if that is true, that means a large majority of the surface numbers must be in-filled, wild guessed, homogenized, derived from computer models, satellite data, pulled out of a hat, or lower, etc.
We might be able to prove urban areas are considerably warmer than elsewhere (common sense), and urban areas cover many more square miles in 2015, than in 1880, so there must be LOCAL warming just from economic growth.
We might be able to prove LOCAL warming in the northern half of the Northern Hemisphere in recent decades, as measured by satellites (perhaps from dark soot on the snow and ice?), exceeded any reasonable margin of error.
Ignoring the (unknown) margins or error for a moment:
— My examples of LOCAL warming probably do add up to a higher global average temperature, but the details would be FAR different than the “global warming” envisioned from having more CO2 in the air (warming mainly at BOTH poles).

Brandon Gates
Reply to  Richard Greene
August 29, 2015 4:45 pm

Richard Greene,

And there is no common sense in believing an average temperature change of less than one degree C. is important to anyone …

Consider that average global temperature during the last glacial maximum ~20 k years ago was only 6 degrees C cooler than the Holocene average. As well, note that the last time average temperature was 2 degrees C higher than the Holocene average during the Eemian interglacial ~140 k years ago, sea levels were 3-7 meters higher than present. A one degree positive change from present is halfway to that high water mark. You do the math.

… and much sense in believing a few tenths of a degree C. change in either direction are nothing more than meaningless random variations.

Since 1950, global temps have risen about 0.6 degrees C. From 1880 through 1949, the standard deviation of GISTEMP (monthly) is 0.18. I don’t think 3.4 standard deviations is something I can dismiss lightly.
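The standard-deviation comparison is easy to check. A sketch using only the figures quoted above (nothing here is recomputed from the GISTEMP file itself):

rise_since_1950 = 0.6     # degrees C, as quoted above
sigma_1880_1949 = 0.18    # standard deviation of monthly GISTEMP anomalies, 1880-1949, as quoted above

print(rise_since_1950 / sigma_1880_1949)   # about 3.3; the 3.4 above presumably reflects unrounded inputs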

I say average temperature data are so inaccurate:
— IT IS IMPOSSIBLE TO BE SURE that there was ANY global warming since 1880.
( using a reasonable margin of error — I’d say at least +/- 1 degree C. )

By that logic, it could be a whole degree hotter since 1880 than the current GISS mean estimate puts it.

dp
Reply to  Brandon Gates
August 29, 2015 5:21 pm

You just said above there was a dearth of thermometers in the 1700’s and here you are telling us you know the temperature of the world to +-0.0 degrees and that it was exactly 6 degrees cooler than now.

Brandon Gates
Reply to  Brandon Gates
August 29, 2015 5:52 pm

Notice the lack of decimal places after the 6 in “6 degrees C cooler”.

Reply to  Brandon Gates
August 29, 2015 6:34 pm

Sea levels are showing no trend to accelerate their steady rise of the past 150 years or so.
This is using actual NOAA tide gauge measurements.
The average of all tide gauges shows a rate of rise of about 1.1 mm/year.
At this rate, assuming it continues as is, in 100 years sea levels will have risen about 110 mm.
About FOUR INCHES!
Sea level trends are barely perceptible, even using a direct comparison of old photographs and videos and comparing them to pictures and videos of the exact same locations today.
I have a collection of photos of various places, including one of the ocean at Collins Ave in South Beach from the 1920s. Same road, same hotels, same place, and the ocean looks…exactly the freakin’ same!
Mr. Gates, are you suggesting that there is a direct and invariant correlation between some measurement of the global average temp and the sea level of the world ocean?
You seem to be an individual given to backing up any claims a person might make.
Any particular evidence for your implication that sea levels must somehow rise several meters if the world warms two degrees?
Lummus Park, a long time ago(Note the cars):
http://img0.etsystatic.com/000/0/5744229/il_fullxfull.210079858.jpg
Lummus Park, now (more or less…note the cars):
http://media-cdn.tripadvisor.com/media/photo-s/06/79/e1/fd/lummus-park-from-our.jpg

john robertson
Reply to  Brandon Gates
August 29, 2015 8:00 pm

By that logic, it could be a whole degree cooler since 1880 than the current GISS mean estimate puts it.
Exactly the same problem, we do not know and can not be so certain.

Brandon Gates
Reply to  Brandon Gates
August 29, 2015 8:57 pm

menicholas,

Sea levels are showing no trend to accelerate their steady rise of the past 150 years or so.

Church and White (2011):
http://www.cmar.csiro.au/sealevel/images/CSIRO_GMSL_figure.jpg
Query: why else do you think they would be rising at all?

The average of all tide gauges show a rate of rise of about 1.1mm/year.
At this rate, assuming it continues as is, in 100 years sea levels will have risen 101 mm.

Why would you assume that the rate is going to remain constant when:
1) data suggest it isn’t and
2) landed ice melt in both Greenland and Antarctica is also accelerating?

About FOUR INCHES!

1/100 is a reasonable general estimate for shoreline slope, so you’re talking 400 inches of beach lost at high tide.

Sea level trends are barely perceptible, even using a direct comparison of old photographs and videos and comparing them to pictures and videos of the exact same locations today.

That’s as good an argument as I can think of to NOT use anecdotal evidence like photographs for this exercise.

Mr. Gates, are you suggesting that there is a direct and invariant correlation between some measurement of the global average temp and the sea level of the world ocean?

Direct yes, though not the only factor (high latitude insolation a la Milankovitch, ice albedo, ocean current changes, ice “dam” formation are four others I can think of off the top of my head). Certainly not invariant, definitely not linear …
… but almost certainly significantly and causally correlated.

Any particular evidence for your implication that sea levels must somehow rise several meters if the world warms two degrees?

Cuffey (2000) estimates at least three meters, probably more than five during the Eemian: ftp://soest.hawaii.edu/coastal/Climate%20Articles/Cuffey_2000%20LIG%20Greenland%20melt.pdf
Not that it will happen right away, mind. IPCC’s worst case AR5 estimate is 82 cm by 2100. Remember to multiply by 100 … nearly one American football field of beach gone really should register as a significant problem best avoided.
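The beach-loss figures above come from one multiplication: on a shore with slope s, a vertical rise dz moves the waterline inland by roughly dz / s. A sketch using the 1/100 slope assumed above (a planar-profile approximation, nothing more):

def shoreline_retreat(vertical_rise, slope=1.0 / 100.0):
    # Horizontal retreat of the waterline for a given vertical sea level rise,
    # assuming a planar beach profile with the stated slope.
    return vertical_rise / slope

print(shoreline_retreat(4.0))    # 4 inches of rise -> about 400 inches inland
print(shoreline_retreat(0.82))   # 0.82 m (the AR5 worst case quoted) -> about 82 m inland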

Brandon Gates
Reply to  Brandon Gates
August 29, 2015 9:32 pm

john robertson,

By that logic, it could be a whole degree cooler since 1880 than the current GISS mean estimate puts it.

Let’s keep in mind that +/- 1 C is an uncertainty “estimate” dp apparently pulled out of a hat. OTOH, GISS puts the uncertainty range at +/- 0.05 C for annual temps in recent years, +/- 0.1 C around 1900. x3 for monthly data. They, at least, went to the trouble of publishing their methods and reasoning for arriving at those figures. Why anyone would trust idle speculation from J. Random Internet d00dez over documented professional research is quite beyond me, but hey, to each their own.

Exactly the same problem, we do not know and can not be so certain.

I very much doubt any risk manager in their right mind would consider a coin-toss a good bet. OTOH, casinos and the Lotto are Big Business, so I perhaps should not be terribly surprised.
On that note, it’s my personal observation that the majority of participants in this forum consider whatever low-end bound they come across (or conjure out of thin air) the most likely, for reasons I cannot discern from simple wishful thinking. And almost to a man (or woman), they are DEAD certain that temperatures have not risen since 1998 based on lower troposphere (NOT surface) satellite estimates which don’t directly measure temperature at all.
The mind boggles.

Dt, not just he USavid A
Reply to  Brandon Gates
August 29, 2015 10:44 pm

Since 1998 the atmosphere has cooled, quite a bit as a matter of fact.

Brandon Gates
Reply to  Brandon Gates
August 30, 2015 12:04 am

Right on cue. Well, let’s see, the latest from UAH says for 1998 annual mean (which is the mother of all cherry-picks) vs the same for 2014, the change is -0.29 C. Yet elsewhere on this very thread we have folk saying a 1 degree increase is nothing to worry about. So you’re calling ~1/3 of nothing to worry about, “quite a bit”. Funny how numbers preceded by a negative sign are more significant than ones which are positively signed, innit.
Like I said, the mind boggles.

richardscourtney
Reply to  Brandon Gates
August 30, 2015 1:29 am

Brandon Gates:
You say

Right on cue. Well, let’s see, the latest from UAH says for 1998 annual mean (which is the mother of all cherry-picks) vs the same for 2014, the change is -0.29 C. Yet elsewhere on this very thread we have folk saying a 1 degree increase is nothing to worry about. So you’re calling ~1/3 of nothing to worry about, “quite a bit”. Funny how numbers preceded by a negative sign are more significant than ones which are positively signed, innit.
Like I said, the mind boggles.

Only a mind that is devoid of logical ability would be boggled by the greater importance of an observed negative trend in the data than an observed positive trend in the data when considering claims that a positive trend ‘should’ exist in that sub-set of the data.
And, as I said in my above post to you, none of the data are meaningful because their error estimates are known to be wrong, but it is not known how wrong they are.
Richard

Reply to  Brandon Gates
August 30, 2015 6:00 am

Well Brandon, I am sorry your mind boggles so easily.
The surface record is clearly FUBAR, with adjustments since 2001 only 400 percent larger than their error bars, let alone far larger adjustments prior to that. The satellites are calibrated against very accurate weather balloons, are immune to UHI and homogenization, incorporation of old SST and ship bucket and intake readings, and confirmation bias, and clearly cover far greater area.
I am also sorry your boggled mind so easily accepts one SL data set clearly contradicted by numerous data sets and other peer review reports, as well as millions of eyes all over the world from folk who live on the ocean and observe that fifty years from now they MAY need to take two steps back to keep their feet dry.
Currently active NOAA tide gauges average 0.63 mm/year sea level rise, or two inches by the year 2100.
University of Colorado (after yet more adjustments) claim five times that much. Eighty-seven percent of tide gauges are below CU’s claimed rate.
Reasonable minds rebel at FUBAR records being used to justify skyrocketing electrical rates and global government control.

Reply to  Brandon Gates
August 30, 2015 6:07 am

Oh, BTW Mr. Mind-Boggled, 1998 is not a cherry pick at all. It is the answer to a question…
How much has the earth’s atmosphere COOLED since its warmest year on record, and how long ago was that?
Now that is certainly a reasonable question to ask before trillions are wasted on CAGW mandates.
The answer is .3 degrees and 17 years ago. NONE as in ZERO of the climate models come CLOSE to duplicating that.

Reply to  Brandon Gates
August 30, 2015 10:01 am

I did say thermometers (that survived) from the 1800s tended to read low, and I doubt if human eye readings could possibly be better than to the nearest degree (so a +/-0.5 degrees C. margin of error from that fact alone).
It could easily be, based on an assumed +/- 1 degree C. margin of error, that there was really no warming since 1880 … or close to two degrees C. of warming.
.
The measurements are not accurate enough to be sure.
Based on the climate proxy work of geologists:
(1) They identified unusually cool centuries from 1300 to 1850, and
(2) Their ice core studies showed repeated mild warming / cooling cycles, typically lasting 1000 to 2000 years, in the past half million years,
… I think it would be common sense to guess that the multi-hundred year cooling trend called The Little Ice Age would be followed by hundreds of years of warming — let’s call this the Modern Warming, and estimate that it started in 1850 (not started by Coal power plants or SUVs).
It could last hundreds of years more, or it could have ended ten years ago, since the temperature trend since then has been flat. No one knows.
The Modern Warming is great news.
It was too cold for humans in The Little Ice Age, and green plants wanted a lot more CO2 than the air had in 1850, at least according to the wild guesses of CO2 levels based on ice cores. (Of course, I’m speaking on behalf of green plants and greenhouse owners).
I sure hope there really was +2 degrees C. of warming since 1850!
That would make the silly, wild guess, +2 degree C. “tipping point / danger line” look just as foolish and arbitrary as anyone with common sense already knows it is.
Of course I am that rare “ultra-denier” who wants MORE warming and MORE CO2 in the air.
I doubt if CO2 is more than a minor cause of warming, given the lack of correlation, but I’ll take more warming any way I can get it.
The only other choice is global cooling … or glaciation covering a lot more of our planet.
1,000 years of written anecdotes clearly shows people strongly preferred the warmer centuries.
And, getting personal, I live in Michigan and don’t want my state covered with ice again — I can’t ice skate.

Brandon Gates
Reply to  Brandon Gates
August 30, 2015 4:32 pm

richardscourtney,

Only a mind that is devoid of logical ability would be boggled by the greater importance of an observed negative trend in the data than an observed positive trend in the data when considering claims that a positive trend ‘should’ exist in that sub-set of the data.

1) The UAH v6 trend, when properly calculated using a linear regression instead of subtracting one end point from the other, is 0.001 C/decade.
2) When calculating trends on a subset of data, the analysis is so sensitive to choice of endpoint that spurious results are the default expectation, not the exception. For example, for the interval 2000-2010, the trends (C/decade) are as follows:
GISTEMP: 0.079
HADCRUT4: 0.029
UAH TLT v6: 0.033
Oh look, UAH agrees with HADCRUT4!
Same method for 1981-1999
GISTEMP: 0.203
HADCRUT4: 0.235
UAH TLT v6: 0.212
Oh look, UAH agrees with everything!
I can do this all day … picking cherries is easy for EVERYBODY.
3) The IPCC make it abundantly clear that future decadal trends from ensemble model means are not to be taken as gospel truth not only because THEY’RE DERIVED FROM MODEL OUTPUT with all the error and uncertainty that entails, but also because of the magnitude of decadal variability found in empirical observation.
4) From 1980-2015 I calculated the linear trend for all three products, calculated the annual difference from the predicted value, and took the standard deviation of the resulting residuals:
GISTEMP: 0.082
HADCRUT4: 0.087
UAH TLT v6: 0.142
Taken at face value, it would seem that the lower troposphere is more sensitive to change than the surface … consistent with GCM predictions. However, some of the “noise” in the UAH series could be due to larger error/uncertainty bounds. It’s difficult to tell because Spencer and Christy don’t publish annual uncertainty values as are done for GISTEMP and HADCRUT4 … only error estimates for long-term trends.
For sake of argument, let’s assume that the higher deviation in UAH is a reasonably real representation of annual temperature fluctuations. From that it follows that decadal trends could be similarly more sensitive.
I don’t know the answers. It’s my opinion that the experts don’t know either … there are many competing hypotheses which are not mutually compatible. I’d expect that an honest person who reviewed the extant literature would adopt the same attitude of uncertainty.
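The endpoint-sensitivity point is easy to demonstrate for oneself. A sketch, assuming annual-mean anomalies for each product are available as pandas Series indexed by integer year; the variable names are placeholders, not real data sources:

import numpy as np
import pandas as pd

def ols_trend_per_decade(series, start_year, end_year):
    # Ordinary least squares trend over the chosen sub-interval, in C/decade.
    seg = series.loc[start_year:end_year].dropna()
    years = seg.index.to_numpy(dtype=float)
    return np.polyfit(years, seg.to_numpy(dtype=float), 1)[0] * 10.0

def endpoint_trend_per_decade(series, start_year, end_year):
    # The "subtract one end point from the other" shortcut criticised above.
    return (series.loc[end_year] - series.loc[start_year]) / (end_year - start_year) * 10.0

# Hypothetical usage: the same window gives different answers for different
# products, and different windows give different answers for the same product.
# for name, s in {"GISTEMP": gistemp, "HADCRUT4": hadcrut4, "UAH TLT v6": uah}.items():
#     print(name, ols_trend_per_decade(s, 2000, 2010), ols_trend_per_decade(s, 1981, 1999))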

And, as my above post to you, none of the data are meaningful because their error estimates are known to be wrong but it is not known how wrong they are.

Yeah, and UAH publishes different error estimates than GISTEMP and HADCRUT4.

Brandon Gates
Reply to  Brandon Gates
August 30, 2015 5:47 pm

David A,

The surface record is clearly FUBAR, with adjustments since 2001 only 400 percent larger than their error bars, let alone far larger adjustments prior to that.

It would be interesting to compare the magnitude of UAH TLT adjustments to their error bars.

The satellites are calibrated against very accurate weather balloons, are immune to UHI and homogenization, incorporation of old SST and ship bucket and intake readings, and confirmation bias, and clearly cover far greater area.

1) Weather balloons: Po-Chedley (2012) disagrees with you: http://www.atmos.washington.edu/~qfu/Publications/jtech.pochedley.2012.pdf
See Table 1, top right of p. 4 in the .pdf.
2) immunity to UHI: um, yeah, the people who do this are aware of the issue … and deal with it. One paper of many: http://onlinelibrary.wiley.com/doi/10.1029/2012JD018509/full
3) immunity to bucket brigades: UAH doesn’t cover the time period when bucket vs. ERI vs buoys issues were at their most extreme, namely during and after WWII.
4) immunity to homogenization: no, there are outliers, biases and other gremlins in the raw satellite data which need to be, and are, handled as they become known.
5) immunity to confirmation bias: LOL! Spencer and Christy are robots? You’re killing me.
6) spatial coverage: Temporal coverage is an issue. If one is interested in temperature trends since increased industrialization, satellites won’t help.

I am also sorry your boggled mind so easily accepts one SL data set clearly contradicted by numerous data sets and other peer review reports …

And which datasets would those be?

… as well as millions of eyes all over the world from folk who live on the ocean and observe that fifty years from now they MAY need to take two steps back to keep their feet dry.

What millions of people allegedly think about SLR is not exactly what I consider compelling evidence of anything.

Currently active NOAA tide gauges average 0.63 mm/year sea level rise, or two inches by the year 2100.

Linear extrapolation applied to a non-linear phenomenon like ice sheet mass loss? Really?
http://climexp.knmi.nl/data/idata_grsa.png
http://climexp.knmi.nl/data/idata_anta.png

Brandon Gates
Reply to  Brandon Gates
August 30, 2015 6:23 pm

David A,

Oh, BTW Mr. Boggled, 1998 is not a cherry pick at all. It is the answer to a question…How much has the earth’s atmosphere COOLED since its warmest year on record, and how long ago was that?

Ok, the COLDEST temperature anomaly on record for UAH v6 is -0.36 in 1985, through July 2015, 0.21, a warming of 0.57 C. Over the same interval, CO2 increased 54.6 PPMV. What’s the problem?

Now that is certainly a reasonable question to ask before trillions are wasted on CAGW mandates.

Eyah, because looking at one annual outlier and subtracting that value from one YTD value is SUCH a robust analytic method in a noisy data set representing processes which play out over multiple decades to centuries.

The answer is .3 degrees and 17 years ago. NONE as in ZERO of the climate models come CLOSE to duplicating that.

As I and others here have explained ad nauseam, the AOGCM runs used in IPCC ARs don’t even remotely attempt to model the exact timing of El Nino events because they’re designed to project climate outcomes based on various emissions scenarios, not to be 85-year weather forecasting systems. Were it not so, we’d be better off gazing into crystal balls or staring at randomly scattered chicken bones.

Brandon Gates
Reply to  Brandon Gates
August 30, 2015 8:21 pm

Richard Greene,

I did say thermometers (that survived) from the 1800s tended to read low …

As good a reason for any to do bias adjustments as I can think of.

… and I doubt if human eye readings could possibly be better than to the nearest degree (so a +/-0.5 degrees C. margin of error from that fact alone).

+/- 1 degree is a figure I’ve seen floating around, no idea its provenance, but it seems reasonable for sake of argument.

It could easily be, based on an assumed +/- 1 degree C. margin of error, that there was really no warming since 1880 … or close to two degrees C. of warming.

Well .. no, for two main reasons:
1) Measurement uncertainty improved over the course of time.
2) We expect “eyeball errors” to be normally distributed. So for 30 days of observations from just one station in 1880, the standard error of the mean will be much smaller than 1 C.
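Point (2) is just the standard error of a mean. A sketch, with the per-reading error treated as an assumption (the nearest-degree reading mentioned earlier suggests roughly +/- 0.5 C):

import math

sigma_single_reading = 0.5   # degrees C, assumed random error of one eyeball reading
n_daily_readings = 30        # one station-month of daily observations

standard_error_of_mean = sigma_single_reading / math.sqrt(n_daily_readings)
print(round(standard_error_of_mean, 3))   # about 0.09 C for that station-month

# Note: this shrinking applies only to random error. A systematic bias, such as
# thermometers that consistently read low, does not average away; that is what
# bias adjustments are for.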

The measurements are not accurate enough to be sure.

Low accuracy can be dealt with so long as the measurements are consistently inaccurate.

Based on the climate proxy work of geologists:

You’re not seriously implying that temperature proxies are more precise than thermometers … are you?

(1) They identified unusually cool centuries from 1300 to 1850, and
(2) Their ice core studies showed repeated mild warming / cooling cycles, typically lasting 1000 to 2000 years, in the past half million years …

1) Ok sure.
2) With pretty clear 140 k year major glaciation/deglaciation cycles.

… I think it would be common sense to guess that the multi-hundred year cooling trend called The Little Ice Age would be followed by hundreds of years of warming — let’s call this the Modern Warming, and estimate that it started in 1850 (not started by Coal power plants or SUVs).

It may be common sense, but I’m telling you that common sense can and does fail you when dealing with complex physical systems. Temperature trends don’t spontaneously occur … there are physical reasons for them, and one big part of those proxy studies you mentioned goes well beyond just figuring out what temperatures did … but why as well.
One thing to look at is not just the magnitude of change since 1850, but the rate at which it occurred:
If your what goes down must come up hypothesis holds any water, my own naive assumption would be that the rate of the rebound would be similar to the decline. I’m not seeing it.

It could last hundreds of years more, or it could have ended ten years ago, since the temperature trend since then has been flat. No one knows.

I have a pretty good idea why the surface temperature slowdown happened: a prolonged period of La Nina conditions, plateauing of the AMO, and a slight decline in solar output. These notions come from reading the literature, and confirming it by crunching the data — a LOT of data — myself.

It was too cold for humans in The Little Ice Age, and green plants wanted a lot more CO2 than the air had in 1850, at least according to the wild guesses of CO2 levels based on ice cores.

lol, you hold up proxy data to support your argument for temperature trends, but for CO2 they’re just wild guesses.
Humans and plants made it through 180 PPMV CO2 and -6 C degree temps. My sense is that it’s not the absolute values of CO2 and temperature which are most important, but rates at which those things change. One argument for the success of our species is the relative stability of temperatures in the 10,000 or so years of the Holocene as compared to volatility of the several hundred thousand years prior. I’m inclined to put stock in that argument because any mass extinction I can think of has been tied to very rapid global climate changes … including both rapid warming or cooling.

I sure hope there really was +2 degrees C. of warming since 1850!

There’s a hard upper limit to human ability to tolerate heat: 35 C wet bulb temperature. Spend several days in those kind of temperatures and you will assuredly die.

That would make the silly, wild guess, +2 degree C. “tipping point / danger line” look just as foolish and arbitrary as anyone with common sense already knows it is.

I’ve never seen it written that 2 C is a tipping point. I have seen it written that it is mainly intended as a policy target which some experts considered feasible to stay below IF significant emission reductions were undertaken in a timely fashion. Which has not happened. As such, for the Obama Administration, apparently 3 C is the new 2 C.
On a less tongue-in-cheek note, the way I understand it is that risk increases as temperature does, and that there’s no temperature in the IPCC’s worst-case nightmare scenario at which everybody dies.

I doubt if CO2 is more than a minor cause of warming, given the lack of correlation, but I’ll take more warming any way I can get it.

Lack of correlation? Try looking at data prior to 1998.

The only other choice is global cooling … or glaciation covering a lot more of our planet.

While that was a passing notion promoted by some researchers in the 1970s, we know quite a bit more about Milankovitch orbital forcing cycles these days. According to that theory we’re in a sweet spot of the cycle where the decline of insolation at high northern latitudes is quite shallow … as in not enough to trigger a full-on ice age … and actually due for another upturn within the next few centuries. This really is not a system following a completely indecipherable “random” walk.

1,000 years of written anecdotes clearly shows people strongly preferred the warmer centuries.

I’m sure there are some equatorial countries with favorable immigration policies that would let you move there right now. I’ve been to one … I loved everything except the oppressive heat. The locals were fine with it of course, having adapted to it over their many generations … but see again, no human can survive 35 C wet bulb temps for days on end and live to tell about it. If there’s any hard do-not-cross threshold in this topic, that would be it.
Also note: 1,000 years ago, world population was somewhere between 250-320 million people. Bit more freedom to move, much less built up infrastructure adapted to local conditions … basically not what I consider a reasonable comparison.
Let me put it this way: if several degrees cooling was the concern, I would see plenty of risk for some of the very same reasons you’ve cited, and would still be of the mind to stabilize temperatures as close to present levels if at all possible.

And, getting personal, I live in Michigan and don’t want my state covered with ice again — I can’t ice skate.

If there’s anything most working climatologists are NOT alarmed about, it’s a return of glaciers to any part of Michigan … not even the northernmost parts.

richardscourtney
Reply to  Brandon Gates
August 31, 2015 12:56 am

Brandon Gates:
Your irrelevant twaddle supposedly in response to my post here says

I can do this all day … picking cherries is easy for EVERYBODY.

Yes, of course you can, and you do it all the time.
But none of that is relevant to the contents of my post which it purports to answer.
And here I have refuted other untrue nonsense from you in this thread.
Richard

Reply to  Brandon Gates
August 31, 2015 3:42 am

Response to Brandon’s response…
David A, says
Oh, BTW Mr. Boggled, 1998 is not a cherry pick at all. It is the answer to a question…
Brandon Gates says…Ok, the COLDEST temperature anomaly on record for UAH v6 is -0.36 in 1985, through July 2015, 0.21, a warming of 0.57 C. Over the same interval, CO2 increased 54.6 PPMV. What’s the problem?
======================================================================
There is no problem. The pause turned into .3 degrees cooling over the last 17 years. Really, it did. Heat is not the mean of a smoothed five year trend line. The atmosphere was far warmer in 1998 than it is now. 1998 was the warmest year on record. The atmosphere has cooled .3 degrees since 1998. If YOU must put a cooling rate on that, the atmosphere is cooling at about 1.8 degrees per century. It has warmed about .4 degrees in the 46 years of the data set record.
=====================================================
Brandon quotes David A “Now that is certainly a reasonable question to ask before trillions are wasted on CAGW mandates.
Brandon says… Eyah, because looking at one annual outlier and subtracting that value from one YTD value is SUCH a robust analytic method in a noisy data set representing processes which play out over multiple decades to centuries.
=====================================================================
Brandon, what was the question? It was, “How much has the earth’s atmosphere COOLED since its warmest year on record, and how long ago was that?” Again, heat is not the mean of a smoothed five year trend; it is what it is, and when it is gone, guess what, it is no longer there. The answer could have been 0 days and warmer, not cooler, but that did not happen. When the answer is already two decades, and over that period the answer is cooling not warming, and over both that period and the entire data set the climate models predict three times the warming that did occur, there is no reason to spend trillions on a broken, busted theory when the only consistent evidence for anything from additional CO2 is increased crop yields.
===================================================
Brandon quotes the answer to the question…The answer is .3 degrees and 17 years ago. NONE as in ZERO of the climate models come CLOSE to duplicating that.”
Brandon says….”As I and others here have explained ad nauseam, the AOGCM runs used in IPCC ARs don’t even remotely attempt to model the exact timing of El Nino events because they’re designed to project climate outcomes based on various emissions scenarios, not to be 85-year weather forecasting systems. Were it not so, we’d be better off gazing into crystal balls or staring at randomly scattered chicken bones.”
=================================================================================
Brandon, I am sorry now that you feel both ill and your mind is boggled. However you entirely missed the point of my comment.
None, as in ZERO of the climate models can produce a world where increased CO2 causes the surface to warm to record levels by a few hundredths of a degree, and the bulk of the atmosphere to cool by ten times the claimed warming, which is what the failed surface data sets show.
However the rest of your comment is of no value either. Let us discuss ENSO and CO2 emissions. Since our emissions are on track with Hansen’s highest emission scenarios, and since the atmosphere has not warmed at all and is in fact cooler than it was 17 years ago, and this year’s super duper El Nino does not appear to be getting us close to ’98 warmth either, there is perhaps, oh say, a 50/50 chance that your chicken bones would beat the failed climate models, but again, you missed the point.
NONE, as in zero of the climate models are remotely close, over the past two decades or the entire data set, to getting the bulk of the atmospheric T correct. Also we have had multiple positive and negative ENSO events over this period, and the positive ENSO events in 1998, including the AMO at that time, likely explain what little warming there actually was. ENSO works both ways, so you cannot claim it caused the cooling in the troposphere since 1998 but had nothing to do with the warmth.
So Brandon, why is the world spending trillions on a scientific method no more accurate than the casting of chicken bones?

Reply to  Brandon Gates
August 31, 2015 5:16 am

Response to Brandon G’s response.
Brandon quotes me,
The surface record is clearly FUBAR, with adjustments since 2001 only 400 percent larger than their error bars, let alone far larger adjustments prior to that.
Brandon says…It would be interesting to compare the magnitude of UAH TLT adjustments to their error bars.
============
Be my guest, but please compare against the total changes over time including the lowering of the past 1980 ish NOAA graphics.
==============================
Brandon quotes D.A. The satellites are calibrated against very accurate weather balloons, are immune to UHI and homogenization, incorporation of old SST and ship bucket and intake readings, and confirmation bias, and clearly cover far greater area.
Brandon says…
1) Weather balloons: Po-Chedley (2012) disagrees with you: http://www.atmos.washington.edu/~qfu/Publications/jtech.pochedley.2012.pdf
See Table 1, top right of p. 4 in the .pdf.
———————————————————————————————————————–
Brandon, The paper you linked is about small changes and advocates the need for RSS and UAH to be more closely aligned, which they now are, and both data sets are indeed verified by the weather balloons. I am not certain how discussion of a radiosonde mean estimate for UAH of 0.051 plus or minus 0.031 for the period of January 1985 to February 1987 disputes this contention.
—————————————————————————————————————–
Brandon continues…
2) immunity to UHI: um, yeah, the people who do this are aware of the issue … and deal with it. One paper of many: http://onlinelibrary.wiley.com/doi/10.1029/2012JD018509/full
====================================================================
Brandon likes to ignore the papers that demonstrate how UHI is poorly dealt with. Since the publication of those papers the homogenization of UHI to rural areas has increased, with USHCN now making up up to fifty percent of their data. The satellites are non controversial in this manner, and are verified by weather balloon readings, the most accurate thermometers we have. There is little doubt that this is part of the reason for the impossible physics of the divergence between the surface and the satellites. One of them is wrong, and the evidence strongly points to the surface.
========================================================================
Brandon contnues…
3) immunity to bucket brigades: UAH doesn’t cover the time period when bucket vs. ERI vs buoys issues were at their most extreme, namely during and after WWII.
==========================================================================
Who said they did? I just pointed out that they are immune to such problems which vastly increase the error bars of the surface record.
===========================================================================
Brandon continues….
4) immunity to homogenization: no, there are outliers, biases and other gremlins in the raw satellite data which need to be, and are, handled as they become known.
=============================================================================
Yes Brandon, and those relatively small adjustments, compared to up to 50 percent of valid USHCN stations not even being used and those records adjusted by stations up to 1,000 km away, are verified by weather balloon readings versus the speculative nature of the surface changes, many of which are not even discussed, they just continue to happen.
===============================================================================
Brandon continues….
5) immunity to confirmation bias: LOL! Spencer and Christy are robots? You’re killing me.
=====================================================================
Ok Brandon, we get it: your mind is boggled, you feel flu-like, and now you are dying…
Confirmation bias is classic social science, the primary factors involving finance, peer pressure, and career advancement. Hundreds of posts have been written and sections of numerous books have been dedicated to how these factors come into play to move Universities and University Scientists into promoting the CAGW agenda. Thousands of articles have been written that promote the ever-missing, 100 percent failed predictions of this politically driven drivel. There is no remotely similar evidence of the opposite happening. Spencer and Christy have none of the classic reasons for confirmation bias.
=========================================================================
Brandon continues…
6) spatial coverage: Temporal coverage is an issue. If one is interested in temperature trends since increased industrialization, satellites won’t help.
==============================================
So you agree the spatial coverage of the surface record is poor even now compared to the satellites. I never asserted that the satellite record is long, only that it is more accurate and spatial coverage is one of many reasons for that accuracy, and the divergence is a huge problem for the CAGW community.
==============================================
Brandon continues
I am also sorry your boggled mind so easily accepts one SL data set clearly contradicted by numerous data sets and other peer review reports …
And which datasets would those be?
============================================================
Several discussed here… http://joannenova.com.au/2014/08/global-sea-level-rise-a-bit-more-than-1mm-a-year-for-last-50-years-no-accelleration/
It is actually similar to the surface satellite divergence issue only reversed, with however very logical reasons to accept that the satellite T record is more accurate, and the TREND in the tide gauge record is more accurate. More papers available here. Also go to Poptech for additional papers. http://scienceandpublicpolicy.org/images/stories/papers/reprint/the_great_sea_level_humbug.pdf
============================================================================
Brandon continues to quote me …… as well as millions of eyes all over the world from folk who live on the ocean and observe that fifty years from now they MAY need to take two steps back to keep their feet dry.
Brandon says…
What millions of people allegedly think about SLR is not exactly what I consider compelling evidence of anything.
=======================================================================
The fact that millions of people who have lived all their lives on the coast, and have never been impacted by rising global sea levels that are supposed to have displaced millions by now or soon, with ZERO sign of that happening, is cogent to me, and them, regardless of your take on it.
====================================================================
Brandon continues, quoting me……
Currently active NOAA tide gauges average 0.63 mm/year sea level rise, or two inches by the year 2100.
Brandon responds…
Linear extrapolation applied to a non-linear phenomenon like ice sheet mass loss? Really?
=========================================================================
We are not discussing your one sided view of ice loss. Do you wish to?
Tide gauge TRENDS over time are accurate, as land flux changes, up or down, are very slow and not temporally relevant to most current studies, and so the trend is accurate. The gauges show no acceleration whatsoever. They would if there was. Also we DO NOT live in the maladjusted satellite sea level arena, we live where the gauges are. The paper I linked to above discusses the tide gauge trends in detail. Expand your mind, Brandon, so it does not boggle so easily and make you feel nauseated and like you are dying.

Brandon Gates
Reply to  Brandon Gates
August 31, 2015 11:46 am

David A,

Be my guest, but please compare against the total changes over time including the lowering of the past 1980 ish NOAA graphics.

I would were it not for two things:
1) It’s your argument, not mine.
2) UAH doesn’t publish error estimates for either monthly or annual means.

The paper you linked is about small changes and advocates the need for RSS and UAH to be more closely aligned, which they now are, and both data sets are indeed verified by the weather balloons.

Read Mears (of RSS) (2012) on the difficulty of determining whether balloons or MSUs are better at representing troposphere temperature trends: http://onlinelibrary.wiley.com/doi/10.1029/2012JD017710/full

I am not certain how discussion of a radiosonde mean estimate for UAH of 0.051 plus or minus 0.031 for the period of January 1985 to February 1987 disputes this contention.

Well now, I consider that a fair point. Upon closer reading, Po-Chedley also limited the analysis to NOAA-9. It appears that Mears (2012) does a more comprehensive analysis, and being that his day job IS producing temperature time series from MSUs, I think his is the more credible paper.

Brandon likes to ignore the papers that demonstrate how UHI is poorly dealt with.

Such as ____________________?

Since the publication of those papers the homogenization of UHI to rural areas has increased, with USHCN now making up up to fifty percent of their data.

I can’t parse the meaning of, “homogenization of UHI to rural areas has increased”. How is this measured? How much of an increase? What are the implications? What is your source of this information?

The satellites are non controversial in this manner, and are verified by weather balloon readings, the most accurate thermometers we have.

I find I’m out of creative ways to rebut this mantra. Read Mears (2012), particularly the parts where he discusses the various bias adjustments necessary to homogenize — yes, homogenize — radiosonde time series.

There is little doubt that this is part of the reason for the impossible physics of the divergence between the surface and the satellites. One of them is wrong, and the evidence strongly points to the surface.

They’re both wrong.

I just pointed out that they are immune to such problems which vastly increase the error bars of the surface record.

And I am pointing out that when the questions involve temperature changes since the beginning of industrialization, we need data that extend back to that period of time. Satellites are 100% useless for determining trends from the mid to late 1800s regardless of their purported accuracy.

Yes Brandon, and those relatively small adjustments, compared to up to 50 percent of valid USHCN stations not even being used and those records adjusted by stations up to 1,000 km away, are verified by weather balloon readings versus the speculative nature of the surface changes, many of which are not even discussed, they just continue to happen.

1) Please quantify the relative adjustments. With references. Thanks.
2) USHCN station dropoff effect on global temps (Zeke Hausfather):
http://rankexploits.com/musings/wp-content/uploads/2010/03/Picture-98.png
The full post with lots of other pretty pictures: http://rankexploits.com/musings/2010/a-simple-model-for-spatially-weighted-temp-analysis/

Confirmation bias is classic social science, the primary factors involving finance, peer pressure, and career advancement.

Yes, and nobody is immune. Not Spencer. Not Christy. Not even me.

Spencer and Christy have none of the classic reasons for confirmation bias.

He’s here all week, folks. Oi. My sides ache.

So you agree the spatial coverage of the surface record is poor even now compared to the satellites.

No.

It is actually similar to the surface satellite divergence issue only reversed, with however very logical reasons to accept that the satellite T record is more accurate, and the TREND in the tide gauge record is more accurate.

Bizarre.
A preprint of Beenstock (2015) can be found here: http://econapps-in-climatology.webs.com/SLR_Reingewertz_2013.pdf
Here’s the salient portion of their conclusion:
The substantive contribution of the paper is concerned with recent sea level rise in different parts of the world. Our estimates of global SLR obtained using the conservative methodology are considerably smaller than estimates obtained using data reconstructions. While we find that sea levels are rising in about a third of tide gauge locations, SLR is not a global phenomenon. Consensus estimates of recent GMSL rise are about 2mm/year. Our estimate is 1mm/year. We suggest that the difference between the two estimates is induced by the widespread use of data reconstructions which inform the consensus estimates. There are two types of reconstruction. The first refers to reconstructed data for tide gauges in PSMSL prior to their year of installation. The second refers to locations where there are no tide gauges at all. Since the tide gauges currently in PSMSL are a quasi-random sample, our estimate of current GMSL rise is unbiased. If this is true, reconstruction bias is approximately 1mm/year.
Boiled down to its essence: since SLR is not constant at all locations and because it is negative in some locales, SLR is not global. Which is a stretch. Then they immediately contradict themselves by saying the global mean is 1 mm/year … which is significant because the consensus estimate is double because it relies on (biased) data reconstructions (which are necessarily wrong, because all data reconstructions are BAD).
“IF this is true …” Well, I have to give them credit for allowing uncertainty in their findings. You? Not so much.
Also to their credit, next paragraph says:
In the minority of locations where sea levels are rising the mean increase is about 4 mm/year and in some locations it is as large as 9 mm/year. The fact that sea level rise is not global should not detract from its importance in those parts of the world where it is a serious problem.

The fact that millions of people who have lived all their lives on the coast, and have never been impacted by rising global sea levels that are supposed to have displaced millions by now or soon, with ZERO sign of that happening, is cogent to me, and them, regardless of your take on it.

Yes I get that. Anecdote is something you consider compelling. I do not.
Please supply the source of the “now or soon” prediction. Best if that comes from a source which is providing information intended for policy makers … like the IPCC.

We are not discussing your one sided view of ice loss.

Yes I know “we” aren’t discussing it.

Do you wish to?

By all means.

Tide gauge TRENDS over time are accurate, as land flux changes, up or down, are very slow and not temporally relevant to most current studies, and so the trend is accurate. The gauges show no acceleration whatsoever. They would if there was. Also we DO NOT live in the maladjusted satellite sea level arena, we live where the gauges are. The paper I linked to above discusses the tide gauge trends in detail.

Not a word about landed ice loss acceleration in any of that. Color me shocked.

Expand your mind, Brandon, so it does not boggle so easily and make you feel nauseated and like you are dying.

Try understanding the concept of convergence of multiple lines of evidence, and then perhaps I won’t get gigglefits when you lecture me about mind expansion.

Reply to  Brandon Gates
August 31, 2015 12:40 pm

Brandon Gates is desperately nitpicking throughout this exchange, trying to support his belief in dangerous man-made global warming (MMGW). But his nitpicking misses the big picture:
There has been no global warming for almost twenty years now.
In any other field of science, such a giant falsification of the original conjecture (CO2=cAGW) would cause the proponents of that conjecture to be laughed into oblivion.
But that hasn’t happened, and the rest of us know the reason:
Money.
Federal grants to ‘study climate change’ in the U.S. alone total more than $1 billion annually. That money hose props up the MMGW narrative. But there is one really big fly in the ointment:
So far, there have never been any empirical, testable measurements quantifying the fraction of man-made global warming (AGW) out of total global warming, including solar, the planet’s natural recovery from the LIA, and forcings from other natural sources.
Science is all about data. Measurements are data. But there are no quantifiable measurements of MMGW. None at all. No measurements of AGW exist. How does the climate alarmist clique explain that? They can’t. So they rely on nothing more than their data-free assertions.
The entire “dangerous MMGW” scare is based on nothing but the opinion of a clique of rent-seeking scientists, and their Big Media allies, and greenie True Believers like Gates and his fellow eco-religionists. The “carbon” scare is based on nothing more than that. It is certainly not based on any rational analysis, since global warming stopped many years ago. Almost twenty years ago! That fact has caused immense consternation among the climate alarmist crowd. Nothing they can say overcomes that glaring falsification of their CO2=CAGW conjecture.
The endless deflection and nitpicking, the links to blogs run by rent-seeking scientists and their religious acolytes, and the bogus pronouncements of federal bureaucrats running NASA/GISS and similar organizations for the primary purpose of their job security, are all trumped by the plain fact that their endless predictions of runaway global warming and climate catastrophe have never occurred. EVERY alarming prediction made by the climate alarmist crowd has failed to happen. No exceptions.
Fact: There is nothing either unusual or unprecedented happening with the planet’s ‘climate’ or with global temperatures. What we observe now has been exceeded naturally many times in the past, when human emissions were non-existent. The current climate is completely natural and normal. If Gates or anyone else disputes that, they need to produce convincing testable evidence. But so far, the alarmist crowd has never produced any testable, verifiable evidence quantifying MMGW (AGW).
Because there is no such evidence. The only credible evidence we have shows that the current warming trend is completely natural:
http://jonova.s3.amazonaws.com/graphs/hadley/Hadley-global-temps-1850-2010-web.jpg
As we see, the recent warming step changes have happened repeatedly in the past, when human CO2 emissions were negligible to non-existent. What is happening now has happened before, and it will no doubt happen again. But there is no empirical, testable evidence showing that CO2 has anything to do with global T. It may. But if so, its effect is simply too minuscule to measure. CO2 just does not matter.
That chart is derived from data provided by Dr. Phil Jones — one of the warmist cult’s heroes. If any alarmists have a problem with that, they need to take it up with Dr. Jones. The rest of us have yet to see any credible evidence showing that the current ‘climate’ and global temperatures are anything but completely normal and natural.

Brandon Gates
Reply to  Brandon Gates
August 31, 2015 2:27 pm

dbstealey,

Brandon Gates is desperately nitpicking throughout this exchange …

Thus begins another Stealey patented boilerplate Gish Galloping stump speech … which doesn’t actually address any of my specific, well-cited points — or as he calls it, “nitpicking”. I think he missed his calling as court jester.

There has been no global warming for almost twenty years now.

For every thousand times you trot out this unsupportable statement …
http://climexp.knmi.nl/data/itemp2000_global.png
… I easily rebut it 1,001 times with data you assiduously ignore. I think “the rest of us know the reason”.

Science is all about data. Measurements are data.

[looks up]
[grins at the irony]

Because there is no such evidence. The only credible evidence we have shows that the current warming trend is completely natural:

Call Dr. Freud, you appear to be slipping.

http://jonova.s3.amazonaws.com/graphs/hadley/Hadley-global-temps-1850-2010-web.jpg
As we see, the recent warming step changes have happened repeatedly in the past, when human CO2 emissions were negligible to non-existent.

Yah. Rising AMO and solar output:
http://1.bp.blogspot.com/-o4vtAlhwkrI/VTrVEyu5ceI/AAAAAAAAAcs/MuA5KTmbm5I/s1600/HADCRUT4%2B12%2Bmo%2BMA%2BForcings%2Bw%2BTrendlines.png
The dotted green line is a model which includes both of those, plus volcanic aerosols, length of day anomaly and ENSO. R^2 in the high 90s. This stuff is not so magical and unfathomably mysterious as you would have your followers believe.
Remove the CO2 from the regression and the model goes as belly-up as your latest deluge of red herring.
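For readers who want to poke at this kind of attribution fit themselves, here is a minimal R sketch of the sort of multiple regression being described; it is not Gates’s actual model, and the data frame `clim` and its column names (temp, log_co2, amo, tsi, aod, lod, enso) are placeholders for whatever annual series you load.

# Hedged sketch only: regress the anomaly on assumed forcing/index series,
# with and without the CO2 term, and compare the fits.
full   <- lm(temp ~ log_co2 + amo + tsi + aod + lod + enso, data = clim)
no_co2 <- lm(temp ~ amo + tsi + aod + lod + enso, data = clim)
summary(full)$r.squared     # claimed above to be in the high 0.90s
summary(no_co2)$r.squared   # drops if the CO2 term is carrying the trend
anova(no_co2, full)         # F-test for adding the CO2 term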

What is happening now has happened before, and it will no doubt happen again … That chart is derived from data provided by Dr. Phil Jones — one of the warmist cult’s heroes. If any alarmists have a problem with that, they need to take it up with Dr. Jones.

Ayup, DB’s got no problem with surface temperature data. You read it here first.
The problem is not with Dr. Jones, but with your (mis)interpretation of what else that plot shows … namely that pauses in the record have occurred in the past and have ended after a period of about 30 years. By your “logic”, we’ve got about 10 years of “pause” to go.
Your most glaring error is your failure to notice that each subsequent uptrend ends at a higher point than the previous one. Simple addition and subtraction using the source data will get you there, as obviously your eyeballs have failed to see it.

Peter Sable
Reply to  Richard Greene
August 30, 2015 10:04 pm

temperature trends don’t spontaneously occur …

Actually, trends do spontaneously occur when the data is autocorrelated. The AR1 autocorrelation of GISS temperature is at least 0.54. So trends occur naturally by random chance. I’m not going to repost stuff that’s in a thread below but there’s a way of determining whether any part of a signal is due to random chance or due to measurable physical phenomena.
See this paper, in particular Figure 3. Note that as the period increases, the confidence that it is not noise goes down.
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.28.1738&rank=1
Peter
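As a quick illustration of the point about autocorrelation and spurious trends, here is a minimal R sketch (synthetic data only, using the AR1 value of 0.54 quoted above) showing how often pure AR(1) noise yields a nominally “significant” OLS trend over 20 years of monthly data:

# Sketch: false-positive rate of a naive trend test on AR(1) noise.
set.seed(42)
n_months <- 240                      # 20 years of monthly data
phi      <- 0.54                     # AR1 value quoted above for GISS
t        <- 1:n_months
sig <- replicate(2000, {
  x <- as.numeric(arima.sim(model = list(ar = phi), n = n_months))
  # naive OLS p-value that ignores the autocorrelation
  summary(lm(x ~ t))$coefficients["t", "Pr(>|t|)"] < 0.05
})
mean(sig)   # well above the nominal 5% false-positive rate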

Brandon Gates
Reply to  Peter Sable
August 31, 2015 12:59 pm

Peter Sable,

Actually, trends do spontaneously occur when the data is autocorrelated. The AR1 autocorrelation of GISS temperature is at least 0.54. So trends occur naturally by random chance.

I see similar things about random chance written in consensus climate literature, and it drives me bats because I think it’s sloppy and unphysical. Weather, and therefore climate, are deterministic phenomena following a chain of causality. What people generally mean by “random” in this context is “unpredictable”. The better term would be “chaotic”. Sometimes we can broadly suss out causality after the fact. You’ve been discussing ENSO as an example, and I agree with you on that point. However: try predicting the next El Nino years in advance, and it’s all but certain abject failure will be the result.
In case you’ve not read it, try Lorenz (1963), Deterministic Nonperiodic Flow: http://journals.ametsoc.org/doi/pdf/10.1175/1520-0469%281963%29020%3C0130%3ADNF%3E2.0.CO%3B2
From your August 30, 2015 at 12:44 pm response to Willis:

BTW did you detrend the data first? It’s definitely not stationary if you don’t…

When you detrend the entire series, it stands to reason that you’re not going to find the long term signal we’re looking for.
See again my question to you from this post: http://wattsupwiththat.com/2015/08/28/how-fast-is-the-earth-warming/#comment-2017954
2) As this level of math is well above my paygrade, please explain to me how a wavelet analysis is feasible — or even desirable — when the hypothesized driving signal isn’t periodic … and even if it were, has not completed a full cycle?
… which was part of our discussion about Torrence and Compo (1998).
For an example of what I mean by “driving signal”, check out Landais (2012), Towards orbital dating of the EPICA Dome C ice core using δO2/N2: https://hal.archives-ouvertes.fr/hal-00843918/document
This seems a more appropriate use of wavelet analysis.

Reply to  Peter Sable
August 31, 2015 1:36 pm

Brandon Gates August 31, 2015 at 12:59 pm

Peter Sable,

Actually, trends do spontaneously occur when the data is autocorrelated. The AR1 autocorrelation of GISS temperature is at least 0.54. So trends occur naturally by random chance.

I see similar things about random chance written in consensus climate literature, and it drives me bats because I think it’s sloppy and unphysical. Weather, and therefore climate, are deterministic phenomena following a chain of causality. What people generally mean by “random” in this context is “unpredictable”. The better term would be “chaotic”. Sometimes we can broadly suss out causality after the fact. You’ve been discussing ENSO as an example, and I agree with you on that point. However: try predicting the next El Nino years in advance, and it’s all but certain abject failure will be the result.

While his terminology may not be the best, his point is clear. Trends in autocorrelated data are much more common than in random “white noise” data. And while in nature as you point out they are not “random”, in random autocorrelated data they are indeed random.
And in either case, regardless of their cause, this is important when determining statistical significance.
All the best to you,

When you detrend the entire series, it stands to reason that you’re not going to find the long term signal we’re looking for.

I don’t understand that at all. If you have a 100-year cycle in a thousand years of data which contains an overall trend over the period of record, detrending the data will do nothing to our ability to identify the 100-year cycles. What am I missing?
w.

Brandon Gates
Reply to  Peter Sable
August 31, 2015 3:15 pm

Willis,

Trends in autocorrelated data are much more common than in random “white noise” data. And while in nature as you point out they are not “random”, in random autocorrelated data they are indeed random.

I’m with you.

And in either case, regardless of their cause, this is important when determining statistical significance.

Sure. At risk of beating the point to death, I’m saying we should not then make the mistake of thinking of the underlying processes as “truly” random just because we modeled a statistical test that way. That is all.

I don’t understand that at all. If you have a 100-year cycle in a thousand years of data which contains an overall trend over the period of record, detrending the data will do nothing to our ability to identify the 100-year cycles. What am I missing?

I guess I missed it that he was running this over 1,000 years of data looking for 100 year cycles. Starting with this post: http://wattsupwiththat.com/2015/08/28/how-fast-is-the-earth-warming/#comment-2017285
… my impression is that he was attempting to compare his wavelet analysis to Sheldon’s results using HADCRUT4.
Cheers.

Reply to  Richard Greene
August 31, 2015 4:58 pm

The article simply asks how fast the planet is warming. But Boggleboi is arguing with everyone, as usual, about every nitpicking thing. That’s a tactic to distract from the plain fact that the planet hasn’t been warming at all:
http://realclimatescience.com/wp-content/uploads/2015/06/ScreenHunter_9549-Jun.-17-21.12.gif
I’m surprised the Boggled one doesn’t trot out his Marcott nonsense again:
http://www.realclimate.org/images//Marcott.png

Brandon Gates
Reply to  dbstealey
September 1, 2015 12:39 pm

dbstealey,

The article simply asks how fast the planet is warming.

No kidding.

But Boggleboi is arguing with everyone, as usual, about every nitpicking thing.

Note that DB can’t be troubled to point to any particular examples. Or explain why they’re “nitpicky”.

That’s a tactic to distract from the plain fact that the planet hasn’t been warming at all:
http://realclimatescience.com/wp-content/uploads/2015/06/ScreenHunter_9549-Jun.-17-21.12.gif

ROFL! Take it up with the author of the top post: [image]
And since you apparently didn’t even read the article, allow me to highlight this point from the body text:
1) There is no cherry-picking of start and end times with this method. The entire temperature series is used.
Compare to the method used to generate the plot you just posted.

Reply to  dbstealey
September 1, 2015 1:39 pm

GISTEMP??
To quote a certain religious True Believer: “ROFL!”
GISTEMP is simply not credible. So let’s use the best global T measurements available: satellite data.
The endlessly predicted runaway global warming never happened. If the planet is warming, it’s not warming measurably. The plain fact that global warming has stopped for nearly twenty years would make any rational person re-assess their ‘dangerous AGW’ conjecture.
But not Brandon Gates. His mind is made up and closed tight. He staked out his position early on, and nothing is gonna change it now. That’s because he would have to admit he was wrong. Only honest scientists do that. The rest make excuses, pontificate, deflect, misrepresent, and argue endlessly, nitpicking everything to the point that most readers just move on.
Planet Earth is showing everyone that the alarmist cult was flat wrong. Does it surprise anyone that they can’t admit it?

Reply to  dbstealey
September 1, 2015 2:16 pm

Day to Day Temperature Difference [image]
This is a chart of the annual average of day to day change in min temp.
(Tmin day-1) - (Tmin day-0) = Daily Min Temp Anomaly = MnDiff = Difference
For charts with MxDiff it is: (Tmax day-1) - (Tmax day-0) = Daily Max Temp Anomaly = MxDiff
MnDiff is also the same as:
(Tmax day-1) - (Tmin day-1) = Rising
(Tmax day-1) - (Tmin day-0) = Falling
Rising - Falling = MnDiff
Average daily rising temps
(Tmax day-1) - (Tmin day-1) = Rising [image]
Normalized day to day difference with Daily Solar Forcing (WattHrs) and Rising temps [image]
Yearly Average Min and Max Diff w/trend line, plus Surface Station count [image]
Day to Day Seasonal Slope Change
If you plot MnDiff daily for a year, it's a sine wave. [image]
You can take the slope of the months leading up to and past the zero crossing,
both for summer (cooling) and winter (warming)
and plot those.
Global [image]
Southern Hemisphere
is flat, other than some large disturbances in the 70s and 80s, and then again in 2003. [image]
The Northern Hemisphere has a slight curve, with a disturbance in 1973, when surface stations were changed,
and another in 1988. [image]
There are a number of regions with few stations, making some areas susceptible to large fluctuations,
or it could be a real disturbance in temps; the disturbances are timely to the transitions in the
ocean cycles, between the warm cycle and the start of the cooling cycle.
US Seasonal Slope
The US has the best surface station coverage in the world. [image]
Eurasia Seasonal Slope [image]
Northern Hemisphere w/trend line [image]
Southern Hemisphere w/trend line [image]

Reply to  dbstealey
September 1, 2015 2:35 pm

Surface data are from NCDC’s Global Summary of the Day (GSOD) dataset: ~72 million daily readings,
from all of the stations with >360 daily samples per year.
Data source
http://sourceforge.net/projects/gsod-rpts/
ftp://ftp.ncdc.noaa.gov/pub/data/gsod/

JBP
August 29, 2015 6:43 pm

This comments section grossly exceeded my average ability to process; I concede victory to the blathering experts. BTW, what happened to the pause?

Reply to  JBP
August 30, 2015 6:16 am

The pause turned into 0.3 degrees of cooling over the last 17 years. Really, it did. Heat is not the mean of a smoothed five-year trend line. The atmosphere was far warmer in 1998 than it is now. 1998 was the warmest year on record. The atmosphere has cooled 0.3 degrees since 1998. If you must put a cooling rate on that, the atmosphere is cooling at about 1.7 degrees per century.

Reply to  JBP
August 30, 2015 10:14 am

The “pause” was “adjusted” during yet another “adjustment” to cool the warm 1930’s — two “adjustments” for the price of one.
The “pause” was really irritating the climate doomsayers — they had hoped calling it a “hiatus” would have disguised the truth, and it almost worked.
For years I thought a “hiatus” was a medical condition involving the abdominal muscles, and had nothing to do with the climate.
Both “pause” and “hiatus” are propaganda terms, because they strongly imply the 1850 Modern Warming will resume, which no one actually knows.
The climate astrologists know their computer games are right, so the raw data collected in the field must be wrong, and therefore needed “adjusting” to make it right.
After all, real science is sitting in air conditioned offices,
playing computer games,
on the government dole,
while making scary climate predictions that get you in the media
… while you tell friends at cocktail parties that you are working “to save the Earth”
After sufficient “adjustments”, my photograph, at age 60+, looks just like Sean Connery when he was a swimsuit model.
Long live “adjustments”!
Climate doomsaying really has nothing to do with science — it’s politics — the governments paying for the scary predictions could not care less about honest science — so sometimes I can’t take this subject seriously, as the real scientists here do. … I do have a BS degree, but forgot everything the day after receiving my diploma.

Gloria Swansong
Reply to  JBP
August 31, 2015 5:20 pm

Earth is now cooling under rising CO2, just as it did from about 1945 to 1977. The anomalous excursion was the slight warming, also under rising CO2, of 1978-96, when temperature just happened accidentally to go up along with the beneficial, plant food, essential trace gas carbon dioxide.
So for more than 50 of the past 70 years of monotonously climbing CO2, planet earth has cooled.

Peter Sable
August 29, 2015 6:58 pm

I did a little playing around with the lengths of your boxcar (moving average) filters and I’m finding those little peaks (e.g. the small ones at 1925 and 1980) are very sensitive to how long your filter window is (especially the one where you take the slopes, aka the difference).
When publishing this kind of analysis you should also publish a sensitivity analysis alongside it.
Basically, you are doing one slice of a wavelet decomposition on the difference using a boxcar wavelet (aka moving average). This is a pretty poor choice of wavelets for this type of work. Also, by failing to do the full wavelet decomposition you are not showing where the strong loci of energy are and which ones are relatively weak (such as 1925 and 1980, which are weak, while 1938 is very strong).
What makes a boxcar wavelet an even poorer choice is using it on difference signals. Difference signals have a blue noise type of spectrum, and the poor filtering characteristics of a boxcar filter are especially prone to aliasing and phase distortion on blue noise.
I suggest you find a tool that can do wavelet decomposition and publish that result. It’d be more accurate and interesting. (Alas, Octave doesn’t have a wavelet library; I’m still hunting around for an open-source version.)
Peter.
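To make the objection to the boxcar concrete, here is a small R sketch (illustrative only) of the frequency response of a 121-month moving average next to a Gaussian window of comparable width; the slowly decaying sidelobes are what cause the leakage and distortion being described.

# Sketch: magnitude response of an N-point boxcar vs a Gaussian window.
N <- 121                                         # matches the article's CMA
f <- seq(0.001, 0.5, length.out = 2000)          # cycles per month
boxcar_gain <- abs(sin(pi * f * N) / (N * sin(pi * f)))
sigma       <- N / 6                             # arbitrary comparable width
gauss_gain  <- exp(-2 * (pi * f * sigma)^2)
plot(f, 20 * log10(boxcar_gain), type = "l", ylim = c(-80, 0),
     xlab = "frequency (cycles/month)", ylab = "gain (dB)")
lines(f, 20 * log10(gauss_gain), lty = 2)
legend("topright", c("121-month boxcar", "Gaussian"), lty = 1:2)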

Peter Sable
Reply to  Peter Sable
August 29, 2015 7:09 pm
Mike
Reply to  Peter Sable
August 29, 2015 10:02 pm

For decomposing the NINO3 SST data, we chose the Morlet wavelet because:
it is commonly used,
it’s simple,
it looks like a wave.

Sounds about as convincing as Sheldon’s reasons for using a boxcar filter! Not encouraging.

Peter Sable
Reply to  Peter Sable
August 29, 2015 8:16 pm
Mike
Reply to  Peter Sable
August 29, 2015 10:04 pm

The abstract looks like it would be useful, but I can’t find a link to the paper on that page. Do you have a link to anything more than the abstract?

Peter Sable
Reply to  Peter Sable
August 30, 2015 12:47 pm

The abstract looks like it would be useful, but I can’t find a link to the paper on that page

Weird, this link works for me:
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.28.1738&rep=rep1&type=pdf
That’s the cached copy I grabbed; the CiteSeer page is below. If you don’t know how to use CiteSeer, you should learn; it’s very useful when doing “open source” science.
http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.28.1738&rank=1
Peter

Mike
Reply to  Peter Sable
August 29, 2015 9:52 pm

Difference signals have a blue noise type of spectrum

Peter, you are definitely one of the more technically competent commenters here, so I’m rather surprised you write this. It clearly depends on what the data is, as much as on whether it is a difference.
Temperatures are highly autocorrelated for obvious reasons and have a ‘red’ spectrum, so on the contrary it is likely the dT/dt will have a more random, white spectrum rather than a ‘blue’ one.
While you could technically argue that a ‘boxcar’ filter is one part of a wavelet analysis, it clearly isn’t, because no sane person would use a rectangular wavelet, and one run is not ‘part of’ anything else; it is just a crappy, distorting pseudo low-pass filter used by people who do not even know they are trying to use a low-pass filter.

Reply to  Mike
August 30, 2015 12:02 am

Mike August 29, 2015 at 9:52 pm

Temperatures are highly autocorrelated for obvious reasons and have a ‘red’ spectrum, so on the contrary it is likely the dT/dt will have a more random, white spectrum rather than a ‘blue’ one.

Thanks, Mike. Temperature assuredly has a “red” spectrum, meaning positively autocorrelated. Modeling HadCRUT4 as an ARIMA function with no MA (moving average), we get:

Call:
arima(x = hadmonthly, order = c(1, 0, 0))
Coefficients:
         ar1  intercept
      0.8928    -0.1069
s.e.  0.0102     0.0289

But the dT/dt of the Hadcrut data is just as assuredly blue, viz:

Call:
arima(x = diff(hadmonthly), order = c(1, 0, 0))
Coefficients:
          ar1  intercept
      -0.3714     0.0006
s.e.   0.0210     0.0022

w.
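For anyone who wants to reproduce those two fits, a minimal sketch follows (assuming `hadmonthly` is already loaded as a monthly HadCRUT4 anomaly series; coefficients will shift slightly with data vintage).

# Levels are "red" (positive AR1); first differences come out "blue".
fit_levels <- arima(hadmonthly, order = c(1, 0, 0))         # ar1 ~ +0.89
fit_diff   <- arima(diff(hadmonthly), order = c(1, 0, 0))   # ar1 ~ -0.37
fit_levels
fit_diff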

Peter Sable
Reply to  Mike
August 30, 2015 12:44 pm

Peter, you are definitely one of the more technically competent commenters here, so I’m rather surprised you write this. It clearly depends on what the data is, as much as on whether it is a difference.

Sorry, I was referring to temperature difference data, which, as Willis just showed, is blue. It was probably wrong to apply it generically; I was hacking Octave and posting too fast. Differences in general remove low frequencies, so the general movement towards “blueness” is conceptually correct, but I could probably make a more accurate general statement with more experiments. It’s not important, so I won’t.
Thanks Willis for the check on Hadcrut4. I’m looking at GISS 201505 with poor tools in Octave and I’m getting 0.77. I’m still learning this procedure though… BTW did you detrend the data first? It’s definitely not stationary if you don’t…
Using https://onlinecourses.science.psu.edu/stat510/node/33 as a reference and minimizing use of built in functions so I can get a feel for the underlying math…
Peter

Peter Sable
Reply to  Mike
August 30, 2015 12:55 pm

Thanks, Mike. Temperature assuredly has a “red” spectrum, meaning positively autocorrelated. Modeling HadCRUT4 as an ARIMA function with no MA (moving average), we get:

Willis, I think I’ve found a gun that might be smoking; see the post above about whether a temperature signal is distinguishable from noise.
I’m finding, at least for GISS, that the global temperature history is indistinguishable from red noise. According to Torrence and Compo they can distinguish ENSO in SST, but that’s it. I really like their technique of looking at frequency (period) bins and seeing if there’s a signal that’s above the 95% CI against red noise.
Wish I could get your email address, would love to correspond. Anthony is free to give you mine, not sure how this works around here…
Paper here: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.28.1738&rank=1

Peter Sable
Reply to  Mike
August 30, 2015 1:36 pm

BTW here’s a very rough draft (not up to my usual standards) of GISS temperature data versus hopefully equivalent red noise.
Basically, the 1 year and 2 year signals are significant (well, duh), the blue moon interval of 2.7 years might be interesting (p=0.2), and there are possible ENSO spikes. The rest is indistinguishable from red noise. This tells me that trying to elicit a CO2 signal from the temperature data is impossible. The lower the frequency of the signal you are looking for, the bigger the error bars. A trend is as low frequency as it gets… [image]
Source code:
https://www.dropbox.com/sh/qi9h70otb2p9j9h/AABPE2Uf-s8xe8iGGr1BhQULa?dl=0
Peter
Reference: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.28.1738&rank=1

Peter Sable
Reply to  Mike
August 30, 2015 9:57 pm

As usual, I found bugs. Scaling the red noise is fairly difficult. I found two grotesque errors and some subtle ones. The major errors were taking the RMS of the non-detrended signal (whoops) and not reading the paper closely for the method to turn an AR1+AR2 into an AR1 model for generating the red noise. You’ll note a change to the exponent in the title.
I also tried using an ARMA model to generate the noise, because AR2 is significant according to an ARMA fit on the residuals. Scaling that model proved more difficult. At any rate, there are some likely ENSO signals that are now shown as significant. I still have a nagging suspicion about aliasing of lunar cycles (there’s a peak there, the “blue moon” peak @ 2.7 years), but it could also be an ENSO peak. I’d need hourly or daily records for 80+ years to be sure…
My conclusion is still unchanged: with an autocorrelated series, the “it’s random” null hypothesis has very wide confidence intervals at low frequencies, and thus there’s no way to determine that any part of the signal is caused by CO2. I also have low confidence in any attempt to relate solar cycles or natural cycle XYZ to temperature. There’s simply not enough data to distinguish them from random noise.
Source code same as the link above. New graph: [image]
Peter
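For readers following along, here is a rough R sketch of the red-noise comparison being described (not Peter’s Octave code; `giss` is a placeholder for a monthly anomaly vector already loaded): detrend, estimate the lag-1 autocorrelation, and overlay the periodogram on a theoretical AR(1) background scaled to the same variance.

# Sketch: periodogram vs a crude AR(1) "red noise" background.
x     <- residuals(lm(giss ~ seq_along(giss)))    # detrend first
alpha <- acf(x, plot = FALSE)$acf[2]              # lag-1 autocorrelation
sp    <- spec.pgram(x, taper = 0, detrend = FALSE, plot = FALSE)
f     <- sp$freq                                  # cycles per month
red   <- (1 - alpha^2) / (1 + alpha^2 - 2 * alpha * cos(2 * pi * f))
red   <- red * var(x) / mean(red)                 # crude variance scaling
plot(f, sp$spec, type = "l", log = "y",
     xlab = "frequency (cycles/month)", ylab = "power")
lines(f, red, col = "red")                        # red-noise background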

richard
August 30, 2015 5:55 am

It’s all fraud –
“From: Tom Wigley
To: Phil Jones Subject: 1940s
Date: Sun, 27 Sep 2009 23:25:38 -0600
Cc: Ben Santer
It would be good to remove at least part of the 1940s blip, but we are still left with “why the blip”.
http://realclimatescience.com/2015/08/spectacular-fraud-on-the-other-side-of-the-pond/

basicstats
August 30, 2015 12:55 pm

There seems to be a form of circularity in the method of analysis. In calculus terms, a moving average is an integral, while the slope is a derivative. The two operations effectively cancel. What presumably results in this setting is some sort of approximation to a centered difference between temperatures 10 years apart. Nothing wrong with that, but a rather indirect way of getting something along those lines.
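A quick numerical check of this point, on synthetic data and with the window lengths used in the article: the integral (moving average) and derivative (moving slope) largely cancel, and the result tracks a simple centred difference taken roughly ten years apart.

# Sketch: CMS of the CMA vs a centred difference of the smoothed series.
set.seed(1)
n   <- 1600
x   <- as.numeric(arima.sim(model = list(ar = 0.9), n = n)) + 0.002 * (1:n)
w   <- 121
h   <- (w - 1) / 2                                     # 60 months each side
cma <- as.numeric(stats::filter(x, rep(1 / w, w), sides = 2))
idx <- (1 + 2 * h):(n - 2 * h)                         # where everything is defined
cms <- sapply(idx, function(i) {                       # regression slope of the CMA
  k <- -h:h
  sum(k * cma[i + k]) / sum(k^2)                       # closed-form OLS slope
})
cdiff <- (cma[idx + h] - cma[idx - h]) / (2 * h)       # centred ~10-year difference
cor(cms, cdiff)                                        # typically > 0.99 here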

Peter Sable
Reply to  basicstats
August 30, 2015 2:12 pm

There seems to be a form of circularity in the method of analysis.

Yep. First thing I do when playing around with this type of thing is delete some operations to see if this has any effect. I also put in red noise, white noise, ramps, step functions, and impulse functions because you should be able to predict the results of those waveforms from basic principles, and if you don’t get the expected result, you need to fix something.
Peter

August 31, 2015 4:08 am

Following the highly appreciated comment by Dr. Brown (rgbatduke) at
http://wattsupwiththat.com/2015/08/28/how-fast-is-the-earth-warming/#comment-2017769
I decided to redo the CruTem4 spectrum, but for a limited period (which just happens to be 121 years), which may be more representative of reality.
http://www.vukcevic.talktalk.net/CT4-Spectrum2.gif

Reply to  vukcevic
August 31, 2015 6:32 am

I am wondering if I can get you to look at the data I’ve generated from NCDC, GSoD
https://sourceforge.net/projects/gsod-rpts/files/Reports/
The Yearly Continental zip has both global and regional scale spreadsheets.
In particular the MnDiff (Tmin day-1) – (Tmin day 0) shows a strong regional component. These are Yearly averages of daily anomalies.
There are also many different scales (1×1, 10×10, latitude bands). Plus there are daily reports for many of the same areas. The daily reports would have a very strong yearly cycle (as you would expect), and they should have much the same yearly components.
MxDiff (Tmax d-1)-(Tmax d0) shows very little variation, basically no carry-over of maximum temperatures from one year to the next.

Reply to  micro6500
August 31, 2015 7:17 am

Hi micro
Thanks for your note. I’ll be away from home and my PC for most of September (many readers may be pleased to hear that), but I will eventually look at the files, time permitting.

Reply to  vukcevic
August 31, 2015 7:46 am

Thanks!
I think it’ll be worth the effort; it’s unmolested actual surface data, as opposed to the various published dregs.
This isn’t to say temps have not gone up during the summer in some areas (and quite possibly have gone down in other areas), just that there’s no loss of nightly cooling from CO2, and the GMT series are constructed in a way that hides this (whether intentionally or not).
I’ll look forward to hearing from you. If you’d like, mods you can give vukcevic my email address.

Ken Gray
August 31, 2015 6:02 am

I’m not a mathematician but this method, this statistic, feels like a very reasonable approach to establishing the empirical rate of warming at the earth’s surface. Thank you. 121 month central moving average. So simple. Too bad it doesn’t require a supercomputer and millions of dollars to calculate it! LOL.

August 31, 2015 3:05 pm

In response to prof. Dr. Robert Brown, rgbatduke (rgbatduke August 30, 2015 at 2:28 pm)
http://wattsupwiththat.com/2015/08/28/how-fast-is-the-earth-warming/#comment-2018314
“… your peak in the 60 year range may be an artifact and — although I note the same thing and it is apparent in the top article if one looks for it (1937 to 1998 being 61 years) — I would avoid making a big deal out of it.”
Dr. Brown, this is an important point, since many learned papers refer to the existence of the 60-year cycle.
Does CruTem4, as one of the leading GT indices, have a 60-year periodicity or not, or is it an end effect?
I looked at the CET, the much longer temperature record, and it doesn’t have a 60-year periodicity, but it has 55- and 69-year periodicities, which may average out in the shorter data to about 62 years.
There are a number of ways to reduce the end effect (de-trending if there is an up or down ramp, employing a wide-band Gaussian filter to reduce end transitions, symmetric zero padding, etc., or a combination of two or more of these).
One thing I found in my early days of analysing audio signals is that it is difficult to destroy fundamentals even by severe data processing. When in doubt I often applied the following method: take the difference between two successive data points, then a three-point difference, etc., and then compare the spectral components to the original after normalising the much smaller amplitudes of the differences.
Here is what I found for the CruTem4 data
http://www.vukcevic.talktalk.net/CT4-Spectrum4.gif
None of the three difference data streams contains a 60-year component, but all contain the CET’s 55-year component.
However, I am pleased to report that the solar magnetic cycle is as strong as ever; no doubt about its presence!
Dr. Brown, thanks for the warning. Not only do I accept that “I would avoid making a big deal out of it”, but as a result of this analysis, from now on I will not ‘make any deal whatsoever out of it’.
Many readers may consider this a dissent, but in my view it is best to ‘not accept but investigate’.
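For what it is worth, the differencing check described above can be sketched in a few lines of R on synthetic annual data (illustrative only): a genuine 55-year component stays at the same frequency after first differencing, once the difference filter’s gain of 4·sin²(πf) is divided back out.

# Sketch: a real periodic component survives first differencing.
set.seed(2)
n    <- 300                                               # years of annual data
x    <- 2 * sin(2 * pi * (1:n) / 55) +
        as.numeric(arima.sim(model = list(ar = 0.5), n = n))
sp0  <- spec.pgram(x,       taper = 0, plot = FALSE)
sp1  <- spec.pgram(diff(x), taper = 0, plot = FALSE)
norm <- sp1$spec / (4 * sin(pi * sp1$freq)^2)             # undo the filter gain
c(1 / sp0$freq[which.max(sp0$spec)],                      # dominant period, original
  1 / sp1$freq[which.max(norm)])                          # dominant period, renormalised diff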

rgbatduke
Reply to  vukcevic
September 2, 2015 11:43 am

The point is that if one FTs (say) 600 years’ worth of data — enough that a cycle or cycles anywhere in the BALLPARK have a reasonable chance of being extractable and not being artifacts or just plain accident — would the 60 year peak be there, or would it have broken up into several peaks?
Yet another issue appears when I fit HadCRUT4 — if I fit it to a log of CO2 concentration plus a (single) sinusoid, I get a best fit with a period of 67 years. The fit is pretty compelling. The point is that the background log warming has a Fourier transform too! Then there are harmonic multiples of the visible 22 year peak (where a triplet would be 66 years and indistinguishable from 67 years over that short a fit region). Note that the 44 year peak exists but is (perhaps) suppressed because the log function is asymmetric and only picks up odd harmonics.
The absolute fundamental problem with the FT (or Laplace transform, which is equally relevant to analysis of the temperature curve as it gives one an idea of the decay constants of the autocorrelation time, maybe, if you once again can extract any slowly varying secular functional dependence) is that knowing the FT doesn’t necessarily tell you the physics. It might — as you say the 20-something year peak is logically and probably connected with the solar cycle. But the ten year(ish) peak(s)? The five year(ish) peak(s)? The five year peak I find compelling because eyeballing the data suggests it before you even do the FT. Ten year and up are more difficult. The 67(?) year peak is simple and obvious in HadCRUT4, not so much if you look back at longer proxy-derived records (that are, however, both lower resolution temporally and higher error so it could just be getting lost).
Getting lost is the fundamental issue. FTs, LTs, and other similar decompositions are ways of organizing information. But information is lost, irretrievably, as one moves into the more distant past. We simply don’t have accurate scientific records, and if time-of-day corrections can add up to a significant chunk of the “measured” global warming using reasonably well placed scientific instrumentation, think about how much less accurate proxy-based results are likely to be when they are almost invariably uncontrollably multivariate, noise-ridden, poorly temporally resolved projections of temperature (and rainfall, and animal activity, and wind, and ocean current, and insect subpopulations in various predator-prey and breeding cycles, and disease, and ….) at a far, far smaller sampling of non-uniformly distributed sites.
I don’t think climate scientists realize how badly they shoot themselves in the foot when they shift contemporary assessments of the global temperature by a significant fraction of a degree due to a supposed systematic error (that one can never go back to verify, making it utterly safe to claim) in a sea surface temperature measurement method. If one can shift all the main global anomalies by (say) half of their claimed error in a heartbeat in 2014 with huge numbers of reporting sites and modern measuring apparatus, what are the likely SST error estimates in (say) the entire Pacific ocean in 1850, where temperatures were measured (if you were lucky) by pulling up a bucket of water from overboard and dunking a mercury or alcohol thermometer in it and then pulling it out into the wind to observe and record the results, from a tiny handful of ships sailing a tiny handful of “sea lanes”?
So some fraction of the long-period FT of the anomaly is noise, cosmic debris, accident. So is some (rather a lot, I expect) of the short period component. One can hope that at least some of the principal peaks or bands correspond to some physics down there — a generalized “climate relaxation rate”, some quasi-stationary periods in major climate modes. But it isn’t clear that we can get much information on the meso-scale processes between 20 or 30 years out to the long period DO events, the still longer glaciation cycles, etc. Certainly we can’t from the thermometric record.
rgb
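For readers curious about the kind of fit being described (log CO2 plus a single sinusoid), a hedged R sketch follows; it is not rgb’s actual code, the data frame `d` with columns year, temp and co2 is a placeholder, and the starting values are rough guesses, so the nonlinear fit can be touchy.

# Sketch: HadCRUT4-style anomaly fit to a + b*log(CO2) + sinusoid.
fit <- nls(temp ~ a + b * log(co2) + A * sin(2 * pi * (year - phase) / period),
           data  = d,
           start = list(a = -14, b = 2.5, A = 0.1, phase = 1940, period = 65))
summary(fit)   # look at the fitted period (reported above as roughly 67 years)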

September 2, 2015 1:39 pm

Very interesting, however:
1) IPCC AR5 has no idea how much of the CO2 increase between 1750 and 2011 is due to industrialized man because the contributions of the natural sources and sinks are a massive WAG.
2) At 2 W/m^2, the “unbalanced” RF the IPCC attributes to that CO2 increase between 1750 & 2011 is lost in the magnitudes and uncertainties of the major factors in the global heat balance. A third- or fourth-decimal-point bee fart in a hurricane.
3) The IPCC admits in Text Box 9.2 that their GCMs cannot explain the pause/hiatus/lull/stasis and are consequently useless.

September 2, 2015 4:30 pm

Hey, Brandon Gates is BACK, baby!

September 4, 2015 12:27 am

Models vs measurement.
It seems to me that temperature + relative humidity (RH) is the proper metric to see if there is some coherence between the models and measurements. If we wanted to make it even more difficult we could add in pressure.
Hindcasting temperature is not too difficult. Hindcasting temp + RH is going to be trickier. And if they go global with that, what exactly does a number for the global average RH mean? Global average air pressure? Well, mountains are going to complicate that. And they complicate RH. And of course RH is a proxy for water vapor in the air, which depends on temperature and pressure.
============
Willis is correct. I can tell the difference between a 10°C day and a 40°C day. But what do I really know about a 10.0°C day and a 10.1°C day if my measurement error in the field is 0.1°C or, worse, 1°C? And I haven’t even brought in RH and local air pressure. Or all the other variables – like the 10°C day was measured in 1900 and the 10.1°C day was measured in 2015. Is it really 0.1°C hotter in 2015 vs 1900? Or can we just say that we don’t know, given the reading error and drift in calibration in 1900? Not to mention all the other effects that could affect the thermometer. The 2015 day could be hotter or colder than the 1900 day. The error bar of the difference has to be at least 0.5°C, and it could be worse even if the 2015 thermometer was perfect.
And the error bars don’t decline from taking millions of measurements at thousands of points. They sum as the sq rt of the sum of the squares of the measurement errors on a particular day (if the distribution of the errors is Gaussian). What does this tell you? Well, the error bars are HUGE. Let’s say we have 10,000 measurements with an error of 0.5°C for a given anomaly day. What is the error bar of the average? About 84°C. Heh. Well, that says that the average for a system that is nominally 300 K is useless. Suppose we can get the errors down to 0.1°C (modern era); the error bar is 56°C. OK, we are really good: a 0.01°C measurement error gives more than a 31°C error bar. To get the error bar down to 0.1°C for those 10,000 measurements we need an accuracy of measurement that is obviously ridiculous (a temperature error of 1E-6).
But we can improve things with fewer measurements. Now isn’t that funny.
http://ipl.physics.harvard.edu/wp-uploads/2013/03/PS3_Error_Propagation_sp13.pdf

Reply to  M Simon
September 4, 2015 9:58 am

The surface record only has a single dew point measurement a day (well, the data I’m using does). But from looking at my weather station, it doesn’t change all that fast. At the stations it’s changed a little since the ’40s, but there is no trend following CO2.
But I did find something else I thought interesting: every night, relative humidity maxes out and water is condensed out of the air, some of which is lost into the surface. So water is boiled out of the oceans in the tropics and transported poleward, where it cools and is deposited into the water table as it moves. It fits nicely with Willis’s idea that water regulates the tropics’ temperatures. [image]

September 4, 2015 12:40 am

Well, suppose we divide the error by 10,000 to account for averaging. In the 0.5°C error case we get roughly 0.01°C error. But maybe the divisor to use is really the sq rt of 10,000. Then the error is 0.84°C.
So if the difference is less than 0.84°C we know nothing. And we don’t usually count it as significant unless it is 3X that. And even if 0.01°C is the correct number, only differences of 0.03°C are significant.
I assume the math guys will correct my understanding. I look forward to that.
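For reference, the textbook propagation under the assumption of independent Gaussian errors is: the error of the sum of N measurements grows as σ·√N, but the error of the mean shrinks as σ/√N (shared systematic errors do not shrink this way). A quick R check with the numbers used above:

# Sketch: standard error of the mean for N independent readings.
set.seed(7)
N      <- 10000
sigma  <- 0.5                                     # per-reading error, deg C
true_T <- 300                                     # nominal value, K
sims   <- replicate(5000, mean(true_T + rnorm(N, sd = sigma)))
sd(sims)             # empirical spread of the mean, ~0.005
sigma / sqrt(N)      # analytic error of the mean, 0.005
sigma * sqrt(N)      # error of the *sum*, 50 -- not of the mean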

September 4, 2015 9:35 am

Per psychrometry, the enthalpy of moist air contains two components: dry air at 0.24 Btu/lb-°F and water vapor at about 1,000 Btu/lb. Water vapor is measured in grains per lb of dry air, 7,000 grains per lb.
At 0% RH it’s all dry-air Btus.
At 100% RH, saturated with all the vapor the air can hold (more with warm air than with cold), and with the same Btus, the dry-bulb temperature will be much lower. It’s a balancing act: more vapor Btu, fewer dry-air Btu. The added vapor cools the air. It’s how evaporative coolers work, and it’s also how water vapor moderates the atmospheric heat balance.
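To put numbers on that balancing act, here is a minimal R sketch using the standard IP-unit approximation for moist-air enthalpy, h = 0.240·t + W·(1061 + 0.444·t) Btu per lb of dry air, with t in °F and the humidity ratio W in lb of water per lb of dry air (grains/7000); the specific temperatures chosen are only illustrative.

# Sketch: same enthalpy, much lower dry-bulb temperature when humid.
moist_enthalpy <- function(t_F, W) 0.240 * t_F + W * (1061 + 0.444 * t_F)
moist_enthalpy(95, 0)           # ~22.8 Btu/lb: hot but bone-dry air
moist_enthalpy(70, 38 / 7000)   # ~22.7 Btu/lb: same heat content, 25 F cooler dry-bulb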

Reply to  Nicholas Schroeder
September 4, 2015 9:51 am

Per psychrometry, the enthalpy of moist air contains two components: dry air at 0.24 Btu/lb-°F and water vapor at about 1,000 Btu/lb. Water vapor is measured in grains per lb of dry air, 7,000 grains per lb.
At 0% RH it’s all dry-air Btus.
At 100% RH, saturated with all the vapor the air can hold (more with warm air than with cold), and with the same Btus, the dry-bulb temperature will be much lower. It’s a balancing act: more vapor Btu, fewer dry-air Btu. The added vapor cools the air. It’s how evaporative coolers work, and it’s also how water vapor moderates the atmospheric heat balance.

Added vapor might cool (Florida in the 40s feels pretty cold); it is also responsible for carrying huge amounts of heat out of the tropics.
The difference between tropical air and Canadian air is 10-20°F. [image]
And it’s all due to water vapor’s heat-carrying capacity.