R code to look at changing temp distributions – follow-up to Hansen/Sato/Ruedy
Story submitted by commenter Nullius in Verba
There has been a lot of commentary recently on the new Hansen op ed, with associated paper. Like many, I was unimpressed at the lack of context, and the selective presentation of statistics. To some extent it was just a restatement of what we all already knew – that according to the record the temperature has risen over the past 50 years – but with extreme value statistics picked out to give a more alarming impression.
But beyond that, it did remind me of an interesting question I’ve considered before but never previously followed up, which was to ask how the distribution had actually changed over time. We know the mean has gone up, but what about the spread? The upper and lower bounds?
So I plotted it out. And having done so, I thought it might be good to share the means to do so, in case anyone else felt motivated to take it further.
I’m not going to try to draw any grand conclusions, or comment further on Hansen. (There will be a few mild observations later.) This isn’t any sort of grand refutation. Other people will do that anyway. For the purposes of this discussion, the data is what it is. I don’t propose to take any of this too seriously.
I’ll also say that I make no guarantees that I’ve done this exactly right. I did it quickly, just for fun, and my code is certainly not as efficient or elegant as it could be. If anyone wants to offer improvements or corrections, feel free.
—
I picked the HadCRUT3 dataset to look at partly because it doesn't use the extrapolation that GISTEMP does, only showing temperatures in a gridcell if there are actual thermometers there. But it was also because I'd looked at it before and already knew how to read it!
For various reasons it’s still less than ideal. It still averages things up over a month and a 5×5 degree lat/long gridcell. That loses a lot of detail and narrows the variances. From the point of view of studying heatwaves, you can’t tell if it was 3 C warmer for a month or 12 C warmer for a week. So it clearly doesn’t answer the question, but we’ll have a look anyway.
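For concreteness, a single month's 72×36 grid of anomalies can be pulled straight out of the NetCDF file; this is just the read used inside the getmonth() and plotmonthmap() functions in the script below:

library(ncdf) # the same (older) ncdf package used in the script below
nc = open.ncdf("hadcrut3.nc")
# One month's 72x36 grid of anomalies; the fourth index is the month (1 = Jan 1850)
grid = get.var.ncdf(nc, "temp", start=c(1,1,1,601), count=c(72,36,1,1)) # 601 = Jan 1900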
Then I picked the interval from 1900-1950 to define a baseline distribution. I picked this particular interval as a compromise – because the quality of the earliest data is very questionable, there being very few thermometers in most parts of the world, and because the mainstream claims often refer only to the post-1950 period as being attributable to man.
Of course, you have the code, so if you don’t like that choice you can pick a different period.
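For reference, the month indexing used in the script below is month 1 = January 1850. So, as a rough sketch of swapping in a different baseline (this reuses the gettadist() function defined in the script; the month_index() helper here is just for illustration and is not part of the original script):

# Hypothetical helper: HadCRUT3 month index for a given year and month (month 1 = Jan 1850)
month_index = function(year, month) { (year - 1850)*12 + month }
# e.g. a 1961-1990 baseline instead of 1900-1950
alt_baseline = gettadist(month_index(1961,1):month_index(1990,12))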
The first plot shows just the distribution for each month. Time is along the x-axis, temperature anomaly up the y-axis, and darker shading is more probable.

(From ‘HadCRUT3 T-anom dist 5 small.png’)
Because the outliers fade into invisibility, here is a plot of log-probability that emphasizes them more clearly.

(From ‘HadCRUT3 T-anom log-dist 20 small.png’)
You can see particularly from the second one that the incidence of extreme outliers hasn’t changed much. (But see later.)
You can also see that the spread is a lot bigger than the change. The rise in temperatures is perceptible, but still smaller than the background variation. It does look like the upper bound has shifted upwards – about 0.5 C by eye.
To look more precisely at the change, what I did next was to divide the distribution for each month by the baseline distribution for 1900-1950.
This says whether the probability has gone up or down. I then took logarithms, to convert the ratio to a linear scale. If doubling is so many units up, then halving will be the same number of units down. And then I colored the distribution with red/yellow for an increase in probability and blue/cyan for a decrease in probability.
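In code terms, the core of that comparison is just an elementwise ratio and a logarithm. A bare sketch, reusing gettadist() and the baseline from the script below (the plotdistchange() function there does the same thing for every month and adds the colouring):

# Sketch: log probability ratio for one month against the 1900-1950 baseline
m_dist = gettadist(1800) # distribution for a single month index (1800 is December 1999)
log_ratio = log(m_dist/baseline) # +log(2) where a bin is twice as probable, -log(2) where it is half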

(From ‘HadCRUT3 T-anom dist log-change 10 small.png’)
You can see now why it was convenient for Hansen to have picked 1950-1980 to compare against!
Blue surrounded by red means a broader distribution, as in the 1850-1870 period (small gridcell sample sizes give larger variances). Red surrounded by blue means a narrower distribution, as in the 1950-1990 period. Blue over red means cooler climate, as in the 1900-1920 period. Red over blue means a warmer climate, as in the post-1998 period.
The period 1900-1950 (baseline) shows a simple shift. The period 1950-1990 appears to be a narrowing and a shift. The post-1990 period shows the effects of the shift meeting the effects of the narrowing. The reduction in cold weather since 1930 is unambiguous, but the increase in warm weather is only clear post-1998; until then, there was a decrease in warmer weather due to the narrowing of the distribution. There is a step change in 1998, with little change thereafter.
Generally, the distribution has got narrower over the 20th century, but jumped to be wider again around the end of the century. (This may, as Tamino suggested, be because different parts of the world warm at different rates.) The assumption that the variance is naturally constant and any change in it must be anthropogenic is no more valid than the same assumption about the mean.
So far, there’s nothing visible to justify Hansen’s claims. The step-change after 1998 is a little bigger and longer than the 1940s, but not enough to be making hyperbolic claims of orders-of-magnitude increases in probability. So what have I missed?
The answer, it turns out, is that Hansen's main results are just for summer temperatures. The spread of temperatures over land is vastly larger in the winter than it is in the summer.
Looking at the summer temperatures only, the background spread is much narrower and the shifted distribution now pokes its head out above the noise.
I've shown the global figures for July below. I really ought to do a mix of July in the northern hemisphere and January in the southern, but given the preponderance of land (and thermometers!) in the NH, just plotting July shows the effect nicely.
(From ‘HadCRUT3 T-anom dist log-change 10 July.png’)
Again, the general pattern of a warm 1940s and a narrowing of the distribution from 1950-1980 shows up, but the post-1998 step-change is now +2 C above the background. It also ramps up earlier, with more very large excursions (beyond 5 C) showing up around 1970, and a shift in the core distribution around 1988. The transitions look quite sharp. I think this is what Hansen is talking about.
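If you just want to reproduce the July picture, here is a minimal sketch that mirrors the monthly loop at the end of the script (reusing gettadist() and plotdistchange() from below; the loop itself writes these figures out for all twelve months):

# Sketch: July-only distributions and their change against a July-only 1900-1950 baseline
july_months = seq(7, 1949, 12) # every July month index from 1850 onwards
july_base = gettadist(seq(7+600, 7+1200, 12)) # July-only baseline, roughly 1900-1950
july_dists = aperm(vapply(july_months, gettadist, FUN.VALUE=c(1:402)*1.0))
plotdistchange(july_dists, july_base, 10, isyears=TRUE, dogrid=TRUE,
main="July Change in Temperature Anomaly Distribution")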
Having had a quick look at some maps of where the hot spots are (run the code), it would appear a lot of these 'heatwaves' are in Siberia or northern Canada. I can't see the locals being too upset about that… That also fits with the observation that a lot of the GISTEMP global warming is due to the extrapolation over the Arctic. That would benefit from some further investigation.
None of this gives us anything solid about heatwaves, or tells us anything about cause, of course. For all we know, the same thing might have happened in the MWP.
Run the code! It generates a lot more graphics, at full size.
# ##################################################
# R Script
# Comparison of HadCRUT3 T-anom distributions to 1900-1950 average
# NiV 11/8/2012
# ##################################################
library(ncdf)
library(maps)
# Local file location - ***CHANGE THIS AS APPROPRIATE***
setwd("C:/Data/Climate")
# Download file if no local copy exists
if(file.exists("hadcrut3.nc") == FALSE) {
download("http://www.metoffice.gov.uk/hadobs/hadcrut3/data/HadCRUT3.nc","hadcrut3.nc")
}
# HadCRUT3 monthly mean anomaly dataset. For each 5x5 lat/long gridcell
# reports the average temperature anomaly of all the stations for a
# given month. Runs from 1850 to date, indexed by month.
# i.e. month 1 is Jan 1850, month 25 is Feb 1852, etc.
# Four dimensions of array are longitude, latitude, unknown, and month.
hadcrut.nc = open.ncdf("hadcrut3.nc")
# --------------------
# Functions to extract distributions from data
# Names of months
month = c("January", "February", "March", "April", "May", "June",
"July", "August", "September", "October", "November", "December")
month_to_date = function(m) {
mn = ((m-1) %% 12) + 1 # month 1 = January 1850
yr = 1850 + ((m-1) %/% 12)
return(paste(month[mn]," ",yr,sep=""))
}
# Function to show 1 month's data
plotmonthmap = function(m) {
d = get.var.ncdf(hadcrut.nc,"temp",start=c(1,1,1,m),count=c(72,36,1,1))
clrs = rev(rainbow(402)) # 403 breaks define 402 intervals, so 402 colours
brks = c(-100,-200:200/20,100)
image(c(0:71)*5-180,c(0:35)*5-90,d,col=clrs,breaks=brks,useRaster=TRUE,
xlab="Longitude",ylab="Latitude",
main=paste("Temperature anomalies",month_to_date(m)))
map("world",add=TRUE)
}
# Function to extract one month's data, as a vector of length 36*72=2592
getmonth = function(m) {
res=get.var.ncdf(hadcrut.nc,"temp",start=c(1,1,1,m),count=c(72,36,1,1))
dim(res) = NULL # Flatten array into vector by deleting dimensions
return(res)
}
# Given a vector of month indexes, extract data for all those months as a single vector
getmultimonths = function(mvec) {
res=vapply(mvec,getmonth,FUN.VALUE=c(1:2592)*1.0);dim(res)=NULL
return(res)
}
# Function to determine the T-anom distribution for a vector of month indexes
# Result is a vector of length 402 representing frequency of 0.1 C bins
# ranging from -20 C to +20 C. Element 201 is the bin whose upper edge is zero anomaly.
# Data is smoothed slightly to mitigate poor sample size out in the tails.
gettadist = function(mvec) {
d = getmultimonths(mvec)
res = table(cut(d,c(-Inf,seq(-20,20,0.1),Inf)))[]
res = res/sum(res,na.rm=TRUE)
res = filter(res,c(1,4,6,4,1)/16)
return(res)
}
# --------------------
# Draw Summer maps 1998-2012
for(m in c(seq(1781,1949,12),seq(1782,1949,12),seq(1783,1949,12))) {
png(paste("HadCRUT3 T-anom map ",month_to_date(m),".png",sep="") ,width=800,height=500)
plotmonthmap(m)
dev.off()
}
# Calculate average distribution 1900-1950
# Late enough to have decent data, early enough to be before global warming
baseline = gettadist(c(600:1199))
# Plot the distribution to see that it looks sensible
png("HadCRUT3 T-anom 1900-1950 dist.png",width=800,height=500)
plot(c(100:300)*0.1-20.1,baseline[100:300],type="l",
xlab="Temperature Anomaly C",ylab="Frequency /0.1 C",
main="Temperature Anomaly Distribution 1900-1950")
dev.off()
# --------------------
# A few functions for plotting
# Add a semi-transparent grid to a plot
gr = function() {abline(h=c(-20:20),v=seq(1850,2010,10),col=rgb(0.6,0.6,1,0.5))}
# Plot some data
plotdist = function(data,trange=20,isyears=FALSE,dogrid=FALSE,main) {
if(isyears) { drange = c(1:dim(data)[1]) + 1850 } else { drange = c(1:dim(data)[1])/12 + 1850 }
trange = c((200-trange*10):(200+trange*10))
image(drange,trange*0.1-20.1,
data[,trange],
col=gray(rev(0:20)/20),useRaster=TRUE,
xlab="Year",ylab="Temperature Anomaly C",main=main)
if(dogrid) { gr() }
}
# Colour scheme and breaks for change plot
# Scale is logarithmic, so goes from 1/7.4 to 1/2.7 to 1 to 2.7 to 7.4
redblue=c(rgb(0,1,1),rainbow(10,start=3/6,end=4/6),rainbow(10,start=0,end=1/6),rgb(1,1,0))
e2breaks=c(-100,-10:10/5,100)
# This generates a scale for the change plots
png("E2 Scale.png",width=200,height=616)
image(y=exp(c(-10:10/4)),z=matrix(c(-10:10/4),nrow=1,ncol=21),
col=redblue,breaks=e2breaks,log="y",xaxt="n",ylab="Probability Ratio")
dev.off()
plotdistchange = function(data,baseline,trange=20,isyears=FALSE,dogrid=FALSE,main) {
if(isyears) { drange = c(1:dim(data)[1]) + 1850 } else { drange = c(1:dim(data)[1])/12 + 1850 }
trange = c((200-trange*10):(200+trange*10))
bdata=data/matrix(rep(baseline,each=dim(data)[1]),nrow=dim(data)[1])
image(drange,trange*0.1-20.1,
log(bdata[,trange]),
col=redblue,breaks=e2breaks,useRaster=TRUE,
xlab="Year",ylab="Temperature Anomaly C",main=main)
if(dogrid) { gr() }
}
# --------------------
# Make an array of every month's distribution
datalst = aperm(vapply(c(1:1949),gettadist,FUN.VALUE=c(1:402)*1.0))
# Plot it out for a quick look
for(w in c(3,5,10,20)) {
png(paste("HadCRUT3 T-anom dist ",w,".png",sep=""),width=2000,height=600)
plotdist(datalst,w,dogrid=TRUE,main="Temperature Anomaly Distribution")
dev.off()
}
# and a small one
png("HadCRUT3 T-anom dist 5 small.png",width=600,height=500)
plotdist(datalst,5,dogrid=TRUE,main="Temperature Anomaly Distribution")
dev.off()
# Log-probability is shown to emphasise outliers
png("HadCRUT3 T-anom log-dist 20.png",width=2000,height=600)
plotdist(log(datalst),20,dogrid=TRUE,main="Temperature Anomaly Distribution (log)")
dev.off()
# and a small one
png("HadCRUT3 T-anom log-dist 20 small.png",width=600,height=500)
plotdist(log(datalst),20,dogrid=TRUE,main="Temperature Anomaly Distribution (log)")
dev.off()
# Now plot the change
png("HadCRUT3 T-anom dist log-change 20.png",width=2000,height=600)
plotdistchange(datalst,baseline,20,dogrid=TRUE,main="Change in Temperature Anomaly Distribution")
dev.off()
# Plot the middle +/-10 C, red means more common than 1900-1950 average, blue less common
png("HadCRUT3 T-anom dist log-change 10.png",width=2000,height=600)
plotdistchange(datalst,baseline,10,dogrid=TRUE,main="Change in Temperature Anomaly Distribution")
dev.off()
# and a small one
png("HadCRUT3 T-anom dist log-change 10 small.png",width=600,height=500)
plotdistchange(datalst,baseline,10,dogrid=TRUE,main="Change in Temperature Anomaly Distribution")
dev.off()
# --------------------------------
# Analysis by month
# Reserve space
datam = rep(0,162*402*12);dim(datam)=c(12,162,402)
basem = rep(0,402*12);dim(basem)=c(12,402)
# Generate an array of results for each month, and plot distributions and changes
for(m in 1:12) {
datam[m,,] = aperm(vapply(seq(m,m+161*12,12),gettadist,FUN.VALUE=c(1:402)*1.0))
basem[m,] = gettadist(seq(m+600,m+1200,12))
png(paste("HadCRUT3 T-anom dist 5 ",month[m],".png",sep=""),width=800,height=600)
plotdist(datam[m,,],5,isyears=TRUE,dogrid=TRUE,
main=paste(month[m],"Temperature Anomaly Distribution"))
dev.off()
png(paste("HadCRUT3 T-anom log-dist 20 ",month[m],".png",sep=""),width=800,height=600)
plotdist(log(datam[m,,]),20,isyears=TRUE,dogrid=TRUE,
main=paste(month[m],"Temperature Anomaly Distribution (log)"))
dev.off()
for(w in c(10,15,20)) {
png(paste("HadCRUT3 T-anom dist log-change ",w," ",month[m],".png",sep=""),width=800,height=600)
plotdistchange(datam[m,,],basem[m,],w,isyears=TRUE,dogrid=TRUE,
main=paste(month[m],"Change in Temperature Anomaly Distribution"))
dev.off()
}
}
# Done!
# --------------------------------
# Interpretation
# --------------
# Blue surrounded by red means a broader distribution, as in the 1850-1870 period
# (small gridcell samples give larger variances).
# Red surrounded by blue means a narrower distribution, as in the 1950-1990 period.
# Blue over red means cooler climate, as in the 1900-1920 period.
# Red over blue means a warmer climate, as in the post-1998 period.
# Observations
# ------------
# Period 1900-1950 (baseline) shows a simple shift.
# Period 1950-1990 appears to be a narrowing and shift.
# Period post 1990 shows effects of shift meeting effects of narrowing.
# Reduction in cold weather since 1930 unambiguous, but increase in warm weather only clear post 1998;
# until then, there was a decrease in warmer weather due to narrowing of distribution.
# Step change in 1998, little change thereafter.
# Post 2000 change is a bit bigger and longer lasting but not all that dissimilar to 1940s.
# Monthly
# -------
# Picture looks very different when split out by month!
# Spread in summer is far narrower than in winter.
# Offset in summer in 21st C exceeds upper edge of distribution from 1900-1950.
# Excess over 1940s looks about 0.5 C.
# Looking at maps, a lot of it appears to be in Siberia and Northern Canada.
# There is still a narrowing of the distribution 1950-1990.
# Step change occurs a little earlier, around 1990, to 1940s levels
# then jumps again in 1998 to higher level. This is within 20th C annual bounds
# but outside narrower summer bounds.
# Caveats
# -------
# The data is averaged over 5x5 degree gridcells monthly. Spread of
# daily data would be 2-5 times broader (?).
# Spread at point locations would be broader still.
# Note also 'great dying of the thermometers' around 1990/2003 - could affect variance.
# Has been reported prior to Soviet collapse cold weather exaggerated, to get fuel subsidy (?).
# Distribution extremes poorly sampled, results unreliable beyond +/-5 C.
# (May be able to do better with more smoothing?)
# Temp anomalies outside +/-10 C occur fairly often, 'heatwaves' probably in this region.
# No area weighting - gridcells nearer poles represent smaller area.
# (This is intended to look at point distributions, not global average, though.
# Area weighting arguably not appropriate.)
# No error estimates have been calculated.
# All the usual stuff about UHI, adjustments, homogenisation, LIA/MWP, etc.
# ##################################################
I've noticed in the UAH satellite data and the Reynolds SST data a marked shift in recent years to increased summer anomalies and decreased winter anomalies. Something has changed in recent years, and as it is the opposite of what GHG warming theory predicts, it has nothing to do with GHGs (likely reduced aerosols/increased insolation).
I'd be surprised if you don't find the same in the surface record.
I have regularly found a similar shift in the data just about the time the “RS-232 cable” instruments were rolled out and most of the data started coming from Airports…
Thought Experiment for you:
Take 1930’s data from a (seasonally) snow covered grass field in Siberia or Canada.
Compare it to the same field today that is an international Jet Port with tons of kerosene being burned and with 10,000 foot long runways of (snow removed!) concrete with hectares of black tarmac (snow removed!) aprons, skirts, taxiways, etc. Add 10,000 people and large (heated!) terminal buildings and surround with km of roadways bringing all those folks to the airports (snow removed!)… Oh, and all the grass and trees removed too… don’t need any transpiration in summer dropping air temperatures…
Think there will be any differences in range, excursions, etc.?
Think the “high excursions” will be more likely over solar heated black asphalt or over white snow?
Solar heated black asphalt or over green grass?
Think ‘vertical mixing’ via giant jet airplanes will prevent formation of “stagnation layers” near the ground on cold clear nights? Think that lack of cold still air will be reducing the ‘low excursions’?
All Hansen has found is that they have put most of the thermometers in exactly the worst possible places to get decent information about the actual global temperature trends; but great places to measure the changes in cities, towns, and airports; and pretty good at showing the effects of vertical mixing and snow removal at airports…
Hansen et al. show a graph as Fig. 1 using hadcrut3 temperatures of the past 132 years. The basic question in climate science is what the physical cause of the temperature anomalies is. The data contain not only 'calibrated' amplitudes, but also precise frequencies. An analysis of the temperature frequencies of the past 10 ky, and/or of the past 50 years, shows that solar tide functions, fitted in the strength of the tide pairs respectively, match the global temperatures. Lower frequency simulations need 6 planets and higher temperature frequencies need 11 planets.
http://www.volker-doormann.org/images/temperatures_1880_ff.gif
Fast global temperature frequencies of about 6.3 periods per year can be found in the hadcrut3 data, and because it is well known that there is a relation between the temperature and the sea level, because of the changing volume of water, it can be shown that a geometric solar function gives evidence that the terrestrial functions of sea level and temperature are created on the Sun:
http://www.volker-doormann.org/images/sea_level_vs_me_er_ju.gif
I think it is the wrong way in science to waste time finding errors or judging errors. The basic question in climate science is what the physical cause of the global temperature anomalies is.
V.
Your third plot, the first colored one, does show a marked step in the extremes, or the extents of the range, right around the mid-1930's, and then another shift only in the minimums right around 1982-1985. Was this a shift in the climate by nature, or was this a change in either the instrumentation or the methods applied in measuring or adjusting? That is some great analysis, NiV; it plays right into Anthony's post at http://wattsupwiththat.com/2012/08/08/an-incovenient-result-july-2012-not-a-record-breaker-according-to-the-new-noaancdc-national-climate-reference-network and the many comments.
“You can see now why it was convenient for Hansen to have picked 1950-1980 to compare against!”
No, I can’t. What difference do you think the choice of anomaly base period makes?
Volker Doormann says:
“An analysis of the temperature frequencies of the past 10 ky, and/or of the past 50 years show that solar tide functions, fitted in the strength of the tide pairs respectively, match with the global temperatures.”
I found the match amazing and I looked at your web pages, and your graph of solar tides and temperatures extends out to 2015, and if you are right we are going to see a drop in temperature of about half a degree over 5 years. What happens if you extend your solar tides graph out to, say, 2050, when we are told a large part of humanity and the animal kingdom will be destroyed by the effects of global warming?
You also note that no one seems to be interested in your correlations, perhaps other WUWT readers would like to comment on what appears to be the natural cause of our climate variations as so far you seem to be ignored.
Monday 13th August 2012. Darwin, Northern Territory, Australia has just shivered through its coldest August winter night on record, with data going back 71 years.
I believe that to investigate the change in distribution we should compensate for changes in the mean first – that would get rid of large blue/red bands and would really show just changes in the distribution shape.
Apart from that, very nice analysis, thanks!
It reminds me of graph I plotted last year using satellite (RSS) data:
http://www.volny.cz/kasuha/temperatures/tlt_anom_histo.png
The distribution in these appears much sharper and has greater variability, probably partially due to the finer grid. But I did not go deeper into the analysis.
Warming, cooling, flooding, drought, they are all symptoms of global warming, or so I am told, so you obviously have a case of severe global warming there.
IMO what is needed is measurement of the wet bulb temperature as a more accurate measure, and the understanding that, as far as the global energy budget is concerned, the slight falls in the tropical oceans are more important than rises in the Arctic, where there is less energy. There is a distortion of the temperature, brought about by the imbalances created by the warm PDO and AMO tandem, but the flip in the PDO is now starting the process to return us to where we were. In other words, a 1 degree drop where the wet bulb is 80 offsets a 10 degree rise where the wet bulb is 0 (just a rough example). One can easily see on a cold winter morning how quickly the temperature will change from site to site when there is no wind, because it takes very little energy to move the temperature when it's cold, and much more when it's hot and humid.
Still, if one simply considers what temperature truly means, and then takes it a step further to look at changes in the global wet bulb temperature rather than simply the observed temperatures, one may get a better handle on the idea that there is really no true warming, just a distortion of the temperature. It is much easier to warm a cold dry air mass, for instance, since the change in energy needed to do it is relatively small, than to drop the tropical Pacific ocean temperatures a degree.
Just for fun, I pulled together the NCDC century records for all 48 states into one animation loop. You can see the patterns pretty well. In fact July 2012 is the hottest ever in EXACTLY ONE STATE, Virginia. In some areas recent years are hot, but this year appears to be down from last year; in other areas recent years are nothing special.
Actually July 2011 was the most impressive year in terms of state records.
http://polistrasmill.blogspot.com/2012/08/hottest-ever-in-48-oops-in-one-state.html
I would have thought that before discussing anomalies we should discuss temperature accuracy. Anomalies mean zilch if accuracy is poor and at the moment temperature data sets are questionable.
Nick: “What difference do you think the choice of anomaly base period makes?” If it were only a base period for determining a mean which is then subtracted from all values to determine an anomaly, I’d see your point. But it’s also a base period for determining what “normal” variance of the anomalies is, and that makes a difference. Unless I’m misunderstanding the point.
Anthony: Hansen expands his baseline to the 1930's in a follow-on paper. I don't think expanding the baseline period is a good idea, since you'll still end up averaging your additional baseline data with the 1950-1980 data, which may have been exceptionally calm. Rather, we should compare 1925-1950, 1950-1980, and 1980-2010 data. That way we're testing the two logical explanations for Hansen's original results: either the period after 1980 featured significantly more extreme events than in recent history, OR the period from 1950-1980 featured significantly fewer extreme events than the rest of recent history.
Thanks for the R code! (I’ve been experimenting with various denoising/smoothing/trend-finding techniques on the BEST data, and it appears to me that the 1950-1970 period’s trend is flat, which might have something to do with variability as well.)
Adrian Kerton says:
August 13, 2012 at 1:41 am
Volker Doormann says: “An analysis of the temperature frequencies of the past 10 ky, and/or of the past 50 years show that solar tide functions, fitted in the strength of the tide pairs respectively, match with the global temperatures.”
I found the match amazing and I looked at your web pages, and your graph of solar tides and temperatures extends out to 2015, and if you are right we are going to see a drop in temperature of about half a degree over 5 years. What happens if you extend your solar tides graph out to, say, 2050, when we are told a large part of humanity and the animal kingdom will be destroyed by the effects of global warming?
Hi Adrian Kerton,
my main point here is to argue for analysing temperature frequencies instead of temperature amplitudes, to look for possible geometric relations in nature.
In general there is no problem extending the solar tide functions of the 11 relevant planets to 3000 CE. The relevant temperature frequencies are in the range of 0.00111 periods per year to 6.2 periods per year. Low frequency temperatures can be calculated in increments of years, and there is a comparison with well-known reconstructions by Bond et al. and E. Zorita et al.
http://www.volker-doormann.org/images/bond_vs_zorita3.gif
Because of the elliptic nature of the movement of some planets, the lowest-frequency solar tide function shows mostly three temperature maxima (the first in 1997) and three temperature minima (Little Ice Age).
For a higher resolution in time, one needs time increments of 1 day for all the heliocentric data. I have calculated the solar tide functions to >2040 CE.
http://www.volker-doormann.org/images/uah_vs_g2040.gif
http://www.volker-doormann.org/images/uah_2040.gif
The 'Bond' frequency of ~1/900 years is known from many periods back in time. There is no reason why this oscillation should not continue.
You also note that no one seems to be interested in your correlations, perhaps other WUWT readers would like to comment on what appears to be the natural cause of our climate variations as so far you seem to be ignored.
I don't know. It's an unknown mechanism. The correlation values are significant. I would be glad if, in general, there were a change in the minds of climate scientists to analyse frequencies instead of amplitudes.
V.
Anthony: I believe the “download” call should be “download.file” instead.
REPLY: Willis
Volker Doormann says:
August 13, 2012 at 6:00 am
…
I have calculated the solar tide functions to >2040 CE.
http://www.volker-doormann.org/images/uah_vs_g2040.gif
http://www.volker-doormann.org/images/uah_2040.gif
…
__________________________________________________
It would be nice if you were able to extend the comparison more to the past, too. Something like 1900-2000 or 1950-2000. The last 2-3 years may be just a matter of coincidence.
This goes to a question I have had for a while:
Historical records have wide ranges of cold and hot over the years. The assumption is that most of this is data problems, errors and insufficient adjustments for various factors. A long-term running average is applied and we get a hockey stick or not based on the central, smoothed value. If the actual range from year-to-year was, in fact, much greater than that of the last 80 years, then we have incorrectly developed a picture of the past, and the smoothing function fails to give us the warm times as well as the cold.
Look to any proxy-based temperature graph of the past as it is merged with the thermometer readings of the recent years. If the weather were more variable in the past than today, then the recent highs and lows that are supposed to be increases in weather extremes, or "global weirding", are not extreme or weird; they are normal: what is weird, though not extreme, would be the last 40 years of supposed stability, or stable upward warming.
Arguments to-date on climate change come from a place where the past was high or low, but basically steady state (for 30 years, at least) on a global scale. If it were not that way, but more particularly if regions were more variable in the past than today, then both global averages and global averaging mislead us as to the type of weather we experienced back then, and therefore how to put today’s weather into context.
So: what are your thoughts (everybody)? Is there evidence that in the past climates were more variable on a regional and/or global scale than they are today? Should the temperature proxy representations of Mann, Hansen et al. show more high-low variation on a shorter time-scale than they do?
This goes to the heart of CAGW: what is "normal"?
Is there any study that plots temperature against airborne particle contamination? Or are there any data sets of both anywhere?
If airborne particles (C or other) reflect/absorb/diffuse solar heat energy, then given that there are fewer airborne particles in the atmosphere since the 1970's (due to clean air acts, less coal use, diesel and petrol CATs, etc.) and the rise of gas/electric to replace coal heating systems, would there not be some temperature rise due to that, i.e. 'cleaner air warming'?
John Marshall says:
August 13, 2012 at 5:24 am
I would have thought that before discussing anomalies we should discuss temperature accuracy. Anomalies mean zilch if accuracy is poor and at the moment temperature data sets are questionable.
I agree that we should discuss temperature accuracy before, or at least in conjunction with, any other observations. Thanks for pointing that out. It bothers me to read about "record July temps" when basically we keep reading that this is not really so. http://wattsupwiththat.com/2012/08/08/an-incovenient-result-july-2012-not-a-record-breaker-according-to-the-new-noaancdc-national-climate-reference-network/ I hate to seem complicit in agreeing that temps are higher than they are just because the mainstream media keeps that hype going.
Now I need some help with the definition of "anomaly" in relation to climate. I am not a meteorologist, and it was only recently that I knew there was such a thing as a climatologist. I thought it was a made-up word for the IPCC. Can anyone point me to a glossary of terms for climate science? Thanks in advance.
Joseph Bastardi says:
August 13, 2012 at 4:01 am
I agree, and it’s a point often raised. If climate models are supposed to be energy balances, then quoting air temperature as an output without the corresponding humidity is pretty meaningless.
Kasuha says:
August 13, 2012 at 8:05 am
Volker Doormann says:
August 13, 2012 at 6:00 am
…
I have calculated the solar tide functions to >2040 CE.
http://www.volker-doormann.org/images/uah_vs_g2040.gif
http://www.volker-doormann.org/images/uah_2040.gif
…
__________________________________________________
It would be nice if you were able to extend the comparison more to the past, too. Something like 1900-2000 or 1950-2000.
OK.
http://www.volker-doormann.org/images/temperatures_1880_ff.gif
http://www.volker-doormann.org/images/ghi_11_1.gif
http://www.volker-doormann.org/images/g11_1950_2050.gif
http://www.volker-doormann.org/images/ghi_11_had1960.gif
http://www.volker-doormann.org/images/ghi_had_1960_3.gif
V.
That’s a very neat way of showing the changes year by year at overview level – something I’ve tried to find ways to do for a long time. However, as Joe Bastardi basically says above, anomalies without humidity are irrelevant.
“…You can see now why it was convenient for Hansen to have picked 1950-1980 to compare against!…”
Nick Stokes replied:
“…No, I can’t. What difference do you think the choice of anomaly base period makes?…”
Do you mean besides changing the “zero” line, or determining what the “climate scientists” consider “normal”?
I've been questioning this for a few years now. What peer-reviewed papers (besides those written by Hansen) make him think that the period from 1951-1980 is the "normal" for the globe?
Every time I’ve suggested there was a problem, the cry of “it’s the trend that matters, not the zero” was raised.
I’ll agree to determining a trend. But I come from a world of electronics, where zero (or an established point of reference) DOES matter.
I need to know exactly how far above or below zero something is, and the accuracy of the devices used to measure that difference.
Every database uses a different reporting period, a different set of stations, and various "adjustments and extrapolations" to come up with an ANOMALY. And the highest anomaly (from GISS) appears to be about 0.6 above the zero set by Hansen's team.
If a change in reporting period DOESN’T matter, then show us a chart, using GISS data, with a reporting period of 1981-2010 (instead of 1951-1980), and compare the two. If the “trend” is what matters, it should still be there, as large as it was before.
But that area above the zero will sure be smaller – and less scary.
Nick Stokes says:
“You can see now why it was convenient for Hansen to have picked 1950-1980 to compare against!”
No, I can’t. What difference do you think the choice of anomaly base period makes?
In this case, the choice of base period understates the variability expressed by the system. Given that the whole of the faux-science ornament that Hansen hangs on his otherwise trite political screed is an analysis of “extreme events” as determined by the variability of the base period, that is … uh … kind of important.
It also neatly avoids the periods of historic (i.e. natural) warming that have not yet been thermically cleansed from the record, puts off people asking to detrend the data of the obvious, natural warming trend that is evident in the historic data, and dissuades them from asking why you didn’t present the results of similar comparisons of “extreme events” for those other periods.
Hansen knows that his choice of base periods is dodgy. It is why he justifies his actions like a two-dollar thief pleading in night court – twice. Once in the Methods, and again in the Discussion. In fact, the only 'method' really given in the Methods section of this 'scientific paper' is his 'innocent reason why you saw me kissing that lady' explanation for choosing that base period.
Restrict the variability basis, don't detrend, use a variance model that you don't test for appropriateness, work your "results" in the tails of that distribution, where the slightest bit of impropriety in the variance model will have disproportionate effects – do all the little things that in aggregate fabricate the basis for scary pronouncements about "a new category of 'extremely hot' summers" and which appear to provide support for the preposterous assertion that individuals have the capacity to "perceive" the temperature field of an entire hemisphere to the tune of small changes in the distributional statistics of data collected by global sensor networks.
The only thing that this so-called scientific paper "finds" is that current GHCN temp records are warmer than GHCN temp records from 1950-1980. All the rest is sensationalist restatement of that same triviality masquerading as science, and unsupported assertion of his favorite political talking points.
Also cute: Presenting global temp anomalies and maps thereof, for 1955, 1965, 1975 … weird that he didn’t start at the beginning of his pet base period … but it is reported every ten years, so the next one is obviously going to be … 2006. Huh?
He left out an entire "climatologically determinative" 30 year period. Odd. Could that be because including … say … 1995, 2000 … might draw attention to the fact that the global surface temp field hasn't changed much at all over the time frame starting then to the present?
Inconveniently, we can still pose the question using the data that weren't left on the cutting room floor: If people should be able to perceive a 0.54C rise in the mean of the global temperature field over a period of 55 years (less than 0.01C per year), then they should also be able to perceive that over the last 6 years the mean of the global temperature field has fallen 0.01C, should they not? Do we really think that is the perception that people have? Is that the perception that Hansen's 'science paper' is attempting to instill?
Uh-huh.
JJ says: August 13, 2012 at 10:23 pm
“In this case, the choice of base period understates the variability expressed by the system.”
All the base period does is provide a reference value that you subtract from the grid or station value to get an anomaly. It's true that there is a tiny effect whereby some base spatial variability appears as extra spatial variance in the anomalies. But it's about 1/30 of the variance of the anomalies themselves, and you're talking about small variations in that small fraction. Otherwise it doesn't matter at all whether the period is "typical"; the practical issue is whether most of the grid cells have data in that period.
The reason why Hansen chose 1951-80 is simple. He developed his indices in the early ’80s, and those were the three most recent calendar decades. And although it matters little what period you choose, once you’ve chosen, there is a big penalty to changing. GISS has a huge published record of anomaly data. If he was using a patchwork of different reference periods, that would be a nightmare.