Where's the Climate Beef?

Guest Post by Willis Eschenbach

A while back in the US there was an ad for a hamburger chain. It featured an old lady who bought a competitor’s hamburger with a great big hamburger bun. But when she opened it up, she asked …

I got to thinking about this in the context of whether there is any real danger in a degree or two of average temperature rise, or whether it’s a big bun with no beef. In my previous post, “Lies, Damned Lies, Statistics … and Graphs”, I closed by saying:

My conclusion? Move along, folks, nothing to see here …

A commenter took exception to this, saying

When talking about global average temperatures, tenths of a degree really do matter.

Now, if changes of tenths of a degree over a century “matter” for the globe, they certainly must matter for parts of the globe.

So here’s your pop quiz for the day: Which US State warmed the most, which cooled the most, and by how much?

To answer this, I used the USHCN State Temperature Database. Here are my findings:

Figure 1. Temperature trends by state, USHCN data. Seven states cooled, and forty-one warmed.

The state that warmed the most was North Dakota (top center), which warmed 1.4°C per century. The state that cooled the most was Alabama (middle of three dark blue states, lower right). It cooled by 0.3°C/century.

To compare with my previous post, here’s a similar graph of the decadal changes in North Dakota by month.

Figure 2. North Dakota decadal average temperatures by month, 1900-2009. Red line is the average for the decade 2000-2009. Photo is an old North Dakota farmhouse.

As with the US as a whole, for much of the year there is little change, and the warming is concentrated in November to February. Note that, unlike in the US as a whole, during those four months the temperature of North Dakota is below freezing (32°F) …

Now, if tenths of a degree “matter”, if they are as important as the commenter claimed, we should have seen some problems in North Dakota. After all, it has warmed by 1.6°C since 1895. That’s almost three times the global average warming.
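(That 1.6°C is simply the Figure 1 trend scaled by the length of the record; a one-line check in R, for those following along:)

# 1.4 °C/century trend over the 115-year record, 1895-2009
1.4 * (2009 - 1895) / 100   # about 1.6 °C total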

But somehow, I must have missed all of the headlines about the temperature calamities that have befallen the poor residents of the benighted state of North Dakota. I haven’t seen stories about them being “climate refugees”. I didn’t catch the newspaper articles about how it has been so hard on the farmers and the frogs. I am unaware of folks moving in droves to Alabama, which has cooled by 0.4°C since 1895, and thus should be the natural refuge of those fleeing the thermal holocaust striking North Dakota.

In fact, I don’t remember seeing anything that would support the commenter’s claims that tenths of a degree are so important. North Dakota has warmed near the low end of the range forecast by the IPCC for the coming century, and there have been no problems at all that I can find. So I have to say, as I said before,

My conclusion? Move along, folks, nothing to see here … where’s the beef?

APPENDIX: R Code for the US Map

(I think this is turnkey. Sometimes WordPress puts in extra line breaks. If so, it is also available as a Word document here.)

The code requires that you download the USHCN Temperature Data cited above and save it as a “Comma Separated Values” (CSV) file. I downloaded it, opened it in Excel, and split it into the following columns using “Text to Columns …”, as detailed in the USHCN ReadMe file:

FILE FORMAT:

STATE-CODE       1-3    State code as indicated in the State Code Table in the USHCN ReadMe. Range of values is 001-110.

DIVISION-NUMBER  4      Division number. The value is 0, which indicates an area-averaged element.

ELEMENT-CODE     5-6    02 = Temperature (adjusted for time of observation bias)

YEAR             7-10   The year of record. Range is 1895 to the current year processed.

JAN-VALUE        11-17  Monthly temperature. Range of values is -50.00 to 140.00 degrees Fahrenheit. Decimals retain a position in the 7-character field. Missing values in the latest year are indicated by -99.90.

FEB-VALUE        18-24
MAR-VALUE        25-31
APR-VALUE        32-38
MAY-VALUE        39-45
JUNE-VALUE       46-52
JULY-VALUE       53-59
AUG-VALUE        60-66
SEPT-VALUE       67-73
OCT-VALUE        74-80
NOV-VALUE        81-87
DEC-VALUE        88-94

If that is too complex, the CSV file is here.
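If you’d rather skip Excel entirely, R can read the fixed-width file directly using the column positions above. Here’s a minimal sketch (the filename “state.tmp.all.adj” is my placeholder — substitute whatever name your downloaded USHCN file has):

# read the fixed-width USHCN file straight into R, using the widths
# from the FILE FORMAT description above: state (3), division (1),
# element (2), year (4), then twelve monthly values (7 characters each)
tempmat = read.fwf("state.tmp.all.adj",   # placeholder filename
    widths = c(3, 1, 2, 4, rep(7, 12)),
    col.names = c("state", "division", "element", "year", month.abb))

The result has the same sixteen columns as the CSV version, so the code below should run on it unchanged.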

Here’s the R code:

# The code requires that you download the USHCN Temperature Data
# and save it as a "Comma Separated Values" (CSV) file.
# I downloaded it, opened it in Excel, and used "Save As ..."
# to save it as "USHCN temp.csv"

# Libraries needed
library("mapdata")
library("mapproj")
library("maps")

# Functions
# regm returns the slope of a linear fit to a series (units per time step)
regm = function(x) {lm(x ~ c(1:length(x)))[[1]][[2]]}

# Read in data
tempmat = read.csv('USHCN temp.csv')

# Replace the no-data code -99.90 with NA
tempmat[tempmat == -99.9] = NA

# split off actual temps (columns 5-16 hold the twelve monthly values)
temps = tempmat[, 5:16]

# calculate row (annual) averages
tempavg = apply(temps, 1, FUN = mean)

# calculate trends by state: the slope is in °F/year, so multiply
# by 100 for °F/century and by 5/9 to convert °F to °C
temptrends = round(tapply(tempavg, as.factor(tempmat[, 1]), regm) * 100 * 5/9, 2)

# split off the 48 states from the regional and national averages
statetrends = temptrends[1:48]

# calculate ranges for colors
statemax = max(statetrends)
statemin = min(statetrends)
staterange = statemax - statemin
statefract = (statetrends - statemin) / staterange

# set color ramp
myramp = colorRamp(c("blue", "white", "yellow", "orange", "darkorange", "red"))

# assign state colors (an n x 3 matrix of RGB values, 0-255)
mycol = myramp(statefract)

# names of the states (north michigan is missing for ease of programming)
myregions = c("alabama", "arizona", "arkansas", "california", "colorado",
  "connecticut", "delaware", "florida", "georgia", "idaho", "illinois",
  "indiana", "iowa", "kansas", "kentucky", "louisiana", "maine", "maryland",
  "massachusetts:main", "michigan:south", "minnesota", "mississippi",
  "missouri", "montana", "nebraska", "nevada", "new hampshire", "new jersey",
  "new mexico", "new york:main", "north carolina:main", "north dakota",
  "ohio", "oklahoma", "oregon", "pennsylvania", "rhode island",
  "south carolina", "south dakota", "tennessee", "texas", "utah", "vermont",
  "virginia:main", "washington:main", "west virginia", "wisconsin", "wyoming")

# draw map
par(mar = c(6.01, 2.01, 4.01, 2.01))
mymap = map('state', regions = myregions, exact = T, projection = 'mercator',
  fill = T, mar = c(5.01, 8.01, 4.01, 2.01),
  col = rgb(mycol, maxColorValue = 255), ylim = c(10, 60))

# set up legend boxes
xlref = -.48     # x position of the left edge of the legend
yb = .37         # y position of the bottom of the boxes
ht = .05         # box height
wd = .08         # box width
textoff = .025   # distance of the labels below the boxes

# assign legend labels
mylabels = round(seq(from = statemin, by = staterange/12, length.out = 13), 2)

# draw legend: twelve colored boxes with a label under each left edge
myindex = 0
for (i in seq(from = xlref, by = wd, length.out = 12)) {
  xl = i
  xr = xl + wd
  yt = yb + ht
  rectcolor = myramp(myindex/11)
  rect(xl, yb, xr, yt, col = rgb(rectcolor, maxColorValue = 255))
  text(xl, yb - textoff, mylabels[myindex + 1], cex = .65)
  myindex = myindex + 1
}
# and the final label under the right edge of the last box
text(xl + wd, yb - textoff, mylabels[myindex + 1], cex = .65)

# add annotations
text(0, 1.08, "US Temperature Trends (°C/century)")
text(0, 1.03, "USHCN Dataset, 1895-2009", cex = .8)
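Before trusting the map colors, it is worth checking the extremes numerically against the figures quoted in the post. Something like this, run after the script above (the names on statetrends are USHCN state codes, not state names; per the State Code Table, 32 should be North Dakota and 1 Alabama):

# sanity check: warmest and coolest state trends
names(statetrends)[which.max(statetrends)]   # state code of the warmest trend
max(statetrends)                             # should be about 1.4 °C/century
names(statetrends)[which.min(statetrends)]   # state code of the coolest trend
min(statetrends)                             # should be about -0.3 °C/century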

184 Comments
April 16, 2010 2:57 pm

meklly,

So again you show the ineffectiveness of CO2. If the above quote is true that “All complex molecules emit in the IR…”, then O2 and N2, 99% of the atmosphere, are what will control the transmission of IR around the atmosphere, not something with so small a percentage as CO2. So CO2 is vindicated again.

The key word is complex. Complex molecules, consisting of many atoms (chlorophyll b, for example, has 136), absorb visible light and emit IR. Certain simple molecules, such as CO2 and H2O, absorb and emit IR because they have molecular bonds with the ability to vibrate.
O2 and N2, which make up 99% of the atmosphere, are unusual in that they do not absorb or re-emit in the low IR range (they can do so in the upper IR range, due to rotational energy; however, the energy there is so low, and the physics are such, that it is inconsequential).
This is because O2 and N2 each consist of two identical atoms bound together by a very strong, very stable covalent bond. Stretching that bond produces no change in the molecule’s dipole moment, so the vibration cannot couple to IR radiation — unlike H2O, which has a “V” shape in the bonds between the two hydrogen atoms and the central oxygen atom, or CO2, which normally has a perfectly linear shape but is induced to bend and vibrate.
Again, I would encourage you to follow this link to read about the vibration of gases, and this link in general to read about the physics behind the absorption and emission of infrared radiation by molecules. It’s not that complicated.
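The mode counting behind this is standard: a molecule of N atoms has 3N-5 vibrational modes if it is linear and 3N-6 if it is bent, and only the modes that change the molecule’s dipole moment can absorb IR. A quick illustration in R:

# vibrational modes: 3N-5 for linear molecules, 3N-6 for nonlinear
modes = function(natoms, linear) 3 * natoms - ifelse(linear, 5, 6)
modes(2, TRUE)    # N2, O2: 1 mode, IR-inactive (the stretch changes no dipole)
modes(3, TRUE)    # CO2: 4 modes; the bends and asymmetric stretch are IR-active
modes(3, FALSE)   # H2O: 3 modes, all IR-active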

April 16, 2010 2:59 pm

VicV,
I’m not sure what your point is. I explicitly said “but I’m not touching that here and now at all.” What part of that sentence is unclear, or untrue?

Urederra
April 16, 2010 3:04 pm

sphaerica (09:37:52) :
Thanks for the explanation.

Chuckles
April 16, 2010 3:34 pm

Allan M (13:14:02)
Anything that threatens the University of Southern North Dakota at Hoople, Professor Peter Schickele or the works of P.D.Q. Bach would certainly run into a great deal of resistance.
In fact, almost as much resistance as any attempt to perform any of those works usually generates?

Wren
April 16, 2010 3:37 pm

The 48 States as a group represent only about 1.5 % of the earth’s surface, so obviously aren’t a big part of global warming. But do State trends tell us anything about local trends? Perhaps they do for people who live in small States like Rhode Island. But for other States, particularly the large ones like Texas and California, the trends don’t tell residents much about where they live.
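Wren’s 1.5% figure is about right, as a back-of-the-envelope check shows (both areas rounded):

# contiguous US ~ 8.1 million km^2; Earth's surface ~ 510 million km^2
8.1e6 / 5.1e8 * 100   # roughly 1.6 percent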

April 16, 2010 3:37 pm

sphaerica (08:34:02) :
If you add up a bunch of temperatures measured in different places, each with an error of ±1 °C, and divide by the number of readings, the average still has an error of ±1 °C.
This is a different situation from measuring a physical variable with a noisy sensor, where you can reduce the error by averaging over time and taking lots of readings.
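Whether the ±1 °C survives the averaging depends on whether the per-site errors are shared or independent; here is a minimal simulation sketch in R (the site count and error sizes are invented for illustration):

set.seed(42)
n = 1000                       # number of sites (invented)
truth = rep(15, n)             # true temperature at every site, °C

# independent per-site errors, uniform on +/- 1 °C, partly cancel:
err = runif(n, -1, 1)
mean(truth + err) - 15         # typically a few hundredths of a degree

# a shared systematic error (the same bias at every site, e.g. a
# common calibration problem) is never reduced by averaging:
mean(truth + 0.7) - 15         # exactly 0.7, however large n is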

George E. Smith
April 16, 2010 4:13 pm

“”” Vincent (04:24:31) :
urederra,
“Here is my line of thought. If CO2 were the main culprit of warming, I would expect to see it acting mainly during summer in the northern hemisphere, when there is more light and therefore more uv-visible-ir.”
Most people say the opposite. The problem with your theory is that the forcing due to GHG is constant, at about 1.5 W/m^2. During the summer, insolation is very high, maybe 500 W/m^2, so the proportion added by CO2 is very small. In winter, however, insolation may be only 100 W/m^2, and the addition made by CO2 is proportionally greater. Even more, during winter, absolute humidity is lower, so the proportion of CO2/H2O is greater than during summer. “””
Well, there is also a problem with your theory. The 500 W/m^2 solar insolation is 6000 K black body spectrum radiation, which travels deep into oceanic waters. The 1.5 W/m^2 (±50%) CO2 “forcing” is LWIR radiation, which is stopped in the top ten microns of any water body and leads promptly to evaporation.
So you cannot compare the two on a W/m^2 “forcing” basis, because the physical response to each is entirely different.
And that seems to be what classical climate scientists do not seem to understand; they are comparing strawberries and coconuts.
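For reference, the proportions Vincent is invoking are simply these (his round numbers, not measurements):

# GHG forcing as a share of insolation, per the comment's figures
1.5 / 500 * 100   # summer: about 0.3 % of insolation
1.5 / 100 * 100   # winter: about 1.5 % of insolation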

Anu
April 16, 2010 4:37 pm

Hypothermia is defined as the body’s temperature dropping below that required for normal metabolism and body functions: 95.0 °F
Normal body temperature is in a range of 98–100 °F
Now, I got to thinking: if 3 or 5 °F “matter” for the entire body, they certainly must matter for parts of the body.
Trying for the slam dunk, we got Timmy to prove that even with his hand at an average temperature of 50 °F (cooled in ice), this didn’t really “matter”:
http://www.upaa.org/winners_mic/2005_09/news3.jpg
Timmy quit the study.
Next, we asked Makiko to keep her hand in 60 °F water for 2 hours.
Later, using an infrared scanner, her hand measured an average of 91 °F in temperature. Now, if 4 °F “matter”, clearly we should have seen some problems with 8 °F, double the magic “cooling”. Somehow, Makiko managed to avoid shivering, mental confusion and hepatic dysfunction, proving that hypothermia is not a problem.
My conclusion? Move along, folks, nothing to see here with this hypothermia alarmism …

LarryOldtimer
April 16, 2010 4:42 pm

Smokey: “Mercury thermometers are relatively easy to calibrate. They are accurate to well within 1°C if done properly.”
My Chemistry professor back in 1953 didn’t agree with your statement. What you got from calibrating a mercury thermometer was knowing, for an individual thermometer, where the top of the mercury column was at the boiling point and the freezing point of water. He said that midrange readings were uncertain due to variances in, among other things, the cross-sectional area of the mercury tube itself, which was known not to be constant for any thermometer. The actual volume of mercury in the bulbs of thermometers varied, also making midrange readings uncertain as to accuracy.
Errors can also be introduced if the reader’s eye is not at a right angle to the thermometer at the top of the mercury column due to parallax.
If there was a bit of a “pinch” (smaller cross-sectional area) in the tube below the top of the mercury column, the temperature reading would be higher than the actual temperature, and if there was a bit of a “wow” (larger cross-sectional area), the temperature indicated would be lower than the actual temperature. Mercury thermometers aren’t perfect, nor are they all the same.
We students were also cautioned not to assume that the “pinches” and “wows” would “average out”.
Even the $10 thermometers back then, however well calibrated, had these inaccuracies regarding mid-range readings.
Also, as to what the real margin of error was, there was no way to determine it, as there was no temperature standard against which midrange readings could be checked.
And the above is with laboratory conditions. In the world outside of the laboratory, in the actual measurement of temperatures, lie a good many problems.
Another thing I was taught was that the result of averaging a set of numbers can be no more precise than the least precise datum in the set. That is, if the least precise was, say, 50.5, then the result could not be more precise than one figure after the decimal point. Averaging a set of numbers with one figure after the decimal point and then producing an average with two figures after the decimal point is most improper, and implies a level of accuracy that simply isn’t there.
To average a set of numbers, the most precise of which has one significant figure after the decimal point and a margin of error of ±1, and then to show a result with two figures after the decimal point, is sheer folly, and most improper.
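A quick illustration of that rule in R (the readings are invented, quoted to one decimal place):

x = c(50.5, 49.3, 51.1, 50.2)   # data known to one decimal place
mean(x)                          # R happily prints 50.275 ...
round(mean(x), 1)                # ... but the rule says report no more than 50.3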

jaymam
April 16, 2010 6:33 pm

The heat release from nuclear power generators also has to go somewhere into the environment. In NZ we have no nuclear power, and mostly use hydro power with a bit of geothermal and wind power. Also a bit of coal since we have enough coal to last thousands of years.

Methow Ken
April 16, 2010 7:39 pm

ND warmed the most??…
“Cold comfort,” had I known it in mid-winter 2009, when it got down to 42.7 degrees below zero F one morning.
Of course, I have to admit it wasn’t as cold this winter here in northern ND:
The low temp for 2010 was “only” 33 degrees below zero F, on New Year’s Day; not worth mentioning, just another routine winter day. Still:
It would be fine by me if it would only get down to, say, 25 or so degrees below zero F for the lowest winter temp. I could accept that much “global warming” during the ND winter; NO problem.

April 16, 2010 7:42 pm

LarryOldtimer (16:42:41),
You are exactly right in your analysis. I especially agree with your comments regarding the averaging of data points.
I worked in a metrology lab for thirty years, and one of my jobs was calibrating mercury thermometers.
There are two levels of calibration traceable to N.I.S.T. [formerly the National Bureau of Standards]. The most accurate calibrations are those using a physical standard as a reference, such as the triple point of water.
Secondary calibration is done by calibrating an instrument to another instrument, which in turn was calibrated to a primary physical standard, and remains within its calibration interval.
Normally the instrument used as a secondary standard for calibration is required to have a 4:1 ratio of accuracy over the instrument being calibrated. Secondary calibrations are much less expensive and time consuming, and are the industry norm. Primary calibrations are done only when the best possible accuracy is required.
Calibration labs are aware of the concerns you identified regarding the linearity issue, which applies to all calibrations, not just mercury thermometers. Glass slump over time is also an issue [but a relatively minor one]. There is much more to calibration than first meets the eye.
We found that even old mercury thermometers from the 1930s and 1940s retained their accuracy for the temperatures they were designed to measure. In fact, a good mercury thermometer retains its calibration much longer than modern electronic thermometers. We would often do a quick ‘n’ dirty check verifying an electronic thermometer using a known accurate mercury thermometer.
I would have to research the question, but I think by the 1890s the construction of precision mercury thermometers was very good. Methods of measuring column diameter were quite advanced by the 1890s, due to the requirement of accurate, linear bore diameters in rifles to within thousandths of an inch.
Even the old mercury thermometers we calibrated retained the required accuracy for their particular use. Unlike numerous electronic thermometers, I can’t recall a mercury thermometer ever failing routine secondary calibration.
That is why I gave an example of a tolerance of ±1°C, which is quite generous. Most merc thermometers have tighter tolerances. I still have an old mercury thermometer with a magnifying lens and double reticle that allows you to line up at an exact right angle and view the temperature in 0.1°F increments. Before I retired I calibrated it, and it was right on throughout its range [-8° to +89°F].
Aside from the question of accuracy, if the same thermometer is used correctly for X number of years, it will show if there is a trend even if it’s an alcohol thermometer and not very accurate. That can’t be said for electronic thermometers, which rely on PRTs, thermistors, or thermocouples that have an output in microvolts, and which drift over time due to thermocouple degradation, hysteresis, and/or changes in the voltmeter-based temperature readout if it is not regularly calibrated. It doesn’t take much to alter the output of a thermocouple by a few tens of microvolts, or drift in the voltmeter, which can translate into an out of tolerance temperature reading, or an erroneous in tolerance reading.
For long term reliability in measurements like Surface Stations, I would prefer a good mercury thermometer.

LightRain
April 16, 2010 8:07 pm

Peter Pond (03:08:11) :
Where I live (in SE Australia) there were two occasions this recent summer where there was a 20C difference in the max temps on succeeding days.
Heck, that’s nothing: in Calgary we can have a change of 20°C in the winter, and in either direction!