Should We Be Worried?

Guest Post by Willis Eschenbach

I chanced to plot up the lower tropospheric temperatures by broad latitude zones today. This is based on the data from the satellite microwave sounding unit (MSU), as analyzed by the good folks at the University of Alabama at Huntsville. Here are the results, divided into tropical, extratropical, and polar. I’ve divided them at the Arctic and Antarctic Circles at 67° North and South, and at the Tropics of Capricorn and Cancer at 23° N & S.


Figure 1. Satellite-based microwave sounding unit temperatures (red line) from the University of Alabama Huntsville. Blue line shows a loess smooth, span=0.4. Data from KNMI (NetCDF file, 17 MB)
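The smoother in Figure 1 was done with R's loess (span=0.4). For readers who want to see what that actually computes, here is a minimal pure-numpy sketch of degree-1 loess with tricube weights. The data below are illustrative only, not the UAH series:

```python
import numpy as np

def loess_smooth(x, y, span=0.4):
    """Minimal locally weighted linear regression (loess, degree 1).

    For each point, fit a weighted straight line to the nearest
    `span` fraction of the data, using tricube weights."""
    n = len(x)
    k = max(2, int(np.ceil(span * n)))   # points in each local window
    out = np.empty(n)
    for i in range(n):
        d = np.abs(x - x[i])
        idx = np.argsort(d)[:k]          # k nearest neighbours
        h = d[idx].max()                 # local bandwidth
        w = (1 - (d[idx] / h) ** 3) ** 3 # tricube weights, 0 at the edge
        # weighted least-squares line through the window
        A = np.vstack([np.ones(k), x[idx]]).T
        W = np.diag(w)
        beta = np.linalg.solve(A.T @ W @ A, A.T @ W @ y[idx])
        out[i] = beta[0] + beta[1] * x[i]
    return out
```

R's loess adds refinements (degree 2 by default, robustness iterations), but this captures the core of what the blue line in the figure is doing.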

So … is this something to worry about?

Well, let’s take a look. To start with, the tropics have no trend, that’s 40% of the planet. So all you folks who have been forecasting doom and gloom for the billions of poor people in the tropics? Sorry … no apparent threat there in the slightest. Well, actually there is a threat, which is the threat of increased energy prices from the futile war on carbon—rising energy prices hit the poor the hardest. But I digress …

What else. Southern Extratropics? No trend. South of the Antarctic Circle? No trend, it cooled slightly then warmed slightly back to where it started.

So that’s 70% of the planet with no appreciable temperature trend over the last third of a century.

What else. Northern Extratropics? A barely visible trend, and no trend since 2000.

And that means that 96% of the planet is basically going nowhere …

Now, that leaves the 4% of the planet north of the Arctic Circle. It cooled slightly over the first decade and a half. Then it warmed for a decade, and it has stayed even for a decade …
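Those zone percentages check out: on a sphere, the surface area between two latitudes is proportional to the difference of the sines of those latitudes. A quick sketch (assuming a spherical Earth):

```python
import numpy as np

def band_fraction(lat_lo, lat_hi):
    """Fraction of a sphere's surface between two latitudes (degrees).

    Area between latitudes is proportional to the difference of
    the sines of the bounding latitudes."""
    return (np.sin(np.radians(lat_hi)) - np.sin(np.radians(lat_lo))) / 2.0

tropics      = band_fraction(-23, 23)      # ~39% of the globe
extratropics = 2 * band_fraction(23, 67)   # both hemispheres, ~53%
polar_caps   = 2 * band_fraction(67, 90)   # both caps, ~8% (~4% each)
```

Tropics plus the southern extratropics plus the Antarctic cap comes to roughly 70%, and everything except the Arctic cap comes to roughly 96%, matching the figures in the text.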

My conclusion? I don’t see anything at all that is worrisome there. To me the surprising thing once again is the amazing stability of the planet’s temperature. A third of a century, and the temperature of the tropics hasn’t budged even the width of a hairline. That is an extremely stable system.

I explain that as being the result of the thermoregulatory effect of emergent climate phenomena … you have a better explanation?

My best regards to everyone,

w.

PLEASE! If you disagree with what I or anyone says, QUOTE THE WORDS that you disagree with, and say why you disagree with them. That way we can understand each other. Vague statements and handwaving opinions are not appreciated.

DATA: All data and R code as used are here in a zip file.

January 29, 2014 6:59 pm

george e. smith says to RichardLH:
“So if the computer models are not ‘wildly inaccurate’ as dbstealey asserts, why are there 13 of them (or is it 17), and no two of them agree with each other; let alone agree with the planet earth?”
Thank you, george. Once more: not one of the multi-$million [or is it multi-$billion?] GCMs was able to predict the biggest climate event of the past twenty years: the 17-year-long halt to global warming. They were all wrong.
I understand that models have their uses. But they also have their limitations.

daddylonglegs
January 29, 2014 7:02 pm

MattS
Thanks.

ntesdorf
January 29, 2014 8:22 pm

It looks to me, from those graphs, as if the Great Global Thermostat is still in good working order. It is certainly in far better working order than all of the CAGW computer models, anyway.

January 29, 2014 8:47 pm

JBJ,
Willis didn’t ‘guess’, he was pointing out his prior comment:
“For example, mass is an extensive property.”
Giving an example is not making a guess.

January 29, 2014 9:33 pm

It doesn’t say anywhere in the article or on the charts whether this is in degrees F, C, or K, but that doesn’t much matter.
If you add up the 5 temperature changes from the 5 graphs, you get +2 degrees. Divide that by 5 for the 5 graphs (just to be arbitrary) and you get +0.4 degrees for the 35 years observed. That figures out to be ~ 0.114 degrees per decade, and whether that is F, C, or K , it’s not enough to worry about.
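The arithmetic above checks out, though note the unweighted mean over five zones ignores their very different areas (the polar zones are far smaller than the tropics). A quick check using the commenter's own numbers:

```python
# The five per-zone changes read off the graphs summed to +2 degrees.
total_change = 2.0
mean_change = total_change / 5        # unweighted mean over 5 zones: 0.4 degrees
decades = 35 / 10                     # ~35 years of satellite record
per_decade = mean_change / decades    # ~0.114 degrees per decade
```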

Carbomontanus
January 29, 2014 11:03 pm

Marshall
Allow me to interfere, a bit late, but….
The fear about a 2 deg temperature rise…. PHOBIA it is called. What do we and people fear?
The planet Mars, for iron and war, has got 2 satellites, PHOBOS & DEIMOS, meaning war is followed by fear and horror.
Just think of that. That ought to be efficient and fruitful astrology.
But we ought to understand it and make up our minds in a realistic way. That is a very traditional and proper medicine against both fear and horror. This is psychology and philosophy and quite important.
Yesterday I met an American at the climate-surrealist-and-denial meeting in Oslo. He was quite reasonable, and most probably out for the same errands, to try and spy on the Norwegians in their taverns and on their peculiar behaviours and opinions.
We also discussed temperatures and degrees and scales, and I told him what I have found out about it.
First Ole Rømer, who probably invented the long glass-and-mercury thermometer, and calibrated it to zero in salted snow, and to +120 in boiling water, and interpolated linearly between those, taking it for granted that both his arbitrary tube and the expansion coefficient of mercury are linear.
A certain Fahrenheit did actually stand there next by in the lab, learning and spying on Ole Rømer. Then Fahrenheit went home and did the same, but he calibrated zero at sharp, salted snow, and hundred in freshly warm piss. Never forget that. And both references are… shall we say dubious, or obscure, or shall we say inaccurate?
But nevermind,…
The great astronomer Celsius, upstairs at the University of Uppsala at the royal observatory there, made a glass-and-mercury thermometer and calibrated 100 in fresh and pure wet snow, and zero in boiling water. And made a linear scale between, called “centigrades”, because CENTUM = 100 in Latin. Celsius also carefully noticed the barometric pressures.
And gave that thermometer to the great Swedish systematic Pythagorean and scientist Carl von Linnaeus, who needed it in his heated greenhouse for the welfare of his long series of potted plants in that long greenhouse.
Carl von Linnaeus found that up near the stove he had boiling water, and down at the other end of the greenhouse it was freezing in the winters of old Uppsala, so he simply took and turned that “centigrade” scale from Celsius around. Thus what we work with and tend to believe in is actually not Celsius’s scale but that of the great Prof. Dr. Carl von Linnaeus himself.
But, what we should rather keep an eye on in the spirit of Linnaeus and Celsius is pure wet snow and pure boiling water, and care to look also at the barometer like Celsius did. And to all those plants, like Linnaeus did. Then we are better calibrated.
The Kelvin scale is rather invented by Robert Boyle, who had the mercury U-tube barometer right from… Torricelli… a very famous and clever pupil of… Galileo Galilei himself. But Boyle examined what happens and how it looks if there is some air above the mercury at the closed end of the U-tube. And found that pressure and volume are “inversely proportional”, and proportional to an obscure magnitude T for temperature. Obscure… but quite real…
And there we have it. When the centigrades of Linnaeus’s definition are placed alongside the Torricelli-Boyle gas thermometer, we get the KELVIN scale.
Beat that!
I find that scale more and more practical. It rules for natural gases, and rules in a very practical way for heat irradiation and for visible heat irradiation colours that I have to judge quite carefully by eye and by hand measurement, and by the famous Boltzmann T^4 rule.
For heating with a stove, for glasswork and for pottery, and for casting and welding of metals, and ruling further for a candle, for incandescent lamps, and for the sun. And for the inner solar system.
Thus we are calibrated.
That reasonable American could agree indeed. Better take notice of reliable signals of nature and rather do respect and recommend that, when we discuss temperatures. And propagate that orientation and way of living and of judging and of seeing it.
It probably is a quite proper medicine against both PHOBOS and DEIMOS. Thus also proper astrology and royal diplomacy of MICROCOSMOS.
The bathing waters of southern California, for instance, do come out very close to 300 K quite exactly, thus also very practical as a natural reference. And the very famous Big Bang radiation that is all around does keep almost exactly 3 K, thus also easy to remember.
Working in the spirit of the old masters you see is better than believing in the experts.

george e. smith
January 29, 2014 11:24 pm

“”””””……RichardLH says:
January 29, 2014 at 5:09 pm
george e. smith says:
January 29, 2014 at 4:13 pm
“What if you re-divide your earth into five EQUAL AREA zones, instead of the unequal areas you used. ”
Because the heating input profile to those areas would be wrong?
You do know why the lines on the globe are where they are don’t you? The bit about the sun being directly overhead at some point in the year and the other about it not being seen at all some or visible all of the time as well?…..”””””
Help me out here Richard LH.
Willis and I were both perplexed by the zonal signal difference. Willis opined that it might relate to his chosen polar areas, which are much smaller than his tropical zone.
So I asked Willis; what if he makes the areas equal; thinking he might take the data and do that.
And then you respond thusly: “”..Because the heating input profile to those areas would be wrong?..””
What on earth does that have to do with what I suggested to Willis ??
Also, that data is about anomalies, not about temperatures, so what the heating input profile is, is quite immaterial. That is the whole idea of the anomaly concept.
And I submit that the heating input profile to ANY area, is precisely what it is supposed to be.

george e. smith
January 29, 2014 11:43 pm

“””””…..No. …..”””””
So says RACookPE1978.
Sheesh!! I ask Willis what if he does a simple (different) analysis (if he cares to).
And folks I didn’t even ask say “no”.
Well, I asked Willis, not RichardLH nor RACookPE1978. So why don’t the two of you take a long walk on a short pier.
If Willis isn’t inclined to do it to find out what happens; that’s fine with me; I figured he might be curious enough to try it. But I do understand he has his own things to do.
But for the life of me, I can’t imagine why anyone else would object to my asking.

anthropic
January 29, 2014 11:50 pm

3×2 says:
January 29, 2014 at 12:05 pm
harrydhuffman (@harrydhuffman) says:
January 29, 2014 at 5:58 am

“Emergent phenomenon” is an argument from incompetent, third-rate thinkers like Richard Dawkins, determined to push Darwinian, or undirected, evolution upon students of science, despite its by now obvious failings; back in the 1980′s, it was called “order out of chaos” […]
HDH, you are so full of it. Willis has suggested and ,over time, provided evidence for, a ‘control mechanism’ that accounts for pretty much everything we have seen so far in terms of GSTA. Change the atmosphere and watch the ‘emergent phenomena’ shift 15 minutes ‘earlier’. He makes a good case.
You, OTOH, are the equivalent of “witch craft is responsible”. You are exactly the kind of idiot that ‘warmists’, or whatever they are called these days, point to as being representative of ‘climate deniers’. You, and you are not alone, are a f*cki*g shambles that I’m ashamed to post on the same thread as.
As we might sing at an English football match …
Are you AlecM? Are you AlecM? Are you ‘dogs nose’ in disguise?
Seriously. You think Darwin was wrong? Hansen knows nothing about Venus? Jesus on a fxcking Dinosaur HDH… Will you ever accept the fact that we need people like you like we need ‘The Environmental Lobby’.
You are an embarrassment. Were it my blog then you and the ‘Dragon Slayers’ would have been out of here a long time ago. The only problem with this site is that Anthony is way too polite. He really doesn’t want to ‘moderate’ like ‘realclimate’. Me, well, you would be out of here as fast as every other idiot troll.
Now 3×2, I would agree that HDH’s comment came out of left field. Nor do I buy the notion that emergent phenomena are necessarily nonsense.
But before you invest yourself too much into protecting Darwin, you should know that much of what he theorized, from the “tree of life” to the simplicity of turning chemicals into life, does not match the evidence. The lack of transitional forms in the fossil record prior to the Cambrian explosion has not, contrary to his expectations, been filled in by further search. Even the modern neo-Darwinian synthesis using modern genetics is coming under increasing criticism from biologists because the mechanisms of mutation and drift fail to account for the innovations clearly seen in the fossil record.
For example, a 2011 paper in the journal Biological Theory stated, “Darwinism in its current scientific incarnation has pretty much reached the end of its rope.” In 2009, Eugene Koonin of the National Center for Biotechnology Information stated in Trends in Genetics that there are major problems in core neo-Darwinian tenets, such as the “traditional concept of the tree of life” and the view that “natural selection is the main driving force of evolution.” Said Koonin, “the modern synthesis has crumbled, apparently, beyond repair” and “all major tenets of the modern synthesis have been, if not outright overturned, replaced by a new and incomparably more complex vision of the key aspects of evolution.” Koonin concludes, “not to mince words, the modern synthesis is gone.”
I could go on with the problems of achieving the functional folding proteins by chance processes to accounting for the complex specified information necessary for biological innovation, but that’s enough.
Agreed, this was the wrong time & place for HDH to make some of his remarks. But if criticism of Darwin shows a person is crazy, we’re gonna have to send a lot of biologists to the loony bin.

george e. smith
January 29, 2014 11:50 pm

As for the “small number of samples” someone suggested as a cause of the signal amplitude.
Hansen says anomalies are correlated out to 1,000 km, so one sensor every 2,000 km is all you need.
So you only need one sensor in each of Willis’s polar zones. That’s plenty of samples; according to Hansen.
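As a back-of-envelope on that claimed 1,000 km correlation length: if each sensor "covered" a disc of 1,000 km radius, only a couple of hundred sensors would cover the whole Earth. A rough sketch (ignoring overlap and packing, which a real network would need):

```python
import math

R_EARTH_KM = 6371.0
surface_km2 = 4 * math.pi * R_EARTH_KM ** 2   # ~5.1e8 km^2 of planet
disc_km2 = math.pi * 1000.0 ** 2              # disc of 1000 km radius
n_sensors = surface_km2 / disc_km2            # ~162, ignoring overlap/packing
```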

Reply to  george e. smith
January 30, 2014 3:33 am

Hansen is wrong. The difference is the averaging of large collections of numbers vs. small ones, at least for numbers that vary, in this case from weather. The larger the collection, the smoother the average is. I saw this exact thing when working with the NCDC gsod data set, and the Arctic and Antarctic were the main places it shows up. It also shows up when you process “smaller” areas; both Africa and Australia in some years show the same thing, and then when you look at the sample count, you find only a small number of samples, and in many cases these stations sample a fraction of the year.
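The collection-size point is the standard statistical one: the scatter of an average shrinks roughly as one over the square root of the sample count. It is easy to demonstrate with synthetic "weather" noise; the numbers below are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(42)
SIGMA = 5.0   # assumed day-to-day "weather" scatter, degrees

def scatter_of_mean(n_stations, n_trials=4000):
    """Std-dev across trials of the average of n_stations noisy readings."""
    means = rng.normal(0.0, SIGMA, size=(n_trials, n_stations)).mean(axis=1)
    return means.std()

few  = scatter_of_mean(5)     # a handful of stations: noisy average
many = scatter_of_mean(500)   # hundreds of stations: much smoother
```

With 5 stations the averaged series jumps around by a couple of degrees; with 500 it is ten times smoother, which is exactly the polar-vs-global contrast described above.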

richardscourtney
January 29, 2014 11:51 pm

timetochooseagain:
I am replying to your offensive twaddle at January 29, 2014 at 4:38 pm.
You blatantly misrepresented my words,
You presented red herrings.
You made statements which you have yourself refuted in previous posts in this thread.
And you pretended to not understand anything you did not agree with.
Then you accuse ME of sophistry!!!
You are an anonymous, dishonest and disruptive time-waster. I will have no more to do with you.
Richard

RichardLH
January 30, 2014 1:36 am

george e. smith says:
January 29, 2014 at 11:24 pm
“And I submit that the heating input profile to ANY area, is precisely what it is supposed to be.”
Yes of course. The input curve function for the Polar regions is exactly the same as for the Tropics, just scaled a little.
By the way, you have worked out that (Min + Max)/2 only holds true for half-wave-rectified, equal-time sine waves, haven’t you? Really bad maths outside of that case. Over or under, not right. Needs a different divisor for all the other cases.
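The point can be illustrated numerically: the midrange (Min + Max)/2 equals the true mean for a pure sine, but not for an asymmetric cycle. A synthetic sketch (the waveforms are made up for illustration, not real diurnal data):

```python
import numpy as np

t = np.linspace(0.0, 1.0, 10000, endpoint=False)   # one full cycle

# Pure sine: midrange (min+max)/2 equals the true mean exactly
sine = 10 * np.sin(2 * np.pi * t)
midrange_sine = (sine.min() + sine.max()) / 2      # 0, same as the mean

# Asymmetric cycle (sharp peak, deep trough): midrange is badly biased
skewed = 10 * np.sin(2 * np.pi * t) + 4 * np.cos(4 * np.pi * t)
midrange_skewed = (skewed.min() + skewed.max()) / 2   # ~ -3.44
mean_skewed = skewed.mean()                           # still ~0
```

For the skewed waveform the true mean is zero but the midrange reads about -3.4, which is the kind of systematic error being described.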

RichardLH
January 30, 2014 1:47 am

george e. smith says:
January 29, 2014 at 11:43 pm
“Sheesh !! I ask Willis what if he does a simple (different) analysis (if he cares to.)
And folks I didn’t even ask say “no”..
Well I asked Willis; not Richard LH, nor RACookPE1978. So why don’t the two of you , take a long walk on a short pier.”
I prefer stepwise integrating functions myself. You know, averages over longer and longer timescales. Just like most RMS-based power circuits do all the time.

RichardLH
January 30, 2014 1:53 am

RichardLH says:
January 29, 2014 at 12:56 pm
[photobucket website giving “out of service” message. Mod]
Just verified it is still working for me. Can you please recheck this? I’ll post to snag.gy as well if required.

RichardLH
January 30, 2014 1:55 am

snag.gy as well if required. (sorry)

johnmarshall
January 30, 2014 3:36 am

@carbomanton (?)
I did not say I feared a 2C rise in temperature; it was the rise claimed by that arch-alarmist James Hansen.
2C is not even important when you consider diurnal changes, seasonal changes.
Consider this: take the coldest temperature measured in Antarctica, -80C, and the hottest from the Arabian Desert, +70C at the surface; the average of those two is -5C, not the +14C which alarmists claim is the optimum temperature. That max/min data could happen on the same day at the same time.
Arguing temperature is like arguing about how many angels could dance on the head of a pin. It is HEAT that is important and that vital metric is not measured.

Andy H
January 30, 2014 3:38 am

Hi Willis,
Stefan Mitich has a web site: http://globalwarmingdenier.wordpress.com/climate/
In this he details that the troposphere expands when heated and contracts when colder. A basic law of physics: things expand when warmer (apart from water). So when heated, atoms soar to the top of the troposphere, where they are cooled (the top of the troposphere is at -90C) and fall, and because this is happening every nanosecond there is a worldwide auto-regulation of overall temperature. Now I do not know enough on any subject to say that this is a better view than your own or a worse one, but I would love it if someone could explain to me why this simple mechanism is wrong.
Because of this rapid (immediate) expansion and contraction, resulting obviously in vertical and horizontal winds, there is a persistent phenomenon of distribution of heat from the surface to the stratosphere and cooling in the reverse direction; only local effects can be said to be weather, and this is why the troposphere around the Equator is some 5 km higher than around the poles. I like to learn, so please, someone, point out the absurdity of this hypothesis, or is it just a fact? At any rate, please take a look at the Mitich website. Either he is mad or the sanest man on Earth.

johnmarshall
January 30, 2014 3:44 am

@Mi Cro.
Agree in part. It would help if there were complete data coverage; then a more meaningful average MIGHT be arrived at. Africa has about 1200 temperature stations, perhaps correctly set and maintained, but the WMO say that Africa needs 12000 to cover that continent accurately. Antarctica has a single-figure number of stations; though more are claimed, most are impossible to find due to snow cover.
So do we have enough data to formulate an accurate state of climate now? NO.

Reply to  johnmarshall
January 30, 2014 4:38 am

There are a few years where my data set (gsod) has only one or two stations providing less than a full year of data; when averaged, it shows a large spike in the overall average. And even now, at least in gsod, I’d be very surprised if Africa has more than a thousand or so stations.

January 30, 2014 4:00 am

george e. smith says:
“How come, on the same scales, the north pole and the south pole have whacking great peak real signal amplitudes, and everywhere else has ho-hum real signals??”
For much of the year the temperatures are impacted by highly variable circulation rather than regular daily sunshine.

RichardLH
January 30, 2014 5:08 am

Mi Cro says:
January 30, 2014 at 4:38 am
“I’d be very surprised if Africa has more than a thousand or so stations.”
So why bother interpolating (guessing) the ‘field’ in between? Trust Nyquist and just track the changes in the sample points we have. If they are representative enough we will get a ‘true’ answer.
If not, then at least we won’t add any more ‘distortion’!
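A quick illustration of the Nyquist point: sample a signal too sparsely and you do not just lose detail, you get a spurious slow signal. Sampling a 24-hour cycle once every 25 hours makes the samples trace out a 600-hour oscillation exactly. Synthetic sketch:

```python
import numpy as np

sample_hours = 25.0 * np.arange(24)            # one reading every 25 hours
daily = np.sin(2 * np.pi * sample_hours / 24)  # a true 24-hour cycle, sampled

# The sparse samples exactly trace a 600-hour (25-day) oscillation instead:
alias = np.sin(2 * np.pi * sample_hours / 600)
```

The two sequences are identical, so nothing in the sampled record distinguishes a real daily cycle from a phantom 25-day one; that is the distortion under-sampling adds.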

Reply to  RichardLH
January 30, 2014 5:34 am

So why bother interpolating (guessing) the ‘field’ in between?

Somewhere along the way they thought they could/should. I started the work I did to look at two things: how much of yesterday’s temp increase was lost overnight, and what the station measurements say, not the model of surface temps that was mostly fabrication.
Both of these show no warming trend; some years are up, others are down. Richard, if you haven’t followed the link in my name, I have about 6 different blogs I did on this work, lots of charts.
Here’s the sample count by year for Africa
YEAR SAMPLE
1940
1941
1942
1943
1944
1945
1946 79
1947 718
1948 1414
1949 125
1950
1951
1952
1953 2569
1954 2762
1955 2944
1956 3095
1957 9235
1958 13914
1959
1960 2465
1961 3046
1962 3692
1963 3872
1964 4177
1965 3559
1966 3411
1967 4875
1968 14270
1969 730
1970 728
1971 365
1972
1973 31389
1974 34755
1975 36933
1976 38190
1977 40606
1978 40653
1979 44378
1980 41476
1981 45291
1982 29863
1983 40596
1984 41876
1985 40943
1986 42346
1987 40894
1988 42465
1989 39687
1990 41520
1991 40781
1992 39483
1993 42823
1994 44120
1995 45342
1996 46036
1997 44244
1998 40252
1999 41525
2000 44409
2001 45230
2002 46235
2003 46697
2004 47935
2005 49059
2006 45159
2007 46403
2008 47805
2009 46213
2010 54570
2011 52880
2012 53821
9999 1806928
9999 is the total number of samples used. GSoD has 262 stations in total for Africa, but samples are erratic; samples are erratic for all of the stations globally, again at least in this data set. But let me know I got a copy of the CRU data, and compared station lists; they were almost identical. Unfortunately CRU only had an average, and no counts.

Reply to  RichardLH
January 30, 2014 5:40 am

But let me know

I’m not sure where this came from?
Here’s the counts for Antarctic:
YEAR SAMPLE
1940
1941
1942
1943
1944
1945
1946
1947
1948
1949
1950 95
1951 251
1952 1
1953
1954
1955
1956 780
1957 2551
1958 2350
1959 1791
1960 1830
1961 1831
1962 1870
1963 1817
1964 1565
1965 1515
1966 1226
1967 1226
1968 1214
1969 1200
1970 978
1971 991
1972 929
1973 1183
1974 1135
1975 1203
1976 1082
1977 1475
1978 2283
1979 2867
1980 3216
1981 3258
1982 3114
1983 3757
1984 4218
1985 3935
1986 4182
1987 4381
1988 4538
1989 4074
1990 6730
1991 8687
1992 9477
1993 8560
1994 10026
1995 9927
1996 12708
1997 13947
1998 13375
1999 11936
2000 12644
2001 14423
2002 15006
2003 14461
2004 15196
2005 15955
2006 14725
2007 19014
2008 20645
2009 21312
2010 22379
2011 21676
2012 20181
9999 408902

Reply to  RichardLH
January 30, 2014 5:43 am

Oh, many of these stations don’t provide anywhere near a full year’s data; this also really tweaks a yearly average when compared to a set that is mostly stations with a full year’s samples.
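That partial-year effect is easy to see with an idealized station: average only the summer months of a seasonal cycle and the "annual mean" comes out far too warm. The numbers below are purely illustrative:

```python
import numpy as np

months = np.arange(12)
# Idealized mid-latitude station: 10 C annual mean, 15 C seasonal swing
monthly = 10 + 15 * np.cos(2 * np.pi * (months - 6) / 12)   # peaks in month 6

true_annual = monthly.mean()        # 10.0 when all 12 months report
summer_only = monthly[5:8].mean()   # only Jun-Aug reported: ~23.7
```

A station like this reporting only three summer months biases its "annual" value by well over 10 degrees, which is exactly the spike mechanism described for the sparse early years.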

Greg
January 30, 2014 5:42 am

G.E. Smith: “Hansen says anomalies are correlated out to 1,000 km, so one sensor every 2,000 km, is all you need.”
which of course is total BS when you step from land into a region that is basically an ice bucket, where huge amounts of energy are going into and out of phase changes. That means there can be substantial heat flux with minimal change in surface temperature.
There is probably no clearer case where you CANNOT assume temperature is even an approximate indication of the extensive property energy, which would give some credence to the idea of averaging, correlation or projection.
Projecting land temps out over sea-ice is just the latest of a long heritage of deliberate deception perpetrated by Hansen that goes back at least as far as the 1988 congressional hearing, where he took out the air-con the night before the hearing.
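The "ice bucket" point is quantitative: melting ice absorbs a large amount of heat while the temperature stays pinned at 0 C. Using standard handbook values for water:

```python
# Standard handbook values for water (approximate)
SPECIFIC_HEAT_WATER = 4186.0    # J per kg per kelvin
LATENT_HEAT_FUSION = 334000.0   # J per kg, melting ice at 0 C

# Heat to melt 1 kg of ice vs. heat to warm 1 kg of water by 1 C
ratio = LATENT_HEAT_FUSION / SPECIFIC_HEAT_WATER   # ~80
```

Melting a kilogram of ice soaks up roughly as much energy as warming that kilogram of water by 80 degrees, with zero change in the thermometer reading, which is why surface temperature is such a poor proxy for energy flux in an ice-dominated region.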

Greg
January 30, 2014 5:49 am

G.E. Smith::”So I asked Willis; what if he makes the areas equal; thinking he might take the data and do that.”
He also posted the code to retrieve and plot the data. I imagine it would not be too hard to tweak a couple of constants in his code to produce what you want to see.

RichardLH
January 30, 2014 5:57 am

Mi Cro says:
January 30, 2014 at 5:34 am
“Somewhere along the way they thought they could/should.”
And ignored Nyquist into the bargain (even though they often quote him).
I suppose this comes from wanting to compare it to the model output, which is an area function. So they need an area function to compare it to. Comes from all those nice contour plots that weather men do as well.
It is just that all it does is add some (random?) weighting factors to the points they have, which just muddies the picture.
Sort of like a jpg rather than a bmp/raw, with a variable and estimated compression factor across the image.

Reply to  RichardLH
January 30, 2014 6:07 am

It is just that all it does is add some (random?) weighting factors

It adds the “modeler’s” bias on what he thinks the interpolated temperature should be. What I produce is based exclusively on measurements, but it’s not a “global” average; as you mentioned, it would allow them to compare temps. If temps were spatially linear, maybe it would be valid, but temps are organized by weather systems; they have fronts. Every one of the temperature series does this.
And when you look at the measurements, there is no strong Co2 signal.

RichardLH
January 30, 2014 6:02 am

See
http://wattsupwiththat.com/2014/01/29/important-study-on-temperature-adjustments-homogenization-can-lead-to-a-significant-overestimate-of-rising-trends-of-surface-air-temperature/#comment-1554490
for my thoughts on (Max + Min)/2 as opposed to true ‘continuously’ sampled RMS values for temperature, then stepwise integrated down to the periods required: hour, day, month, year, decade.
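As a sketch of the stepwise idea (simple cascaded means here, not true RMS, which would square before averaging), using made-up hourly data:

```python
import numpy as np

rng = np.random.default_rng(0)
hours = np.arange(30 * 24)   # 30 days of hourly readings
# Synthetic series: 15 C mean, 8 C diurnal swing, 1 C of noise
temps = 15 + 8 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 1, hours.size)

daily_means = temps.reshape(30, 24).mean(axis=1)   # step 1: hours -> days
monthly_mean = daily_means.mean()                  # step 2: days -> month
```

Each stage averages over one complete cycle of the next-faster variation, so the diurnal swing drops out cleanly at the daily step and the noise shrinks further at the monthly step.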

Reply to  RichardLH
January 30, 2014 6:15 am

A continuous measurement would be nice, but it doesn’t exist for past measurements, and everyone wants to see a series from as long ago as possible. The reality is that prior to about 1974, surface temp series are mostly made up; gsod is okay back to the 40s-50s, but there are lots of spikes due to being under-sampled.
I do a lot of stuff with Tmin/Tmax and when you get a decent sample the results look pretty clean.
I do get comments about TOB, but since I produce a difference for each station every day, as long as they do the same thing, you get a reasonable difference, and then random changes get swamped out by averaging large collections of numbers.

RichardLH
January 30, 2014 6:16 am

Mi Cro says:
January 30, 2014 at 6:07 am
“temps are organized by weather systems, they have fronts. ”
And they are mobile. Which is the thing I believe is missing in the way the weighting factors are calculated to date. They should vary hour by hour, day by day, to reflect the transition of warmer/colder air between the sampling points.
Sort of like using motion-compensated MPEG rather than fixed JPEG compression. (I do HATE compression algorithms; they do nasty things to images. And the current climate science interpolation is compression back in the very, very early days.)