Global Warming is a Pussy Cat

Guest post by Ira Glickstein

Thanks to WUWT readers who posted estimates of how much of the supposed 0.8ºC Global Warming since 1880 was due to Data Bias, Natural Cycles, and AGW (human-caused warming). I am happy with the results even though the average for AGW came out higher than my original estimate.

This is the fifth of my Tale of the Global Warming Tiger series, where I allocated the supposed 0.8ºC warming since 1880 to: (1) Data Bias (0.3ºC), (2) Natural Cycles (0.4ºC), and (3) Human-caused global warming – AGW (0.1ºC). Click Tiger’s Tale and Tail :^) to read the original story.

WUWT COMMENTERS SAY

As the above graphic indicates, WUWT Commenters who provided their own estimates generally agreed with my allocation, with the interesting exception of AGW, where the average is 0.18ºC, nearly double my original allocation of 0.1ºC. Natural Cycles averaged out at 0.33ºC, a bit lower than my original 0.4ºC. Data Bias averaged out at 0.28ºC, a bit lower than my original 0.3ºC. While this is not a scientific poll, it certainly shows a wide variety of Climate Science opinion is alive and well here at WUWT.

Far from being a Global Warming Tiger, mostly due to atmospheric CO2 from human burning of fossil fuels and land use, and on its way to 2ºC to 5ºC or more according to the IPCC, it appears we are actually dealing with a Global Warming Pussy Cat: warming since 1880 of around 0.5ºC to 0.6ºC, stabilizing despite the continued rise in CO2, much of it human-caused.

Some who responded put AGW as low as ZERO (while others put it as high as 0.7ºC), some put Natural Cycles as low as ZERO (while others put it as high as 0.55ºC), and some put Data Bias as low as ZERO (while others put it as high as 0.65ºC). At the end of this posting, I’ve tabulated your estimates, along with the names of those kind enough to provide them. THANKS!

When everything settles out over the coming decades, which I believe will be marked by stabilization of Global temperatures, and perhaps a bit of Global Cooling, I think your estimates will turn out to be more prescient than those of the official climate Team! One Commenter humorously posted “Jim Hansen’s” estimates as: AGW = +3.3ºC, Natural Cycles = –2.5ºC, and, of course, Data Bias = 0.0ºC.

IS ALL THE TEMPERATURE DATA USELESS – OR IS IT THE ANALYSIS?

When I discussed the controversy about the temperature data collected since 1880 with my PhD advisor (with whom I am still in regular contact) he reminded me that, given a large number of measurements by different observers, using a variety of thermometers, and taken at a variety of locations and times, the random errors would largely cancel each other out. Even systematic errors in given thermometers, which might be calibrated a bit high or low, and given observers, who might tend to round the numbers up or down, would largely cancel out. Indeed, he said, even long-term systematic bias would hardly show up in the temperature trends. Thus, he assured me, while any individual reading may or may not be accurate, the overall temperature trend would be quite robust, to a high level of precision.

Of course, he is correct from an academic point of view. As a brilliant analyst once humorously explained to me, once we ASSUME a perfectly smooth elephant with negligible mass, all sorts of wonderful circus tricks become possible!

Yes, errors may be categorized as:

  1. Perfectly Random (due to “noise” in the measurement process, and equally likely to be higher or lower than the truth) or,
  2. Perfectly Systematic (due to miscalibration of the measuring instrument, off by a constant amount, equally likely to be higher or lower than the truth), and assumed to be
  3. Perfectly Independent (not affected by any other measurement).

In the real world, however, these conditions seldom obtain, yet they are necessary assumptions for statistical analysis to operate correctly. When a scientific study concludes that its results are correct to within a given amount (say +/- 0.05ºC), at a given statistical confidence (say 95%), it is implicitly assuming the three conditions above are satisfied.
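A quick simulation illustrates the distinction my advisor was drawing. The sketch below is my own illustration, using made-up numbers rather than real station data: purely random, independent errors shrink as readings are averaged, while a shared systematic bias survives averaging untouched.

```python
import random

random.seed(42)  # reproducible illustration

TRUE_TEMP = 15.0   # hypothetical "true" temperature, ºC
N = 100_000        # a large number of independent readings

# Random, independent noise: equally likely high or low, so the mean
# error shrinks roughly as 1/sqrt(N).
readings = [TRUE_TEMP + random.gauss(0.0, 0.5) for _ in range(N)]
mean_error = sum(readings) / N - TRUE_TEMP
print(f"mean error, random noise only: {mean_error:+.4f} ºC")

# A shared bias (every thermometer reading 0.3 ºC high) violates the
# independence assumption and does NOT cancel, however many readings
# are averaged.
biased = [t + 0.3 for t in readings]
bias_error = sum(biased) / N - TRUE_TEMP
print(f"mean error, shared 0.3 ºC bias: {bias_error:+.4f} ºC")
```

With 100,000 readings the random-noise error comes out in the thousandths of a degree, while the shared bias passes straight through to the mean.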

In many cases, even if those assumptions are not perfectly true, they are close enough for the statistical results to be valid. How can we tell if Global Warming is one of those cases? Well, for a start, we can ask how ROBUST are the results. In other words, when they are analyzed by different people at different times, do they all come up with close to the same results? In the case of Global Warming data, as I have shown, even when the same exact data is analyzed by the same exact members of the official climate Team, the results vary by +/-0.2ºC or more, indicating that something is wrong with their basic assumptions.

Case #1

According to my posting, a graph of the US Annual Mean Temperature record from 1880 to 1998, published by NASA GISS in 1999, differs substantially from the record for the same years, published by them in 2011; see blink graphic below:

A commenter suggested that the 1999 chart did not look like what had been published by GISS in that year. Well, the 1999 chart I used came from a posting by Anthony who credited Zapruder.nl. An almost identical chart appeared at Climate Audit in 2007, linking to a Hansen 1999 News Release but that link now brings up a damaged image. However, I found an almost identical chart at GISS in a Hansen 1999 paper. The 2011 graphic I used was downloaded from GISS last month. The GISS re-analysis makes data after about 1960 warmer by up to 0.3ºC, while that prior to 1950 gets cooler by 0.1ºC.

Case #2

According to a GISS email, released under the Freedom of Information Act, records for US Annual Mean Temperature for 1934 and 1998 were re-analyzed seven times, reducing 1934’s 0.5ºC lead over 1998 to a virtual tie. [The email is embedded in the graphic below.] In the latest GISS accounting, done after the date of the email, 1998 pulled ahead by a bit. (Our tax dollars at work.)

There is a need to analyze and adjust the raw temperature data when stations move or are encroached by development or when other changes are made to the equipment and enclosures or in the times of observation, etc. It seems that most of those changes would tend to exaggerate the amount of warming, yet those charged with analyzing the data seem to think otherwise. The reported temperatures always seem to increase with each re-analysis. That suggests an agenda on the part of those entrusted with the analysis.
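To make the kind of adjustment involved concrete: the function below is my own minimal, hypothetical illustration of a step ("breakpoint") adjustment for a station move, not the actual GISS homogenization procedure, which is far more elaborate.

```python
def adjust_step(series, breakpoint, window=5):
    """Shift the pre-breakpoint segment so its level matches the
    post-break level, estimated from `window` readings on each side."""
    before = series[breakpoint - window:breakpoint]
    after = series[breakpoint:breakpoint + window]
    offset = sum(after) / len(after) - sum(before) / len(before)
    return [t + offset for t in series[:breakpoint]] + list(series[breakpoint:])

# A flat 15.0 ºC record where a hypothetical station move at index 10
# shifted subsequent readings down by 0.4 ºC:
raw = [15.0] * 10 + [14.6] * 10
adjusted = adjust_step(raw, breakpoint=10)
print(adjusted)  # the pre-move segment is shifted to the post-move level
```

Note that the choice of which segment to shift, and by how much, is a judgment call, which is exactly where questions about the direction of the adjustments arise.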

DOES SATELLITE TEMPERATURE DATA SOLVE THE PROBLEM?

Satellite temperature measurements have been available since the late 1960s, with good surface and tropospheric data available since late 1978. So it would appear that, at least from 1979 on, given a uniform global source of data, global temperature trends have been accurately reported. However, according to Wikipedia:

Satellites do not measure temperature. They measure radiances in various wavelength bands, which must then be mathematically inverted to obtain indirect inferences of temperature. The resulting temperature profiles depend on details of the methods that are used to obtain temperatures from radiances. As a result, different groups that have analyzed the satellite data have obtained different temperature trends. Among these groups are Remote Sensing Systems (RSS) and the University of Alabama in Huntsville (UAH). Furthermore the satellite series is not fully homogeneous – it is constructed from a series of satellites with similar but not identical instrumentation. The sensors deteriorate over time, and corrections are necessary for satellite drift in orbit. Particularly large differences between reconstructed temperature series occur at the few times when there is little temporal overlap between successive satellites, making intercalibration difficult. …

They go on to say “Satellites may also be used to retrieve surface temperatures in cloud-free conditions, generally via measurement of thermal infrared …”[Emphasis added] so it would appear that this type of instrumentation cannot reliably measure surface temperatures below clouds. That is problematic, since anyone who has been to a beach knows how cold it gets when a cloud happens to pass overhead and block the Sun!

Roy Spencer, PhD updates the UAH Global temperature datasets based on satellite data. He writes:

Since 1979, NOAA satellites have been carrying instruments which measure the natural microwave thermal emissions from oxygen in the atmosphere. The signals that these microwave radiometers measure at different microwave frequencies are directly proportional to the temperature of different, deep layers of the atmosphere. Every month, John Christy and I update global temperature datasets … that represent the piecing together of the temperature data from a total of eleven instruments flying on eleven different satellites over the years. As of early 2011, our most stable instrument for this monitoring is the Advanced Microwave Sounding Unit (AMSU-A) flying on NASA’s Aqua satellite and providing data since late 2002.

Contrary to some reports, the satellite measurements are not calibrated in any way with the global surface-based thermometer record of temperature. They instead use their own on-board precision redundant platinum resistance thermometers calibrated to a laboratory reference standard before launch.[Emphasis added]

The last sentence is somewhat reassuring, but it does not resolve my questions about how they compensate for cloud cover. It appears highly likely that Global temperatures have increased since 1880 by around 0.5ºC, which would most likely increase the water vapor content of the atmosphere and, over time, result in more clouds, on average. Thus, depending upon how the satellite temperature data analysis corrects for cloudiness, that data might report more warming than actually occurs. In any case, it appears that the satellite data will help improve the general reliability of global temperature data, assuming that the analysis is done properly, by experts who do not have any political agenda to “prove” or “disprove” Catastrophic AGW. Spencer appears to be a solid citizen in that respect.

CONCLUSIONS

In my postings (A-, B-, C-, D-) in this Tale of the Global Warming Tiger series, I asked for comments on my allocations to: (1) Data Bias 0.3ºC, (2) Natural Cycles 0.4ºC, and (3) AGW 0.1ºC. Quite a few readers were kind enough to comment, either expressing general agreement or offering their own estimates. Here is a tabulation of their interesting inputs. THANKS!

Anomaly due to — Human (AGW) ºC / Natural Cycles ºC / Data Bias ºC
A-
Bill Illis 0.225 0.275 0.300
Brian H 0.450
Edmh 0.100
Ágúst Bjarnason 0.250 0.250 0.100
B-
Ed Caryl 0.000 0.300 0.500
James Barker 0.000 0.480 0.320
JimF 0.100 0.500 0.200
richard verney 0.000 0.550 0.250
Scarface 0.000 0.150 0.650
Dave Springer 0.500 0.000 0.300
Mike Haseler 0.100 0.300 0.200
C-
Leonard Weinstein 0.300 0.400 0.100
TimC 0.100 0.400 0.300
Steve Reynolds 0.400 0.250 0.150
Eric Barnes 0.150 0.450 0.200
Lucy Skywalker 0.000 0.300 0.500
D-
Wayne 0.100 0.300 0.400
Eadler 0.700 0.200 0.000
Nylo 0.200 0.400 0.200
Minimum 0.000 0.000 0.000
Maximum 0.700 0.550 0.650
AVERAGE 0.179 0.331 0.275
Ira’s Estimates 0.100 0.400 0.300
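For readers who want to check the arithmetic, the summary rows can be re-derived from the complete entries. (Brian H and Edmh each supplied a single value whose column I will not guess at here, so these means differ slightly from the tabulated averages, which include them.)

```python
# Re-deriving the summary statistics from the 17 complete rows of the
# table above, in the order (AGW, Natural Cycles, Data Bias), ºC.
rows = [
    (0.225, 0.275, 0.300),  # Bill Illis
    (0.250, 0.250, 0.100),  # Agust Bjarnason
    (0.000, 0.300, 0.500),  # Ed Caryl
    (0.000, 0.480, 0.320),  # James Barker
    (0.100, 0.500, 0.200),  # JimF
    (0.000, 0.550, 0.250),  # richard verney
    (0.000, 0.150, 0.650),  # Scarface
    (0.500, 0.000, 0.300),  # Dave Springer
    (0.100, 0.300, 0.200),  # Mike Haseler
    (0.300, 0.400, 0.100),  # Leonard Weinstein
    (0.100, 0.400, 0.300),  # TimC
    (0.400, 0.250, 0.150),  # Steve Reynolds
    (0.150, 0.450, 0.200),  # Eric Barnes
    (0.000, 0.300, 0.500),  # Lucy Skywalker
    (0.100, 0.300, 0.400),  # Wayne
    (0.700, 0.200, 0.000),  # Eadler
    (0.200, 0.400, 0.200),  # Nylo
]

for col, name in enumerate(("AGW", "Natural Cycles", "Data Bias")):
    vals = [r[col] for r in rows]
    print(f"{name}: min {min(vals):.3f}  max {max(vals):.3f}  "
          f"mean {sum(vals) / len(vals):.3f}")
```

The minima and maxima match the tabulated rows exactly; the means land within a few thousandths of a degree of the posted averages.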
36 Comments
February 8, 2011 10:00 am

Marshall at 2:37 said:
“According to some physicists a global average temperature is meaningless. To get an accurate temperature of anything is must be at equilibrium. This planet’s climate is never at equilibrium.”
The Earth is in DYNAMIC equilibrium, not STATIC equilibrium. Many systems, including bobcat-rabbit populations, are in dynamic equilibrium. This is an equilibrium which varies with some predictability around a mean, and has feedback systems. But your point is well made.
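The bobcat-rabbit example can be made concrete with a toy Lotka-Volterra model. This is my own illustration, with arbitrary rate constants: the two populations never stop moving, yet they cycle around a fixed mean, which is what dynamic equilibrium means.

```python
# Toy Lotka-Volterra predator-prey model with arbitrary, illustrative
# parameters: the populations oscillate forever, yet their time-averages
# sit near the fixed point -- a dynamic, not static, equilibrium.
alpha, beta = 1.0, 0.1      # rabbit growth rate, predation rate
gamma, delta = 1.5, 0.075   # bobcat death rate, conversion efficiency

rabbits, bobcats = 10.0, 5.0
dt, steps = 0.001, 50_000   # simple Euler integration over 50 time units

history = []
for _ in range(steps):
    dr = (alpha * rabbits - beta * rabbits * bobcats) * dt
    dp = (delta * rabbits * bobcats - gamma * bobcats) * dt
    rabbits += dr
    bobcats += dp
    history.append((rabbits, bobcats))

# Analytic fixed point: rabbits* = gamma/delta = 20, bobcats* = alpha/beta = 10
mean_rabbits = sum(r for r, _ in history) / steps
mean_bobcats = sum(p for _, p in history) / steps
print(f"mean rabbits {mean_rabbits:.1f}, mean bobcats {mean_bobcats:.1f}")
```

Neither population ever sits at its mean, but averaged over the cycles both hover near the analytic fixed point — varying with some predictability around a mean, exactly as described above.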
From what I can see, the IPCC concept is that the Earth’s climate is, in fact, in a static equilibrium periodically distorted by unique events of limited, if perhaps large, time-frames. Something happens, and we have an ice age, then the something ends and we go back to the Garden of Eden days, defined of course by pre-industrial (without the LIA). When and the conditions of when that is, I’m unsure, but I suspect it is much like 1955, the time the Team was born or at university. God was in his Heaven then, and all was good with the world. (I’m supposed to put /sarc, I think.)
The idea that the planet is naturally in static equilibrium is one based on stable insolation, stable oceanic circulation patterns of a decade or less, and static albedo in all the important places. Minor variations on the theme are not supposed to add up one way or the other. Man has the only non-random, unidirectional impact on the environment, right now through his production of atmospheric CO2. With this assumption, all post-1940 climate change can be attributed to anthropogenic causes.
I’m not saying I agree with this, but I believe this is the basis of AGW. CAGW is just the exaggeration based on the effects of the inclusion of the precautionary principle in policy advice. The PP says that of any set of scenarios, the worst is to be used as a basis for action, as the cost of assuming wrongly on the lesser side is greater than the cost of assuming wrongly on the greater side. The cost, of course, is sociological, not economic, which is why dollars cannot be used to argue against excessive action: it’s about people, not wallets, after all.
But your distrust of the accuracy and precision of the temperature rise, and what it all “means” is mine, as well.
I’ve been looking at the annual and hemispheric variation in insolation and albedo based on simple geometry and general planetary data from satellites. I am disturbed that my easily-determined findings about orbital eccentricity and axial tilt mean that
1. Jan to July the insolation varies by 23.2 W/m2,
2. the albedo of the earth in January is 0.3408 and in July, 0.2469, causing a to-the-ground difference of 20.5 W/m2 (157.3 vs 177.8 W/m2) in heating power.
3. in Jan the southern hemisphere receives from the sun 185.9 W/m2 vs 155.2W/m2 in July (for the NH, 166.2 vs 173.7 W/m2),
4. over the course of the year cloud cover varies by 15 – 20%.
5. the differences in hemispheric albedo over the year OVER-counter the insolation variations such that the planet is 2.3K warmer at aphelion than at perihelion.
These variations show what a dynamic system we have. That we have basic stability at all is a result of rapid balancing forces. The math “mean” gives an illusion of stability in insolation (340.5 W/m2) and warming (236.4 W/m2) that does not exist through the year. How well do these balancing forces work? Since weather is regional but is the basis of climate (climate is the average sum of weather on a seasonal level), can we say that the insolation and albedo variations regionally do not over time have a non-random variation of, say, 0.5%? That 0.5% is 1.18 W/m2, what the IPCC more-than-says will kick up the planet’s temperature, and 138% of what Trenberth and Schmidt say is “missing” heat below 1000m in the oceans (0.85 W/m2).
All the above is background to say that the most important supposedly non-variant factor, the sun’s heat, is in fact quite variable. The math just makes calculations easier by calculating as if the energy heating the earth is constant to within negligible amounts. The new measurement is 1360.5 +/- 0.15 W/m2 (340.125 W/m2 on a whole-Earth, surface-area basis). That is the average, but it sits within a large annual range; is what impacts the heated portions of the earth really constant to +/- 0.15 W/m2?
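For what it's worth, item 1 of the numbered list above can be roughly reproduced from orbital eccentricity alone. This back-of-envelope sketch (my own, using the 1360.5 W/m2 figure cited in the comment) lands near the quoted January-to-July swing:

```python
# Rough check: the Earth-Sun distance varies with orbital eccentricity,
# so top-of-atmosphere insolation swings between perihelion (early
# January) and aphelion (early July) by the inverse-square law.
TSI = 1360.5   # solar "constant" at 1 AU, W/m2 (figure cited above)
e = 0.0167     # Earth's orbital eccentricity

s_perihelion = TSI / (1 - e) ** 2   # at closest approach
s_aphelion = TSI / (1 + e) ** 2     # at greatest distance

# Spread over the whole sphere (divide by 4), matching the
# 340.125 W/m2 whole-Earth convention used above:
swing = (s_perihelion - s_aphelion) / 4
print(f"annual insolation swing: {swing:.1f} W/m2")
```

The result is about 22.7 W/m2, in line with the 23.2 W/m2 January-to-July variation quoted in the comment (the small remainder comes from details of orbital geometry beyond this two-line approximation).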
We all recall wet, cold summers: the differences of a region, the differences of a time. It is common for Yellowknife in the Northwest Territories, to be warmer than Calgary, whether it is summer or winter (much to the chagrin of Calgarians). If regional differences make a significant impact on the global record – as Hansen, with his “better” Arctic record implicitly agrees – then the variation in impact is clearly greater than +/-0.15 W/m2 on a regional scale. Weather does impact global “climate” indicators.
The error bars for both accuracy and precision should be derived not just from a gross summation of the planet at the largest level of insolation but also, for our purposes, from the heating, i.e. temperatures. To get a hard look at the temperature variation issue, check the ARGO float data: look at year-to-year temperatures on a hemispheric level. The graphing functions are simple to do. What you see is large regional differences, not just annually but within the year. What is heated, when it is heated, and how much it is heated are very distinct but variable.
I’m saying it is not just input of energy that is of regional and time significance for global numbers, but transport as well. Heat in at point A goes to point B. And it is not random in time or space. The ARGO data show that condensed numbers of SST cover up important local variations that are not considered in the error bars.
I suggest temperature measurements (instrumental and observation) are less important than time-and-location variations. The Arctic warms more than the Antarctic AS IS WELL KNOWN, for the NH always warms more than the SH due to albedo differences, but is the “error” bar in Arctic naturally greater than that elsewhere? I’d suggest it is. But is it treated as such? No. A global “error” bar of 0.1K or less is used. The math makes it so. But is it, at least in terms of impact?
All regions are treated the same, and over all periods of time. It is easy to show that input energies and temperature variations are not the same by region and time. Nor should they be, as symmetry is a theoretical concept at a macro-level, not a reality. Is the planet in an equilibrium (dynamic or not) to 0.1% of heating (0.25 W/m2), or 0.4K? Seems unlikely.
I strongly suggest
1. that the IPCC and herein-discussed error bars are inappropriately small,
2. that the error bars are mathematical artefacts and not reflective of the variations that cause temperature anomalies locally or globally,
3. that the temperature variations, including those of the LIA, could be the result of minor, semi-random variances in input and albedo that constructively and destructively interfere over time, and
4. that much discussion and calculation within the CAGW position is of portions that are impossible to identify as global, as opposed to regional, in nature, or of amounts attributable to man when regional and time differences are considered.
We quibble, to use a phrase I have used elsewhere, about the nature of boots on angels dancing on the heads of pins. Until we separate out regional from global causes and effects (including those of time), global averages or changes from past averages SIGNIFY nothing attributable to either man or nature.
The IPCC, Gore, Hansen et al have led us down the path of simplicity in a complex world. Data fudging is about making regional differences disappear to show a pre-determined global trend. Salinger in New Zealand was not an outlier in his field, just an obvious figure in a large, empty space. If we were to interpret the various datasets on a regional basis – as we would, for instance, for population growth – I know we would not see a global change attributable to CO2. The mathematics of global inclusion is PC; the mathematics of regions is not (try discussing population growth per country, for an instance of non-PC behaviour).
Global warming is not global. I doubt either camp would deny that. The meme says that regional warming has a global cause, but is expressed differently (like cold and more snow in winter 2011) in different regions. A handy, PC concept. But break it down over the last 20 years, stop adjusting local temperature records to match global/large, non-related regions, and throw in the legitimate local variation, and the non-global nature of climate changes will reveal themselves.
Remember: two people 6ft tall standing with one 3ft tall does not a trio of 5ft tall people make.

Dacron Mather
February 8, 2011 11:59 am

Kitteh !

Owen
February 8, 2011 12:31 pm

I still don’t quite understand why the satellite sensors aren’t calibrated by pointing them all to the same empty part of space and measuring the cosmic background temperature. Then the drift of each individual sensor can be compared to the same source (or can they not measure temperatures that low?) That’s what I get for watching the film “The Dish” and seeing the moon used as a pointing reference for acquiring the moon landing signals. What else in the universe could likewise be used as external reference points?

wayne
February 8, 2011 12:33 pm

Hey Ira! Now that’s better. Just didn’t want others who skim read this to automatically tag me as AGW proponent! I’ll take a ‘D’ over ‘AGW’ any day. ☺
More seriously: I really enjoyed your series. It has presented some very good points to keep in mind, and one great insight — that once broken into classes of cause and effect, much of any alarm that might exist tends to disappear into the noise.
And, I’ll stay by my thoughts that much of the 0.5ºC since 1880 was tied to solar forcing, though many now think the sun had little effect. Still don’t buy that line. I think Jean et al.’s earlier papers and others written in the 90’s were correct, and there was actually a rather marked secular rise in both activity and solar irradiance over this period of some 5-7 W/m2. One day we may find it was masked by flaws in the instrumentation and/or the algorithms used to process that data.

Jeff
February 8, 2011 12:47 pm

did I miss it or was UHI not mentioned at all ?

George E. Smith
February 9, 2011 8:43 am

“”””” Latitude says:
February 7, 2011 at 7:06 pm
Can anyone really measure a 1/2 of a degree….
…no “””””
Well actually you can; I’m sure that one can measure to 0.001 deg (C), and probably way less than that. BUT !!!
The big question always is; WHAT ARE YOU READING THE TEMPERATURE OF ?
Well actually you are reading the Temperature of the thermometer; or at least some point on it; but how that relates to what the hell Temperature you really wanted to know is another thing.
All sorts of industrial processes rely to varying degrees (pun intended) on accurate control of Temperatures.
If you are trying to grow a 300 mm diameter (12 inch) silicon single crystal ingot with a Czochralski puller, the Temperature control has to be extremely precise or you end up with a crystal that looks like Dolly Parton instead of a cylinder.
Same thing goes for Anthony’s photographic collection of Owl boxes. They each have thermometers in them; but what the hell is that thermometer measuring the Temperature of? More importantly, does it ever change with conditions? Therein lies the rub. The point where you want to know the Temperature is always linked to the actual thermometer (Temperature sensing element) by some thermal impedance, and depending on heat sources or sinks nearby, air flows, or water flows in the region, etc., the change in Temperature from sensor to sensed location can be all over the map.
It is very poor process control strategy, to monitor some variable, and then use that by way of a presumed relationship, to force control of some other variable. What if that relationship is somewhat unknown; such as the relationship between atmospheric CO2 abundance (well mixed of course) and mean global surface Temperature (which we don’t know).
If you don’t want your chemical plant to blow up, you always monitor that which you wish to control; not something else.
So yes we can measure 1/2 degree; but whether we can measure the weather Temperature to 1/2 degree is an entirely different question. My money would certainly be on the no way button.
Frequency of an electrical oscillation, I think I can measure to one part in 10^8. The good guys can do it to maybe one part in 10^16 for special cases; but measuring the mean global surface or lower tropospheric Temperature to 1/2 degree is pie in the sky. It’s also quite meaningless, so it is good that we can’t measure it anyway.

Richard Sharpe
February 9, 2011 8:58 am

George E. Smith says on February 9, 2011 at 8:43 am

If you are trying to grow a 300 mm diameter (12 inch) silicon single crystal ingot with a Czochralski puller, the Temperature control has to be extremely precise or you end up with a crystal that looks like Dolly Parton instead of a cylinder.

I think there could be a market there.

Mike
February 16, 2011 1:03 am

Just a thought for you all!
IF humankind is partly responsible for the effects of global warming then there should be significant markers at key times in history such as:
• Pre/ post Industrial Revolution
• First/ Second world wars
• All the atomic testing around the world during the Cold War periods
IF temperatures can be measured with such great accuracy, then all measurements, by whosoever records them, should show significant changes during these periods in recent history, on top of natural events. Would this be a way of assessing how accurate recordings were and whose data was best interpreted?