Global Warming is a Pussy Cat

Guest post by Ira Glickstein

Thanks to WUWT readers who posted estimates of how much of the supposed 0.8ºC Global Warming since 1880 was due to Data Bias, Natural Cycles, and AGW (human-caused warming). I am happy with the results even though the average for AGW came out higher than my original estimate.

This is the fifth in my Tale of the Global Warming Tiger series, where I allocated the supposed 0.8ºC warming since 1880 to: (1) Data Bias (0.3ºC), (2) Natural Cycles (0.4ºC), and (3) Human-caused global warming – AGW (0.1ºC). Click Tiger’s Tale and Tail :^) to read the original story.

WUWT COMMENTERS SAY

As the above graphic indicates, WUWT Commenters who provided their own estimates generally agreed with my allocation, with the interesting exception of AGW, where the average is 0.18ºC, nearly double my original allocation of 0.1ºC. Natural Cycles averaged out at 0.33ºC, a bit lower than my original 0.4ºC. Data Bias averaged out at 0.28ºC, a bit lower than my original 0.3ºC. While this is not a scientific poll, it certainly shows a wide variety of Climate Science opinion is alive and well here at WUWT.

Far from the Global Warming Tiger of the IPCC, mostly due to atmospheric CO2 from human burning of fossil fuels and land use and on its way to 2ºC to 5ºC or more, it appears we are actually dealing with a Global Warming Pussy Cat: warming since 1880 of around 0.5ºC to 0.6ºC, now stabilizing despite the continued rise in CO2, much of it human-caused.

Some who responded put AGW as low as ZERO (while others put it as high as 0.7ºC), some put Natural Cycles as low as ZERO (while others put it as high as 0.55ºC), and some put Data Bias as low as ZERO (while others put it as high as 0.65ºC). At the end of this posting, I’ve tabulated your estimates, along with the names of those kind enough to provide them. THANKS!

When everything settles out over the coming decades, which I believe will be marked by stabilization of Global temperatures, and perhaps a bit of Global Cooling, I think your estimates will turn out to be more prescient than those of the official climate Team! One Commenter humorously posted “Jim Hansen’s” estimates as: AGW = +3.3ºC, Natural Cycles = –2.5ºC, and, of course, Data Bias = 0.0ºC.

IS ALL THE TEMPERATURE DATA USELESS – OR IS IT THE ANALYSIS?

When I discussed the controversy about the temperature data collected since 1880 with my PhD advisor (with whom I am still in regular contact) he reminded me that, given a large number of measurements by different observers, using a variety of thermometers, and taken at a variety of locations and times, the random errors would largely cancel each other out. Even systematic errors in given thermometers, which might be calibrated a bit high or low, and given observers, who might tend to round the numbers up or down, would largely cancel out. Indeed, he said, even long-term systematic bias would hardly show up in the temperature trends. Thus, he assured me, while any individual reading may or may not be accurate, the overall temperature trend would be quite robust, to a high level of precision.
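His point is easy to demonstrate. Here is a minimal simulation (my own sketch, with invented numbers, nothing to do with any actual station data) showing random measurement noise cancelling in the average:

```python
# Minimal sketch (hypothetical numbers): simulate many independent
# thermometer readings with random noise and check that the error in
# the *mean* shrinks roughly as 1/sqrt(N), as the argument assumes.
import random

random.seed(42)
true_temp = 15.0          # hypothetical true temperature, degrees C
for n in (10, 1_000, 100_000):
    readings = [true_temp + random.gauss(0.0, 0.5) for _ in range(n)]
    mean = sum(readings) / n
    print(f"N={n:>7}: mean error = {abs(mean - true_temp):.4f} C")
# Typical output falls roughly as 1/sqrt(N), e.g. ~0.16, ~0.016,
# ~0.0016 C -- purely random noise largely cancels in the average.
```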

Of course, he is correct from an academic point of view. As a brilliant analyst once humorously explained to me, once we ASSUME a perfectly smooth elephant with negligible mass, all sorts of wonderful circus tricks become possible!

Yes, errors may be categorized as:

  1. Perfectly Random (due to “noise” in the measurement process, and equally likely to be higher or lower than the truth) or,
  2. Perfectly Systematic (due to miscalibration of the measuring instrument, off by a constant amount, equally likely to be higher or lower than the truth), and assumed to be
  3. Perfectly Independent (not affected by any other measurement).

In the real world, however, these conditions seldom obtain, but they are necessary assumptions for statistical analysis to operate correctly. When a scientific study concludes that the results are correct, plus or minus a given amount (say +/- 0.05ºC), to a given statistical certainty (say 95%), they are implicitly assuming the three items above are satisfied.
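To see why those assumptions matter, here is a companion sketch (again with invented numbers) in which every reading shares the same warm bias, so the errors are neither purely random nor independent:

```python
# Sketch (my own illustration, hypothetical numbers): add a shared
# systematic bias to every reading and note that, unlike random noise,
# it does NOT shrink no matter how many readings are averaged.
import random

random.seed(0)
true_temp, bias, n = 15.0, 0.2, 100_000   # 0.2 C shared warm bias
readings = [true_temp + bias + random.gauss(0.0, 0.5) for _ in range(n)]
mean = sum(readings) / n
print(f"mean error with shared bias: {mean - true_temp:+.3f} C")
# ~ +0.200 C regardless of n: averaging only removes errors that are
# independent and equally likely to be high or low.
```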

In many cases, even if those assumptions are not perfectly true, they are close enough for the statistical results to be valid. How can we tell if Global Warming is one of those cases? Well, for a start, we can ask how ROBUST the results are. In other words, when they are analyzed by different people at different times, do they all come up with close to the same results? In the case of Global Warming data, as I have shown, even when the exact same data is analyzed by the exact same members of the official climate Team, the results vary by +/-0.2ºC or more, indicating that something is wrong with their basic assumptions.

Case #1

According to my posting, a graph of the US Annual Mean Temperature record from 1880 to 1998, published by NASA GISS in 1999, differs substantially from the record for the same years published by them in 2011; see the blink graphic below:

A commenter suggested that the 1999 chart did not look like what had been published by GISS in that year. Well, the 1999 chart I used came from a posting by Anthony who credited Zapruder.nl. An almost identical chart appeared at Climate Audit in 2007, linking to a Hansen 1999 News Release but that link now brings up a damaged image. However, I found an almost identical chart at GISS in a Hansen 1999 paper. The 2011 graphic I used was downloaded from GISS last month. The GISS re-analysis makes data after about 1960 warmer by up to 0.3ºC, while that prior to 1950 gets cooler by 0.1ºC.

Case #2

According to a GISS email, released under the Freedom of Information Act, records for US Annual Mean Temperature for 1934 and 1998 were re-analyzed seven times, reducing 1934’s 0.5ºC lead over 1998 to a virtual tie. [The email is embedded in the graphic below.] In the latest GISS accounting, done after the date of the email, 1998 pulled ahead by a bit. (Our tax dollars at work.)

There is a need to analyze and adjust the raw temperature data when stations move or are encroached upon by development, or when changes are made to the equipment, the enclosures, the times of observation, etc. It seems that most of those changes would tend to exaggerate the amount of warming, yet those charged with analyzing the data seem to think otherwise. The reported temperatures always seem to increase with each re-analysis. That suggests an agenda on the part of those entrusted with the analysis.

DOES SATELLITE TEMPERATURE DATA SOLVE THE PROBLEM?

Satellite temperature measurements have been available since the late 1960s, with good surface and tropospheric data available since late 1978. So it would appear that, at least from 1979 on, given a uniform Global source of data, global temperature trends have been accurately reported. However, according to Wikipedia:

Satellites do not measure temperature. They measure radiances in various wavelength bands, which must then be mathematically inverted to obtain indirect inferences of temperature. The resulting temperature profiles depend on details of the methods that are used to obtain temperatures from radiances. As a result, different groups that have analyzed the satellite data have obtained different temperature trends. Among these groups are Remote Sensing Systems (RSS) and the University of Alabama in Huntsville (UAH). Furthermore the satellite series is not fully homogeneous – it is constructed from a series of satellites with similar but not identical instrumentation. The sensors deteriorate over time, and corrections are necessary for satellite drift in orbit. Particularly large differences between reconstructed temperature series occur at the few times when there is little temporal overlap between successive satellites, making intercalibration difficult. …

They go on to say “Satellites may also be used to retrieve surface temperatures in cloud-free conditions, generally via measurement of thermal infrared …”[Emphasis added] so it would appear that this type of instrumentation cannot reliably measure surface temperatures below clouds. That is problematic, since anyone who has been to a beach knows how cold it gets when a cloud happens to pass overhead and block the Sun!
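To make the Wikipedia point concrete, here is a minimal sketch (my own illustration, emphatically not the RSS or UAH retrieval code) of the most basic radiance-to-temperature inversion, using the inverse of Planck’s law:

```python
# Sketch of the basic "radiance -> temperature" inversion the passage
# above refers to: the monochromatic brightness temperature from the
# inverse of Planck's law. Real retrievals do far more (weighting
# functions, drift and intercalibration corrections); this only shows
# why temperature is an *inference*, not a direct measurement.
import math

H = 6.62607015e-34   # Planck constant, J s
K = 1.380649e-23     # Boltzmann constant, J/K
C = 2.99792458e8     # speed of light, m/s

def brightness_temperature(radiance, freq_hz):
    """Invert Planck's law: spectral radiance (W m^-2 sr^-1 Hz^-1)
    at frequency freq_hz -> brightness temperature in kelvin."""
    return (H * freq_hz / K) / math.log(1.0 + 2.0 * H * freq_hz**3 /
                                        (C**2 * radiance))

# Round-trip check near an AMSU-A oxygen-band frequency, at 250 K:
nu, T = 57.3e9, 250.0
L = (2 * H * nu**3 / C**2) / math.expm1(H * nu / (K * T))
print(f"{brightness_temperature(L, nu):.2f} K")   # -> 250.00 K
```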

Roy Spencer, PhD updates the UAH Global temperature datasets based on satellite data. He writes:

Since 1979, NOAA satellites have been carrying instruments which measure the natural microwave thermal emissions from oxygen in the atmosphere. The signals that these microwave radiometers measure at different microwave frequencies are directly proportional to the temperature of different, deep layers of the atmosphere. Every month, John Christy and I update global temperature datasets … that represent the piecing together of the temperature data from a total of eleven instruments flying on eleven different satellites over the years. As of early 2011, our most stable instrument for this monitoring is the Advanced Microwave Sounding Unit (AMSU-A) flying on NASA’s Aqua satellite and providing data since late 2002.

Contrary to some reports, the satellite measurements are not calibrated in any way with the global surface-based thermometer record of temperature. They instead use their own on-board precision redundant platinum resistance thermometers calibrated to a laboratory reference standard before launch.[Emphasis added]

The last sentence is somewhat reassuring, but it does not resolve my questions about how they compensate for cloud cover. It appears highly likely that Global temperatures have increased since 1880 by around 0.5ºC, which would most likely increase the water vapor content of the atmosphere and, over time, result in more clouds, on average. Thus, depending upon how the satellite temperature data analysis corrects for cloudiness, that data might report more warming than actually occurs. In any case, it appears that the satellite data will help improve the general reliability of global temperature data, assuming that the analysis is done properly, by experts who do not have any political agenda to “prove” or “disprove” Catastrophic AGW. Spencer appears to be a solid citizen in that respect.
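To illustrate what the “piecing together” of records from successive satellites involves, here is a toy sketch (my own simplification; the actual UAH merge also corrects for orbital drift and diurnal effects): align two instruments by their mean offset over the period when both were flying.

```python
# Toy sketch of intercalibrating two overlapping satellite records
# (hypothetical anomaly values, not real data): estimate the offset
# between instruments from their overlap, then splice the series.
def merge(series_a, series_b, overlap):
    """series_*: dicts of {month_index: anomaly}. overlap: shared months."""
    offset = sum(series_a[m] - series_b[m] for m in overlap) / len(overlap)
    merged = dict(series_a)
    # Shift instrument B onto A's baseline, then extend the record.
    for m, v in series_b.items():
        merged.setdefault(m, v + offset)
    return merged

a = {0: 0.10, 1: 0.12, 2: 0.09}            # older satellite
b = {1: 0.22, 2: 0.19, 3: 0.15, 4: 0.18}   # newer satellite, reads 0.10 warm
print(merge(a, b, overlap=[1, 2]))
# With only months 1-2 shared, the estimated offset (-0.10) rests on two
# points -- the "little temporal overlap" problem Wikipedia mentions.
```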

CONCLUSIONS

In my postings (A-, B-, C-, D-) in this Tale of the Global Warming Tiger series, I asked for comments on my allocations: (1) Data Bias 0.3ºC, (2) Natural Cycles 0.4ºC, and (3) AGW 0.1ºC. Quite a few readers were kind enough to comment, either expressing general agreement or offering their own estimates. Here is a tabulation of their interesting inputs. THANKS!

Anomaly due to —     Human (AGW)   Natural Cycles   Data Bias
                         ºC             ºC             ºC
A-
Bill Illis              0.225          0.275          0.300
Brian H                                0.450
Edmh                    0.100
Ágúst Bjarnason         0.250          0.250          0.100
B-
Ed Caryl                0.000          0.300          0.500
James Barker            0.000          0.480          0.320
JimF                    0.100          0.500          0.200
richard verney          0.000          0.550          0.250
Scarface                0.000          0.150          0.650
Dave Springer           0.500          0.000          0.300
Mike Haseler            0.100          0.300          0.200
C-
Leonard Weinstein       0.300          0.400          0.100
TimC                    0.100          0.400          0.300
Steve Reynolds          0.400          0.250          0.150
Eric Barnes             0.150          0.450          0.200
Lucy Skywalker          0.000          0.300          0.500
D-
Wayne                   0.100          0.300          0.400
Eadler                  0.700          0.200          0.000
Nylo                    0.200          0.400          0.200

Minimum                 0.000          0.000          0.000
Maximum                 0.700          0.550          0.650
AVERAGE                 0.179          0.331          0.275
Ira’s Estimates         0.100          0.400          0.300
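For anyone who wants to check my arithmetic, here is a short script that reproduces the Minimum/Maximum/AVERAGE lines above. None marks an estimate a commenter did not give; placing Brian H’s lone figure under Natural Cycles and Edmh’s under AGW is my reading of the table, and it is the placement that reproduces the printed averages.

```python
# Sketch that reproduces the tabulated summary lines from the reader
# estimates above (None = no estimate given for that category):
rows = {  # name: (AGW, Natural Cycles, Data Bias), degrees C
    "Bill Illis": (0.225, 0.275, 0.300), "Brian H": (None, 0.450, None),
    "Edmh": (0.100, None, None), "Agust Bjarnason": (0.250, 0.250, 0.100),
    "Ed Caryl": (0.000, 0.300, 0.500), "James Barker": (0.000, 0.480, 0.320),
    "JimF": (0.100, 0.500, 0.200), "richard verney": (0.000, 0.550, 0.250),
    "Scarface": (0.000, 0.150, 0.650), "Dave Springer": (0.500, 0.000, 0.300),
    "Mike Haseler": (0.100, 0.300, 0.200),
    "Leonard Weinstein": (0.300, 0.400, 0.100), "TimC": (0.100, 0.400, 0.300),
    "Steve Reynolds": (0.400, 0.250, 0.150), "Eric Barnes": (0.150, 0.450, 0.200),
    "Lucy Skywalker": (0.000, 0.300, 0.500), "Wayne": (0.100, 0.300, 0.400),
    "Eadler": (0.700, 0.200, 0.000), "Nylo": (0.200, 0.400, 0.200),
}
for i, label in enumerate(("AGW", "Natural Cycles", "Data Bias")):
    col = [r[i] for r in rows.values() if r[i] is not None]
    print(f"{label:>14}: min={min(col):.3f} max={max(col):.3f} "
          f"avg={sum(col)/len(col):.3f}")
# -> AGW avg 0.179, Natural Cycles avg 0.331, Data Bias avg 0.275
```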
Latitude
February 7, 2011 7:06 pm

Can anyone really measure a 1/2 of a degree….
…no
You can only arrive at that through math.
When you consider rounding, thermometers shrinking, stations getting dropped, the past getting colder, whole years missing, no statistical warming in a decade…………
…when that entire 1/2 of a degree can be accounted for in one blink chart
Snow a thing of the past, snowrain, warmcold, droughtflood…..
….more severe weather events, when those events are on a major decline
I don’t believe one word of any of it any more….
….and all of this fuss is about 1/2 of a degree

Doug Proctor
February 7, 2011 7:09 pm

There are or were only about 6000 land stations in the GISSTemp records. Various investigators have analysed in good detail New Zealand, Australia and parts of Europe. WUWT readers have done grand work on the American mainland records. So the UHIE problem seems to have been nailed down in many areas. Also, the great dying of records in 1990 is well connected to the sudden global temperature anomaly rise in 1990. The work seems to have been done.
What is left is for someone to pull the “counter-fixed” station data into a group file and repeat the process that gives us a global history. Your work is interesting in that it shows that some have done this, at least with part of the data. When do we see a “data revealed” Skeptic graph? When do the Watts et al put their/our cards on the table and say “This, is what is really going on!”?

richard verney
February 7, 2011 7:14 pm

If the global temperature has truly increased by about 0.5C in the past 130 years, this is less than 0.05C per decade. Whilst I do not consider that one can measure temperatures to such accuracy, it is hardly anything to get alarmed about.
It is interesting that Dave Springer does not consider that any of the ‘assessed’ temperature increase is due to natural causes. This is surprising since even the ‘Team’ generally accept that natural causes drove temperature increases through to the 1940s. It would be very odd if all those natural causes suddenly switched themselves off post 1940.
Ira, an interesting set of articles even if there is no real place for consensus views in Climate Science.

Robb876
February 7, 2011 8:37 pm

It should be easy to see how the natural cycles correlate to recent warming… Anthony, can you put a post together on that? It would be nice to see the data collected and well presented on this, and a great follow up to this post…

dana1981
February 7, 2011 9:34 pm

I don’t know why climate scientists bother doing scientific research when all they have to do to figure out how much global warming is anthropogenic is poll WUWT readers. This is how real science is done, alarmists! Stop wasting our tax dollars!!

intrepid_wanders
February 7, 2011 10:02 pm

Aw… Dana, no comment on what matters (O’Donnell et al 2010) vs (Steig 2009). PC’s must mean “Personal Computers” to some folks 😉

wayne
February 7, 2011 10:04 pm

Ira, love it! And I haven’t even read a word yet!
I no longer see that tiger, just Sambo and all of that oil butter.

thingadonta
February 7, 2011 10:09 pm

Yes, but according to the European Union of Climate Science or something, pre-industrial era T was normal, so everything since then is abnormal. This is clearly stated in their policy to keep T below 2 degrees above pre-industrial era T. Somebody tell these monkeys that the pre-industrial era was a little ice age, which even the most fanatical warmists agree was a natural cold period.

wayne
February 7, 2011 10:10 pm

Ira, how in the heck do you have me under AGW, yuck!
Have you never read what I have been saying?

intrepid_wanders
February 7, 2011 10:26 pm

Ira,
The measurement bias is obviously “underestimated”. As your advisor pointed out, in any “untampered” distributions, one can convey that the “bias” can be sorted out. This is not the case, since all measurements have been “corrected” for various external “influences”. The data is just reduced to a series of numbers, +/- X degrees. Without the measurement in control (especially in a multi-variant), the rest is sci-fi or fantasy.

Christopher Hanley
February 7, 2011 11:40 pm

I don’t understand how it’s been acceptable for the two most important surface temperature archives to be left in the custody of Wigley/Jones, both attendees at the proto-IPCC Villach meeting in 1985 (the objective of which was to prove that human CO2 drove the climate — there was no evidence it had or would) and Hansen who by 1979 had the same conviction (without evidence) and who has recently stated that “…..The trains carrying coal to power plants are death trains. Coal-fired power plants are factories of death…”.
Truly bizarre — a situation which wouldn’t have been tolerated in any other discipline, I assume (as a layman).

John Peter
February 8, 2011 12:38 am

I am not really surprised that the AGW estimate pans out at 0.179C. That is more or less Dr Roy Spencer’s estimate (around 20% of the 0.8C increase) and I believe that Professor Lindzen is in that ballpark as well. Having read Dr Spencer’s books and his arguments I am inclined to believe (as a layman) that this is probably right. That would give a decadal increase of around 0.02C rather than the AGW 0.2C per decade.

Mike Haseler
February 8, 2011 12:52 am

Strictly speaking you’ve missed out one of the figures from my own estimate, which I think was reduction in global dimming. But as it wouldn’t significantly change the result I’m not going to complain.
Obviously it’s totally unscientific, and speaking personally I’ve only got professional experience with temperature measurement and the rest is just the best information I have gleaned from others – but very insightful!
Before the exercise, I used a rough guess of 1/3:1/3:1/3. But having been forced to think through the exercise and reading the comments of others, I think the final averages are better guesses.

February 8, 2011 1:28 am

Ira:
My estimate of 0.250° 0.250° 0.100° (sum 0.6°C) is from the year 1998
( http://www.agust.net/sol/ ). As most of the other estimates add up to 0.8°C we can scale this old estimate up to approximately 0.33° 0.33° 0.13° (sum 0.8°C).
(Sorry for the precision in the numbers. I would have liked one significant number, maybe something like 0.3° 0.3° 0.2°).
When asked, I often answer that about half of the observed warming could be caused by humans, about half caused by natural variations, plus some measurement error. Then I add that uncertainty is so great that the word “half” can mean anything in the range 20% to 80%. Probably in favor of natural variations…
This does not affect your result. The result 0.179C (~ 0.2°C) for AGW is well acceptable in my mind.

Peter Czerna
February 8, 2011 1:47 am

Graphics: Less is more, Ira!

Mike Haseler
February 8, 2011 1:51 am

John Peter says: February 8, 2011 at 12:38 am
I am not really surprised that the AGW estimate pans out at 0.179C. … That would give a decadal increase of around 0.02C rather than the AGW 0.2C per decade.
I’m not sure of this 0.2C. I thought the IPCC had a range 0.13 – 0.52C/decade!
If, as I recall, the typical noise figure in the temperature record is around 0.1C/decade, then it’s already unlikely for noise to have masked the warming: to have had a decade without warming there would need to have been at least -0.13C/decade of climate noise.
If there is no evidence for further warming this century**, as the minimum required noise to hide this “warming” increases, we should get more and more confidence that the predicted warming is not happening. (aka BS)
Just for fun, I’ve tried to estimate when this theory is busted!
I hope you forgive my statistics (I’ve not found a single article on the probability statistics of 1/f type noise as the climate has) but using a rule of thumb, I’d guess that the equivalent of 95% confidence is a difference between predicted and actual of:
0.2 C in any one decade (2x typical noise of 0.1C = 95% confidence Gaussian)
0.17C in two decades (90% confidence)
0.13C in three decades (80% confidence)
I’ve scaled these figures down based on it being unlikely to have two successive decades of cooling to “hide” the warming. Having a run of one decade of cooling is twice as likely as two decades of cooling, which is again twice as likely as three decades of cooling. I suppose I ought to use single-sided probability distribution figures, but as it’s only a rule of thumb (and unlike climate “scientists” I’m not paid to be right) I’m not being fussy.
This means the prediction of a minimum of 0.13C/decade warming is busted if by:
2011 it had cooled 0.07C
2021 it has warmed by 0.09C
2031 it has warmed by 0.26C
So it looks to me that if proper statisticians looked at it, the GW scam is busted if:
a) The trend since 2001** at any time shows net cooling below -0.07C/decade
b) or by 20165 there’s no net warming.
** (since 2001 – the year of the IPCC prediction of 1.4-5.8C by 2100)
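A conventional white-noise version of the rule of thumb above (only a sketch under those assumptions; 1/f-type noise, as noted, would widen the bounds) gives thresholds of a broadly similar shape:

```python
# Sketch (my own, assuming independent decades and Gaussian noise of
# sigma = 0.1 C/decade, which real, redder climate noise violates):
# the predicted minimum trend of 0.13 C/decade is "busted" at ~95%
# one-sided confidence once the observed trend falls more than
# 1.64 * sigma / sqrt(n) below the prediction after n decades.
import math

sigma, predicted = 0.1, 0.13   # C/decade
for n in (1, 2, 3):
    bound = predicted - 1.64 * sigma / math.sqrt(n)
    print(f"{n} decade(s): busted if observed trend < {bound:+.3f} C/decade")
# 1: < -0.034, 2: < +0.014, 3: < +0.035 C/decade -- the same general
# shape as the thresholds above, though the exact numbers differ.
```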

John Marshall
February 8, 2011 2:37 am

Hadley and CRU give their surface measured data error bands as +/-1.0C. So how can any average be below this range? It is not statistically correct. It is pure alarmism and a figure plucked from a computer readout.
Satellites have errors but at least they get the overall picture of temperature, which surface measurements do not.
According to some physicists a global average temperature is meaningless. To get an accurate temperature of anything it must be at equilibrium. This planet’s climate is never at equilibrium.

LazyTeenager
February 8, 2011 3:30 am

they are implicitly assuming the three items above are satisfied.
————
This is an assumption on your part.
It depends on the kinds of measurements and statistical analysis required.
1. there may be good and well understood reasons for making certain statistical assumptions.
2. In areas where it is important the statistics will actually be used to determine if the analysis assumptions are valid.

steveta_uk
February 8, 2011 3:36 am

“As the above graphic indicates” …
Actually, Ira, the above graphic indicates nothing to me, as it’s so prettified that any embedded data there might be is completely lost.
I see I’m not the only person who thought so, (Peter Czerna, 1:47 am)

LazyTeenager
February 8, 2011 3:38 am

That suggests an agenda on the part of those entrusted with the analysis.
————
Well, this is a big fat assumption in the face of ignorance of:
1. the analysis method
2. Who did the analysis
3. The agenda of the person doing the analysis
4. The honesty of the person doing the analysis
5. What checks and balances are in place.

LazyTeenager
February 8, 2011 3:48 am

In any case, it appears that the satellite data will help improve the general reliability of global temperature data, assuming that the analysis is done properly, by experts who do not have any political agenda to “prove” or “disprove” Catastrophic AGW. Spencer appears to be a solid citizen in that respect.
—————
This is seriously weird. Roy Spencer is a very well known and committed climate skeptic. You could just as easily accuse Roy Spencer of having an agenda and thereby fudging the data as any other climate scientist.
On top of that Roy Spencer has been accused of cocking up several years of the satellite data analysis. I sort of assume this is true, though I would like to hear Roy’s version of events.

Professor Bob Ryan
February 8, 2011 5:13 am

Digging into the attribution using the 1880- temperature and CO2 records I cannot get close to 18%. Seeking attribution statistically is often a mug’s game, although a decent R2 should give an indication of the level of variability in annual temperature that can be explained by variability in the independent variable. Putting trust in Ockham and going for the simplest statistical analysis first: variation in CO2 does not appear to explain any of the observed variation in temperature. Regressing annual % change in temperature vs annual % change in CO2, both contemporaneously and lagged, the R2 using OLS and MLR does not suggest any causation. At this level, in my field, I wouldn’t bother going any further. Both data series look to me like random walks with some upward drift, but that’s where the similarity ends. Statistically they appear quite unconnected. Maybe I am missing something. I had better find out before I try anything more sophisticated!
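A minimal version of the simplest regression described above (sketched with synthetic data, since the actual series used is not given here) looks like this:

```python
# Sketch of the simplest regression the comment describes: annual %
# change in temperature on annual % change in CO2, R^2 by hand with
# numpy. The two series here are invented, independent noise.
import numpy as np

rng = np.random.default_rng(1)
n = 130
d_co2 = rng.normal(0.4, 0.1, n)        # hypothetical annual % change in CO2
d_temp = rng.normal(0.005, 0.15, n)    # hypothetical annual % change in T

X = np.column_stack([np.ones(n), d_co2])   # intercept + slope, OLS
beta, *_ = np.linalg.lstsq(X, d_temp, rcond=None)
resid = d_temp - X @ beta
r2 = 1 - resid.var() / d_temp.var()
print(f"slope={beta[1]:+.3f}, R^2={r2:.3f}")
# Two unrelated noisy series give R^2 near zero, the kind of result he
# reports; note a significant R^2 would not prove causation either.
```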

Katherine
February 8, 2011 5:51 am

Ira,
I don’t see why the reader inputs are apparently grouped by A, B, C, and D when their answers don’t support that grouping. For example, Ed Caryl is under B-Data Bias but Lucy Skywalker who gives the same breakdown of 0.000, 0.300, and 0.500 is under C-Natural Cycles. Also, James Barker, JimF, richard verney, and Mike Haseler all give bigger weight to Natural Cycles but are apparently part of the B-Data Bias group. The presentation is misleading.

February 8, 2011 5:53 am

“Contrary to some reports, the satellite measurements are not calibrated in any way with the global surface-based thermometer record of temperature. They instead use their own on-board precision redundant platinum resistance thermometers calibrated to a laboratory reference standard before launch.”
At www.burnsengineering.com there are interesting papers about “Error Sources That Effect Platinum Resistance Thermometer Accuracy”, and they show that platinum resistance thermometer resistance changes due to prolonged exposure to high temperatures/radiation; the industry standard allows a change of approximately 0.1°C after 1000 hours at a 350°C exposure. I checked publicly available papers on the precision redundant platinum resistance thermometers used in NASA’s Aqua satellite, and the producers of these components, in their own web page FAQ, recommend checking the accuracy of the thermometers yearly… 🙂
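For the curious, here is a small sketch (standard IEC 60751 Pt100 coefficients; the drift figure is illustrative, not from those papers) of how a resistance drift of roughly that size maps into a temperature error:

```python
# Sketch of how a platinum resistance thermometer reading becomes a
# temperature, via the standard Callendar-Van Dusen relation for
# t >= 0 C (IEC 60751 coefficients), and how a small resistance drift
# maps directly into a temperature error of the size discussed above.
import math

R0 = 100.0          # Pt100 nominal resistance at 0 C, ohms
A, B = 3.9083e-3, -5.775e-7

def pt100_temp(R):
    """Invert R = R0*(1 + A*t + B*t^2) for t in degrees C (t >= 0)."""
    return (-A + math.sqrt(A*A - 4*B*(1 - R/R0))) / (2*B)

R_true = R0 * (1 + A*25 + B*25**2)         # resistance at exactly 25 C
print(f"nominal: {pt100_temp(R_true):.3f} C")          # 25.000
print(f"drifted: {pt100_temp(R_true + 0.039):.3f} C")  # ~25.100
# A ~0.04 ohm drift -- a few hundredths of a percent of the sensor's
# resistance -- already costs about 0.1 C.
```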

Doug Proctor
February 8, 2011 10:00 am

Marshall at 2:37 said:
“According to some physicists a global average temperature is meaningless. To get an accurate temperature of anything it must be at equilibrium. This planet’s climate is never at equilibrium.”
The Earth is in DYNAMIC equilibrium, not STATIC equilibrium. Many systems, including bobcat-rabbit populations, are in dynamic equilibrium. This is an equilibrium which varies with some predictability around a mean, and has feedback systems. But your point is well made.
From what I can see, the IPCC concept is that the Earth’s climate is, in fact, in a static equilibrium periodically distorted by unique events of limited, if perhaps large, time-frames. Something happens, and we have an ice age; then the something ends and we go back to the Garden of Eden days, defined of course by the pre-industrial era (without the LIA). Exactly when that was, and under what conditions, I’m unsure, but I suspect it is much like 1955, the time the Team was born or at university. God was in his Heaven then, and all was good with the world. (I’m supposed to put /sarc, I think.)
The idea that the planet is naturally in static equilibrium is one based on stable insolation, stable oceanic circulation patterns of a decade or less, and static albedo in all the important places. Minor variations on the theme are not supposed to add up one way or the other. Man has the only non-random, unidirectional impact on the environment, right now through his production of atmospheric CO2. With this assumption, all post-1940 climate change can be attributed to anthropogenic causes.
I’m not saying I agree with this, but I believe this is the basis of AGW. CAGW is just the exaggeration based on the effects of the inclusion of the precautionary principle in policy advice. The PP says that of any set of scenarios, the worst of those is to be used as a basis for action, as the cost of assuming wrongly on the lesser side is greater than the cost of assuming wrongly on the greater side. The cost, of course, is sociological, not economic, which is why dollars cannot be used to argue against excessive action: it’s about people, not wallets, after all.
But your distrust of the accuracy and precision of the temperature rise, and what it all “means” is mine, as well.
I’ve been looking at the annual and hemispheric variation in insolation and albedo based on simple geometry and general planetary data from satellites. I am disturbed that my easily-determined findings about orbital eccentricity and axial tilt mean the following (a quick check of item 1 is sketched just after this list):
1. Jan to July the insolation varies by 23.2 W/m2,
2. the albedo of the earth in January is 0.3408 and in July, 0.2469, causing a to-the-ground difference of 20.5 W/m2 (157.3 vs 177.8 W/m2) in heating power.
3. in Jan the southern hemisphere receives from the sun 185.9 W/m2 vs 155.2W/m2 in July (for the NH, 166.2 vs 173.7 W/m2),
4. over the course of the year cloud cover varies by 15 – 20%.
  5. the differences in hemispheric albedo over the year more than counter the insolation variations, such that the planet is 2.3K warmer at aphelion than at perihelion.
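A quick back-of-envelope check of item 1 (my own arithmetic, using the 1360.5 W/m2 TSI figure quoted later in this comment):

```python
# Quick check of item 1: solar flux scales as 1/r^2, and Earth's
# orbital radius varies by about +/-1.67% (eccentricity), so the
# top-of-atmosphere insolation, spread over the whole sphere
# (divide by 4), swings annually by roughly:
TSI = 1360.5                  # W/m^2 at 1 AU (value quoted below)
e = 0.0167                    # Earth's orbital eccentricity
peri = TSI / (1 - e)**2 / 4   # global-mean TOA flux at perihelion (Jan)
aph  = TSI / (1 + e)**2 / 4   # and at aphelion (July)
print(f"{peri:.1f} vs {aph:.1f} W/m^2 -> difference {peri - aph:.1f}")
# ~351.8 vs ~329.0, a ~23 W/m^2 annual swing, close to the 23.2
# figure in item 1.
```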
These variations show what a dynamic system we have. That we have basic stability at all is a result of rapid balancing forces. The math “mean” gives an illusion of stability in insolation (340.5 W/m2) and warming (236.4 W/m2) that does not exist through the year. How well do these balancing forces work? Since weather is regional but is the basis of climate (climate is the average sum of weather on a seasonal level), can we say that the insolation and albedo variations regionally do not, over time, have a non-random variation of, say, 0.5%? 0.5% is 1.18 W/m2, which the IPCC more than says will kick up the planet’s temperature, and 138% of what Trenberth and Schmidt say is “missing” heat below 1000m in the oceans (0.85 W/m2).
All the above is background to say that the most important supposedly non-variant factor, the sun’s heat, is in fact quite variable. The math just makes calculations easier by calculating as if the energy heating the earth is constant to within negligible amounts. The new measurement is 1360.5 +/-0.15 W/m2 (340.125 on a whole-Earth, surface-area basis). That is the average, but it sits within a large annual range; is what impacts the heated portions of the earth really +/-0.15 W/m2?
We all recall wet, cold summers: the differences of a region, the differences of a time. It is common for Yellowknife in the Northwest Territories, to be warmer than Calgary, whether it is summer or winter (much to the chagrin of Calgarians). If regional differences make a significant impact on the global record – as Hansen, with his “better” Arctic record implicitly agrees – then the variation in impact is clearly greater than +/-0.15 W/m2 on a regional scale. Weather does impact global “climate” indicators.
The error bars for both accuracy and precision should be derived not just from a gross summation of the planet at the largest level of insolation but also, for our purposes, in the heating, i.e. temperatures. To get a hard look at the temperature variation issue, check the ARGO float data: look at year-to-year temperatures on a hemispheric level. The graphing functions are simple to do. What you see is large regional differences, not just annually but within the year. What is heated, when it is heated, and how much it is heated are very distinct but variable.
I’m saying it is not just input of energy that is of regional and time significance for global numbers, but transport as well. Heat in at point A goes to point B. And it is not random in time or space. The ARGO data show that condensed numbers of SST cover up important local variations that are not considered in the error bars.
I suggest temperature measurements (instrumental and observational) are less important than time-and-location variations. The Arctic warms more than the Antarctic, AS IS WELL KNOWN, for the NH always warms more than the SH due to albedo differences, but is the “error” bar in the Arctic naturally greater than that elsewhere? I’d suggest it is. But is it treated as such? No. A global “error” bar of 0.1K or less is used. The math makes it so. But is it, at least in terms of impact?
All regions are treated the same and over all periods of time. It is easy to show that input energies and temperature variations are not the same by region and time. Nor should they be, as symmetry is a theoretical concept at a macro-level, not a reality. Is the planet in an equilibrium (dynamic or not) to 0.1% heating (0.25 W/m2) or 0.4K? Seems unlikely.
I strongly suggest
1. that the IPCC and herein-discussed error bars are inappropriately small,
2. that the error bars are mathematical artefacts and not reflective of the variations that cause temperature anomalies locally or globally,
3. that the temperature variations, including those of the LIA, could be the result of minor, semi-random variances in input and albedo that constructively and destructively interfere over time, and
4. that much discussion and calculation within the CAGW position is of portions that are impossible to identify as global, as opposed to regional, in nature, or of amounts attributable to man when regional and time differences are considered.
We quibble, to use a phrase I have used elsewhere, about the nature of boots on angels dancing on the heads of pins. Until we separate out regional from global causes and effects (including those of time), global averages or changes from past averages SIGNIFY nothing attributable to either man or nature.
The IPCC, Gore, Hansen et al have led us down the path of simplicity in a complex world. Data fudging is about making regional differences disappear to show a pre-determined global trend. Salinger in New Zealand was not an outlier in his field, just an obvious figure in a large, empty space. If we were to interpret the various datasets on a regional basis – as we would, for instance, for population growth – I know we would not see a global change attributable to CO2. The mathematics of global inclusion is PC; the mathematics of regions is not (try discussing population growth per country, for an instance of non-PC behaviour).
Global warming is not global. I doubt either camp would deny that. The meme says that regional warming has a global cause, but is expressed differently (like cold and more snow in winter 2011) in different regions. A handy, PC concept. But break it down over the last 20 years, stop adjusting local temperature records to match global/large, non-related regions, and throw in the legitimate local variation, and the non-global nature of climate changes will reveal themselves.
Remember: two people 6ft tall standing with one 3ft tall does not a trio of 5ft tall people make.

Dacron Mather
February 8, 2011 11:59 am

Kitteh !

Owen
February 8, 2011 12:31 pm

I still don’t quite understand why the satellite sensors aren’t calibrated by pointing them all to the same empty part of space and measuring the cosmic background temperature. Then the drift of each individual sensor can be compared to the same source (or can they not measure temperatures that low?) That’s what I get for watching the film “The Dish” and seeing the moon used as a pointing reference for acquiring the moon landing signals. What else in the universe could likewise be used as external reference points?

wayne
February 8, 2011 12:33 pm

Hey Ira! Now that’s better. Just didn’t want others who skim read this to automatically tag me as AGW proponent! I’ll take a ‘D’ over ‘AGW’ any day. ☺
More seriously, I really enjoyed your series. It has presented some very good points to keep in mind, and one great insight: that once the warming is broken into classes of cause and effect, much of any alarm that might exist tends to disappear into the noise.
And I’ll stand by my thought that much of the 0.5ºC since 1880 was tied to solar forcing, though many now think the sun had little effect. I still don’t buy that line. I think Lean et al.’s earlier papers and others written in the 90’s were correct, and there was actually a rather marked secular rise in both activity and solar irradiance over this period, of some 5-7 W/m2. One day we may find it was masked by flaws in the instrumentation and/or algorithms used to process that data.

Jeff
February 8, 2011 12:47 pm

did I miss it or was UHI not mentioned at all ?

George E. Smith
February 9, 2011 8:43 am

“”””” Latitude says:
February 7, 2011 at 7:06 pm
Can anyone really measure a 1/2 of a degree….
…no “””””
Well actually you can; I’m sure that one can measure to 0.001 deg (C), and probably way less than that. BUT !!!
The big question always is; WHAT ARE YOU READING THE TEMPERATURE OF ?
Well actually you are reading the Temperature of the thermometer; or at least some point on it; but how that relates to what the hell Temperature you really wanted to know is another thing.
All sorts of industrial processes rely to varying degrees (pun intended) on accurate control of Temperatures.
If you are trying to grow a 300 mm diameter (12 inch) silicon single crystal ingot with a Czochralski puller, the Temperature control has to be extremely precise or you end up with a crystal that looks like Dolly Parton, instead of a cylinder.
Same thing goes for Anthony’s photographic collection of Owl boxes. They each have thermometers in them; but what the hell is that thermometer measuring the Temperature of? More importantly, does it ever change with conditions? Therein lies the rub. The point where you want to know the Temperature is always linked to the actual thermometer (Temperature sensing element) by some thermal impedance, and depending on heat sources or sinks nearby, air flows, or water flows in the region, etc., the change in Temperature from sensor to sensed location can be all over the map.
It is very poor process control strategy, to monitor some variable, and then use that by way of a presumed relationship, to force control of some other variable. What if that relationship is somewhat unknown; such as the relationship between atmospheric CO2 abundance (well mixed of course) and mean global surface Temperature (which we don’t know).
If you don’t want your chemical plant to blow up, you always monitor that which you wish to control; not something else.
So yes we can measure 1/2 degree; but whether we can measure the weather Temperature to 1/2 degree is an entirely different question. My money would certainly be on the no way button.
Frequency of an electrical oscillation, I think I can measure to one part in 10^8. The good guys can do it to maybe one part in 10^16 for special cases; but measuring the mean global surface or lower tropospheric Temperature to 1/2 degree is pie in the sky. It’s also quite meaningless, so it is good that we can’t measure it anyway.

Richard Sharpe
February 9, 2011 8:58 am

George E. Smith says on February 9, 2011 at 8:43 am

If you are trying to grow a 300 mm diameter (12 inch) silicon single crystal ingot with a Czochralski puller, the Temperature control has to be extremely precise or you end up with a crystal that looks like Dolly Parton, instead of a cylinder.

I think there could be a market there.

Mike
February 16, 2011 1:03 am

Just a thought for you all!
IF humankind is partly responsible for the effects of global warming then there should be significant markers at key times in history such as:
• Pre/ post Industrial Revolution
• First/ Second world wars
• All the atomic testing around the world during the Cold War periods
IF temperatures can be measured with such great accuracy, then all measurements, by whoever records them, should show significant changes during these periods in recent history on top of natural events. Would this be a way of assessing how accurate recordings were and whose data was best interpreted?