By S. Fred Singer (first published in American Thinker)
Global warming has re-entered public consciousness in recent days, partly because of the buzz surrounding the release of warming results from the Berkeley Earth Surface Temperature (BEST) project. The reaction of the “warmistas” has been jubilant, yet hilariously wrong. Will they ever learn?
They’ve latched on to the BEST result as their last best hope for rescuing misbegotten schemes to control emissions of the greenhouse gas CO2. Leading the pack has been the Washington Post (Oct. 25), whose columnist tried to write off Republican presidential candidates Bachmann, Cain, and Perry as “cynical diehards,” deniers, idiots, or whatever.
I sent the WP a letter pointing out obvious errors, but I got a peculiar response. It turned out that they were willing to publish my letter, but not my credentials as emeritus professor at the University of Virginia and former director of the U.S. Weather Satellite Service. Apparently, they were concerned that readers might gain the impression that I knew something about climate.
Unfortunately, it has become expedient (for those who condemn CO2 as the cause of warming) to deride their opponents with terms like “climate deniers.” A complacent and inattentive media has made the problem worse, by giving the impression that anyone who doesn’t buy the CO2 hypothesis doesn’t believe that climate changes, and hence is a total Luddite. Even the WSJ got carried away. Prof. Richard Muller, the originator and leader of the BEST study, complained to me that some eager editor changed the title of his op-ed (Oct. 21) to “The Case Against Global-Warming Skepticism” from his original “Cooling the Global Warming Debate.”
The (formerly respected) scientific journal Nature chimed in and announced in an Oct. 26 editorial that any results confirming “climate change” (meaning anthropogenic global warming — AGW) are welcome, even when released before peer review. Of course, we’ve known for many years that Nature does not welcome any contrary science results, but it’s nice to have this confirmation.
Their hearts filled with bubbling joy and their brains befuddled, none of the warmistas have apparently listened to the somewhat skeptical pronouncements from Prof. Muller. He emphasizes that the analysis is based only on land data, covering less than 30% of the earth’s surface and housing recording stations that are poorly distributed, mainly in the U.S. and Western Europe. In addition, he admits that 70% of U.S. stations are badly sited and don’t meet the standards set by government; the rest of the world is probably worse. He makes no claim to know the cause of the warming found by BEST, and favors naturally caused oscillations of the atmosphere-ocean system that no climate model has yet simulated or explained.
The fact that the BEST results agree with previously published analyses of warming trends from land stations may indicate only that there is something very wrong with all of these. There are two entirely different ways to interpret this agreement on surface warming. It might indicate important confirmation, but logic allows for an alternate possibility: since both results rely on surface thermometers, they are not really independent and could be subject to similar fundamental errors. For example, both datasets could be affected by urban heat islands or other non-global effects — like local heating of airports, where traffic has been growing steadily.
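The point about shared, non-independent errors can be illustrated with a minimal simulation (the numbers here are purely hypothetical): two analyses of the same contaminated station network will agree closely with each other while both recovering the bias rather than the true trend.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1978, 1998)

true_trend = 0.0                      # hypothetical: assume no real warming
shared_bias = 0.02 * (years - 1978)   # e.g. a growing urban-heat contamination, 0.02 K/yr

# Two "independent" analyses of the same contaminated station network
analysis_a = true_trend * (years - 1978) + shared_bias + rng.normal(0, 0.05, years.size)
analysis_b = true_trend * (years - 1978) + shared_bias + rng.normal(0, 0.05, years.size)

trend_a = np.polyfit(years, analysis_a, 1)[0]
trend_b = np.polyfit(years, analysis_b, 1)[0]
print(f"A: {trend_a:.3f} K/yr, B: {trend_b:.3f} K/yr")  # both near the bias, not the truth
```

Both analyses recover a trend near 0.02 K/yr, i.e. the shared bias, so their mutual agreement says nothing about whether either one is right.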
But the main reason I have remained a skeptic is that the atmosphere, unlike the land surface, has shown no warming during the crucial period (1978-1997), either over land or over ocean, according to satellites and independent data from weather balloons. And did you know that climate models run on high-speed computers all insist that the atmosphere must warm faster than the surface — and so does atmospheric theory?
BEST has no data from the oceans, which cover 71% of the planet’s surface. True, oceans are not subject to urban heat islands, but they have problems with instrumentation. It is very likely that the reported warming during 1978-97 is simply an artifact — the result of the measurement scheme rather than an actual warming. Anyway, supporting data don’t show any ocean warming, either.
And finally, we have non-thermometer temperature data from so-called proxies: tree rings, ice cores, lake and ocean sediments, stalagmites. Most of these haven’t shown any warming since 1940!
Contrary to some commentary, BEST in no way confirms the scientifically discredited hockey stick graph, which was based on multi-proxy analysis and had been so eagerly adopted by climate alarmists. In fact, the hockey stick authors never published their post-1978 temperatures in their 1998 paper in Nature — or since. Their proxy record suddenly just stops in 1978 — and is then replaced by a thermometer record that shows rapid warming. The reason for hiding the post-1978 proxy data: it’s likely that they show no warming. Why don’t we try to find out?
None of the warmistas can explain why the climate hasn’t warmed in the 21st century, while CO2 has been increasing rapidly. It’s no wonder that Herman Cain, a former math and computer science major in college, says that “man-made global warming is poppycock” (NYT, Nov. 12). He blames climate fears on “scientists who tried to concoct the science” and “were busted because they tried to manipulate the data.”
Mr. Cain is not far from the truth — at least when one listens to Rich Muller. Muller’s careful to make no claim whatsoever that the warming he finds is due to human causes. He tells us that one third of the stations show cooling, not warming. Muller admits that “the uncertainty [involved in these stations] is large compared to the analyses of global warming.” He nevertheless insists that if he uses a large enough set of bad numbers, he could get a good average. I am not so sure.
Muller thinks that he has eliminated the effects of local heating, like urban heat islands. But this is a difficult undertaking, and many doubt that the BEST study has been successful in this respect. Some of Muller’s severest critics are fellow physicists: Lubos Motl in the Czech Republic and Don Rapp in California. Somewhat harshly, perhaps, Rapp would change the study designation from BEST to “WORST” (World Overview of Representative Station Temperatures).
I am one of those doubters. While many view the apparent agreement of BEST with previous analyses as confirmation, I wonder about the logic. It might be a good idea if BEST would carry out some prudent internal checks:
** Plot the number of stations used between 1970 and 2000 and make sure that there have been no significant changes in what I call the “demographics”: station latitudes, altitudes, or anything that could induce an artificial warming trend.
** I would pay particular attention to the fraction of temperature records from airport stations — generally considered among the best-maintained, but subject to large increases in local warming.
** I would also decompose the global record of BEST into regions to see if the results hold up.
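The checks above could be sketched along the following lines (the table and its column names are invented for illustration; this is not BEST's actual data schema):

```python
import pandas as pd

# Hypothetical station metadata; columns are assumptions, not BEST's schema
stations = pd.DataFrame({
    "year":    [1970, 1970, 1985, 1985, 2000, 2000],
    "lat":     [40.1, 51.5, 40.1, 35.0, 35.0, 33.9],
    "alt_m":   [200, 35, 200, 10, 10, 50],
    "airport": [False, False, False, True, True, True],
})

# Check 1: station "demographics" per year (mean latitude/altitude should be stable)
print(stations.groupby("year")[["lat", "alt_m"]].mean())

# Check 2: fraction of airport stations per year (a rising fraction could bias trends)
print(stations.groupby("year")["airport"].mean())
```

In this toy table the airport fraction climbs from 0 to 1 over three decades, exactly the kind of drift in station “demographics” the checks are meant to catch.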
Of course, the most important checks must come from records that are independent of weather station thermometers: atmospheric temperatures, ocean temperatures, and temperatures from non-thermometer proxy data. But even then, it may be difficult to pinpoint the exact causes of climate change.
I conclude, therefore, that the balance of evidence favors little if any global warming during 1978-1997. It contradicts the main conclusion of the IPCC — i.e., that recent warming is “very likely” (90-99% certain) caused by anthropogenic greenhouse gases like CO2.
And finally, what to do if CO2 is the main cause, and if a modest warming has bad consequences — as so many blindly assume? I am afraid that the BEST project and Muller are of no help.
On the one hand, Muller is dismissive of policies to control CO2 emissions in the U.S. — much less in his State of California. In an Oct. 31 interview with the Capital Report of New Mexico, he stated:
… the public needs to know this, that anything we do in the United States will not affect global warming by a significant amount. Because, all projections show that most of the future carbon dioxide is going to be coming from China, India, and the developing world. … [A]nything we do that will not be followed by China and India is basically wasted.
On the other hand, Muller told MSNBC’s Morning Joe (Nov.14):
[W]e’re getting very steep warming … we are dumping enough carbon dioxide into the atmosphere that we’re working in a dangerous realm, where I think, we may really have trouble in the next coming decades.
So take your choice. But remember — there is no evidence at all for significant future warming. BEST is a valuable effort, but it does not settle the climate debate.
Why do people trust the satellite record so much?
There has been more bodging done on the data from various satellites than on most surface station records. And the bodging is not small:
http://tinyurl.com/7xmd6e2
Surprisingly the changes are usually to make GHG effects smaller!
Positive slopes turn negative and vice versa. For example, data derived for 4.4 km changed from a positive slope of 3.27e-5 K/day (= 0.119 K/decade) to a slope of -1.95e-5 K/day (= -0.0712 K/decade), and this is over the overlap time when both satellites were operating!!!! If the algorithm had to be changed for the latter satellite, then why not correct the former data stream?
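The decadal figures quoted above can be checked with a quick unit conversion from K/day:

```python
# Unit check for the slopes quoted above: K/day -> K/decade (x 365.25 days x 10 years)
days_per_decade = 365.25 * 10

print(3.27e-5 * days_per_decade)    # ~0.119 K/decade
print(-1.95e-5 * days_per_decade)   # ~-0.0712 K/decade
```

Both conversions reproduce the quoted decadal slopes, so the two stated forms of each number are at least internally consistent.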
The temperatures are derived from MODELS – you know those things that most here despise! Yet people throw these around as if handed down on tablets of pure platinum.
>> John B says:
November 17, 2011 at 3:40 pm
It amazes me the lengths some people will go to to avoid truths they don’t like. UHI effect is real, but it does not skew the global temperature records. Deal with it! <<
So you resort to truth by fiat? How typical of a religious believer.
1. UHI changes with time.
2. These changes have not been measured.
3. There are very few totally non-urban weather stations. Even the top-rated stations can be affected by UHI as well as local microclimate changes.
4. The years between 1950 and 1980 were marked by at least three major additions that add significant heat to a local microclimate: air conditioning, air travel, and the use of black asphalt for paving. During these years there were no satellite measurements.
>> Leif Svalgaard says:
November 17, 2011 at 5:23 pm
Whatever the differences, the trend over the critical period 1978-2011 is up. <<
I asked this about the original article and I'll ask you; why is anyone talking about a 'critical period'? Is this just because we have satellite measurements during this period?
Leif/KR
Re “ending the data in a strong La Nina rather deceptive”
According to The NOAA Climate Prediction Center:
http://www.cpc.ncep.noaa.gov/products/analysis_monitoring/ensostuff/ensoyears.shtml
the La Nina did not end in 1997, but in MAM 1996. There were 13 months of ENSO “neutral” conditions before the official start of the 1998 El Nino at period AMJ
Besides, the La Nina was not particularly strong, lasting 7 periods and reaching a maximum of -0.7 for two of those periods. Compare this to the La Nina following the 1998 El Nino, which lasted 24 periods and reached a maximum of -1.6 for two periods, or the 1975 La Nina, which lasted 36 periods and reached a maximum of -1.7 for two periods.
Also, if you take the trend (UAH) from Jan 1979 to April 1996 you get 0.36 deg C/century. You get the same trend if you take the period Jan 1979 to March 1997, the last neutral (0) period before the 1998 El Nino. The same result is achieved for RSS and GISS (0.72 and 0.96).
So I don’t believe that Singer is being deceptive. However, the three temperature records do show warming, albeit varying by as much as a factor of almost 3!! What does that say!! Where does Singer get no warming, unless he only takes UAH and thinks 0.36 deg C/century is indistinguishable from noise?
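The endpoint sensitivity under discussion is easy to probe on synthetic data (the imposed trend and noise level here are illustrative inventions, not actual UAH values):

```python
import numpy as np

def trend_per_century(dates, temps):
    """OLS slope converted to K per century; dates are fractional years."""
    return np.polyfit(dates, temps, 1)[0] * 100.0

# Synthetic monthly anomalies, 1979-1997, with a small imposed trend (illustrative only)
months = np.arange(1979.0, 1997.25, 1.0 / 12.0)
rng = np.random.default_rng(1)
temps = 0.0036 * (months - 1979.0) + rng.normal(0, 0.05, months.size)  # 0.36 K/century

# Sensitivity of the fitted trend to the chosen end date
for end in (1996.25, 1997.25):          # roughly Apr 1996 vs Mar 1997
    m = months < end
    print(f"to {end}: {trend_per_century(months[m], temps[m]):.2f} K/century")
```

On this well-behaved series, moving the endpoint by a year barely changes the fitted slope, which is consistent with the commenter's observation that the UAH trend is the same to April 1996 as to March 1997.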
Fred Singer knows from being there that there was a flat trend from 1978 till the 1997 beginning of the 1998 El Nino, and a step up to the flat trend since 1998-1999 in the satellite records; watching something on a daily basis teaches a person more than reviewing the data ever will.
The real step change in the temperature data and the slow increase in the CO2 level have no correlation, and although correlation is not causation, there can be no causation without some amount of correlation!
So it seems natural he should point out the flat response in the critical period 1978-1997, as well as the step increase during the 1998 El Nino and the ensuing flat trend after. This is cherry picking? About as much as only paying the fare for the cab while you are in it, instead of all day.
Philip
The two stations are within 200 m of each other so they would have the same latitude. The manual w/station was moved to the airport some years ago from the Post Office but well before the AWS was set up. I am comparing the temps only from the time the AWS started.
I have visited the site and can assure you what I have written is true.
Some points well worth mentioning in regards to Dr. Singer’s cherry-picked 1978-1997 trend line: there are multiple variations including ENSO, volcanic aerosols, and the solar cycle as well as anthropogenic greenhouse gases. There’s no reason to expect monotonic increases in temperature over the short term.
See http://www.aip.org/history/climate/images/Model-4_effects.jpg for a straightforward addition of these effects, with some 10-20 year speedups and slowdowns of temperature rise, from Lean and Rind 2009 (http://www.unity.edu/facultypages/womersley/2009_Lean_Rind-5.pdf).
Cherry-picking short term trends is nothing new: see http://tinyurl.com/6tkxogy
Tom_R says:
November 17, 2011 at 7:33 pm
I asked this about the original article and I’ll ask you; why is anyone talking about a ‘critical period’?
Because Singer himself does that. Actually [and that could be my fault] he does not use the word ‘critical’ but the stronger one, ‘crucial’: “the crucial period (1978-1997)”
Is this just because we have satellite measurements during this period?
We have such measurements from 1978 until today.
Steve H says:
November 17, 2011 at 7:39 pm
So I don’t believe that Singer is being deceptive.
He is an old hand at this and is very careful with what he says. Ending just before the great warming in 1998 is not quite kosher [and he knows that] so why do it? In my book, that hurts his credibility. I gather from people’s willingness to defend him that those people think it is OK to play such tricks [see below]. I do not [and get dumped on for it].
Richard Holle says:
November 17, 2011 at 7:56 pm
So it seems natural he should point out the flat response in the critical period
Richard Holle says:
November 17, 2011 at 7:56 pm
The difference between the real step change in the temperature data, and the slow increase in the CO2 level has no correlation, although just correlation is not causation, there can no causation with out some amount of correlation!
There is no step function either in the various other things people think cause climate change, solar activity, planetary tides, Jupiter shine, Galactic clouds, etc. We have a tendency to see lines, steps, and such where there are none. Take away the 1998 el Nino [and the 1982 one] and there is much less of an apparent step: http://www.leif.org/research/Temps-since-1975.png
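The point about seeing steps where there are none can be illustrated with a toy series: a smooth trend plus isolated El Nino spikes. Remove the spike years, as in the linked plot, and a single straight line fits the remainder to within the noise, with no step required (all numbers here are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1975, 2011)
temps = 0.015 * (years - 1975) + rng.normal(0, 0.08, years.size)  # smooth trend + noise
temps[years == 1998] += 0.4   # superposed El Nino spike (illustrative magnitude)
temps[years == 1983] += 0.2   # a second, smaller spike

# Remove the spike years and fit one straight line to what remains
mask = ~np.isin(years, [1983, 1998])
fit = np.polyval(np.polyfit(years[mask], temps[mask], 1), years[mask])
resid = temps[mask] - fit
print(f"residual std after removing spikes: {resid.std():.3f}")  # close to the noise level
```

The residuals of the single-line fit are about the size of the injected noise, i.e. once the spikes are excluded nothing in this series demands a step.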
Leif Svalgaard says: November 17, 2011 at 10:36 am quoting Steven Mosher –
“It is very likely that the reported warming during 1978-97 is simply an artifact — the result of the measurement scheme rather than an actual warming. You need to explain to people whether you agree with this nonsense or not.”
(Leif) I agree that this is nonsense, destroying whatever credibility Singer had.
Can we not be less absolute? If you look at the shape of global or NH temperature graphs in the dip of 1945-70 or so, there is obvious flattening in newer versions as many others have observed. So, relatively rather than absolutely, I would say that some nonsense work has been done to that time period. If then, why not also to the 1978-97 period that is in your derisory comment?
I’m happy to concur that the 1978-97 period is error prone, but until I see the errors explained, I would not shoot the messenger. Some severe instrumental changes happened in some countries in this term. See, for example, http://www.bom.gov.au/amm/docs/2004/trewin.pdf
Let’s not get acrimonious about our frustrations with noisy data.
A linear trend is the best that can be established given the data – all of the data, mind you, not just a cherry-picked shorter term that isn’t robust to +/- 10% of the data.
Some people invoke a step function, or even multiple steps. I’ll note, however, that a single step function, with two changes of slope and three different trends, would require much more statistical justification (i.e., data) than a linear trend over that data with one slope. And as Santer points out, you need at least 17 years just to establish a linear trend. Anyone who thinks a linear trend isn’t significant should just forget about higher order fits, including steps…
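One way to make the “step versus linear” comparison concrete is an information-criterion test on synthetic trend-plus-noise data; even with the break year assumed known, the step model must overcome a penalty for its extra parameter. This is only a sketch, not a substitute for the rigorous significance tests Santer describes:

```python
import numpy as np

def bic(y, yhat, k):
    """Bayesian information criterion for a Gaussian fit with k parameters."""
    n = y.size
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + k * np.log(n)

rng = np.random.default_rng(3)
t = np.arange(1979, 2012)
y = 0.015 * (t - 1979) + rng.normal(0, 0.1, t.size)   # truth: a single linear trend

# Model 1: one linear trend (2 parameters: slope, intercept)
lin = np.polyval(np.polyfit(t, y, 1), t)

# Model 2: flat-step-flat with a break at 1998 (3 parameters: two levels + break taken as known)
step = np.where(t < 1998, y[t < 1998].mean(), y[t >= 1998].mean())

print("BIC linear:", bic(y, lin, 2), " BIC step:", bic(y, step, 3))
```

On data actually generated by a smooth trend, the linear model wins (lower BIC): the step model pays both a parameter penalty and a misfit penalty, which is the commenter's point about steps needing more statistical justification.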
Geoff Sherrington says:
November 17, 2011 at 9:59 pm
Let’s not get acrimonious about our frustrations with noisy data.
The nonsense is not about the data, but in the statement that the cherry-picked period 1978-1997 period is ‘crucial’ in the debate.
steven mosher says:
November 17, 2011 at 9:59 am
Will any of you skeptical thinkers will lift a brain cell to critically examine Singer.
Seems there is a shortage of such…
I am curious about the adjustment algorithms that BEST used relative to previous data adjustments. From what I see, only the average temperatures have been released to date. Correct? The real issues are the differences between maximum and minimum temperature trends. Maximum temperatures are mostly a function of short-wave solar radiation that is modulated by clouds. The convection created by surface warming vertically mixes the air, and thus gives a better representation of the overall heat content. Typically the convection at midday eliminates differences between urban heat islands and rural temps. Accordingly, previous adjustments to maximum temperatures are typically minimal.
In contrast, minimum temperatures occur at times when there is a thermal inversion, with colder air layers nearest the surface. However, changes in major winds (a la ocean oscillations), or changes in surface boundaries that either disturb the vertical stratification or change the surface heat storage and thus convection, can all dramatically redistribute the air layers and create a diverse array of minimum temperatures. For example, citrus growers use fans to mix the air to prevent frosts forming at the surface boundary. Because of this stratification and its disruption by a wide variety of local conditions, minimum temperatures should be expected to vary much more between neighboring weather stations. Accordingly, change points will likely be detected more often in minimum temperature data sets.
Earlier adjustments compared neighboring stations looking for changes in trends and then homogenized the data according to what they perceived was a non-climatic artifact. These earliest analyses detected 1 “discontinuity” per ~20 years. The new BEST analysis calls its change-point detection method the “scalpel”; they have detected even more discontinuities, approximately 1 per 14 years if I remember correctly, and then adjusted those trends.
If you compare raw temps to adjusted temps for both maximum and minimum temperatures, the minimum temperatures are, as expected, the most grossly adjusted temperature sets, and because their algorithms look for changing trends, they often make drastic adjustments, creating very odd and steep warming trends that are often totally contrary to the raw data. Yet compared to the same station’s maximum temperatures, no similar exaggerated adjustments are made. I fear that with BEST’s “scalpel” methodology, which hacks up the data into even smaller segments due to more perceived trend changes, they will likely adjust the data into a more uniform trend that suffers from the same systematic errors that created the weird adjustments earlier. For example, the bimodal high peaks expected due to the PDO are often obliterated in the minimum adjustments in many of the USHCN California sites.
I was looking at temperature data for Amherst MA because that area has experienced a southward migration of northern moose, in total contradiction of warming theory. Likewise, a bimodal minimum is obliterated and transfigured into a steep linear trend. Go to USHCN and compare max and minimum raw and adjusted data; from a quick perusal I suspect you will see this systematic adjustment in half the stations. The question is why maximums are treated so differently from minimums. I would bet the BEST data will express the same asymmetry in their adjustments.
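For readers unfamiliar with change-point (“scalpel”-style) methods, here is a minimal least-squares break detector run on synthetic data. BEST’s actual scalpel is more sophisticated, so this is only a sketch of the underlying idea:

```python
import numpy as np

def detect_break(y):
    """Return the index that best splits y into two constant segments
    (a minimal sketch of change-point detection; not BEST's actual scalpel)."""
    n = y.size
    best_i, best_rss = None, np.inf
    for i in range(2, n - 2):
        rss = (np.sum((y[:i] - y[:i].mean()) ** 2) +
               np.sum((y[i:] - y[i:].mean()) ** 2))
        if rss < best_rss:
            best_i, best_rss = i, rss
    return best_i

rng = np.random.default_rng(4)
series = np.concatenate([rng.normal(10.0, 0.3, 30),    # e.g. minimum temps before a site move
                         rng.normal(10.8, 0.3, 30)])   # shifted segment after the move
print(detect_break(series))  # expected near index 30, where the shift was injected
```

The detector finds the injected discontinuity; the debate above is about what happens when such a tool is applied aggressively to noisy minimum-temperature records where apparent breaks may be real local climate behavior.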
In 1997, when the debates and warnings about AGW were already rife, Singer would have had a point.
I understand Singer is referring to the missing fingerprint of AGW by CO2 in the mid-troposphere since 1978. Lindzen has expressed very similar observations about the lack of the fingerprint since 1979 as have others.
We have the following from Lindzen’s Jan 17 2011 WUWT Post entitled “Richard Lindzen: A Case Against Precipitous Climate Action”
In that context Singer’s note about the lacking of atmospheric temp increase since 1978 seems reasonable and consistent with Lindzen discussions and other’s discussions.
John
But the main reason I have remained a skeptic is that the atmosphere, unlike the land surface, has shown no warming during the crucial period (1978-1997), either over land or over ocean, according to satellites and independent data from weather balloons.
Singer is factually correct in this statement, as the UAH temperature anomaly was around zero for most of this period.
Whether or not a trend over x months or y years is a more accurate measure doesn’t invalidate Singer’s statement.
Leif takes issue with Singer’s use of ‘crucial’. This is a letter to the editor and some rhetorical flourishes are allowed IMO.
I can’t see any issues with the rest of what he says.
Ah, homogenization. What would we do without ye?
Scalpels are SO helpful!
Leif Svalgaard states (Nov 17, 7:56) that there are no step functions in temperature data or various other things, etc.
Leif, please explain the scientific evidence for your categorical statement. I disagree with you, as the data and other empirical evidence clearly suggest otherwise. But I am prepared to learn.
thanks …. jens
Latitude says:
November 17, 2011 at 9:29 am
Does anyone else wonder how BEST was able to do something so involved and complicated….
….in such a short period of time /snark
Latitude, you say /snark but you are closer to the real answer than you think. The main thing BEST did was to simply take the adjustments performed in a step called homogenizing and rename it splicing, streamlining it for fast turnaround. Both of those processes basically remove the UHI signature that becomes apparent when stations are relocated to better, cooler sites. That step downward is the UHI signature itself, and in both cases it is termed a discontinuity error and ‘spliced’ out, leaving another small increase in the trend every time. Voila! Global warming.
Why is the heat generated in the earth’s core, which is transmitted mainly to the ocean, not taken into account when trying to model the climate? And as the majority of this heat is from radioactive decay, won’t this contribute to a global cooling?
jens raunsø jensen says:
November 18, 2011 at 2:48 am
Leif Svalgaard states Nov17 7:56 that there are no step function in temperature data or various other things etc.
Leif, please explain the scientific evidence for your categorical statement. I disagree with you, as the data and other empirical evidence clearly suggest otherwise. But I am prepared to learn.
So am I. I showed that the step in temperature is not clear at all: http://www.leif.org/research/Temps-since-1975.png As someone pointed out if you deny a linear increase on account of noise then a step as a second order effect is even less likely. As for geomagnetic activity, see for yourself: http://www.leif.org/research/Ap-1944-2008.png As for sunspots http://www.leif.org/research/SSN-vs-CaK3.png and so on.
>> Leif Svalgaard says:
November 17, 2011 at 11:18 pm
steven mosher says:
November 17, 2011 at 9:59 am
Will any of you skeptical thinkers will lift a brain cell to critically examine Singer.
Seems there is a shortage of such… <<
Seems to me there are several skeptics questioning the apparently cherry-picked time period.
>> Leif Svalgaard says:
November 17, 2011 at 9:23 pm
There is no step function either in the various other things people think cause climate change, solar activity, planetary tides, Jupiter shine, Galactic clouds, etc. We have a tendency to see lines, steps, and such where there are none. Take away the 1998 el Nino [and the 1982 one] and there is much less of an apparent step: http://www.leif.org/research/Temps-since-1975.png <<
If you also take away the subsequent La Nina it looks very much like a step change. However, I can't see a logical reason for a step change in 'global temperature' unless there was a change in measuring devices or methods around that time, and I'm unaware of any such change.
Steve Garcia says:
November 17, 2011 at 9:38 am
…both results rely on surface thermometers, they are not really independent and could be subject to similar fundamental errors. For example, both datasets could be affected by urban heat islands or other non-global effects — like local heating of airports, where traffic has been growing steadily.
Well, one thing that I had not thought of, and that may tie in well with the temps as seen in the upslope in the 1990s and the flattening in the 2000s, is that the airline industry was approaching (and may have even passed) full capacity in the 1990s. Overbookings were extremely common. Airports were planning (and building) extra runways, so more takeoffs and landings could be accommodated. (Put together with the Great Dying Off of the thermometers, this is something worth looking into.)
And then 9/11 hit and the industry fell off massively, and only came back up slowly. Have they gotten back to where they were in the late 1990s? I don’t know, but I don’t think so. While some of my flights have been full, I also know that some routes simply don’t exist (or have far fewer flights weekly), not as they did in the 1990s and early 2000s.
As with anything in climate, I don’t bring this up as a stand-alone cause of anything, but I think it is a likely factor in the lack of warming now, versus all that warming in the 1990s. Let us not forget how booming the world economy was in the 1990s. It hasn’t been like that since. And one place it showed was in airline flights.
Coincidence? Maybe. But maybe not, too.
The number of aircraft in flight worldwide has increased. You may find that the direct regional jet flights from small airports in the US have gone, but there are still about 5000 aircraft airborne over the USA at any one time during the day. Also, in the Middle East and Far East, air traffic is growing extremely fast. This is why several Middle Eastern airlines have placed orders for 300+ aircraft _each_ with both Airbus and Boeing at the recent Dubai air show.
However, as the actual fuel burn per passenger mile has come down considerably (a 737-800 does about 120 miles per gallon per passenger, and the 787 is 20% better than that), the emissions from aircraft have reduced significantly. Aviation has actually started to reduce emissions despite traffic growth, mainly to cope with the cost of fuel.
But all this does not alter the fact that there is really no quantification on the ‘forcing’ effect of aircraft emissions and contrails. There is a distinct possibility that the albedo increase (cooling) from persistent contrails is far more significant than any forcing due to water vapor from the engines. Remember that contrails (like clouds) only appear in air that is already saturated or super-saturated with water vapor.