Global annualized temperature – "full of [snip] up to their eyebrows"

Guest Post by Dr. Robert Brown,

Physics Dept. Duke University [elevated from comments]

Dr. Brown mentions “global temperature” several times. I’d like to know what he thinks of this.

Dr. Brown thinks that this is a very nice piece of work, and it is precisely the reason that he said that anybody who claims to know the annualized average temperature of the Earth, or the Ocean, to 0.05 K is, as the saying goes, full of [snip] up to their eyebrows.

What I think one can define is an average “Global Temperature” — noting well the quotes — by following some fixed and consistent rule that goes from a set of data to a result. For example, the scheme that is used to go from satellite data to the UAH lower troposphere temperature. This scheme almost certainly does not return “the average Global Temperature of the Earth” in degrees absolute as something that reliably represents the coarse-grain averaged temperature of (say) the lowest 5 kilometers of the air column, especially not the air column as its height varies over an irregular terrain that is itself sometimes higher than 5 kilometers. It does, however, return something that is likely to be close to what this average would be if one could sample and compute it, and one at least hopes that the two would co-vary monotonically most of the time.

The accuracy of the measure is very likely not even 1 K (IMO, others may disagree), where accuracy is |T_{LT} - T_{TGT}| — the absolute difference between the lower troposphere temperature and the “true global temperature” of the lower troposphere. The various satellites that contribute to the temperature estimate have (IIRC) a variance on this order, so the data itself is probably not more accurate than that. The “precision” of the data is distinct — that’s a measure of how much variance there is in the data sources themselves, and it is a quantity that can be systematically improved by more data. Accuracy, on the other hand — especially in a situation like this, where one is indirectly inferring a quantity that is not exactly the same as what is being measured — cannot be improved by more, or more precise, measurements; it can only be improved by figuring out the map between the data one is using and the actual quantity one is making claims about.
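
To make the accuracy/precision distinction concrete, here is a toy simulation (numbers entirely invented, nothing to do with any actual satellite retrieval): piling on measurements shrinks the statistical error bar, but it does nothing to a systematic offset between what is measured and the quantity one actually wants.

```python
import numpy as np

# Toy illustration only: made-up numbers, not any real satellite retrieval.
rng = np.random.default_rng(0)

true_temperature = 288.0   # the (hypothetical) quantity we actually want, in K
systematic_offset = 1.2    # bias between what is sensed and that quantity, in K
noise_sd = 0.5             # random measurement noise, in K

for n in (10, 1_000, 100_000):
    readings = true_temperature + systematic_offset + rng.normal(0.0, noise_sd, n)
    estimate = readings.mean()
    precision = readings.std(ddof=1) / np.sqrt(n)       # shrinks like 1/sqrt(n)
    accuracy_error = abs(estimate - true_temperature)   # stays near the 1.2 K bias
    print(f"n={n:>6}  estimate={estimate:7.3f} K  "
          f"statistical uncertainty={precision:.4f} K  error vs truth={accuracy_error:.3f} K")
```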

Things are not better for (land) surface measurements — they are worse. There the actual data is (again, in my opinion) hopelessly corrupted by confounding phenomena and the measurement errors are profound. Worse, the measurement errors tend to have a variable monotonic bias compared to the mythical “true average surface Global Temperature” one wishes to measure.

One is in trouble from the very beginning. Consider the Moon: it has no atmosphere, so its “global average temperature” can at least be contemplated without worrying about an atmosphere at all. Even so, when one wishes to speak of the surface temperature at a given point, what does one use as a definition? Is it the temperature an actual high precision thermometer would read (say) 1 cm below the surface at that point? 5 mm? 1 mm? 1 meter? All of these would almost certainly yield different results — results that depend on things like the albedo and emissivity of the point on the surface, the heat capacity and thermal conductivity of the surface matter, and the latitude. Is it the “blackbody” temperature of the surface (the inferred temperature of the surface determined by measuring the outgoing full spectrum of radiated light)?

Even inferring the temperature from the latter — probably the one that is most relevant to an airless open system’s average state — is not trivial, because the surface albedo varies, the emissivity varies, and the outgoing radiation from any given point just isn’t a perfect blackbody curve as a result.
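
A quick illustration of how much this matters (made-up numbers; real spectra are far messier): if one measures the outgoing flux F = eps*sigma*T^4 but infers the temperature assuming a perfect blackbody (eps = 1), the answer is low by a factor of eps^{1/4}.

```python
# Illustrative only: how an emissivity assumption propagates into an inferred temperature.
sigma = 5.670e-8     # Stefan-Boltzmann constant, W/m^2/K^4
T_true = 288.0       # K, a made-up "true" surface temperature

for eps in (1.00, 0.98, 0.95, 0.90):
    flux = eps * sigma * T_true**4            # what the radiometer actually sees
    T_inferred = (flux / sigma) ** 0.25       # inferred assuming a perfect blackbody
    print(f"emissivity {eps:.2f}: inferred {T_inferred:.1f} K (error {T_inferred - T_true:+.1f} K)")
```

A few percent of unknown emissivity is already several kelvin of inferred temperature, before the albedo and spectral structure are even considered.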

How much more difficult is it to measure the Earth’s comparable “surface temperature” at a single point on the surface? For one thing, we don’t do anything of the sort. We don’t place our thermometers 1 meter, 1 cm, 1 mm deep in — what, the soil? The grass or trees? What exactly is the “surface” of a planet largely covered with living plants? We place them in the air some distance above the surface. That distance varies. The surface itself is being heated directly by the sun part of the time, and is radiatively cooling directly to space (in at least some frequencies) all of the time. Its temperature varies by degrees K on a time scale of minutes to hours as clouds pass between the location and the sun, as the sun sets, as it starts to rain. It doesn’t just heat or cool from radiation — it is in tight thermal contact with a complex atmosphere that has a far greater influence on the local temperature than even local variations in insolation.

Yesterday it was unseasonably warm in NC, not because the GHE caused the local temperature to be higher by trapping additional heat but because the air that was flowing over the state came from the warm wet waters of the ocean to the south, so we had a relatively warm rain followed by a nighttime temperature that stayed warm (low overnight of maybe 46F) because the sky was cloudy. Today it is almost perfectly seasonal — high 50’s with a few scattered clouds, winds out of the WSW still carrying warm moisture from the Gulf and warm air from the south central US, but as the day progresses the wind is going to shift to the NW and it will go down to a solid freeze (30F) tonight. Tomorrow it will be seasonal but wet, but by tomorrow night the cooler air that has moved in from the north will make it go down to 25F overnight. The variation in local temperature is determined far more by what is going on somewhere else than it is by actual insolation and radiation here.

If a real cold front comes down from Canada (as they frequently do this time of year) we could have daytime highs in the 30’s or low 40’s and nighttime lows down in the low 20s. OTOH, if the wind shifts to the right quarter, we could see a high in the low 80s and a low in the low 50s. We can, and do, have both extremes within a single week.

Clearly surface temperatures are being driven as strongly by the air and moisture flowing over or onto them as they are by the “ideal” picture of radiative energy warming the surface and radiation cooling it. The warming of the surface at any given point isn’t solely responsible for the warming or cooling of the air above it; the temperature of the surface is equally dependent on the temperature of the air as determined by the warming of the surface somewhere else, as determined by the direct warming and cooling of the air itself via radiation, as determined by phase changes of water vapor in the air and on the surface, as determined by factor of ten modulations of insolation as clouds float around over the surface and the lower atmosphere alike.

Know the true average surface Global Temperature to within 1K? I don’t even know how one would define a “true” average surface Global Temperature. It was difficult enough for the moon without an atmosphere, assuming one can agree on the particular temperature one is going to “average” and how one is going to perform the average. For the Earth with a complex, wet, atmosphere, there isn’t any possibility of agreeing on a temperature to average! One cannot even measure the air temperature in a way that is not sensitive to where the sun is and what it is doing relative to the measurement apparatus, and the air temperature can easily be in the 40s or 50s while there is snow covering the ground so that the actual surface temperature of the ground is presumably no higher than 32F — depending on the depth one is measuring.

And then oops — we forgot the Oceans, that cover 70% of the surface of the planet.

What do we count as the “temperature” of a piece of the ocean? There is the temperature of the air above the surface of the ocean. In general this temperature differs from the actual temperature of the water itself by order of 5-10K. The air temperature during the day is often warmer than the temperature of the water, in most places. The air temperature at night is often cooler than the temperature of the water.

Or is it? What exactly is “the temperature of the water”? Is it the temperature of the top 1 mm of the surface, where the temperature is dominated by chemical potential as water molecules are constantly being knocked off into the air, carrying away heat? Is it the temperature 1 cm deep? 10 cm? 1 m? 10 m? 50 m? 100m? 1 km?

Is it the average over a vertical column from the surface to the bottom (where the actual depth of the bottom varies by as much as 10 km)? This will bias the temperature way, way down for deep water and make the global average temperature of the ocean very nearly 4 C very nearly everywhere, dropping the estimate of the Earth’s average Global Temperature by well over 10K. Yet if we do anything else, we introduce a completely arbitrary bias into our average. Every value we might use as a depth to average over has consequences that cause large variations in the final value of the average. As anyone who swims knows, it is quite easy for the top meter or so of water to be warm enough to be comfortable while the water underneath that is cold enough to take your breath away.
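
For a back-of-the-envelope feel (a completely invented thermocline profile, a warm mixed layer over roughly 4 C deep water), the “average temperature of the ocean” swings by well over 15 C depending on nothing but the averaging depth one chooses:

```python
import numpy as np

# A completely invented, idealized temperature-vs-depth profile (degrees C):
# a warm mixed layer, a thermocline near 150 m, and ~4 C deep water.
def temp_at_depth(z_m):
    surface, deep, center, width = 25.0, 4.0, 150.0, 80.0
    return deep + (surface - deep) / (1.0 + np.exp((z_m - center) / width))

for depth in (1, 10, 100, 1000, 4000):              # candidate averaging depths, m
    z = np.linspace(0.0, depth, 10_000)
    print(f"average over top {depth:>4} m: {temp_at_depth(z).mean():5.1f} C")
```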

Even if one defines — arbitrarily, as arbitrary in its own way as the definition that one uses for T_{LT} or the temperature you are going to assign to a particular point on the surface on the basis of a “corrected” or “uncorrected” thermometer with location biases that can easily exceed several degrees K compared to equally arbitrary definitions for what the thermometer “should” be reading for the unbiased temperature and how that temperature is supposed to relate to a “true” temperature for the location — a sea surface temperature SST to go with land surface temperature LST and then tries to take the actual data for both and turn them into an average global temperature, one has a final problem to overcome. One’s data is (with the possible exception of modern satellite derived data) sparse! Very sparse.

In particular, it is sparse compared to the known and observed granularity of surface temperature variations, for both LST and SST. Furthermore, it has obvious sampling biases. We have lots and lots of measurements where people live. We have very few measurements (per square kilometer of surface area) where people do not live. Surface temperatures can easily vary by 1K over a kilometer in lateral distance (e.g. at terrain features where one goes up a few hundred meters over a kilometer of grade). They can and do routinely vary by 1 K over distances of order 5-10 kilometers.

I can look at the Weather Underground’s map of readings from weather stations scattered around Durham at a glance, for example. At the moment I’m typing this there is a 13 F variation from the coldest to the warmest station reading within a 15 km radius of where I’m sitting. Worse, nearly all of these weather station readings are between 50 and 55 F, but there are two outliers. One of them is 46.5 F (in a neighborhood in Chapel Hill), and the other is Durham itself, the “official” reading for Durham (probably downtown somewhere), which is 59.5 F!

Guess which one will end up being the temperature used to compute the average surface temperature for Durham today, and assigned to an entirely disproportionate area of the surface of the planet in a global average surface temperature reconstruction?

Incidentally, the temperature outside of my house at this particular moment is 52F. This is a digital electronic thermometer in the shade of the north side of the house, around a meter off of the ground. The air temperature on the other side of the house is almost certainly a few degrees warmer as the house sits on a southwest-facing hill with pavement and green grass absorbing the bright sunlight. The temperature back in the middle of the cypresses behind my house (dense shade all day long, but with decent airflow) would probably be no warmer than 50 F. The temperature a meter over the driveway itself (facing and angled square into the sun, and with the house itself reflecting additional heat and light like a little reflector oven) is probably close to 60 F. I’m guessing there is close to 10F variation between the air flowing over the southwest facing dark roof shingles and the northeast facing dark roof shingles, biased further by loss of heat from my (fairly well insulated) house.

I don’t even know how to compute an average surface temperature for the 1/2 acre plot of land my own house sits on, today, right now, from any single thermometer sampling any single location. It is 50F, 52 F, 58 F, 55F, 61 F, depending on just where my thermometer is located. My house is on a long hill (over a km long) that rises to an elevation perhaps 50-100 m higher than my house at the top — we’re in the piedmont in between Durham and Chapel Hill, where Chapel Hill really is up on a hill, or rather a series of hills that stretch past our house. I’d bet a nickel that it is a few degrees different at the top of the hill than it is where my house is today. Today it is windy, so the air is well mixed and the height is probably cooler. On a still night, the colder air tends to settle down in the hollows at the bottoms of hills, so last frost comes earlier up on hilltops or hillsides; Chapel Hill typically has spring a week or so before Durham does, in contradiction of the usual rule that higher locations are cooler.
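
For concreteness, here are the five readings just quoted run through the only statistics they will support (illustrative arithmetic, nothing more):

```python
readings_f = [50, 52, 58, 55, 61]                    # the five readings quoted above, deg F
mean_f = sum(readings_f) / len(readings_f)
print(f"mean = {mean_f:.1f} F, spread = {max(readings_f) - min(readings_f)} F")
# Any one thermometer sits up to ~6 F from the lot-wide mean — several times the
# entire century-scale warming signal being argued about.
```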

This is why I am enormously cynical about Argo, SSTs, GISS, and so on as reliable estimates of average Global Temperature. They invariably claim impossible accuracy and impossible precision. Mere common sense suffices to reject their claim otherwise. If they disagree, they can come to my house and try to determine what the “correct” average temperature is for my humble half acre, and how it can be inferred from a single thermometer located on the actual property, let alone from a thermometer located in some weather station out in Duke Forest five kilometers away.

That is why I think that we have precisely 33 years of reasonably reliable global temperature data, not in terms of accuracy (which is unknown and perhaps unknowable) but in terms of statistical precision and as the result of a reasonably uniform sampling of the actual globe. The UAH T_{LT} is what it is, is fairly precisely known, and is at least expected to be monotonically related to a “true average surface Global Temperature”. It is therefore good for determining actual trends in global temperature, not so good for making pronouncements about whether or not the temperature now is or is not the warmest that it has been in the Holocene.

Hopefully the issues above make it clear just how absurd any such assertion truly is. We don’t know the actual temperature of the globe now, with modern instrumentation and computational methodology, to an accuracy of 1 K in any way that can be compared apples-to-apples to any temperature reconstruction, instrument based or proxy based, from fifty, one hundred, one thousand, or ten thousand years ago. 1 K is the close order of all of the global warming supposedly observed since the invention of the thermometer itself (and hence the start of the direct instrumental record). We cannot compare even “anomalies” across such records — they simply don’t compare because of confounding variables, as the “Hide the Decline” and “Bristlecone Pine” problems clearly reveal in the hockey stick controversy. One cannot remove the effects of these confounding variables in any defensible way because one does not know what they are because things (e.g. annual rainfall and the details of local temperature and many other things) are not the same today as they were 100 years ago, and we lack the actual data needed to correct the proxies.

A year with a late frost, for example, can stunt the growth of a tree for a whole year by simply damaging its new leaves or can enhance it by killing off its fruit (leaving more energy for growth that otherwise would have gone into reproduction) completely independent of the actual average temperature for the year.

To conclude, one of many, many problems with modern climate research is that the researchers seem to take their thermal reconstructions far too seriously and assign completely absurd measures of accuracy and precision, with a very few exceptions. In my opinion it is categorically impossible to “correct” for things like the UHI effect — it presupposes a knowledge of the uncorrected temperature that one simply cannot have or reliably infer from the data. The problem becomes greater and greater the further back in time one proceeds, with big jumps (in uncertainty) 250, 200, 100 and 40 odd years ago. The proxy-derived record from more than 250 years ago is uncertain in the extreme, with the thermal record of well over 70% of the Earth’s surface completely inaccessible and with an enormously sparse sampling of highly noisy and confounded proxies elsewhere. To claim accuracy greater than 2-3 K is almost certainly sheer piffle, given that we probably don’t know current “true” global average temperatures within 1 K, and 5K is more likely.

I’m certain that some paleoclimatologists would disagree with such a pessimistic range. Surely, they might say, if we sample Greenland or Antarctic ice cores we can obtain an accurate proxy of temperatures there 1000 or 2000 years ago. Why aren’t those comparable to the present?

The answer is that we cannot be certain that the Earth’s primary climate drivers distributed its heat the same way then as now. We can clearly see how important e.g. the decadal oscillations are in moving heat around and causing variations in global average temperature. ENSO causes spikes and seems responsible for discrete jumps in global average temperature over the recent (decently thermometric) past that are almost certainly jumps from one Poincaré attractor to another in a complex turbulence model. We don’t even know if there was an ENSO 1000 years ago, or, if there was, whether it was at the same location and had precisely the same dependences on e.g. solar state. As a lovely paper Anthony posted this morning clearly shows, major oceanic currents jump around on millennial timescales that appear connected to millennial scale solar variability and almost certainly modulate the major oscillations themselves in nontrivial ways. It is quite possible for temperatures in the Antarctic to anticorrelate with temperatures in the tropics for hundreds of years and then switch so that they correlate again. When an ocean current is diverted, it can change the way ocean average temperatures (however one might compute them, see above) vary over macroscopic fractions of the Earth’s surface all at once.

To some extent one can control for this by looking at lots of places, but “lots” is in practice highly restricted. Most places simply don’t have a good proxy at all, and the ones that do aren’t always easy to reconstruct accurately over very long time scales, or they lose all sorts of information at shorter time scales in order to obtain the longer time scale averages. I think 2-3 K is a generous statement of the probable real error in most reconstructions for global average temperature over 1000 years ago, again presuming one can define an apples-to-apples global average temperature to compare to, which I doubt. Nor can one reliably compare anomalies over such time scales, because of the confounding variables and drift.

This is a hard problem, and calling it settled science is obviously a political statement, not a scientific one. A good scientist would, I truly believe, call this unsettled science, science that is understood far less than physics, chemistry, even biology. It is a place for utter honesty, not egregious claims of impossibly accurate knowledge. In my own utterly personal opinion, informed as well or as badly as chance and a fair bit of effort on my part have thus far informed it, we have 33 years of a reasonably precise and reliable statement of global average temperature, one which is probably not the true average temperature assuming any such thing could be defined in the first place but which is as good as any for the purposes of identifying global warming or cooling trends and mechanisms.

Prior to this we have a jump in uncertainty (in precision, not accuracy) compared to the ground-based thermometric record that is strictly apples-to-oranges compared to the satellite derived averages, with error bars that rapidly grow the further back one goes in the thermometric record. We then have a huge jump in uncertainty (in both precision and accuracy) as we necessarily mount the multiproxy train to still earlier times, where the comparison has unfortunately been between modern era apples, thermometric era oranges, and carefully picked cherries. Our knowledge of global average temperatures becomes largely anecdotal, with uncertainties that are far larger than the observed variation in the instrumental era and larger still than the reliable instrumental era (33 year baseline).

Personally, I think that this is an interesting problem and one well worth studying. It is important to humans in lots of ways; we have only benefitted from our studies of the weather and our ability to predict it is enormously valuable as of today in cash money and avoided loss of life and property. It is, however, high time to admit the uncertainties and get the damn politics out of the science. Global climate is not a “cause”! It is the object of scientific study. For the conclusions of that science to be worth anything at all, they have to be brutally honest — honest in a way that is utterly stripped of bias and that acknowledges to a fault our own ignorance and the difficulty of the problem. Pretending that we know and can measure global average temperatures from a sparse and short instrumental record where it would be daunting to assign an accurate, local average temperature to any given piece of ground based on a dense sampling of temperatures from different locations and environments on that piece of ground does nothing to actually help out the science — any time one claims impossible accuracy for a set of experimentally derived data one is openly inviting false conclusions to be drawn from the analysis. Pretending that we can model what is literally the most difficult problem in computational fluid dynamics we have ever attempted with a handful of relatively simple parametric differential forms and use the results over centennial and greater timescales does nothing for the science, especially when the models, when tested, often fail (and are failing, badly, over the mere 33 years of reliable instrumentation and a uniform definition of at least one of the global average temperatures).

It’s time to stop this, and just start over. And we will. Perhaps not this year, perhaps not next, but within the decade the science will finally start to catch up and put an end to the political foolishness. The problem is that no matter what one can do to proxy reconstructions, no matter how much you can adjust LSTs for UHI and other estimated corrections that somehow always leave things warmer than they arguably should be, no matter what egregious claims are initially made for SSTs based on Argo, the UAH T_{LT} will just keep on trucking, unfutzable, apples to apples to apples. The longer that record gets, the less one can bias an “interpretation” of the record.

In the long run that record will satisfy all properly skeptical scientists, and the “warmist” and “denier” labels will end up being revealed as the pointless political crap that they are. In the long run we might actually start to understand some of the things that contribute to that record, not as hypotheses in models that often fail but in models that actually seem to work, that capture the essential longer time scale phenomena. But that long run might well be centennial in scale — long enough to detect and at least try to predict the millennial variations, something utterly impossible with a 33 year baseline.

rgb

224 Comments
markx
March 5, 2012 5:41 am

Re “Global temperature”
The simple fact that the ‘measurers’ feel the need to adjust for past errors, add more stations, eliminate suspect stations, add higher resolution satellite instruments, add (more and more) Argo diving buoys, etc, etc, etc, to the picture, says everything about their own faith in the current state of accuracy.

beng
March 5, 2012 5:54 am

****
Robert Brown says:
March 4, 2012 at 2:34 pm
A more correct statement is that outbound heat loss is the result of an integral over a rather complicated spectrum containing lots of structure. In very, very approximate terms one can identify certain bands where the radiation is predominantly “directly” from the “surface” of the Earth — whatever that surface might be at that location — and has a spectrum understandable in terms of thermal radiation at an associated surface temperature. In other bands one can observe what appears very crudely to be thermal radiation coming “directly” from gases at or near the top of the troposphere, at a temperature that is reasonably associated with temperatures there. In other bands radiation is nonlinearly blocked in ways that cannot be associated with a thermal temperature at all.
****
Exactly. I don’t see why people have trouble w/this. One can obviously “see” the GH effect by just looking at the outbound IR spectrum from above the earth.
A picture (w/understanding) is truly worth a thousand words in this case.

David
March 5, 2012 6:17 am

Steven Mosher says:
March 4, 2012 at 1:41 pm
As nick notes this is a strawman argument….
===================================================
I think you and Nick miss the point, and you both are in fact the ones making “strawman” arguments. Dr Brown made your point within the body of the article.
“That is why I think that we have precisely 33 years of reasonably reliable global temperature data, not in terms of accuracy (which is unknown and perhaps unknowable) but in terms of statistical precision and as the result of a reasonably uniform sampling of the actual globe. The UAH is what it is, is fairly precisely known, and is at least expected to be monotonically related to a “true average surface Global Temperature”. It is therefore good for determining actual trends in global temperature, not so good for making pronouncements about whether or not the temperature now is or is not the warmest that it has been in the Holocene.”
Yet he did also point out some obvious difficulties even with average anomalies…”We cannot compare even “anomalies” across such records — they simply don’t compare because of confounding variables, as the “Hide the Decline” and “Bristlecone Pine” problems clearly reveal in the hockey stick controversy. One cannot remove the effects of these confounding variables in any defensible way because one does not know what they are because things (e.g. annual rainfall and the details of local temperature and many other things) are not the same today as they were 100 years ago, and we lack the actual data needed to correct the proxies.”
So Mr Mosher, neither you nor Nick addressed the climate reconstructions from the past and the wild “unprecedented, hottest in a thousand years, we are all going to die” CAGW claims, which are refuted here. One could clearly add that by constant adjustments of areas, dropping some thermometers, adding a few new ones here and there, as the article says here…”I don’t even know how to compute an average surface temperature for the 1/2 acre plot of land my own house sits on, today, right now, from any single thermometer sampling any single location. It is 50F, 52 F, 58 F, 55F, 61 F, depending on just where my thermometer is located,” and via lowering the past and raising the present, ( http://www.real-science.com/corruption-temperature-record ) especially in areas of sparse readings where those sparse readings adjust a large land mass, ( http://www.real-science.com/new-giss-data-set-heating-arctic ) as well as likely underestimating the UHI adjustments (read McKitrick’s paper), you could easily get a data base that shows 0.1 to 0.25 degrees C more warming than there actually is.
So the real strawman here was your argument and Nick’s, as neither of you addressed what was actually written.

March 5, 2012 7:18 am

Agnostic says:
March 5, 2012 at 2:33 am
Without a measure of relative humidity, one would have to believe that 30 C in Houston with 80 % RH is the same as 30 C in Phoenix with 15 % RH, and your discussion of temps is dead on. Since the peak and trough of the temps are recorded without reference to time, or time gets lost later, the whole process is bogus. When a weak cold front with following dry air pulls into Houston, the total heat drops rapidly, even when the temp barely changes.
Anthony’s little USB logger, mounted on bicycles and using GPS, would be a great method of profiling temperatures and humidity across urban areas. If they were gathered into a database it would provide an interesting picture of the local weather. As a motorcyclist in Phoenix, I know first hand the effect of riding into a citrus orchard on a pleasant night in Phoenix… jackets are called for.
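
For what it’s worth, a standard psychrometric estimate of moist-air heat content bears this out (textbook formulas and round numbers only; sea-level pressure assumed):

```python
import math

# Moist-air enthalpy in kJ per kg of dry air, using textbook psychrometric formulas:
# h ~ 1.006*T + q*(2501 + 1.86*T), with q the specific humidity (kg water / kg dry air).
# Saturation vapor pressure from a Magnus-type fit; sea-level pressure assumed.
def moist_enthalpy(temp_c, rel_humidity, pressure_kpa=101.325):
    e_sat = 0.6112 * math.exp(17.67 * temp_c / (temp_c + 243.5))   # kPa
    e = rel_humidity * e_sat
    q = 0.622 * e / (pressure_kpa - e)
    return 1.006 * temp_c + q * (2501.0 + 1.86 * temp_c)

for city, rh in (("Houston", 0.80), ("Phoenix", 0.15)):
    print(f"{city}: 30 C at {rh:.0%} RH -> about {moist_enthalpy(30.0, rh):.0f} kJ/kg")
# Same thermometer reading; the humid Houston air carries roughly twice the heat content.
```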

Steve McIntyre
March 5, 2012 7:28 am

Your article is a reminder of the originality and importance of using oxygen microwave emissions measured from satellites to determine tropospheric temperatures – a development for which Christy and Spencer should receive the highest honors from the climate scientific community.
However, neither is an AGU fellow, though AGU honors as fellows many climate scientists of modest credentials. http://www.agu.org/about/honors/fellows/alphaall.php#C

Blade
March 5, 2012 7:40 am

Steven Mosher [March 4, 2012 at 1:41 pm] says:
“I tell you that its 20C at my house and 24C 10 miles away. Estimate the temperature at 5 miles away. Now, you can argue that this estimate is not the “average”. You can hollar that you dont want to make this estimate.”

Why do that? Why estimate anything, ever? There is nothing wrong with having no data for a location, that would be the purely scientific thing to do. Backfilling empty records should be considered a high crime, and I could construct a hundred scenarios why you personally would not want this done to you in some way. For the life of me I cannot understand why all these people commenting on these blogs who are trying to portray themselves as intelligent scientists fail to question this. There appears to be no place these days that honors the critical importance of the evidentiary trail, the chain of custody or whatever we want to call it. I see a clear lack of scientific purity and integrity at work.

Steven Mosher [March 4, 2012 at 6:49 pm] says:
“First very few people in this debate understand how variable UHI is. Typically, they draw from very small samples to come up with the notion that UHI is huge: huge in all cases, huge at all times. It’s not. Here are some facts:
1. UHI varies across the urban landscape. That means in some places you find NEGATIVE UHI and in other places you find no UHI and in other places you find mild UHI and in other places you find large UHI. You really have to understand the last 100 meters.
2. UHI varies by Latitude; Its higher in the NH and lower in the SH.
3. UHI varies by season. Its present in some seasons and absent in others depending on the area.
4. UHI varies according to the weather. With winds blowing over 7m/sec it vanishes. in some places a 2m/sec breeze is all it takes.
So you can find UHI in BEST data; the tough thing is finding pervasive UHI. Several things work against this, most importantly the fraction of sites that are in large cities is very small.
Basically, you are looking for a small signal (UHI is less than .1C decade ) and many things can confound that.
There are no adjustments for UHI.
Your anecdote is interesting, but the problem is that studying many sites over long periods of time does not yield a similar result.”

Anecdotes are more than interesting, they are evidence. When you say: “Basically, you are looking for a small signal (UHI is less than .1C decade ) and many things can confound that” you are doing what you always do, saying that an actual real-life effect gets lost in the sauce, the sandpapered homogenized averages. All this tells me is that the primary data collection process is completely corrupt. Compounding this, they’re taking this questionable data and then inputting it into very questionable models. This is only a slight variation on the game of telephone most of us experienced as children, but for some of us the lesson appears to be completely lost.
Anyway, who said anything about adjusting (i.e., altering actual data) for UHI? That would be doing exactly the same thing that Hansen is doing, which is corrupting the primary raw data record. No-one should be suggesting that. It sounds like a perfect strawman to beat up. What should be learned from the Surface Stations Project is that there is a real problem with the location of stations, and much more thought needs to be put into the locations of next generation equipment. Moreover, UHI does need to be acknowledged, without this ‘barely detectable’ nonsense. Then, it needs to be annotated clearly on output graphs as an ‘NB’ where applicable. It’s true that the overall UHI effect may never be properly ascertained thanks mostly to the averaging of averaging of averages, but at some point in the far off future there will be measuring stations everywhere, and the people of that time looking at a realtime plot of temps with immense UHI variation at 100 foot resolution will laugh at the statistical contortions occurring today. Hopefully by that time the concept of shoveling all the data into a blender and hitting ‘puree’ will have died a well-deserved death.
Regardless of Steve’s above laundry list of reasons to disregard UHI, let’s just remember that there actually is a UHI effect, warmth radiating from heat sinks made of concrete, asphalt, metal, etc. All these materials have differing rates and magnitudes of absorption and radiation, so it is a potpourri of signals. But one thing is certain, when all those materials occur in the same place it is strong, and Steve’s list becomes completely irrelevant. You don’t need IR night vision gear to know this, although I suspect we may need to issue them to the warmies and lukewarmies sometime soon. When I am standing at 2am on the concrete jungle that is the Las Vegas strip, there is an enormous difference compared to when I am a mile or three away in the desert. This is a human-detectable variation, which means at least 5° to 10° F or much more. It is similar but obviously not quite as extreme in Los Angeles and NYC versus their suburbs. It matters not what latitude or hemisphere, and forget the ‘negative’ UHI idea, just build a Las Vegas or Dubai and bring your thermometer. Concrete, metal, asphalt etc., versus sand, dirt, trees. Sheesh, there is no need to complicate this!
IMHO, when Lukewarmers or warmie trolls or BEST try to diminish station placement or UHI and say that ‘it’s okay since it is lost in the averages’ what I hear them say is: ‘empirical evidence be damned, we know what we’re doing, trust us, we’re here to help.’ Many of us have seen this before; it’s called ‘Close enough for government work’.

David
March 5, 2012 7:58 am

Robert Brown says:
March 4, 2012 at 4:25 pm
Damn, I’m going to just give up. Mousepad disease. Looks like I’ve completely lost two hourlong posts.
Grrr.
==========================
Yep, got to remember to copy and hold any long posts at several points. It must be frustrating, but expected, to have someone like William Connolley write comments that ignore entire sections of your article which directly address his criticism; but to have Nick Stokes, and especially Steven Mosher, do the same is worse.

Theo Goodwin
March 5, 2012 8:03 am

Robert Brown says:
March 4, 2012 at 2:34 pm
The “right” thing to do is just make the measurements and do the integrals and then try to understand the results as they stand alone, not to try to make a theoretical pronouncement of some particular variation based on an idealized notion of blackbody temperature.
What an absolute thrill it is to have a genuine scientist point out in luxurious detail the scandalous degree to which AGW fails to address the details, whether “a priori” or empirical, fails to connect with observable fact, and fails to break free from its “a priori” assumptions. There can be no doubt that climate science is in its infancy and that AGW proponents go to great extremes to hide that fact.

RockyRoad
March 5, 2012 8:04 am

Blade says:
March 5, 2012 at 7:40 am

Steven Mosher [March 4, 2012 at 1:41 pm] says:
“I tell you that its 20C at my house and 24C 10 miles away. Estimate the temperature at 5 miles away. Now, you can argue that this estimate is not the “average”. You can hollar that you dont want to make this estimate.”
Why do that? Why estimate anything, ever? There is nothing wrong with having no data for a location, that would be the purely scientific thing to do. Backfilling empty records should be considered a high crime, and I could construct a hundred scenarios why you personally would not want this done to you in some way.

You are absolutely correct, Blade. This should be shouted from the rooftops. Were it so, the whole charade of AGW would collapse that much sooner.

beng
March 5, 2012 8:14 am

DirkH says:
****
March 4, 2012 at 1:18 pm
Nick Stokes says:
March 4, 2012 at 12:43 pm
“Of course, anomalies are widely calculated and published. And they prove to have a lot of consistency. That is because they are the result of “following some fixed and consistent rule that goes from a set of data to a result”. And the temporal variation of that process is much more meaningful than the notion of a global average temperature.”
But the rules of that process change all the time. So the rules are not fixed. For instance, every time they kill a thermometer, they change the rules.

****
Thanks, Dirk, you responded to this ridiculous statement early on. Take a snapshot of the surroundings of most any “official” site (mostly airports) & see how much urbanization has occurred over even a few yrs, let alone decades. One can argue that such changes are small area-wise, but guess where most of the sites are located — right in the middle of where the changes are! Stokes & Mosher shoot themselves in the foot repeatedly.

Somebody
March 5, 2012 9:05 am

The law of large numbers was mentioned. I’ve also seen the central limit theorem invoked with the same line of reasoning, in connection with the AGW pseudo-science.
Before invoking those, please look carefully at when they apply, how, and what the proof is.
For example, for both mentioned above, independence is an important thing. It usually appears in the proof. Now, let’s check that spatially and temporally. I say that there is a correlation, both spatially and temporally, or else one could not define temperature even locally (there wouldn’t be a quasi-equilibrium to be able to define it), which would be a bad thing. The heat equation couldn’t work, to mention a consequence. Another assumption is that the random variables have the same distribution. Do they really? Let’s check that for a point at the equator, versus a point at the North Pole, when it’s polar night. Ok, let us notice that they have different means. Really. It’s hot at the equator, and cold at the North Pole. Check the real data. The expected value is not the same at different points on Earth. If you compare different points on Earth you might also learn that they have different variances. So no, the variables do not have the same distribution. So the proofs for the law of large numbers and the central limit theorem are not valid for the temperatures on the Earth. So, unless the AGW pseudo-scientists provide a proof that does not rely on the independence of variables and/or the identity/similarity of distributions (in the sense of equal means and/or variances and so on), the law of large numbers/central limit theorem does not apply to temperatures (unless you apply them, for example, to measuring the same physical temperature many times, which is another, very different thing).
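
A toy simulation (invented numbers, not any real station network) makes the point concrete: with uneven sampling of regions that have different means, plus a correlated error within each region, the straight average converges — but to the wrong number — while the i.i.d. error formula reports a reassuringly tiny uncertainty.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy planet, invented numbers: two regions with different true means (think tropics
# vs. polar night), equal true areas, very unequal numbers of stations, and one
# shared (spatially correlated) error per region.
true_means   = {"tropics": 300.0, "polar": 240.0}   # K
area_weights = {"tropics": 0.5,   "polar": 0.5}
n_stations   = {"tropics": 900,   "polar": 100}

true_global_mean = sum(area_weights[r] * true_means[r] for r in true_means)

samples = []
for region, n in n_stations.items():
    shared_error = rng.normal(0.0, 2.0)             # correlated within the region
    samples.append(true_means[region] + shared_error + rng.normal(0.0, 1.0, n))
samples = np.concatenate(samples)

naive_mean  = samples.mean()
naive_sigma = samples.std(ddof=1) / np.sqrt(samples.size)   # assumes i.i.d. samples

print(f"true area-weighted mean : {true_global_mean:.1f} K")
print(f"naive station average   : {naive_mean:.1f} K +/- {naive_sigma:.1f} K (i.i.d. formula)")
# The naive average lands near 294 K — tens of kelvin from the true 270 K — while the
# i.i.d. error bar cheerfully claims better than 1 K. Precision is not accuracy.
```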

Dolphinhead
March 5, 2012 9:16 am

I have long been of the opinion that the concept of a global average temperature is mostly meaningless. There seems to be strong support for that opinion expressed on this blog. How ludicrous that we continue to spend billions on climate models that attempt to predict this meaningless metric. The world is truly [snipped]

March 5, 2012 9:25 am

Clive Best says: March 5, 2012 at 1:12 am
“Reply Nick Stokes: The CRU statement on errors is…”

Yes, but errors in what? Averages of what? Read carefully, and you’ll find it is average anomaly.
Again, if you think someone is claiming to know the average temperature to within 0.05C, ask (for I will) what is the figure? What is that average temperature? You won’t find such a claimed number at CRU.
The difference is not semantic. This post has described all sorts of well-known difficulties of measuring an average temperature. That’s why no-one does it. Calculating anomalies, and averaging that, overcomes most of the problems.

Reply to  Nick Stokes
March 5, 2012 11:20 am

Stokes:
“Yes, but errors in what? Averages of what? Read carefully, and you’ll find it is average anomaly.”
Yes – I know they are quoting 0.05 degrees C as the error on the annual “average anomaly”. Let’s look for a moment at exactly how all these anomalies are calculated:
– The anomaly at one single station is the measured temperature per month minus the monthly “normals” for that station. (The normals are calculated by averaging measured temperatures for the station from 1960-1999).
– Next all “anomalies” are binned into a 5×5 degree grid for each month. All station anomalies within the same grid point are averaged together. The assumption here is that within one grid point (~300,000 km^2) the anomaly is constant. Note also systematic problems: 1) Many grid points are actually empty – especially for early years. 2) The distribution of grid points with latitude is highly asymmetric, with over 80 percent of all stations outside the tropics.
– The monthly grid time series is then converted to an annual series by averaging the grid points over each 12 month period. The result is a grid series (36,72,160), i.e. 160 years of data.
– Finally the yearly global temperature anomalies are calculated by taking an area weighted average of all the populated grid points in each year. The formula for this is $Weight = cos( $Lat * PI/ 180 ) where $Lat is the value in degrees of the middle of each grid point. All empty grid points are excluded from this average.
The systematic errors are the main problem. You can view all the individual station data here. There are clearly coverage problems with the early data. I repeated exactly the same calculation as above but changed the way the normals are calculated. Instead of calculating a “per station” monthly average, I calculated a “per grid point” monthly average. All other steps are the same. The new result is shown here.
I conclude that there are probably systematic errors on temperature anomalies before 1900 of about 0.5 degrees C. Deriving one annual global temperature anomaly is anyway based on the premise that there is a single global effect (CO2 forcing) which can then be identified by subtracting two large numbers.
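
A minimal sketch of the binning and area-weighting steps described above, with invented station records and the monthly-to-annual step omitted — this is just the shape of the calculation, not the actual CRU code:

```python
import numpy as np
from collections import defaultdict

# Hypothetical station records for a single month: (lat, lon, temperature_C, normal_C),
# where normal_C is that station's monthly "normal". All numbers invented.
records = [
    ( 52.1,   0.1, 17.3, 16.1),
    ( 51.8,  -0.4, 17.8, 16.4),
    (-33.9, 151.2,  8.4,  8.9),
]

CELL = 5.0  # grid cell size in degrees

def cell_index(lat, lon):
    return (int(np.floor(lat / CELL)), int(np.floor(lon / CELL)))

# 1. Anomaly per station (measured minus normal), binned into 5x5 degree cells.
cells = defaultdict(list)
for lat, lon, temp, normal in records:
    cells[cell_index(lat, lon)].append(temp - normal)

# 2. Average anomalies within each populated cell, then take an area-weighted mean
#    with weight = cos(latitude of cell centre); empty cells are simply excluded.
weighted_sum = weight_sum = 0.0
for (i, _), anomalies in cells.items():
    lat_centre = (i + 0.5) * CELL
    w = np.cos(np.radians(lat_centre))
    weighted_sum += w * np.mean(anomalies)
    weight_sum += w

print(f"area-weighted monthly anomaly: {weighted_sum / weight_sum:+.2f} C")
```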

Slartibartfast
March 5, 2012 10:01 am

When you take the difference of two random variables, is the result more or less uncertain than either of the two numbers you are differencing? It depends on the nature of the error component, I suppose. If the assumption is that the majority of the error is fixed bias, then the result has less uncertainty. Otherwise, not so much.
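
A short simulation of that point (toy numbers only): a bias common to both readings cancels in the difference, while their independent random errors add in quadrature, so the difference is less biased but noisier.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

common_bias = 1.5   # fixed offset shared by both readings (hypothetical)
x = 20.0 + common_bias + rng.normal(0.0, 0.3, n)    # reading 1, true value 20.0
y = 18.0 + common_bias + rng.normal(0.0, 0.3, n)    # reading 2, true value 18.0

diff = x - y
print(f"mean(x - y) = {diff.mean():.3f}  (shared bias cancels; true difference is 2.0)")
print(f"std (x - y) = {diff.std():.3f}  (independent noise adds: 0.3*sqrt(2) ~ {0.3*np.sqrt(2):.3f})")
```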

Edim
March 5, 2012 10:33 am

No meat grinders are necessary. Take only the absolutely top quality stations (no changes, moves, as far as possible from human influence (even rural influence)…), from all continents. Since it’s a temperature INDEX and not a temperature, grids are unnecessary. No adjustments! Make the process open and transparent. Plot all the single stations together (spaghetti) to see what it looks like for a start. Then do the analysis.

March 5, 2012 11:08 am

Dr. Brown. What happened to Duke when they played North Carolina? My sister is an avid Duke fan and I had to suffer her disposition during the game. Please tell coach K to win all remaining games; it makes my life easier.
As a long time Duke fan, beating Carolina wouldn’t be so sweet if it weren’t for the annoying fact that they sometimes beat us.
But hey, we made Roy cry…;-) Mike mans up a little better when Duke loses.
And there is still the tournament. And maybe the tournament after that. Duke is a team that still hasn’t quite come together to be as good as they can be — they have the talent and sometimes brilliance, but they don’t have the consistency and the go-to player who will not let them lose, so far. They could win the ACC tournament or go home in the first game, damn near ditto in the NCAA. I’m not seeing them do six in a row for the latter, but they could do four, and then anything can happen.
rgb
(Apologies for an OT reply, but hey, there is climate change and then there is important stuff like Duke Basketball…;-)

bill
March 5, 2012 11:32 am

Once you face the (fairly obvious) fact that we don’t know the global temp for the last couple of hundred years in any meaningful way, then climate science and its scary forecasts collapse. Admittedly, taking into account all the technical, political, spatial, historic reasons why the global climate record is imperfect, we could probably agree we have a rough idea of global temperature. However, what use is a rough idea for people who are claiming that calamity starts at 3 degrees away from the average (which, remember, we don’t actually know) and who crow over fractions of degrees as proof of the fulfilling of their prophecies? Their argument of certainty rests on uncertainty. Not only does that uncertainty invalidate their certainty, it renders risible the policy solutions which might save us from ‘certain’ disaster.

LucVC
March 5, 2012 11:58 am

Not to be posted
Robert Brown says:
March 4, 2012 at 6:09 pm
is worthy of turning into an article too. In the same vein as Willis’ question of where they got that accuracy in Argo. This seems even more absurd

March 5, 2012 12:17 pm

Dr Brown: I have enjoyed your posts as you clearly have an open and enquiring mind.
I just listened to one of your Canadian Colleagues, Dr. W. R. Peltier of the University of Toronto, berate the scientists who wrote an “opposing” article to the WSJ. He repeatedly and with considerable vehemence called them deniers on our public radio system on a program called Quirks and Quarks – with a very warmist host, Bob MacDonald. http://www.cbc.ca/quirks/
Interesting on this page was notification of the shutting down of the Canadian weather station at Eureka, Nunavut, Canada – which is a bit sad given it is a good arctic weather station. There were excellent discussions with this person on WUWT about their efforts to acquire good data and how wind direction affected their temperature readings. A very rational and good discussion as compared to Dr. Peltier’s repeated use of the word “denier” as an epithet in his interview when discussing his fellow scientists who wrote the WSJ article.
Very unprofessional considering he was belittling him on National Radio that is heard not only in Canada but a good part of the USA. I was embarrassed for him and his fellow warming scientists but I suppose when facts fail you, throwing epithets is the only option left to the uneducated.
Sadly, this was related to his winning an award with a 1 million dollar prize associated with it:
“Dr. Richard Peltier, University Professor of Physics at the University of Toronto and founding director of U of T’s Centre for Global Change Science, is this year’s winner of Canada’s highest prize for science, the Gerhard Herzberg Canada Gold Medal for Science and Engineering.”
What bothers me even more – I am an Engineer, and in our association, APEGGA (the Association of Professional Engineers, Geologists and Geophysicists of Alberta), there has been considerable rational debate on the issue of global warming without this type of nasty attribution in our letters to the editor and other articles (at least in the ones I have read).
I am embarrassed to see the word “Engineering” in the name of the award that he received, as, given the way he used the words “deniers” in his interview, a Professional Engineer might be subject to disciplinary action for making this kind of accusation against his peers.
I am in total shock that such a person would make such strong statements although it may be I have been sensitized by my own biases.
Listen here: http://www.cbc.ca/quirks/media/2011-2012/qq-2012-03-03_05.mp3
Sadly, he will use his 1 million dollar award to hire graduate students and postdoctoral fellows to prove out his holistic earth “MODEL” to “Make Projections”. In other words, it appears he wants them to go look for data that will support his conclusions and “TUNE” his models to match reality, as opposed to the real science of analyzing data and developing a conclusion.
As far as Dr. Peltier is concerned, it seems, he considers the science is settled. He is a modeler. And we all know about GIGO. So he is really a garbage collector. He needs to take the garbage out …. so we can get back to science.
He wants to develop models to project/predict climate 100 years out.
The interview sounds fairly reasonable until he gets to the denier comments except where he claims the “ensemble of independent models” is very accurate. Another theory of averages – average the models and get an accurate result. Amazing. You can make bad data good simply by averaging.
He really goes on about forcings versus feedbacks. He also wants to “decarbonize” the economy.
But perhaps I am overreacting.
It would be nice to have some third party comments on his interview.
Too bad. Kind of sad

Frank K.
March 5, 2012 12:24 pm

Well gosh – nobody answered my question. Is a correlation coefficient of 0.5 considered “good” or “strong”? What about 0.6? 0.7? 0.8? How good is good enough?
Did anyone read Hansen’s 1987 paper?
Once we’ve established the “goodness” of the station correlations, we can then talk about the “reference station method” that they use…


Larry Ledwick (hotrod)
March 5, 2012 12:36 pm

Stephen Richards says:
March 5, 2012 at 4:16 am
Robert
Many years ago I made this very same argument at RealClimate. I got a reply from them very similar to, but more rude than, the ones you have had from Stokes and Mosher. People from non-engineering and/or non-Physics -ologies have not had the necessary tutelage to understand their lack of knowledge in those fields. You always get the “strawman” argument back because they can’t understand measuring techniques and their limitations.

This is perhaps one of the reasons the average person on the street understands the limitations of these measurements much better than these “climate scientists”, because in their everyday jobs and activities they are constantly confronted by measurement errors and the difficulty of making accurate measurements.
The machinist quickly learns that metals move around a lot due to temperature changes that are not obvious at all. You measure a part just as the lunch whistle blows, and when you come back from lunch you re-measure it and find it is a different size, because it cooled while you were eating. The steel worker learns that a beam which fit easily into its location in the morning has to be forcibly coaxed into position a few hours later because it is 1/8 of an inch longer due to a hardly noticeable temperature change. Shooters learn that their rifle shoots to a different point of aim after the first couple of shots have warmed the cold bore, and that if the cartridges are left out in the sun during a match they will shoot high, as the powder will burn faster since the cartridge case is warmer, producing higher pressures. Photographers see changes in exposure that the human eye totally ignored but the camera meter detected, and the resulting images are obviously different, yet at the time the photographer was completely unaware that a thin cloud had blocked a few percent of the sunlight. The housewife discovers that her favorite bread recipe does not bake properly at Denver’s altitude due to the lower pressure, and that she needs to change how she prepares the dough. Mechanics discover that certain auto parts are not really round and that it makes a difference where and how you measure them (pistons are actually oval shaped), etc. etc.
People with real jobs run into measurement issues every day at far coarser resolution than the scientists are claiming, and once the issue is pointed out, the person on the street finds it simply unreasonable that such precision could be achieved given the highly questionable data they are working with.
Larry

March 5, 2012 12:51 pm

bill says: March 5, 2012 at 11:32 am
“Once you face the (fairly obvious) fact that we don’t know the global temp for the last couple of hundred years in any meaningful way, then climate science and its scary forecasts collapse.”

No, our ability to measure variations in global temperature is unrelated to the fact that we are putting large amounts of CO2 into the air, and CO2 is a greenhouse gas. The earth will heat regardless of our skill at thermometry.

Edim
March 5, 2012 1:05 pm

“No, our ability to measure variations in global temperature is unrelated to the fact that we are putting large amounts of CO2 into the air, and CO2 is a greenhouse gas. The earth will heat regardless of our skill at thermometry.”
Will it also heat regardless of our skill at physics?

March 5, 2012 1:15 pm

Your article is a reminder of the originality and importance of using oxygen microwave emissions measured from satellites to determine tropospheric temperatures – a development for which Christy and Spencer should receive the highest honors from the climate scientific community.
However, neither is an AGU fellow, though AGU honors as fellows many climate scientists of modest credentials. http://www.agu.org/about/honors/fellows/alphaall.php#C

Damn skippy! It is brilliant, and the record they are developing is as important as the work of the rest of the GISS and HADCRUT put together — an objective, completely independent more or less direct measure of global temperature that cannot be easily tweaked, twiddled, cherrypicked, trended, detrended, or corrected on any sort of heuristic basis. What corrections there are (for e.g. drift in the detectors) are made on the basis of direct evidence for the need, well supported by multiple independent instruments.
As I said, in 100 years people will still do microwave soundings from above TOA and derive troposphere temperatures that can be directly and accurately compared to their work. Try doing that in 100 years with GISS or HADCRUT. The data sources (where in the case of HADCRUT, last time I looked nobody but the elite few knew exactly what they were or could access them) will be different, locations that survive will have different instrumentation, the physical environment of those locations will continue to change and introduce noise and drift that cannot be corrected for as it cannot be separated from the signal one wishes to measure. A temperature anomaly doesn’t come with a fractionated causal label.
I don’t, actually, think that GISS is completely useless. I just don’t think it is as useful or conclusive as it is alleged to be regarding predicting CAGW. For one thing, I think the error estimates are egregious, as I’ve now pointed out several times.
I notice, however, that Nick Stokes still hasn’t addressed the believability of error estimates only a factor of 2 greater for 1885 than they are for today, or the GISS elephant in the room — the very models that predict CAGW also predict that the lower troposphere should warm more, faster, than the ground. The actual numbers go the other way, really quite substantially. Which is wrong — the GCMs (in which case why should we believe them when they predict catastrophe when they can’t even get the relative slope of atmospheric warming right), or GISS, which might be systematically exaggerating the growth of the surface anomaly? No winners either way — and they could both be wrong. I doubt that the UAH numbers are wrong, though.
I am also quite unconvinced that the El Nino spike has anything whatsoever to do with the overall trend in the data, any more than the Mt. Pinatubo cooling does. Yes, the various climate oscillations are important modulators, as are natural events, as is solar state. On what grounds, then, do you decide what is signal and what is noise? Either include everything, correct for nothing, and let the data speak for itself, or open up the field to empirically fit all the plausible hypothesized causes and see what works best to explain the data (a toy sketch of such a fit follows below). Stop playing the omitted-variable game, whether or not it is really “fraud”.
The latter is anathema to the CAGW crowd, however, because the more possible causes one admits for observed warming trends, the less warming remains to be explained by increased CO_2 and the lower the chances of catastrophe. Curiously, nobody in the CAGW camp ever seems to be cheered by the possibility that we aren’t headed for catastrophe after all. When the UAH temperature fails to continue increasing after the 1998 peak and might even be decreasing, that should be great news! The sky may not be falling after all! At the very least, it is something else to understand, a suggestion from nature to stop omitting important variables and to stop focusing “only” on CO_2.
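To make “empirically fit all the plausible hypothesized causes” concrete, here is a minimal multiple-regression sketch on purely synthetic data. The regressors (a greenhouse-forcing proxy, an ENSO index, a solar index, and a volcanic pulse) and every coefficient are invented for illustration; this is not any published attribution analysis, just the shape of the exercise:

```python
# Toy multiple regression: regress a synthetic "anomaly" series on several
# candidate drivers at once, so that no hypothesized cause is omitted by fiat.
# All series and coefficients below are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 360                                    # e.g. 30 years of monthly values
t = np.arange(n)

# Synthetic stand-ins for candidate drivers (assumptions, not real data).
ghg   = 0.002 * t                          # slow, quasi-linear forcing proxy
enso  = np.sin(2 * np.pi * t / 45.0)       # pseudo-ENSO oscillation
solar = np.sin(2 * np.pi * t / 132.0)      # pseudo 11-year solar cycle
volc  = np.zeros(n)
volc[100:130] = -1.0                       # a single eruption-like cooling pulse

# Synthetic "observed" anomaly built from those drivers plus noise.
anom = 0.8 * ghg + 0.10 * enso + 0.05 * solar + 0.3 * volc + rng.normal(0.0, 0.1, n)

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), ghg, enso, solar, volc])
coef, *_ = np.linalg.lstsq(X, anom, rcond=None)
for name, c in zip(["const", "ghg", "enso", "solar", "volcanic"], coef):
    print(f"{name:>8s}: {c:+.3f}")
# The fit recovers coefficients close to the values used to build the series;
# with real data the interesting question is how much of the trend each
# candidate driver can claim.
```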
After all, if solar state is a more important control variable than “just” variable direct TOA insolation, then suddenly the solar state in the latter part of the 20th century becomes relevant, because the sun was in a Grand Maximum that at least one researcher alleges, on the basis of proxy evidence, was the most intense and prolonged such event in around 10,000 years — the last comparable event may have been associated with the actual end of the ice age and helped initiate the Holocene. A plausible physical mechanism (modulation of albedo) has been proposed, and there is a fair bit of corroborating evidence, including direct evidence that the Earth’s albedo has increased measurably over the last fifteen years, largely due to changes in cloud formation patterns.
This sounds like it might be actual science. There is an observation — solar state and the Earth’s climate have varied in a correlated way over thousands of years in a way that is too strong to be explained by just variation in surface brightness of the sun. There are papers on this that present convincing data. There is a hypothesis — the solar magnetic field modulates Galactic Cosmic Rays that impact the Earth’s atmosphere. There is more evidence, again, totally convincing, that cosmic-ray derived neutron rates countervary with solar activity and so on, in addition to variations in various radioactive atmospheric components that allow us to trace this back over historical time via proxies. There is a plausible explanatory hypothesis — that particle cascades through supersaturated air can nucleate clouds. This hypothesis has some direct evidence, although questions remain. There is the indirect evidence that as the Sun’s magnetic activity has recently sharply diminished, the Earth’s albedo has increased, and that the increase has been due to increased cloud formation. Finally, there is secondary evidence that may be connected but that (as yet) lacks a complete explanation — during that same interval stratospheric H_2 O has dropped by roughly twice as much as the albedo has increased.
A change in albedo of 0.01 is equivalent to roughly 1 K. If the Earth’s Bond albedo went from 0.30 to 0.31, one would (all other things being equal) expect the global average temperature to drop by roughly 1 K. The observed variation has been easily of this magnitude (the earthshine-derived increase was around 6%, or roughly 0.02 in absolute terms), and the albedo is currently holding close to constant, retaining that increase. This could cause the Earth to gradually cool by as much as 2 K over 20-30 years if the GCR modulation hypothesis is correct and the sun remains magnetically quiescent — much as was observed during the Maunder Minimum and the associated Little Ice Age.
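A minimal zero-dimensional energy-balance sketch, using the standard Stefan-Boltzmann effective-temperature relation with a nominal solar constant of 1361 W/m^2, makes the arithmetic behind that rule of thumb explicit; feedbacks are ignored, so this is only a first-order check:

```python
# Zero-dimensional energy balance: T_eff = [S * (1 - albedo) / (4 * sigma)]**(1/4).
# The solar constant and albedo values are nominal textbook numbers.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0        # nominal total solar irradiance, W m^-2

def t_eff(albedo, s0=S0):
    """Planetary effective (radiating) temperature in kelvin."""
    return (s0 * (1.0 - albedo) / (4.0 * SIGMA)) ** 0.25

t_030 = t_eff(0.30)
t_031 = t_eff(0.31)
print(f"T_eff(albedo=0.30) = {t_030:.1f} K")                # about 254.6 K
print(f"T_eff(albedo=0.31) = {t_031:.1f} K")                # about 253.7 K
print(f"Change for +0.01 albedo: {t_031 - t_030:+.2f} K")   # about -0.9 K
# A 0.01 increase in albedo cools the effective temperature by roughly 1 K,
# consistent with the rule of thumb above; feedbacks would modify the actual
# surface response.
```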
This is not rogue science! It is quite legitimate. It deserves to be taken quite seriously, not suppressed as contrary to “the cause”, especially not without even waiting for the experiments and evidence that could support or undermine it. One can understand every step of the physics every bit as well as one can understand the GHE — arguably better, as it sits higher in the food chain of solar energy flow than the GHE. It is already known that modulating nucleation aerosols modulates cloud formation rates, albedo, and global temperature; the GCR hypothesis simply provides a secondary, solar-modulated pathway into that nucleation process, one that we may well not fully understand yet but that the evidence suggests exists. Given that this has profound implications for our planet’s climatological future, and that a 2 K decrease in average global temperature might well be catastrophic, perhaps we should work on this rather than dismissing the entire hypothesis with a sniff.
Other confounding causes are the phases of, e.g., the PDO (beloved by Spencer) and/or the NAO, and changes in ocean currents. In other words, one might well be able to explain at least some fraction of the warming and cooling trends of the last thirty years without recourse to CO_2 and the GHE. We aren’t close to untangling the morass of cause and effect in patterns of warming and cooling here, although there is tantalizing evidence from the previous warming of the Arctic in the period following 1920 that coupling between the NAO and currents split off from the Gulf Stream was a major factor. Even the change in cloud cover and albedo could have a completely different cause from GCR modulation, because in spite of its importance as a GHG, our understanding of water in the atmosphere is really rather poor.
In time, satellite-derived data will let us answer most if not all of these questions (including the ones involving the albedo and modulation of the actual GHE itself as visible in TOA IR spectra, which IMO is the only thing that ultimately really matters regarding power flow through the Earth as an open system), and answer them far more soundly than ground-based thermometric data can, with far less opportunity for systematic biases (intentional or not) to produce misleading results. Christy and Spencer absolutely deserve recognition for their contribution, and Spencer deserves further recognition for his steadfast refusal to buy into the CAGW hypothesis as “proven” in spite of considerable pressure to do so, but I don’t know that they will get it. They represent a major embarrassment for GISS and the GCMs, and the UAH/RSS data substantially weakens the case for “catastrophe”.
rgb