Global annualized temperature – "full of [snip] up to their eyebrows"

Guest Post by Dr. Robert Brown,

Physics Dept. Duke University [elevated from comments]

Dr. Brown mentions “global temperature” several times. I’d like to know what he thinks of this.

Dr. Brown thinks that this is a very nice piece of work, and is precisely the reason that he said that anybody who claims to know the annualized average temperature of the Earth, or the Ocean, to 0.05 K is, as the saying goes, full of [snip] up to their eyebrows.

What I think one can define is an average “Global Temperature” — noting well the quotes — by following some fixed and consistent rule that goes from a set of data to a result. For example, the scheme that is used to go from satellite data to the UAH lower troposphere temperature. This scheme almost certainly does not return “the average Global Temperature of the Earth” in degrees absolute as something that reliably represents the coarse-grain averaged temperature of (say) the lowest 5 kilometers of the air column, especially not the air column as its height varies over an irregular terrain that is itself sometimes higher than 5 kilometers. It does, however, return something that is likely to be close to what this average would be if one could sample and compute it, and one at least hopes that the two would co-vary monotonically most of the time.
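
For concreteness, here is a minimal sketch (in Python, with invented station readings) of what one such “fixed and consistent rule” might look like. It is emphatically not the actual UAH procedure — just a toy that bins hypothetical readings into latitude bands, averages within each band, and area-weights the bands by the cosine of latitude:

```python
# A minimal sketch -- emphatically NOT the actual UAH procedure -- of a
# "fixed and consistent rule" that maps a set of readings to a single number:
# bin hypothetical station readings into latitude bands, average within each
# band, then weight the bands by cos(latitude) as a crude stand-in for area.
# The virtue of any such rule is only that it is applied identically every
# year, not that its output is "the" global temperature.
import math

# hypothetical (latitude in degrees, temperature in K) readings
readings = [(5, 299.1), (10, 298.4), (35, 288.7), (40, 285.2),
            (60, 271.9), (65, 270.3), (-20, 294.8), (-45, 280.1)]

def banded_index(data, band_width=30):
    bands = {}
    for lat, temp in data:
        key = math.floor(lat / band_width)            # which latitude band
        bands.setdefault(key, []).append((lat, temp))
    num = den = 0.0
    for members in bands.values():
        band_mean = sum(t for _, t in members) / len(members)
        mid_lat = sum(l for l, _ in members) / len(members)
        weight = math.cos(math.radians(mid_lat))      # crude area weighting
        num += weight * band_mean
        den += weight
    return num / den

print(f"toy 'global' index: {banded_index(readings):.2f} K")
```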

The accuracy of the measure is very likely not even 1 K (IMO, others may disagree), where accuracy is |T_{LT} - T_{TGT}| — the absolute difference between the lower troposphere temperature and the “true global temperature” of the lower troposphere. The various satellites that contribute to the temperature estimate have (IIRC) a variance on this order, so the data itself is probably not more accurate than that. The “precision” of the data is distinct — that is a measure of how much variance there is in the data sources themselves, and it is a quantity that can be systematically improved by more data. Accuracy, especially in a situation like this where one is indirectly inferring a quantity that is not exactly the same as what is being measured, cannot be improved by more or more precise measurements; it can only be improved by figuring out the map between the data one is using and the actual quantity one is making claims about.
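
A toy Monte Carlo illustration of this precision/accuracy distinction, with all numbers invented: averaging more measurements shrinks the scatter of the estimated mean, but a fixed offset between the proxy being measured and the quantity one actually wants never averages away.

```python
# Sketch of the precision-vs-accuracy point (all numbers hypothetical):
# more measurements improve precision (the scatter of the mean), but the
# systematic offset between the measured proxy and the target quantity is
# untouched, so the accuracy floor remains.
import random

random.seed(0)
TRUE_TARGET = 288.0   # the quantity we actually want (unknowable in practice)
BIAS = 1.2            # systematic offset of the proxy we can measure
NOISE_SD = 0.5        # independent random noise per measurement

for n in (10, 1_000, 100_000):
    samples = [TRUE_TARGET + BIAS + random.gauss(0, NOISE_SD) for _ in range(n)]
    mean = sum(samples) / n
    sem = NOISE_SD / n ** 0.5          # precision: standard error of the mean
    print(f"n={n:>6}: mean={mean:.3f}  precision ~ +/-{sem:.4f}  "
          f"accuracy error = {abs(mean - TRUE_TARGET):.3f}")
```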

Things are not better for (land) surface measurements — they are worse. There the actual data is (again, in my opinion) hopelessly corrupted by confounding phenomena and the measurement errors are profound. Worse, the measurement errors tend to have a variable monotonic bias compared to the mythical “true average surface Global Temperature” one wishes to measure.

One is in trouble from the very beginning. The Moon has no atmosphere, so its “global average temperature” can be defined without worrying about an air temperature at all. Even so, when one wishes to speak of the surface temperature at a given point, what does one use as a definition? Is it the temperature an actual high precision thermometer would read (say) 1 cm below the surface at that point? 5 mm? 1 mm? 1 meter? All of these would almost certainly yield different results, results that depend on things like the albedo and emissivity of the point on the surface, the heat capacity and thermal conductivity of the surface matter, and the latitude. Is it the “blackbody” temperature of the surface (the inferred temperature of the surface determined by measuring the outgoing full spectrum of radiated light)?

Even inferring the temperature from the latter — probably the one that is most relevant to an airless open system’s average state — is not trivial, because the surface albedo varies, the emissivity varies, and the outgoing radiation from any given point just isn’t a perfect blackbody curve as a result.

How much more difficult is it to measure the Earth’s comparable “surface temperature” at a single point on the surface? For one thing, we don’t do anything of the sort. We don’t place our thermometers 1 meter, 1 cm, 1 mm deep in — what, the soil? The grass or trees? What exactly is the “surface” of a planet largely covered with living plants? We place them in the air some distance above the surface. That distance varies. The surface itself is being heated directly by the sun part of the time, and is radiatively cooling directly to space (in at least some frequencies) all of the time. Its temperature varies by degrees K on a time scale of minutes to hours as clouds pass between the location and the sun, as the sun sets, as it starts to rain. It doesn’t just heat or cool from radiation — it is in tight thermal contact with a complex atmosphere that has a far greater influence on the local temperature than even local variations in insolation.

Yesterday it was unseasonably warm in NC, not because the GHE caused the local temperature to be higher by trapping additional heat but because the air that was flowing over the state came from the warm wet waters of the ocean to the south, so we had a relatively warm rain followed by a nighttime temperature that stayed warm (low overnight of maybe 46F) because the sky was cloudy. Today it is almost perfectly seasonal — high 50’s with a few scattered clouds, winds out of the WSW still carrying warm moisture from the Gulf and warm air from the south central US — but as the day progresses the wind is going to shift to the NW and it will drop to a solid freeze (30F) tonight. Tomorrow it will be seasonal but wet, but by tomorrow night the cooler air that has moved in from the north will make it go down to 25F overnight. The variation in local temperature is determined far more by what is going on somewhere else than it is by actual insolation and radiation here.

If a real cold front comes down from Canada (as they frequently do this time of year) we could have daytime highs in the 30’s or low 40’s and nighttime lows down in the low 20s. OTOH, if the wind shifts to the right quarter, we could see highs in the low 80s and lows in the low 50s. We can, and do, have both extremes within a single week.

Clearly surface temperatures are being driven as strongly by the air and moisture flowing over or onto them as they are by the “ideal” picture of radiative energy warming the surface and radiation cooling it. The warming of the surface at any given point isn’t solely responsible for the warming or cooling of the air above it; the temperature of the surface is equally dependent on the temperature of the air as determined by the warming of the surface somewhere else, as determined by the direct warming and cooling of the air itself via radiation, as determined by phase changes of water vapor in the air and on the surface, as determined by factor-of-ten modulations of insolation as clouds float around over the surface and the lower atmosphere alike.

Know the true average surface Global Temperature to within 1K? I don’t even know how one would define a “true” average surface Global Temperature. It was difficult enough for the moon without an atmosphere, assuming one can agree on the particular temperature one is going to “average” and how one is going to perform the average. For the Earth with a complex, wet, atmosphere, there isn’t any possibility of agreeing on a temperature to average! One cannot even measure the air temperature in a way that is not sensitive to where the sun is and what it is doing relative to the measurement apparatus, and the air temperature can easily be in the 40s or 50s while there is snow covering the ground so that the actual surface temperature of the ground is presumably no higher than 32F — depending on the depth one is measuring.

And then oops — we forgot the Oceans, that cover 70% of the surface of the planet.

What do we count as the “temperature” of a piece of the ocean? There is the temperature of the air above the surface of the ocean. In general this temperature differs from the actual temperature of the water itself by something on the order of 5-10 K. The air temperature during the day is often warmer than the temperature of the water, in most places. The air temperature at night is often cooler than the temperature of the water.

Or is it? What exactly is “the temperature of the water”? Is it the temperature of the top 1 mm of the surface, where the temperature is dominated by chemical potential as water molecules are constantly being knocked off into the air, carrying away heat? Is it the temperature 1 cm deep? 10 cm? 1 m? 10 m? 50 m? 100m? 1 km?

Is it the average over a vertical column from the surface to the bottom (where the actual depth of the bottom varies by as much as 10 km)? This will bias the temperature way, way down for deep water and make the global average temperature of the ocean very nearly 4 C very nearly everywhere, dropping the estimate of the Earth’s average Global Temperature by well over 10 K. Yet if we do anything else, we introduce a completely arbitrary bias into our average. Every value we might use as a depth to average over has consequences that cause large variations in the final value of the average. As anyone who swims knows, it is quite easy for the top meter or so of water to be warm enough to be comfortable while the water underneath that is cold enough to take your breath away.
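
As a toy illustration of how strongly the chosen averaging depth drives the answer, here is a sketch with an entirely invented but qualitatively reasonable column profile — a warm mixed layer, a thermocline, and abyssal water near 4 C:

```python
# Sketch of the depth-of-averaging problem: an idealized (invented) column
# with a warm mixed layer, a thermocline, and cold deep water near 4 C.
def column_temp_c(depth_m):
    """Idealized water temperature in C at a given depth."""
    if depth_m <= 50:                       # warm, well-mixed surface layer
        return 24.0
    if depth_m <= 1000:                     # thermocline: 24 C down to 4 C
        return 24.0 - 20.0 * (depth_m - 50) / 950.0
    return 4.0                              # abyssal water

def column_average_c(depth_limit_m, step_m=1.0):
    n = int(depth_limit_m / step_m)
    return sum(column_temp_c(i * step_m) for i in range(n)) / n

for limit in (1, 10, 100, 700, 4000):
    print(f"average over top {limit:>5} m: {column_average_c(limit):5.1f} C")
```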

Even if one defines — arbitrarily, as arbitrary in its own way as the definition one uses for T_{LT}, or as the temperature one assigns to a particular point on the surface on the basis of a “corrected” or “uncorrected” thermometer with location biases that can easily exceed several degrees K compared to equally arbitrary definitions of what the thermometer “should” be reading for the unbiased temperature and how that temperature is supposed to relate to a “true” temperature for the location — a sea surface temperature SST to go with the land surface temperature LST, and then tries to take the actual data for both and turn them into an average global temperature, one has a final problem to overcome. One’s data is (with the possible exception of modern satellite-derived data) sparse! Very sparse.

In particular, it is sparse compared to the known and observed granularity of surface temperature variations, for both LST and SST. Furthermore, it has obvious sampling biases. We have lots and lots of measurements where people live. We have very few measurements (per square kilometer of surface area) where people do not live. Surface temperatures can easily vary by 1 K over a kilometer in lateral distance (e.g. at terrain features where one goes up a few hundred meters over a kilometer of grade). They can and do routinely vary by 1 K over distances of 5-10 kilometers.

I can look at the Weather Underground’s weather map readings from weather stations scattered around Durham at a glance, for example. At the moment I’m typing this there is a 13 F variation from the coldest to the warmest station reading within a 15 km radius of where I’m sitting. Worse, nearly all of these weather station readings are between 50 and 55 F, but there are two outliers. One of them is 46.5 F (in a neighborhood in Chapel Hill), and the other is Durham itself, the “official” reading for Durham (probably downtown somewhere) which is 59.5 F!

Guess which one will end up being the temperature used to compute the average surface temperature for Durham today, and assigned to an entirely disproportionate area of the surface of the planet in a global average surface temperature reconstruction?
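
As a toy illustration of the point (station values invented, loosely echoing the spread just described), the single “official” reading and the plain mean over all nearby stations can tell rather different stories about the same afternoon:

```python
# Sketch of the single-station problem: hypothetical readings loosely echoing
# the 46.5-59.5 F spread described above. Which number "is" Durham today?
readings_f = {
    "official (downtown)": 59.5,
    "Chapel Hill neighborhood": 46.5,
    "station A": 51.0,
    "station B": 52.5,
    "station C": 53.0,
    "station D": 54.5,
    "station E": 50.5,
}

mean_all = sum(readings_f.values()) / len(readings_f)
official = readings_f["official (downtown)"]
print(f"mean of all stations : {mean_all:.1f} F")
print(f"official reading only: {official:.1f} F")
print(f"difference           : {official - mean_all:+.1f} F")
```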

Incidentally, the temperature outside of my house at this particular moment is 52F. This is a digital electronic thermometer in the shade of the north side of the house, around a meter off of the ground. The air temperature on the other side of the house is almost certainly a few degrees warmer as the house sits on a southwest-facing hill with pavement and green grass absorbing the bright sunlight. The temperature back in the middle of the cypresses behind my house (dense shade all day long, but with decent airflow) would probably be no warmer than 50 F. The temperature a meter over the driveway itself (facing and angled square into the sun, and with the house itself reflecting additional heat and light like a little reflector oven) is probably close to 60 F. I’m guessing there is close to 10F variation between the air flowing over the southwest facing dark roof shingles and the northeast facing dark roof shingles, biased further by loss of heat from my (fairly well insulated) house.

I don’t even know how to compute an average surface temperature for the 1/2 acre plot of land my own house sits on, today, right now, from any single thermometer sampling any single location. It is 50 F, 52 F, 58 F, 55 F, 61 F, depending on just where my thermometer is located. My house is on a long hill (over a km long) that rises to an elevation perhaps 50-100 m higher than my house at the top — we’re in the piedmont in between Durham and Chapel Hill, where Chapel Hill really is up on a hill, or rather a series of hills that stretch past our house. I’d bet a nickel that it is a few degrees different at the top of the hill than it is where my house is today. Today it is windy, so the air is well mixed and the height is probably cooler. On a still night, the colder air tends to settle down in the hollows at the bottoms of hills, so last frost comes earlier up on hilltops or hillsides; Chapel Hill typically has spring a week or so before Durham does, in contradiction of the usual rule that higher locations are cooler.

This is why I am enormously cynical about Argo, SSTs, GISS, and so on as reliable estimates of average Global Temperature. They invariably claim impossible accuracy and impossible precision. Mere common sense suffices to reject such claims. If they disagree, they can come to my house and try to determine what the “correct” average temperature is for my humble half acre, and how it can be inferred from a single thermometer located on the actual property, let alone from a thermometer located in some weather station out in Duke Forest five kilometers away.

That is why I think that we have precisely 33 years of reasonably reliable global temperature data, not in terms of accuracy (which is unknown and perhaps unknowable) but in terms of statistical precision and as the result of a reasonably uniform sampling of the actual globe. The UAH T_{LT} is what it is, is fairly precisely known, and is at least expected to be monotonically related to a “true average surface Global Temperature”. It is therefore good for determining actual trends in global temperature, not so good for making pronouncements about whether or not the temperature now is or is not the warmest that it has been in the Holocene.

Hopefully the issues above make it clear just how absurd any such assertion truly is. We don’t know the actual temperature of the globe now, with modern instrumentation and computational methodology, to an accuracy of 1 K in any way that can be compared apples-to-apples to any temperature reconstruction, instrument based or proxy based, from fifty, one hundred, one thousand, or ten thousand years ago. 1 K is on the close order of all of the global warming supposedly observed since the invention of the thermometer itself (and hence the start of the direct instrumental record). We cannot compare even “anomalies” across such records — they simply don’t compare because of confounding variables, as the “Hide the Decline” and “Bristlecone Pine” problems clearly reveal in the hockey stick controversy. One cannot remove the effects of these confounding variables in any defensible way because one does not know what they are, because things (e.g. annual rainfall and the details of local temperature and many other things) are not the same today as they were 100 years ago, and we lack the actual data needed to correct the proxies.

A year with a late frost, for example, can stunt the growth of a tree for a whole year by simply damaging its new leaves or can enhance it by killing off its fruit (leaving more energy for growth that otherwise would have gone into reproduction) completely independent of the actual average temperature for the year.

To conclude, one of many, many problems with modern climate research is that the researchers seem to take their thermal reconstructions far too seriously and assign completely absurd measures of accuracy and precision, with a very few exceptions. In my opinion it is categorically impossible to “correct” for things like the UHI effect — it presupposes a knowledge of the uncorrected temperature that one simply cannot have or reliably infer from the data. The problem becomes greater and greater the further back in time one proceeds, with big jumps (in uncertainty) 250, 200, 100 and 40 odd years ago. The proxy-derived record from more than 250 years ago is uncertain in the extreme, with the thermal record of well over 70% of the Earth’s surface completely inaccessible and with an enormously sparse sampling of highly noisy and confounded proxies elsewhere. To claim accuracy greater than 2-3 K is almost certainly sheer piffle, given that we probably don’t know current “true” global average temperatures within 1 K, and 5K is more likely.

I’m certain that some paleoclimatologists would disagree with such a pessimistic range. Surely, they might say, if we sample Greenland or Antarctic ice cores we can obtain an accurate proxy of temperatures there 1000 or 2000 years ago. Why aren’t those comparable to the present?

The answer is that we cannot be certain that the Earth’s primary climate drivers distributed heat the same way then as now. We can clearly see how important e.g. the decadal oscillations are in moving heat around and causing variations in global average temperature. ENSO causes spikes and seems responsible for discrete jumps in global average temperature over the recent (decently thermometric) past that are almost certainly jumps from one Poincaré attractor to another in a complex turbulence model. We don’t even know if there was an ENSO 1000 years ago, or, if there was, whether it was at the same location and had precisely the same dependences on e.g. solar state. As a lovely paper Anthony posted this morning clearly shows, major oceanic currents jump around on millennial timescales that appear connected to millennial scale solar variability and almost certainly modulate the major oscillations themselves in nontrivial ways. It is quite possible for temperatures in the Antarctic to anticorrelate with temperatures in the tropics for hundreds of years and then switch so that they correlate again. When an ocean current is diverted, it can change the way ocean average temperatures (however one might compute them, see above) vary over macroscopic fractions of the Earth’s surface all at once.

To some extent one can control for this by looking at lots of places, but “lots” is in practice highly restricted. Most places simply don’t have a good proxy at all, and the ones that do aren’t always easy to reconstruct accurately over very long time scales, or else they lose all sorts of information at shorter time scales in order to obtain the longer time scale averages one can get. I think 2-3 K is a generous statement of the probable real error in most reconstructions for global average temperature over 1000 years ago, again presuming one can define an apples-to-apples global average temperature to compare to, which I doubt. Nor can one reliably compare anomalies over such time scales, because of the confounding variables and drift.

This is a hard problem, and calling it settled science is obviously a political statement, not a scientific one. A good scientist would, I truly believe, call this unsettled science, science that is understood far less than physics, chemistry, even biology. It is a place for utter honesty, not egregious claims of impossibly accurate knowledge. In my own utterly personal opinion, informed as well or as badly as chance and a fair bit of effort on my part have thus far informed it, we have 33 years of a reasonably precise and reliable statement of global average temperature, one which is probably not the true average temperature assuming any such thing could be defined in the first place but which is as good as any for the purposes of identifying global warming or cooling trends and mechanisms.

Prior to this we have a jump in uncertainty (in precision, not accuracy) as we move to the ground-based thermometric record, which is strictly apples-to-oranges compared to the satellite derived averages, with error bars that rapidly grow the further back one goes in the thermometric record. We then have a huge jump in uncertainty (in both precision and accuracy) as we necessarily mount the multiproxy train to still earlier times, where the comparison has unfortunately been between modern era apples, thermometric era oranges, and carefully picked cherries. Our knowledge of global average temperatures becomes largely anecdotal, with uncertainties that are far larger than the observed variation in the instrumental era, and larger still compared to the variation over the reliable instrumental era (33 year baseline).

Personally, I think that this is an interesting problem and one well worth studying. It is important to humans in lots of ways; we have only benefitted from our studies of the weather and our ability to predict it is enormously valuable as of today in cash money and avoided loss of life and property. It is, however, high time to admit the uncertainties and get the damn politics out of the science. Global climate is not a “cause”! It is the object of scientific study. For the conclusions of that science to be worth anything at all, they have to be brutally honest — honest in a way that is utterly stripped of bias and that acknowledges to a fault our own ignorance and the difficulty of the problem. Pretending that we know and can measure global average temperatures from a sparse and short instrumental record where it would be daunting to assign an accurate, local average temperature to any given piece of ground based on a dense sampling of temperatures from different locations and environments on that piece of ground does nothing to actually help out the science — any time one claims impossible accuracy for a set of experimentally derived data one is openly inviting false conclusions to be drawn from the analysis. Pretending that we can model what is literally the most difficult problem in computational fluid dynamics we have ever attempted with a handful of relatively simple parametric differential forms and use the results over centennial and greater timescales does nothing for the science, especially when the models, when tested, often fail (and are failing, badly, over the mere 33 years of reliable instrumentation and a uniform definition of at least one of the global average temperatures).

It’s time to stop this, and just start over. And we will. Perhaps not this year, perhaps not next, but within the decade the science will finally start to catch up and put an end to the political foolishness. The problem is that no matter what one can do to proxy reconstructions, no matter how much you can adjust LSTs for UHI and other estimated corrections that somehow always leave things warmer than they arguably should be, no matter what egregious claims are initially made for SSTs based on Argo, the UAH T_{LT} will just keep on trucking, unfutzable, apples to apples to apples. The longer that record gets, the less one can bias an “interpretation” of the record.

In the long run that record will satisfy all properly skeptical scientists, and the “warmist” and “denier” labels will end up being revealed as the pointless political crap that they are. In the long run we might actually start to understand some of the things that contribute to that record, not as hypotheses in models that often fail but in models that actually seem to work, that capture the essential longer time scale phenomena. But that long run might well be centennial in scale — long enough to detect and at least try to predict the millennial variations, something utterly impossible with a 33 year baseline.

rgb

Dolphinhead
March 4, 2012 4:00 pm

Dr Brown
can you comment on the statement by Max Hugoson wrt RH. This is something that has always concerned me. Is it right that the dry air temperature of the Arctic is averaged with the very humid air of the tropics as if they were apples and apples? Does it not take a lot more energy to change the temperature of humid air than dry air?

k scott denison
March 4, 2012 4:02 pm

KR says:
March 4, 2012 at 2:33 pm
This is an appalling post.
Dr Brown, are you perhaps not familiar with the law of large numbers (http://en.wikipedia.org/wiki/Law_of_large_numbers)?
Over the course of many measurements the estimate of any random variable will converge on its true value.
=========================
Um, KR, those 7,000 thermometers are, at any instant in time, measuring 7,000 DIFFERENT variables, that is, the temperature, at that instant in time, of 7,000 unique geographic locations. How, exactly, does the law of large numbers apply?
HINT: it doesn’t.

DirkH
March 4, 2012 4:05 pm

Nick Stokes says:
March 4, 2012 at 2:52 pm
“Frank K. says: March 4, 2012 at 2:18 pm
“Please state this rule for us.”
No, Frank, I want an answer to my question first. Who are these people who claim to calculate the Earth’s “global annualized temperature” to 0.05K? Seen a graph?”
The answer is that you have erected a strawman yourself. Read the sentence:
“anybody who claims to know the annualized average temperature of the Earth, or the Ocean, to 0.05 K is, as the saying goes, full of …”
Did Dr. Brown bash James Hansen, or any other GISS employee? No. Can you calm down now?
I am, BTW, very satisfied that you agree with Dr. Brown.

DirkH
March 4, 2012 4:11 pm

KR says:
March 4, 2012 at 3:25 pm
“Individual readings have a much higher variance, but when you have thousands upon thousands of readings the error variance drops accordingly.”
They have 1,500 readings (since the big Dying Of The Thermometers) and the error goes down with the square root of that, BUT that assumes that they culled in a non-systematic way.
Which they haven’t:
Aussie thermometers march north, the rest of the world marches south in GHCN :
http://chiefio.wordpress.com/2009/10/23/gistemp-aussy-fair-go-and-far-gone/

Latitude
March 4, 2012 4:11 pm

Well…..who ya gonna believe
“Based on readings from more than 30,000 measuring stations, the data was issued last week without fanfare by the Met Office and the University of East Anglia Climatic Research Unit. It confirms that the rising trend in world temperatures ended in 1997.”
and it was Gleick that declared “the science is settled”…..
…you see where that got him

March 4, 2012 4:13 pm

Curses, foiled again.. Continuing….
underestimates the UHI effect by 0.3K. That would make the two curves agree quite nicely, cutting GISS back to a rise of 0.3 K over the same time frame.
Of course, that would provide one with no evidence of a catastrophe, and not much evidence of warming, especially if one allows for any modulation of global temperatures due to e.g. solar magnetic modulation of GCRs and consequent variations in albedo. Indeed, variation of albedo then becomes almost 100% of the actual driver of T_{LT}. Which makes sense, given a saturated CO_2 based GHE and moderate negative feedbacks from the water cycle.
Sooner or later, though, GISS is going to have to face UAH LTT, and deal with the growing discrepancy. And it is pretty obvious which one will win. They are NASA’s own satellites, after all.
rgb

Allen63
March 4, 2012 4:24 pm

A timely article. I agree with the thrust.
“Timely” because I’ve just re-started “from the most basic concepts” myself in attempting to accurately define the true error bars around the Global Surface Temperature Anomalies. My gut feeling is that the error bars are too large to support theories regarding AGW. But, gut isn’t science.
Like the author, I am going beyond “ordinary statistics” (which I think are not wholly applicable as used) and trying to consider “literally every physical thing” that might impact the final accuracy of a calculated Anomaly. So, thanks for this post.

March 4, 2012 4:25 pm

Damn, I’m going to just give up. Mousepad disease. Looks like I’ve completely lost two hourlong posts.
Grrr.
rgb

Steve from Rockwood
March 4, 2012 4:28 pm

KR says:
March 4, 2012 at 2:33 pm

This is an appalling post.
Dr Brown, are you perhaps not familiar with the law of large numbers (http://en.wikipedia.org/wiki/Law_of_large_numbers)?

KR, I cannot believe that someone could be so stupid as to quote the law of large numbers within the context of global temperatures. Do you know the difference between random and systematic error? Have you ever measured anything? The problem is that any sampling theory falls apart when you start working with such heavily biased, manipulated and badly measured data as global temperatures.
You owe Dr. Brown an apology.

If you actually run the numbers, you get the same anomaly graph down to only 60-100 randomly chosen stations total, albeit with increasing noise with fewer stations.

If this were true we wouldn’t see so much massaging of the temperature record. Unless you’re suggesting a margin of error of +/- 5 degrees. I spent 4 days at a small airport in Northern Ontario that has a weather station. The guy who plows the snow parks his huge snow plow about 5 m from the weather station while it warms up. I’m just wondering if that station points to warmer winters since the airport built the new garage so close to the weather station (and paved the whole area). But I didn’t see that effect discussed in your ridiculous link to the law of large numbers.

1DandyTroll
March 4, 2012 4:32 pm

The average temperature for our lonely planet is about the same as it has always been; the rest is a mere figment of creative statistics, and tremendously creatively manhandled statistics at that.
To dive into the spaghetti logic that is alarmist climatological logic: if they can get the number of polar bears to be twenty to twenty-five thousand, every year, since the early 1970’s, from less than half of the half they think they knew something about — or, stated differently, from a mere handful of beers, and bears, and a sh*t load of helicopter fuel — what’s to say they don’t treat temperature data the same?
But one data point is just half of a binary set. So let’s further the thing with data from the other pole. How many penguins are there in the world? They’re as endangered, and as accounted for, as the beer bottles in the polar region, right?
Is it wikianswers’ reported low of 20 million or SeaWorld’s high of 70 million birds?
The accuracy of the statistics is as bold as eight million three hundred and thirty-eight thousand seven hundred and eighty individuals of one species to, between 1,540 and 1,855 times two, for another. Essentially, nobody could be bothered to count fewer than 2,000 pairs of individuals, but somebody had the time to, apparently, do some pretty “accurate” extrapolation equations.
When it comes to temperatures, as Mr Watts has been kind enough to show the world, nobody seems to have the time to properly and accurately maintain or to properly and accurately observe; but when it comes to global temperature — for which we can thank the likes of NASA/GISS, HadCRU, WMO, and ultimately the IPCC — too many seem to have too much time to extrapolate between too many fears (of too few or too many, or too low or too high).
According to statistics, sixty-eight million one hundred and sixty-three thousand four hundred and ninety, plus, of the global populace of penguins can’t be wrong: the Antarctic truly, utterly, and completely unequivocally blows, but probably blows less than — the alarmist’s precautionary principle applied — the hordes of scientific-looking, but obviously evil-looking, humanoids infesting their frozen beach fronts.

Latitude
March 4, 2012 4:39 pm

Robert Brown says:
March 4, 2012 at 4:25 pm
Damn, I’m going to just give up. Mousepad disease. Looks like I’ve completely lost two hourlong posts.
=======================
Alright darn it……I’ve been waiting on you to reply too!….LOL
Force yourself to get into the habit of word…..copy and paste……

March 4, 2012 4:41 pm

KR says: March 4, 2012 at 3:25 pm
“The GISS standard deviation is quite well supported by the number of measurements, as per the law of large numbers ..”
Yes, I’m quite happy with GISS’s handling of anomalies. But this post says:
“anybody who claims to know the annualized average temperature of the Earth, or the Ocean, to 0.05 K is, as the saying goes, full of shit up to their eyebrows.”
And yes, that’s true. But who does? I’m not hearing. What’s the number?
Here’s what GISS says:
“For the global mean, the most trusted models produce a value of roughly 14°C, i.e. 57.2°F, but it may easily be anywhere between 56 and 58°F and regionally, let alone locally, the situation is even worse.”
Well, that’s a number. But it doesn’t sound like a claim of 0.05K accuracy.

Steve from Rockwood
March 4, 2012 4:44 pm

DirkH says:
March 4, 2012 at 4:11 pm

KR says:
March 4, 2012 at 3:25 pm
“Individual readings have a much higher variance, but when you have thousands upon thousands of readings the error variance drops accordingly.”
They have 1,500 readings (since the big Dying Of The Thermometers) and the error goes down with the square root of that, BUT that assumes that they culled in a non-systematic way.
Which they haven’t:
Aussie thermometers march north, the rest of the world marches south in GHCN :
http://chiefio.wordpress.com/2009/10/23/gistemp-aussy-fair-go-and-far-gone/

Only random error goes down with the square root of the number of readings. Systematic error (which tends to be much higher) only drops to its DC bias value (which can be substantial). You would have to have a series of systematic errors being random such that these errors were likewise reduced by the high number of sampling points. Good luck with that.
I recall a large diameter hole being drilled at a mine from surface to an underground stope from which they hoped to pump concentrate waste (with concrete, to fill the empty stope and keep the concentrate out of the tailings). Turns out the hole was about 125 m away from its desired target, even though it was accurately surveyed repeatedly with a north-seeking gyro survey probe that boasted an uncertainty of less than 2.5 m over that distance. How could such an accurate instrument that records almost continuously and produces tens of thousands of measurement points be out by so much? You won’t find the answer in the law of large numbers. Possibly in the law of common screw-ups, though I don’t see that in Wikipedia.
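
As a toy illustration of this point (all numbers invented): biases that differ randomly from reading to reading do average away, but a bias shared by every reading — the idling snow plow, the fresh pavement — sits in the mean no matter how large the sample gets.

```python
# Sketch: independent random errors (and even per-station biases that are
# themselves random) shrink as readings are averaged, but a bias shared by
# the whole network survives in the mean. All numbers are hypothetical.
import random

random.seed(42)

def network_mean(n_stations, shared_bias):
    total = 0.0
    for _ in range(n_stations):
        station_bias = random.gauss(0, 0.5)   # differs randomly per station
        noise = random.gauss(0, 2.0)          # independent reading noise
        total += shared_bias + station_bias + noise
    return total / n_stations

for n in (100, 10_000, 1_000_000):
    print(f"N={n:>9}: no shared bias -> {network_mean(n, 0.0):+.3f}   "
          f"shared 0.8 bias -> {network_mean(n, 0.8):+.3f}")
```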

Douglas Hoyt
March 4, 2012 4:57 pm

There is insufficient areal coverage before 1957 to calculate the mean temperature for the Southern Hemisphere. We know this because if you calculate temperatures in degrees Kelvin, rather than using anomalies, you will not get a temperature anywhere near 288 K.
From the above, it follows that global temperature changes before 1957 (the IGY year) are not known with any reliability.
Northern Hemisphere temperatures can be calculated back to about 1900. Before that year, there is insufficient areal coverage to deduce how climate was varying.
An honest appraisal of climate change would use absolute temperatures rather than anomalies, which are very deceiving. In fact, using anomalies should be avoided altogether.

KR
March 4, 2012 4:59 pm

Nick Stokes – That’s why I said anomaly
Dr. Brown conflates absolute temperatures (which have the uncertainties you note) with anomalies (which are supported by a very large data set, and have the 2-SD accuracy of 0.05C). Which is quite the strawman argument, a bait and switch.
As I noted above, if there is a consistent bias the temperature estimation will have a constant offset of about that bias – and for anomalies that bias will cancel out to zero, since it appears in both the baseline and the time-point of interest. If the bias is consistently changing over time, it will show up in the anomalies – but the only real candidate for that is the urban heat island effect, and as the BEST data shows (along with any number of other analyses, including Fall and Watts 2009), there is no such overall effect on temperature averages.
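
A toy sketch of the argument being made here (synthetic series, invented numbers): subtracting a baseline removes a constant station bias from the anomaly, but a bias that drifts over time survives into it.

```python
# Sketch of the anomaly argument: a constant bias cancels when the baseline
# mean is subtracted, while a drifting bias (e.g. creeping urbanization)
# leaks into the anomaly. All series are synthetic.
years = list(range(1980, 2011))
true_temp = [14.0 + 0.02 * (y - 1980) for y in years]          # slow real trend
constant_bias = [t + 1.5 for t in true_temp]                   # fixed offset
drifting_bias = [t + 0.05 * (y - 1980) for t, y in zip(true_temp, years)]

def anomaly(series, baseline=range(1981, 1991)):
    base = sum(series[years.index(y)] for y in baseline) / len(baseline)
    return [v - base for v in series]

for name, series in (("true series", true_temp),
                     ("constant bias", constant_bias),
                     ("drifting bias", drifting_bias)):
    print(f"{name:13s}: 2010 anomaly = {anomaly(series)[-1]:+.2f} C")
```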

A. Scott
March 4, 2012 4:59 pm

[C]alling it settled science is obviously a political statement, not a scientific one. A good scientist would, I truly believe, call this unsettled science, science that is understood far less than physics, chemistry, even biology. It is a place for utter honesty, not egregious claims of impossibly accurate knowledge.
Our knowledge of global average temperatures [is] largely anecdotal, with uncertainties that are far larger than the observed variation in the instrumental era and larger still than the reliable instrumental era (33 year baseline).
It is, however, high time to admit the uncertainties and get the damn politics out of the science. Global climate is not a “cause”! It is the object of scientific study. For the conclusions of that science to be worth anything at all, they have to be brutally honest — honest in a way that is utterly stripped of bias and that acknowledges to a fault our own ignorance and the difficulty of the problem.

Excellent points.
How can the science be “settled” when there is NO reliability in the data?
When the data is continuously manipulated (always creating increasingly warmer averages).
We have 30 or so years of somewhat decent instrumental record, but even that is plagued with problems, most importantly a serious lack of global coverage. We also have several instrumental records going back to the 1600’s. And then various “reconstructed” temperature records derived from proxies – each localized, and none with a high reliability over shorter time frames.
Many, such as tree rings – the heart of current CAGW’s “science” – are increasingly proven to be unreliable; having been cherry picked and constantly manipulated for desired results – and, when they failed to accurately represent the current instrumental record, simply discarded for the recent period while kept for the historical data.
“Settled” … not remotely.

RACookPE1978
Editor
March 4, 2012 5:06 pm

Robert Brown says:
March 4, 2012 at 4:25 pm
Damn, I’m going to just give up. Mousepad disease. Looks like I’ve completely lost two hourlong posts.

To minimize carbon-based computer keyboard interface errors of large time durations and many characters, may I recommend you work “off-screen” for any answers greater than 45 or 50 lines.
Write your responses at length in MS Word (or even notepad) – as long as you can spell check conveniently.
Regularly save them via that process; then, at the end of your patience – or when you’re through (whichever comes first) “select all” and “paste” (the words) over here in the WordPress dialog box.

RockyRoad
March 4, 2012 5:10 pm

I suspect that temperatures over the earth’s surface have a lognormal distribution–most fall within a fairly narrow range with extremes at both ends caused by elevation or latitude. The cumulative average of a quantity that provides a lognormal data set upon sampling is not the average of the data points. I repeat–NOT the average. Why? The central limit theorem (and how it applies to a normal distribution, but not a lognormal distribution).
So regardless of the “confidence” based on the “law of large numbers” (seen used above by several posters to erroneously justify the status quo), it is equally erroneous to believe that it applies to “average global temperature”. What would more closely (and I emphasize the term “closely”) approximate the true average would be to first properly interpolate temperature from data points to volumes of air (and no, Steve Mosher–just because your “guess” matches an actual data point does not corroborate your “guesstimation”). Then these comparable volumes could be quantitatively averaged to get a “close average”.
It would require geostatistics to make this all possible, but an estimation variance (precision of the estimate) could be obtained in the process (which is not to be confused with the so-called precision of the temperature data points, although that would be of interest to compare to the “nugget effect” on the variograms).
(All this I explained earlier on the discussion of the Argos data set Willis brought to our attention.)
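
As a much cruder stand-in for that workflow (inverse-distance weighting on invented data instead of kriging, and without the estimation variance that kriging would supply), interpolating scattered readings onto a grid and then averaging the cells already gives a different answer than averaging the raw, spatially clustered stations:

```python
# Crude stand-in for the interpolate-then-average idea: inverse-distance
# weighting (not kriging) of invented station data onto a 10 x 10 km grid,
# then averaging grid cells instead of raw, spatially clustered points.
stations = [  # (x_km, y_km, temp_C): three clustered warm, one remote cool
    (1.0, 1.0, 15.0), (1.5, 1.2, 15.4), (0.8, 1.6, 15.2), (9.0, 9.0, 9.0),
]

def idw(x, y, points, power=2.0):
    num = den = 0.0
    for px, py, val in points:
        d2 = (x - px) ** 2 + (y - py) ** 2
        if d2 == 0.0:                      # exactly on a station
            return val
        w = 1.0 / d2 ** (power / 2.0)
        num += w * val
        den += w
    return num / den

grid = [idw(i + 0.5, j + 0.5, stations) for i in range(10) for j in range(10)]

print(f"raw station mean : {sum(v for _, _, v in stations) / len(stations):.2f} C")
print(f"gridded IDW mean : {sum(grid) / len(grid):.2f} C")
```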

Mindbuilder
March 4, 2012 5:37 pm

My initial guess at how to define global temperature would be the temperature if you took every molecule in the volume of air between 1.5 and 2.5 meters above the surface, and instantly transported all those molecules to random locations within a cube of equal volume without changing their speed, energy, etc. Of course you can’t measure such a thing exactly, but I bet you could make a pretty close estimate. Probably well under 1K.

jonathan frodsham
March 4, 2012 5:38 pm

Yes, Yes and Yes. The very idea that M.Mann could accurately measure the average temperature of the earth for the last 1000 years is preposterous beyond belief. What makes it even worse is the fact that so called scientists actually continue to defend the badly broken stick. Mann could not even measure the average temperature of his back yard for a year; no doubt he would cut down a tree there and proclaim to the world that he has discovered the secret to his back yard temperature and get a Nobel Peace Prize for it! The mind boggles at how this scam has been going on for so long.
Is it because that the majority of the population are really that stupid? I think that the answer is Yes, Yes and Yes.

GaryM
March 4, 2012 5:40 pm

Nick Stokes says:
March 4, 2012 at 3:24 pm
“…You won’t find such a number, to 0.05K, in AR4.”
From the AR4 quote in my comment:
“This can now be compared with observed values of about 0.2°C per decade, strengthening confidence in near-term projections.”
Now I am not a climate scientist, but as far as my limited ability takes me, a change in global average temperature of “about 0.2°C per decade” calculates to “about 0.02°C per year.” No, they don’t use the number 0.05 in that passage, but their claim of accuracy seems to me to be if anything somewhat greater.

kakatoa
March 4, 2012 5:53 pm

I find the Global Average Temperature metric to be insufficient for any personal decision making – even if it was accurate. I, and I hope my local and regional leaders, need a local metric for it to be of any value — say Heating and Cooling Degree Hours and Days when it comes down to temperature. For forecasts and projections of future climate in my area I am interested in these two metrics, as well as the number of hours below 32F in the winter.

Jeff Alberts
March 4, 2012 6:04 pm

Wow. My comment provoked that response and this post? I’m honored!

AlexS
March 4, 2012 6:06 pm

I am surprised by those that criticize the piece.
They are those who make a strawman.
They assume that the earth is uniform in its temperature path, be it getting warmer, colder, or stabilizing. It isn’t; in some places it is getting colder and in others it is getting hotter.
Let’s suppose that an area somewhere is getting hotter, but is badly sampled – and bad samples are mostly what we have – then we miss that increase in temperature. Same for the inverse, if an area is getting colder but there are no stations.
And this doesn’t even get into temperature at altitude, the time of the temperature reading, and many other variables.

Brian H
March 4, 2012 6:08 pm

Climate Science is deep in “intensive variable denial”.
Take 20 different materials, in samples of various sizes, and determine the density of each. Average the results. Now, determine the average density of the ensemble as a unit. Derive an adjustment coefficient to make them match.
Repeat, 20X, with different sizes of all the samples.
Now, examine the 21 different coefficients, and the averages, and determine what the true average density is.
When complete, and when the answer can be proven correct and the algorithm robust, you can graduate to temperatures. But not before.
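
A toy sketch of the intensive-variable point with invented samples: the plain average of the per-sample densities is not, in general, the density of the ensemble as a unit (total mass over total volume) once the sample sizes differ.

```python
# Sketch of the intensive-variable point: averaging the 20 sample densities
# is not the same as the density of the combined ensemble (total mass over
# total volume). Materials and sizes are invented.
import random

random.seed(1)
# (density in g/cm^3, volume in cm^3) for 20 hypothetical samples
samples = [(random.uniform(0.5, 10.0), random.uniform(1.0, 100.0))
           for _ in range(20)]

mean_of_densities = sum(d for d, _ in samples) / len(samples)
ensemble_density = (sum(d * v for d, v in samples)      # total mass
                    / sum(v for _, v in samples))       # total volume

print(f"average of the 20 densities  : {mean_of_densities:.3f} g/cm^3")
print(f"density of the whole ensemble: {ensemble_density:.3f} g/cm^3")
```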