Global annualized temperature – "full of [snip] up to their eyebrows"

Guest Post by Dr. Robert Brown,

Physics Dept., Duke University [elevated from comments]

Dr. Brown mentions “global temperature” several times. I’d like to know what he thinks of this.

Dr. Brown thinks that this is a very nice piece of work, and is precisely the reason that he said that anybody who claims to know the annualized average temperature of the Earth, or the Ocean, to 0.05 K is, as the saying goes, full of [snip] up to their eyebrows.

What I think one can define is an average “Global Temperature” — noting well the quotes — by following some fixed and consistent rule that goes from a set of data to a result. For example, the scheme that is used to go from satellite data to the UAH lower troposphere temperature. This scheme almost certainly does not return “the average Global Temperature of the Earth” in degrees absolute as something that reliably represents the coarse-grain averaged temperature of (say) the lowest 5 kilometers of the air column, especially not the air column as its height varies over an irregular terrain that is itself sometimes higher than 5 kilometers. It does, however, return something that is likely to be close to what this average would be if one could sample and compute it, and one at least hopes that the two would co-vary monotonically most of the time.

The accuracy of the measure is very likely not even 1 K (IMO, others may disagree), where accuracy is |T_{LTT} - T_{TGT}| — the absolute difference between the lower troposphere temperature and the “true global temperature” of the lower troposphere. The various satellites that contribute to the temperature record have (IIRC) a variance on this order, so the data itself is probably not more accurate than that. The “precision” of the data is distinct — that is a measure of how much variance there is in the data sources themselves, and it is a quantity that can be systematically improved by more data. Accuracy, especially in a situation like this where one is indirectly inferring a quantity that is not exactly the same as what is being measured, cannot be improved by more or more precise measurements; it can only be improved by figuring out the map between the data one is using and the actual quantity one is making claims about.
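To make the precision/accuracy distinction concrete, here is a minimal Python sketch with entirely made-up numbers: a fixed systematic offset (standing in for the unknown map between what is measured and the quantity one actually wants) plus random instrument noise. Averaging more readings shrinks the scatter (precision) but leaves the offset (accuracy) untouched.

```python
import random

random.seed(0)

TRUE_TEMP = 288.0   # hypothetical "true" temperature, K (illustrative)
BIAS = 0.8          # assumed systematic offset between the measured proxy
                    # and the quantity we actually want, K (illustrative)
NOISE = 0.5         # random instrument noise, K (illustrative)

def mean_of_samples(n):
    """Average n noisy, biased readings."""
    readings = [TRUE_TEMP + BIAS + random.gauss(0, NOISE) for _ in range(n)]
    return sum(readings) / n

def spread(xs):
    """Standard deviation of a list of values."""
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

# Precision improves with more data: the scatter of repeated means shrinks.
small = [mean_of_samples(10) for _ in range(200)]
large = [mean_of_samples(1000) for _ in range(200)]
print(f"spread of 10-sample means:   {spread(small):.3f} K")
print(f"spread of 1000-sample means: {spread(large):.3f} K")

# Accuracy does not improve: however many readings we average, the
# estimate stays about BIAS away from TRUE_TEMP.
print(f"error of a 100000-sample mean: {abs(mean_of_samples(100000) - TRUE_TEMP):.3f} K")
```

No amount of additional sampling moves the last number toward zero; only knowledge of the bias itself would.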

Things are not better for (land) surface measurements — they are worse. There the actual data is (again, in my opinion) hopelessly corrupted by confounding phenomena and the measurement errors are profound. Worse, the measurement errors tend to have a variable monotonic bias compared to the mythical “true average surface Global Temperature” one wishes to measure.

One is in trouble from the very beginning. The Moon has no atmosphere, so its “global average temperature” can be defined without worrying about an air temperature at all, and even there the definition is not obvious. When one wishes to speak of the surface temperature at a given point, what does one use as a definition? Is it the temperature an actual high precision thermometer would read (say) 1 cm below the surface at that point? 5 mm? 1 mm? 1 meter? All of these would almost certainly yield different results, results that depend on things like the albedo and emissivity of the point on the surface, the heat capacity and thermal conductivity of the surface matter, and the latitude. Is it the “blackbody” temperature of the surface (the inferred temperature of the surface determined by measuring the outgoing full spectrum of radiated light)?

Even inferring the temperature from the latter — probably the one that is most relevant to an airless open system’s average state — is not trivial, because the surface albedo varies, the emissivity varies, and the outgoing radiation from any given point just isn’t a perfect blackbody curve as a result.

How much more difficult is it to measure the Earth’s comparable “surface temperature” at a single point on the surface? For one thing, we don’t do anything of the sort. We don’t place our thermometers 1 meter, 1 cm, 1 mm deep in — what, the soil? The grass or trees? What exactly is the “surface” of a planet largely covered with living plants? We place them in the air some distance above the surface. That distance varies. The surface itself is being heated directly by the sun part of the time, and is radiatively cooling directly to space (in at least some frequencies) all of the time. Its temperature varies by degrees K on a time scale of minutes to hours as clouds pass between the location and the sun, as the sun sets, as it starts to rain. It doesn’t just heat or cool from radiation — it is in tight thermal contact with a complex atmosphere that has a far greater influence on the local temperature than even local variations in insolation.

Yesterday it was unseasonably warm in NC, not because the GHE caused the local temperature to be higher by trapping additional heat but because the air that was flowing over the state came from the warm wet waters of the ocean to the south, so we had a relatively warm rain followed by a nighttime temperature that stayed warm (low overnight of maybe 46 F) because the sky was cloudy. Today it is almost perfectly seasonal — high 50s with a few scattered clouds, winds out of the WSW still carrying warm moisture from the Gulf and warm air from the south central US, but as the day progresses the wind is going to shift to the NW and it will drop to a solid freeze (30 F) tonight. Tomorrow it will be seasonal but wet, but by tomorrow night the cooler air that has moved in from the north will take it down to 25 F overnight. The variation in local temperature is determined far more by what is going on somewhere else than by actual insolation and radiation here.

If a real cold front comes down from Canada (as they frequently do this time of year) we could have daytime highs in the 30s or low 40s and nighttime lows down in the low 20s. OTOH, if the wind shifts to the right quarter, we could see highs in the low 80s and lows in the low 50s. We can, and do, have both extremes within a single week.

Clearly surface temperatures are being driven as strongly by the air and moisture flowing over or onto them as they are by the “ideal” picture of radiative energy warming the surface and radiation cooling it. The warming of the surface at any given point isn’t solely responsible for the warming or cooling of the air above it, the temperature of the surface is equally dependent on the temperature of the air as determined by the warming of the surface somewhere else, as determined by the direct warming and cooling of the air itself via radiation, as determined by phase changes of water vapor in the air and on the surface, as determined by factor of ten modulations of insolation as clouds float around over surface and the lower atmosphere alike.

Know the true average surface Global Temperature to within 1K? I don’t even know how one would define a “true” average surface Global Temperature. It was difficult enough for the moon without an atmosphere, assuming one can agree on the particular temperature one is going to “average” and how one is going to perform the average. For the Earth with a complex, wet, atmosphere, there isn’t any possibility of agreeing on a temperature to average! One cannot even measure the air temperature in a way that is not sensitive to where the sun is and what it is doing relative to the measurement apparatus, and the air temperature can easily be in the 40s or 50s while there is snow covering the ground so that the actual surface temperature of the ground is presumably no higher than 32F — depending on the depth one is measuring.

And then oops — we forgot the Oceans, that cover 70% of the surface of the planet.

What do we count as the “temperature” of a piece of the ocean? There is the temperature of the air above the surface of the ocean. In general this temperature differs from the actual temperature of the water itself by order of 5-10K. The air temperature during the day is often warmer than the temperature of the water, in most places. The air temperature at night is often cooler than the temperature of the water.

Or is it? What exactly is “the temperature of the water”? Is it the temperature of the top 1 mm of the surface, where the temperature is dominated by chemical potential as water molecules are constantly being knocked off into the air, carrying away heat? Is it the temperature 1 cm deep? 10 cm? 1 m? 10 m? 50 m? 100m? 1 km?

Is it the average over a vertical column from the surface to the bottom (where the actual depth of the bottom varies by as much as 10 km)? This will bias the temperature way, way down for deep water and make the global average temperature of the ocean very nearly 4 C very nearly everywhere, dropping the estimate of the Earth’s average Global Temperature by well over 10 K. Yet if we do anything else, we introduce a completely arbitrary bias into our average. Every value we might use as a depth to average over has consequences that cause large variations in the final value of the average. As anyone who swims knows, it is quite easy for the top meter or so of water to be warm enough to be comfortable while the water underneath is cold enough to take your breath away.
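A toy numerical sketch of the depth-averaging problem, using an entirely invented vertical profile (a warm mixed layer over a linear thermocline and cold abyssal water near 4 C): the “ocean temperature” one reports is almost entirely a function of the arbitrary depth one chooses to average over.

```python
def ocean_temp(depth_m):
    """Toy vertical temperature profile in C (illustrative numbers only)."""
    if depth_m < 50:
        return 22.0                              # warm mixed layer
    if depth_m < 1000:
        return 22.0 - 18.0 * (depth_m - 50) / 950  # linear thermocline
    return 4.0                                   # abyssal water

def column_average(total_depth_m, step=1):
    """Mean temperature over a column from the surface down to total_depth_m."""
    depths = range(0, total_depth_m, step)
    return sum(ocean_temp(d) for d in depths) / len(depths)

# The chosen averaging depth drives the answer:
print(f"top 10 m:  {column_average(10):.1f} C")
print(f"top 100 m: {column_average(100):.1f} C")
print(f"full 4 km: {column_average(4000):.1f} C")
```

With this profile the top-10 m average sits near 22 C while the full-column average falls to roughly 6 C, a swing far larger than any claimed measurement uncertainty.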

Even if one defines — arbitrarily, as arbitrary in its own way as the definition one uses for T_{LT}, or the temperature you are going to assign to a particular point on the surface on the basis of a “corrected” or “uncorrected” thermometer with location biases that can easily exceed several degrees K compared to equally arbitrary definitions of what the thermometer “should” be reading for the unbiased temperature and how that temperature is supposed to relate to a “true” temperature for the location — a sea surface temperature SST to go with land surface temperature LST, and then tries to take the actual data for both and turn them into an average global temperature, one has a final problem to overcome. One’s data is (with the possible exception of modern satellite-derived data) sparse! Very sparse.

In particular, it is sparse compared to the known and observed granularity of surface temperature variations, for both LST and SST. Furthermore, it has obvious sampling biases. We have lots and lots of measurements where people live. We have very few measurements (per square kilometer of surface area) where people do not live. Surface temperatures can easily vary by 1 K over a kilometer in lateral distance (e.g. at terrain features where one goes up a few hundred meters over a kilometer of grade), and they routinely vary by 1 K over distances of order 5-10 kilometers.

I can look at e.g. the Weather Underground’s weather map readings from weather stations scattered around Durham at a glance, for example. At the moment I’m typing this there is a 13 F variation from the coldest to the warmest station reading within a 15 km radius of where I’m sitting. Worse, nearly all of these weather station readings are between 50 and 55 F, but there are two outliers. One of them is 46.5 F (in a neighborhood in Chapel Hill), and the other is Durham itself, the “official” reading for Durham (probably downtown somewhere) which is 59.5 F!

Guess which one will end up being the temperature used to compute the average surface temperature for Durham today, and assigned to an entirely disproportionate area of the surface of the planet in a global average surface temperature reconstruction?

Incidentally, the temperature outside of my house at this particular moment is 52F. This is a digital electronic thermometer in the shade of the north side of the house, around a meter off of the ground. The air temperature on the other side of the house is almost certainly a few degrees warmer as the house sits on a southwest-facing hill with pavement and green grass absorbing the bright sunlight. The temperature back in the middle of the cypresses behind my house (dense shade all day long, but with decent airflow) would probably be no warmer than 50 F. The temperature a meter over the driveway itself (facing and angled square into the sun, and with the house itself reflecting additional heat and light like a little reflector oven) is probably close to 60 F. I’m guessing there is close to 10F variation between the air flowing over the southwest facing dark roof shingles and the northeast facing dark roof shingles, biased further by loss of heat from my (fairly well insulated) house.

I don’t even know how to compute an average surface temperature for the 1/2 acre plot of land my own house sits on, today, right now, from any single thermometer sampling any single location. It is 50 F, 52 F, 58 F, 55 F, 61 F, depending on just where my thermometer is located. My house is on a long hill (over a km long) that rises to an elevation perhaps 50-100 m higher than my house at the top — we’re in the piedmont between Durham and Chapel Hill, where Chapel Hill really is up on a hill, or rather a series of hills that stretch past our house. I’d bet a nickel that it is a few degrees different at the top of the hill than it is where my house is today. Today it is windy, so the air is well mixed and the height is probably cooler. On a still night, the colder air tends to settle down in the hollows at the bottoms of hills, so last frost comes earlier up on hilltops or hillsides; Chapel Hill typically has spring a week or so before Durham does, in contradiction of the usual rule that higher locations are cooler.
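As a sketch of the half-acre problem, here is a Python toy with hypothetical readings and guessed area fractions (none of these numbers are measurements): even a generously sampled, area-weighted plot average is not reproduced by any single thermometer on the property.

```python
# Hypothetical microclimate readings (F) and rough area fractions for one
# half-acre plot. Both the temperatures and the weights are invented.
readings = {
    "north shade":   (50.0, 0.15),
    "cypress grove": (50.0, 0.20),
    "open lawn":     (55.0, 0.35),
    "south wall":    (58.0, 0.20),
    "driveway":      (60.0, 0.10),
}

# Area-weighted plot average.
weighted = sum(t * w for t, w in readings.values())
print(f"area-weighted plot average: {weighted:.2f} F")

# No single thermometer site reads the weighted average exactly.
errors = {site: abs(t - weighted) for site, (t, w) in readings.items()}
for site, err in errors.items():
    print(f"  {site}: off by {err:.2f} F")
```

Any one of the five sites could be the “official” station, and each would carry its own bias into every record built on it.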

This is why I am enormously cynical about Argo, SSTs, GISS, and so on as reliable estimates of average Global Temperature. They invariably claim impossible accuracy and impossible precision, and mere common sense suffices to reject such claims. If they disagree, they can come to my house and try to determine what the “correct” average temperature is for my humble half acre, and how it can be inferred from a single thermometer located on the actual property, let alone from a thermometer located in some weather station out in Duke Forest five kilometers away.

That is why I think that we have precisely 33 years of reasonably reliable global temperature data, not in terms of accuracy (which is unknown and perhaps unknowable) but in terms of statistical precision and as the result of a reasonably uniform sampling of the actual globe. The UAH T_{LT} is what it is, is fairly precisely known, and is at least expected to be monotonically related to a “true average surface Global Temperature”. It is therefore good for determining actual trends in global temperature, not so good for making pronouncements about whether or not the temperature now is or is not the warmest that it has been in the Holocene.

Hopefully the issues above make it clear just how absurd any such assertion truly is. We don’t know the actual temperature of the globe now, with modern instrumentation and computational methodology, to an accuracy of 1 K in any way that can be compared apples-to-apples to any temperature reconstruction, instrument based or proxy based, from fifty, one hundred, one thousand, or ten thousand years ago. 1 K is the close order of all of the global warming supposedly observed since the invention of the thermometer itself (and hence the start of the direct instrumental record). We cannot compare even “anomalies” across such records — they simply don’t compare because of confounding variables, as the “Hide the Decline” and “Bristlecone Pine” problems clearly reveal in the hockey stick controversy. One cannot remove the effects of these confounding variables in any defensible way, because one does not know what they are: things (e.g. annual rainfall and the details of local temperature and many other things) are not the same today as they were 100 years ago, and we lack the actual data needed to correct the proxies.

A year with a late frost, for example, can stunt the growth of a tree for a whole year by simply damaging its new leaves or can enhance it by killing off its fruit (leaving more energy for growth that otherwise would have gone into reproduction) completely independent of the actual average temperature for the year.

To conclude, one of many, many problems with modern climate research is that the researchers seem to take their thermal reconstructions far too seriously and assign completely absurd measures of accuracy and precision, with a very few exceptions. In my opinion it is categorically impossible to “correct” for things like the UHI effect — it presupposes a knowledge of the uncorrected temperature that one simply cannot have or reliably infer from the data. The problem becomes greater and greater the further back in time one proceeds, with big jumps (in uncertainty) 250, 200, 100 and 40 odd years ago. The proxy-derived record from more than 250 years ago is uncertain in the extreme, with the thermal record of well over 70% of the Earth’s surface completely inaccessible and with an enormously sparse sampling of highly noisy and confounded proxies elsewhere. To claim accuracy greater than 2-3 K is almost certainly sheer piffle, given that we probably don’t know current “true” global average temperatures within 1 K, and 5K is more likely.

I’m certain that some paleoclimatologists would disagree with such a pessimistic range. Surely, they might say, if we sample Greenland or Antarctic ice cores we can obtain an accurate proxy of temperatures there 1000 or 2000 years ago. Why aren’t those comparable to the present?

The answer is that we cannot be certain that the Earth’s primary climate drivers distributed its heat the same way then as now. We can clearly see how important e.g. the decadal oscillations are in moving heat around and causing variations in global average temperature. ENSO causes spikes and seems responsible for discrete jumps in global average temperature over the recent (decently thermometric) past that are almost certainly jumps from one Poincaré attractor to another in a complex turbulence model. We don’t even know if there was an ENSO 1000 years ago, or, if there was, whether it was in the same location and had precisely the same dependence on e.g. the solar state. As a lovely paper Anthony posted this morning clearly shows, major oceanic currents jump around on millennial timescales that appear connected to millennial-scale solar variability and almost certainly modulate the major oscillations themselves in nontrivial ways. It is quite possible for temperatures in the Antarctic to anticorrelate with temperatures in the tropics for hundreds of years and then switch so that they correlate again. When an ocean current is diverted, it can change the way ocean average temperatures (however one might compute them, see above) vary over macroscopic fractions of the Earth’s surface all at once.

To some extent one can control for this by looking at lots of places, but “lots” is in practice highly restricted. Most places simply don’t have a good proxy at all, and the ones that do aren’t always easy to reconstruct accurately over very long time scales, or they sacrifice all sorts of shorter-time-scale information in order to yield the longer-time-scale averages one can extract. I think 2-3 K is a generous statement of the probable real error in most reconstructions of global average temperature over 1000 years ago, again presuming one can define an apples-to-apples global average temperature to compare to, which I doubt. Nor can one reliably compare anomalies over such time scales, because of the confounding variables and drift.

This is a hard problem, and calling it settled science is obviously a political statement, not a scientific one. A good scientist would, I truly believe, call this unsettled science, science that is understood far less than physics, chemistry, even biology. It is a place for utter honesty, not egregious claims of impossibly accurate knowledge. In my own utterly personal opinion, informed as well or as badly as chance and a fair bit of effort on my part have thus far informed it, we have 33 years of a reasonably precise and reliable statement of global average temperature, one which is probably not the true average temperature assuming any such thing could be defined in the first place but which is as good as any for the purposes of identifying global warming or cooling trends and mechanisms.

Prior to this we have a jump in uncertainty (in precision, not accuracy) compared to the ground-based thermometric record that is strictly apples-to-oranges compared to the satellite derived averages, with error bars that rapidly grow the further back one goes in the thermometric record. We then have a huge jump in uncertainty (in both precision and accuracy) as we necessarily mount the multiproxy train to still earlier times, where the comparison has unfortunately been between modern era apples, thermometric era oranges, and carefully picked cherries. Our knowledge of global average temperatures becomes largely anecdotal, with uncertainties that are far larger than the observed variation in the instrumental era and larger still than the reliable instrumental era (33 year baseline).

Personally, I think that this is an interesting problem and one well worth studying. It is important to humans in lots of ways; we have only benefitted from our studies of the weather and our ability to predict it is enormously valuable as of today in cash money and avoided loss of life and property. It is, however, high time to admit the uncertainties and get the damn politics out of the science. Global climate is not a “cause”! It is the object of scientific study. For the conclusions of that science to be worth anything at all, they have to be brutally honest — honest in a way that is utterly stripped of bias and that acknowledges to a fault our own ignorance and the difficulty of the problem. Pretending that we know and can measure global average temperatures from a sparse and short instrumental record where it would be daunting to assign an accurate, local average temperature to any given piece of ground based on a dense sampling of temperatures from different locations and environments on that piece of ground does nothing to actually help out the science — any time one claims impossible accuracy for a set of experimentally derived data one is openly inviting false conclusions to be drawn from the analysis. Pretending that we can model what is literally the most difficult problem in computational fluid dynamics we have ever attempted with a handful of relatively simple parametric differential forms and use the results over centennial and greater timescales does nothing for the science, especially when the models, when tested, often fail (and are failing, badly, over the mere 33 years of reliable instrumentation and a uniform definition of at least one of the global average temperatures).

It’s time to stop this, and just start over. And we will. Perhaps not this year, perhaps not next, but within the decade the science will finally start to catch up and put an end to the political foolishness. The problem is that no matter what one can do to proxy reconstructions, no matter how much you can adjust LSTs for UHI and other estimated corrections that somehow always leave things warmer than they arguably should be, no matter what egregious claims are initially made for SSTs based on Argo, the UAH T_{LT} will just keep on trucking, unfutzable, apples to apples to apples. The longer that record gets, the less one can bias an “interpretation” of the record.

In the long run that record will satisfy all properly skeptical scientists, and the “warmist” and “denier” labels will end up being revealed as the pointless political crap that they are. In the long run we might actually start to understand some of the things that contribute to that record, not as hypotheses in models that often fail but in models that actually seem to work, that capture the essential longer time scale phenomena. But that long run might well be centennial in scale — long enough to detect and at least try to predict the millennial variations, something utterly impossible with a 33 year baseline.




“anybody who claims to know the annualized average temperature of the Earth, or the Ocean, to 0.05 K is, as the saying goes, full of shit up to their eyebrows.”
This is a strawman argument. Who makes such a claim? Not GISS! Nor anyone else that I can think of. Can anyone point to such a calculation? What is the number?
Of course, anomalies are widely calculated and published. And they prove to have a lot of consistency. That is because they are the result of “following some fixed and consistent rule that goes from a set of data to a result”. And the temporal variation of that process is much more meaningful than the notion of a global average temperature.

Excellent! Best argument I’ve read on exactly how one can define what “global average temperature” really is.

cui bono

Wow! Thanks Dr. Brown. A lot to chew on here.

Malcolm Miller

Excellent outline of the ‘temperature’ problem. I always ask alarmists, ‘Where do you put the thermometer to measure the Earth’s temperature?’ They have no answer.


Physically, one cannot define a ‘global temperature’ for a system which is not in thermal equilibrium. One can approximately define a temperature locally, for quasi-equilibrium, but not a global temperature. Temperature is an intensive quantity that cannot be added. Extensive quantities (like volume or energy) can be added; intensive ones cannot. If you add them, you get meaningless values. One can try to be smart and say that if you divide them by a number (be that the number of measurements or any other number picked arbitrarily) you get back a meaningful value. You don’t. In physics, dividing a physical value like that is called scaling. Scaling a meaningless value gets you another meaningless value, and that’s it. If you have a system full of gas, half of it at 100 C and 100 atm, and half of it at 0 C and 0.001 atm, the AGW pseudoscientists would tell you that it has a global temperature of 50 C. Almost anybody could calculate for such a simple system that the meaningless average is pretty far from the equilibrium temperature. Even for such a simple system it is obvious that the ‘global temperature’ is a dumb value, with no physical meaning.
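The two-gas example in this comment is easy to check with the ideal gas law. Since n = PV/RT, the hot, high-pressure half holds essentially all of the gas, so for an ideal gas with constant heat capacity the equilibrium temperature after mixing is the mole-weighted mean of the two temperatures, nowhere near the naive 50 C average:

```python
# Two equal volumes of ideal gas, using the figures from the comment:
# one at 100 C and 100 atm, the other at 0 C and 0.001 atm.
T1, P1 = 373.15, 100.0    # K, atm
T2, P2 = 273.15, 0.001

# Moles per unit volume follow n = PV/RT, so n is proportional to P/T
# (equal volumes, common constants cancel in the weighted mean).
n1 = P1 / T1
n2 = P2 / T2

# Mole-weighted mean = equilibrium temperature after mixing
# (ideal gas, constant heat capacity, no work done on surroundings).
T_equilibrium = (n1 * T1 + n2 * T2) / (n1 + n2)
T_naive = (T1 + T2) / 2

print(f"naive average:    {T_naive - 273.15:.2f} C")
print(f"equilibrium temp: {T_equilibrium - 273.15:.2f} C")
```

The naive average is 50 C, while the actual equilibrium temperature is within a few thousandths of a degree of 100 C: the averaged number describes no physical state of the system.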


Yup. And this makes the satellite data important. Particularly in proving ground reading fraud.

Sean Houlihane

What is this post about? Seems like a random rant to me…



Dr. Brown: You have said well what I have thought for a long time, that to use some “average global temperature” as proof that carbon dioxide is altering the earth’s “average climate” is questionable at least. Moreover, anecdotal information about proxies for global temperature change is even more questionable. Unlike the physics laboratory, where we try to carefully control variables, temperature measurements are subject to random and systematic errors. We try to eliminate and understand the systematic errors and account for the random errors in our assessment. Your treatise identifies many of the errors that potentially affect the accuracy, and therefore the ability to interpret, the earth’s temperature. The earth’s climate is not contained in a fixed, well controlled laboratory and, worse, does not lend itself to repeating an experiment to verify that we can reproduce the results. Computer programs, while useful, need calibration against past data, with a realistic assessment of the accuracy and uncertainties, and not just against each other. A chaotic system such as the climate of the earth, where all the variables are not identified let alone observed, leads one inescapably to conclude, as you do in the last paragraph, that “it is time to stop this and start over”, acting like scientists working together to understand the climate. It is a waste of brilliant minds to work at cross purposes for the sake of controlling the science.

Garry Stotel

Plenty of common sense. A commodity as rare these days as hen’s teeth. Global warming is a phenomenon which exists in a number of dimensions, as noted by Prof. Bob Carter. We may have had a warming streak in the physical-reality world, but what exists in politics, the media and the heads of common people is something else.
I am not worried about the physical-reality world; I am very concerned about the “virtual dimension”, as humans prove, again and again, that groupthink and great lies can hold sway and cause enormous damage. Just look at the 20th century – Eugenics, Fascism, Nazism, Communism, Lysenko, all extremely popular ideas, despite being insane and ultimately causing pain and damage.
Preserving and advancing sane civilization is a restless task, against odds and frequent defeats, and in the absence of any guarantee of success.
Thank you.


Nick Stokes says:
March 4, 2012 at 12:43 pm
“Of course, anomalies are widely calculated and published. And they prove to have a lot of consistency. That is because they are the result of “following some fixed and consistent rule that goes from a set of data to a result”. And the temporal variation of that process is much more meaningful than the notion of a global average temperature.”
But the rules of that process change all the time. So the rules are not fixed. For instance, every time they kill a thermometer, they change the rules.


In my opinion it is categorically impossible to “correct” for things like the UHI effect — it presupposes a knowledge of the uncorrected temperature that one simply cannot have or reliably infer from the data.

Exactly! Perhaps they use educated guesses and get it ‘right’ to within a tenth of a degree. / sarc


Couple of points here.
1) Radiative heat loss into space is local. And instantaneous. And completely independent of ANY other temperature of ANY other (otherwise identical) surface at ANY other time at ANY other location with ANY other cloud cover and air mass. ANY yearly “average” heat loss into space is meaningless.
2) Inbound solar radiation gain is completely defined by local cloud cover, local latitude, local air mass (atmosphere thickness). At any given day-of-year, top of atmosphere radiation will be identical at all locations on earth and is completely and accurately predictable, but it will never be any “average” value for the year at any place on earth.
3) Outbound heat loss IS proportional to local temperature. Worse, it is proportional to the degree K raised to the 4th power. Total radiation loss WILL vary tremendously as temperature varies and (local) atmospheric conditions vary. At NO time of year at ANY location will any assumed “average” heat radiation loss be correct.
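Point 3 is essentially Jensen’s inequality: because radiative loss goes as T^4, which is convex, the average of the flux is not the flux at the average temperature. A two-temperature toy example (numbers invented for illustration):

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

# Illustrative diurnal swing: half the time at 300 K, half at 280 K.
temps = [300.0, 280.0]

# Flux computed from the mean temperature vs the mean of the fluxes.
flux_of_mean = SIGMA * (sum(temps) / len(temps)) ** 4
mean_of_flux = sum(SIGMA * t ** 4 for t in temps) / len(temps)

print(f"sigma * T_mean^4:  {flux_of_mean:.2f} W/m^2")
print(f"mean of sigma*T^4: {mean_of_flux:.2f} W/m^2")
# Because T^4 is convex, the mean flux always exceeds the flux at mean T
# whenever the temperature varies at all.
```

Even this modest 20 K swing produces a flux discrepancy of a few W/m^2, which is the same order as the forcings being debated.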

As I have noted TIME AFTER TIME AFTER TIME… an 86 F day in MN with 60% RH is 38 BTU/lb, and a 110 F day in PHX at 10% RH is 33 BTU/lb…
Which is “HOTTER”? HEAT = ENERGY? MN of course, while the temp is lower.
I was at a pro-AGW lecture by a retired U of Wisc “meteorology” professor a couple of years ago.
When I brought this up, along with the example of putting varying amounts of hot and cold water into 10 styrofoam cups, measuring the temps, averaging the result and comparing to the result of putting all the contents in ONE insulated container (the AVERAGED TEMP in this case WILL NOT MATCH the temp of the mixture in the one container!)… he looked (to QUOTE ANOTHER ENGINEER AT THE EVENT) like a “deer in the headlights”… and then finally babbled, “Well, the average temperatures do account for the R.H.”
Ah, like dear (Should be removed) dr. Glickspicable… “My words mean what I wish them to mean, nothing more and nothing less.” Reality BENDS for the AGW group!
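The cup experiment described above is simple to verify numerically. With made-up masses and temperatures, the plain average of the ten readings differs from the mass-weighted temperature of the combined container, which is what conservation of energy actually gives (ignoring cup heat capacity and losses):

```python
# Ten cups of water: (mass in grams, temperature in C). Amounts invented.
cups = [
    (300, 80), (50, 80), (200, 75), (30, 70), (100, 65),
    (40, 15), (250, 10), (60, 10), (350, 5), (80, 5),
]

# Plain average of the ten thermometer readings.
plain_mean = sum(t for _, t in cups) / len(cups)

# Temperature after pouring everything into one insulated container:
# a mass-weighted mean (constant specific heat of water assumed).
mixed_temp = sum(m * t for m, t in cups) / sum(m for m, _ in cups)

print(f"average of readings:  {plain_mean:.1f} C")
print(f"mixed-container temp: {mixed_temp:.1f} C")
```

The two numbers agree only in the special case where every cup holds the same mass; otherwise averaging intensive readings without their weights gives the wrong answer.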

As Nick notes, this is a strawman argument. Nobody who calculates a global temperature INDEX believes that it is an "average." Here is a way to think about it that I have found helpful: it is an estimate of the unobserved temperature at other locations. We collect samples from 7,000 locations over land and average them using a variety of methods. The answer for stations over land is something on the order of 9C (let's say). Now, what does that number represent? It represents our best estimate (the one that minimizes the error) of temperatures at unobserved locations. How well does that work?
Well, we can test it: we now have temperatures from 36,000 locations. How good was our estimate? Pretty darn good. Is that number the "average"? No; technically speaking, the average is not measurable. Hansen even recognizes that.
I tell you that it's 20C at my house and 24C 10 miles away. Estimate the temperature 5 miles away. Now, you can argue that this estimate is not the "average." You can holler that you don't want to make the estimate. But if you do decide to make an estimate based only on the data you have,
what is your best guess? 60C? -100C? If I guessed 22C, and then I looked and found that
it was 22C, what would you conclude about my estimation procedure? Would you conclude that it worked?
Here is another way to think about it. Step outside. It's 14C in 2012. We have evidence (say, documentary evidence) that the LIA was colder than now. What does that mean? It means that if you had to estimate the temperature in the same spot 300 years ago, you would estimate that it was…
that's right… colder than it is now. Now, chance being a crazy thing, it might actually have been warmer in that exact spot, but your best estimate, the one that minimizes the error, is that it was colder.
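The 20C/22C/24C estimate described above is essentially spatial interpolation. A minimal sketch of one common scheme, inverse-distance weighting, with the commenter's hypothetical numbers (positions in miles):

```python
def idw_estimate(points, x, power=2):
    """Inverse-distance-weighted estimate at position x from (position, temp) pairs."""
    num = den = 0.0
    for xi, ti in points:
        d = abs(x - xi)
        if d == 0:
            return ti          # exact hit: just return the observation
        w = 1.0 / d ** power   # nearer stations get more weight
        num += w * ti
        den += w
    return num / den

# 20 C at mile 0, 24 C at mile 10; estimate at mile 5
obs = [(0.0, 20.0), (10.0, 24.0)]
print(idw_estimate(obs, 5.0))   # equidistant from both stations -> 22.0
```

The point of the comment is not that 22 C is guaranteed to be right, only that it is the estimate that minimizes expected error given the data in hand; real products use more sophisticated weighting, but the logic is the same.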


Quite apart from the obvious difficulties in finding the Earth's "average temperature" at any given time, there is the recent, obvious, blatant corruption and tampering with the temperature record by "Climate Scientists" in order to massage the results into the form that they desire. In the end we are left with a battle of lies, damned lies, and statistics.

Probably our best and most accurate measure of global climate change is the atmospheric concentration of CO2 (not temperature). The reported monthly averages represent background levels and do not include measured event spikes that could be anthropogenic. Global concentrations do not vary significantly with longitude and have a latitude-dependent seasonal variation. The seasonal variation is smallest near 15S and largest above the Arctic Ocean (and it tracks sea ice area well: maximum CO2 when ice is at its maximum and minimum when ice is at its minimum; both are responding to global temperature changes). Water vapor and clouds are controlling and distributing atmospheric CO2 as well as temperature. CO2 is a lagging measure and not a controlling force. My analysis of the quantitative anthropogenic contribution to global atmospheric concentrations indicates the lag time is around ten years; that's probably how long it takes to cycle through the biosphere or the oceans. Click on my name for more details. Other lag times could be associated with longer cycles, such as the periodic upwelling of CO2-saturated deep ocean waters and the deep ocean conveyor belt.


Can anyone know the average temperature last year 1 meter above the ocean surface?


I was in a particular place and I measured 12 C. 5 km away it was 25 C; 100 m away it was 22 C. This situation is seen pretty often in the mountains: just move from a slope over the top to the other side and you see a very different temperature; move downward a little and you see another big difference.
Averaging doesn't work. Guessing doesn't work. Interpolation doesn't work. Reality works, and it often shows values wildly different from those calculated by unphysical means (that is, statistics and/or dumb interpolations). The 'estimation' is easy to falsify: just pick some measurements outside of the dataset used for the temperature INDEX-GOD, and if the values differ from what the pseudoscience 'predicts', the theory is false and has to be thrown away. As Feynman put it, no matter how smart you are, no matter how beautiful your pseudoscience is, if it contradicts experiment, it is wrong. And in the GOD-INDEX case, it does.

k scott denison

@steve mosher
While your argument contains some sense, using only 7,000 measurements for the entire surface of the earth isn't exactly enough to say we have a measurement every 10 miles, is it? I think your analogy is not only inaccurate but also disingenuous.
I note that you also don’t mention anything relative to UAH as perhaps the best data set we have at the moment, per Dr. Brown’s opinion.
Last, your reference to evidence of temperatures in the LIA being lower than today is also decidedly one-sided. Do we not also have evidence that temperatures in the MWP were higher than today? If so, I don't see what point you are trying to make.

Mr. Mosher, how many of the 36,000 locations provide the air temperature above the oceans at a height of 1 meter above the water surface?


Malcolm Miller says: “Excellent outline of the ‘temperature’ problem. I always ask alarmists, ‘Where do you put the thermometer to measure the Earth’s temperature?’ They have no answer.”
I’ve thought of a place to stick the thermometer, but Al Gore refuses to cooperate.

In my not-so-humble view, any unitary number that attempts to represent some kind of dynamic process is no better than any other kind of educated guess. That does not mean such numbers are not used and useful; it means we need to take great care that we fully understand the limitations of such a value and of the process by which we derive it. All these things are highly relative: relative to our purpose, to our ability to collect data, and to the representativeness of those collections. We also need to keep firmly in mind that any time we take real data and reduce it to anomalies, we are making a judgement and producing metadata, not real data. For some purposes the metadata is acceptable; for others, not so much. Always our understanding and analysis must be tempered by a clear measure of the ± errors.
Now, my educated guesses for the distribution of some metal oxide in a defined orebody can be very good, extremely good. That is not because I am so smart but because I happen to have full control over all the variables. This is something a dynamic system can only dream of achieving.

Frank K.

Nick Stokes:
“That is because they are the result of “following some fixed and consistent rule that goes from a set of data to a result”.”
Please state this rule for us.
“And the temporal variation of that process is much more meaningful than the notion of a global average temperature.”
Please discuss your reasoning for this conjecture.


Reading this makes me wonder how they would measure the temperature in a jungle, like the Amazon, or another big one. Is there a good and reliable network of weatherstations to measure it?

Frank K.,
I expect that Nick Stokes will back and fill…


When temperatures vary from minus 89 C, as claimed for the Russian site in Antarctica, to approximately 58 C during tropical desert summers, who really thinks the "average" has much meaning?
It is the obsession with "averaging" solar radiation down to one quarter over 24 hours that I find more concerning, and then using this result to tell us that without greenhouse gases the Earth would be minus 18 C.
Never mind that this result is an outgoing-radiation result and hence has little to do with the maximum.
Never mind that this calculation requires about 1000 W/sq m incoming to work.
Never mind 1000 W/sq m produces a result of about 87 C in Stefan-Boltzmann.
The whole thing has become silly – the Earth is the way it is – and that is the atmosphere shields us from some pretty powerful solar radiation.
I say again – see the Moon with its maximum daytime temperatures.
As for the "heat hiding in the oceans" argument: why didn't the heat do that before, when there was some warming? Why did it wait until about '99/'00 to start this party trick?
Of course, silly me: it's a surprise party, and it's gonna shout "surprise" when it jumps out of hiding and global warming comes roaring back.
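The Stefan-Boltzmann figures in the comment above can be checked directly. The sketch below assumes a perfect blackbody (emissivity 1); the familiar minus 18 C comes from the albedo-reduced average flux, while exactly 1000 W/m2 comes out nearer 91 C than 87 C (the commenter's "about 87 C" corresponds to a slightly smaller absorbed flux or an emissivity below 1).

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def equilibrium_temp_c(flux_w_m2, emissivity=1.0):
    """Greybody temperature (deg C) that radiates the given flux."""
    t_k = (flux_w_m2 / (emissivity * SIGMA)) ** 0.25
    return t_k - 273.15

# Canonical "no greenhouse" value: ~342 W/m2 average insolation, albedo 0.3
t_no_ghg = equilibrium_temp_c(342.0 * (1 - 0.3))
# Full 1000 W/m2 absorbed at the subsolar surface
t_full = equilibrium_temp_c(1000.0)

print(f"239 W/m2 absorbed -> {t_no_ghg:.1f} C")   # about -18 C
print(f"1000 W/m2 absorbed -> {t_full:.1f} C")    # about +91 C
```

This is only the equilibrium arithmetic, of course; it says nothing about whether averaging the insolation to a quarter of the solar constant is a physically meaningful thing to do, which is the commenter's actual complaint.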


I too used to think that calculating a global average temperature was nonsense. Surprisingly, I came to the realization that there is quite a lot of sense to it once I learned the true meaning of temperature anomalies and station adjustments. Now, I have no comment on how exactly it is done in GISS or BEST; I am not going to judge whether they are doing it right, and I must say I know that doing it right is extremely difficult. But the important fact is that it definitely is possible to make sense of measurements at any single place if they are done consistently over a long period of time, and it definitely is possible to put together such measurements made at many different places, under different conditions and using different technology, and still separate the important data from the noise.
Of course, the only reason average temperature anomalies are calculated to thousandths of a degree is because we can. But I have learned that the uncertainty interval around this single precise number is surprisingly small: small enough for the natural variability to be not just visible but actually pretty clear even when this uncertainty is taken into account.
My personal opinion is that there is no problem with expressing the average to a thousandth of a degree, as long as we keep in mind that even a change of several tenths of a degree is no big deal.
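The key property of anomalies that the comment above alludes to can be shown in a few lines: a fixed station bias cancels when each reading is expressed relative to that station's own baseline. All numbers here are made up for illustration.

```python
# Two hypothetical stations watching the same regional signal, but one reads
# 1.5 C high (siting bias) and one 0.8 C low. Their absolute averages disagree,
# yet their anomalies (reading minus each station's own baseline mean) agree.
signal = [14.0, 14.2, 13.9, 14.6, 15.1, 14.8]   # "true" regional temps
station_a = [t + 1.5 for t in signal]            # warm-biased station
station_b = [t - 0.8 for t in signal]            # cool-biased station

def anomalies(readings):
    base = sum(readings) / len(readings)         # station's own baseline mean
    return [r - base for r in readings]

anom_a = anomalies(station_a)
anom_b = anomalies(station_b)

# The fixed offsets vanish: both stations report the same anomaly series
print([round(a, 3) for a in anom_a])
print([round(b, 3) for b in anom_b])
```

This is why anomaly series from stations with very different absolute readings can be combined at all; it does not, by itself, address drifting biases or station moves, which is where the adjustment controversies live.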


Funny, this is pretty much what I was pointing out a few days ago. If you think that you have a global average temperature, then you are delusional. If you further believe that your global average temperature has any meaning whatsoever (and I say this whenever the indexes go either SCARY UPWARD or ANOMALOUS DOWNWARD), then you are certifiable.
Stokes’ comment above demonstrates this sort of thinking. No, nick, the index is meaningless too. Yes, nick, people actually think it has meaning, because “climate scientists” tell them it does.
In fact, the myth or delusion or whatever of “global average temperature” is the Number One flaw in the entire ridiculous and discredited hypothesis of “global warming”, or “climate change”, or whatever misleading and inaccurate term the alarmists are using this week.

It helps to realise that the average global temperature, aggregated as the mean of a group of stations in a defined geographical grid, is the temperature of a spherical plane defined by units of latitude and longitude. The problem is that a spherical plane is an artefact: it has no mass, and hence no temperature. A classic case of geographers wandering into a physics lab, copying the mathematical process of calculating an average without actually measuring physical objects, and then believing they have done basic science. Noooooo.


This is an appalling post.
Dr Brown, are you perhaps not familiar with the law of large numbers?
Over the course of many measurements, the estimate of any random variable will converge on its true value. If there is a consistent bias, the convergence will be on the true value of the variable plus its bias. And if you are looking at anomalies, where the baseline is taken from your estimate, and especially where large-scale coherence is seen in the data (coherence of anomalies >50% up to 1200 km between stations, as checked against thousands of site-pairs), any consistent bias will cancel out as well.
If you actually run the numbers, you get the same anomaly graph down to only 60-100 randomly chosen stations total, albeit with increasing noise with fewer stations.
Dr. Brown’s post is nothing but noise and arm-waving, devoid of useful content or numeracy.
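The "60-100 stations suffice" claim above can at least be sanity-checked on synthetic data. The sketch below invents a network with a shared large-scale anomaly signal plus independent per-station noise (all parameters hypothetical) and compares the average from 80 random stations against the full network:

```python
import random

random.seed(42)
n_stations, n_years = 3000, 30

# Shared large-scale anomaly signal plus independent local noise per station
signal = [random.gauss(0.0, 0.2) for _ in range(n_years)]
stations = [[s + random.gauss(0.0, 1.0) for s in signal]
            for _ in range(n_stations)]

def network_mean(subset):
    return [sum(st[y] for st in subset) / len(subset) for y in range(n_years)]

full = network_mean(stations)
small = network_mean(random.sample(stations, 80))   # only 80 random stations

def corr(a, b):
    """Pearson correlation of two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

print(f"correlation(80 stations, full network) = {corr(small, full):.3f}")
```

The 80-station series tracks the full network closely but noisily, as the comment says. Note the built-in assumption: the per-station noise is independent. A bias shared across stations would not average away in absolutes, which is precisely the accuracy-versus-precision distinction Dr. Brown is making; the anomaly argument above is the standard answer to it.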

3) Outbound heat loss IS proportional to local temperature. Worse, it is proportional to the degree K raised to the 4th power. Total radiation loss WILL vary tremendously as temperature varies and (local) atmospheric conditions vary. At NO time of year at ANY location will any assumed “average” heat radiation loss be correct.
No, it’s not. That’s evident in the TOA IR spectroscopy that actually photographs it and graphs it out. Reasons even your (still generally reasonable) points are incorrect in detail include:
a) No part of the Earth’s outbound radiating system is a perfect blackbody.
b) No part of the Earth’s outbound radiating system is radiating at the same approximate imperfect blackbody “temperature” in all bands of the spectrum.
c) No part of the Earth’s outbound radiation fails to be modulated on many, many time scales. Remember, “outbound radiation” in total includes the effects of albedo and much more, and albedo at least is constantly changing with cloud cover and local humidity.
A more correct statement is that outbound heat loss is the result of an integral over a rather complicated spectrum containing lots of structure. In very, very approximate terms one can identify certain bands where the radiation is predominantly “directly” from the “surface” of the Earth — whatever that surface might be at that location — and has a spectrum understandable in terms of thermal radiation at an associated surface temperature. In other bands one can observe what appears very crudely to be thermal radiation coming “directly” from gases at or near the top of the troposphere, at a temperature that is reasonably associated with temperatures there. In other bands radiation is nonlinearly blocked in ways that cannot be associated with a thermal temperature at all.
To the extent that radiation in the e.g. water window or CO_2 bands does indeed seem to follow a BB radiation curve with a given temperature, the integrated power in radiation from that part of the band is proportional to T^4 — for that temperature. But since the overall radiated power comes from your choice of multiple bands at multiple temperatures or (better) from the overall integration of the power spectrum regardless of the presumed temperature of the sources or resonances that contribute, the overall variation with the local conditions isn’t as simple as T^4. Indeed, if one takes the top of troposphere as being roughly constant in temperature, the leading order behavior might even be a constant, as heat loss from CO_2 at the top of the troposphere might actually be largely independent of the particular temperature of the surface underneath and hence represent a more or less constant baseline power loss independent of surface temperature.
But I’m not asserting that this is the best way to view it — I haven’t seen enough of the TOA IR spectra to have a good feel for integrated outbound power either overall or in particular bands. The “right” thing to do is just make the measurements and do the integrals and then try to understand the results as they stand alone, not to try to make a theoretical pronouncement of some particular variation based on an idealized notion of blackbody temperature.
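Dr. Brown's "integral over a structured spectrum" point can be made concrete by integrating the Planck function over wavenumber bands at different effective temperatures. The band edges and the two temperatures below (288 K surface, 220 K near the tropopause, CO2 band roughly 580-750 cm^-1) are illustrative round numbers, not a radiative-transfer calculation:

```python
import math

H, C, K = 6.626e-34, 2.998e8, 1.381e-23   # Planck, light speed, Boltzmann
SIGMA = 5.670e-8                           # Stefan-Boltzmann, W m^-2 K^-4

def planck_flux(t_k, nu_lo_cm, nu_hi_cm, steps=20000):
    """Hemispheric blackbody flux (W/m^2) emitted in a wavenumber band (cm^-1)."""
    lo, hi = nu_lo_cm * 100.0, nu_hi_cm * 100.0   # convert to m^-1
    dnu = (hi - lo) / steps
    total = 0.0
    for i in range(steps):
        nu = lo + (i + 0.5) * dnu                 # midpoint rule
        b = 2.0 * H * C * C * nu ** 3 / math.expm1(H * C * nu / (K * t_k))
        total += math.pi * b * dnu                # pi converts radiance to flux
    return total

t_sfc, t_trop = 288.0, 220.0

# Sanity check: integrating the whole spectrum recovers sigma * T^4
full = planck_flux(t_sfc, 0.1, 5000.0)
print(full, SIGMA * t_sfc ** 4)                   # both about 390 W/m^2

# Crude two-temperature sketch: an atmospheric window radiating from the
# surface, and a CO2 band radiating from near the tropopause
window = planck_flux(t_sfc, 800.0, 1200.0)
co2_band = planck_flux(t_trop, 580.0, 750.0)
print(f"window band at 288 K:   {window:.0f} W/m^2")
print(f"CO2 band at 220 K:      {co2_band:.0f} W/m^2")
```

The total outbound power is then a sum of band contributions at different temperatures, so it does not scale as any single T^4, which is exactly the structure of Dr. Brown's argument.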


I agree with Mosher somewhat, but it should be called a global temperature INDEX and not a global temperature, which is misleading. The global temperature indices shouldn't be adjusted at all: just take the best stations (no station changes, as far as possible from any human influence…). The existing indices all run too "warm," due to local warming effects and the various adjustments.

Werner Brozek

I am curious as to why only UAH is mentioned and not RSS.
This is especially so since Dr. Spencer says:
“Progress continues on Version 6 of our global temperature dataset. You can anticipate a little cooler anomalies than recently reported, maybe by a few hundredths of a degree, due to a small warming drift we have identified in one of the satellites carrying the AMSU instruments.”

John Whitman

Consider the implication of a much greater variability of temperatures in each regional climate than is shown in the current thermometer and proxy data sets for the so-called climate parameter, annualized GMST. Given that much higher variability, the past 50 years of data do not provide a strong basis even to propose a hypothesis of comparatively significant AGW, much less cAGW or CAGW. The idea that the total Earth-atmosphere system is dominated, through an implausible 'climate' parameter called annualized GMST, by decadal (short time scale) variations of a single trace gas in the atmosphere then becomes of minor importance in the balance of climate science.

Frank K. says: March 4, 2012 at 2:18 pm
“Please state this rule for us.”

No, Frank, I want an answer to my question first. Who are these people who claim to calculate the Earth’s “global annualized temperature” to 0.05K? Seen a graph?
But OK, the rules are in Gistemp. Or even TempLS.

geography lady

After spending a goodly part of my professional career in air monitoring, which includes temperature and other instrumentation reading, for over 40 years, this article expresses my thoughts precisely. Taking temperature readings and making them explain differences down to fractions of a degree, much less a degree of accuracy and precision, is totally absurd.


O/T kinda
Someone does not like Hansen. Reads too much like speculation though.
Gleick, FBI, Hansen 1988, peer review


Nick Stokes – If the reported surface temperature anomalies are so wonderful, precise, and accurate, please explain why every new iteration ends up adjusting the temperatures that had been reported to have been “average” a half century ago. It even appears they do it on the sly; for sure they give no fanfare to their adjustment of historical data. This does not happen in real science, but does in pseudo-science and amongst politicians. It is clear to everyone that the data services are making it up as they go. Changing history, handwaving arguments, obfuscation, exaggeration, collusion, “explaining away”, hiding, suppressing research, etc. have become the defining hallmarks of “climate science”. It would benefit you to do some soul searching in an attempt to understand why you are so gullible.


“This is a strawman argument. Who makes such a claim? Not GISS!. Nor anyone else that I can think of. Can anyone point to such a calc? What is the number?”
Semantics is the last refuge of a CAGW advocate.
From the IPCC AR4:
“Since IPCC’s first report in 1990, assessed projections have suggested global average temperature increases between about 0.15°C and 0.3°C per decade for 1990 to 2005. This can now be compared with observed values of about 0.2°C per decade, strengthening confidence in near-term projections.”
It has been the arrogant assertion of the CAGW community that they can discern the present global average temperature to within tenths of a degree, and make projections with such accuracy. Relabeling “global average temperature” now to some other amorphous term (ala global warming to climate change) is to be expected, but changes nothing.
Climate scientists don’t know what they think they know. And we would be fools to reorder our economy based on their hubristic proclamations.
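The "observed values of about 0.2°C per decade" quoted above come from fitting a trend to a short, noisy series, and the uncertainty of that fit matters to the comparison. A sketch with synthetic data (a built-in 0.2 C/decade trend plus 0.1 C of year-to-year noise; the numbers are invented, not the real record):

```python
import random

random.seed(7)
# Synthetic annual anomalies 1990-2005: 0.02 C/yr trend plus 0.1 C noise
years = list(range(1990, 2006))
temps = [0.02 * (y - 1990) + random.gauss(0.0, 0.1) for y in years]

# Ordinary least squares slope and its standard error
n = len(years)
mx = sum(years) / n
my = sum(temps) / n
sxx = sum((x - mx) ** 2 for x in years)
slope = sum((x - mx) * (y - my) for x, y in zip(years, temps)) / sxx
resid = [y - (my + slope * (x - mx)) for x, y in zip(years, temps)]
se = (sum(r * r for r in resid) / (n - 2) / sxx) ** 0.5

print(f"trend: {slope * 10:.2f} +/- {2 * se * 10:.2f} C/decade (2-sigma)")
```

Over only 16 years, the 2-sigma uncertainty on the decadal trend is itself around 0.1 C/decade, so "about 0.2" is consistent with a wide range of underlying trends; that width is worth keeping in mind on both sides of this argument.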


I'll say this. While the entire concept is intellectually interesting to some small number of people, the vast majority of Earth's inhabitants (including animals and plants) couldn't care less. We (and I include my plant and animal co-habitants) only care about our immediate environment. Which is as it should be, and is why there are different species around the planet. If everything had to exist in some constant 'average' state, there would be no state to exist in.

GaryM says: March 4, 2012 at 3:13 pm
“Semantics is the last refuge of a CAGW advocate.”

It’s not semantics. What’s the number? A real question. You won’t find such a number, to 0.05K, in AR4. For all the reasons given in this RGB rant but covered much better in the GISS note.


Frank K. says:
March 4, 2012 at 2:18 pm
Nick Stokes:
“That is because they are the result of “following some fixed and consistent rule that goes from a set of data to a result”.”
Please state this rule for us.
The rule is a follows: When the data says what we expect it to say, we leave it alone. When it doesn’t, we adjust it.
Thus for example, 1934 was the hottest year on record in the US according to GISS, until Steve McIntyre made it public. At which time GISS adjusted 1934 downwards to match expectations that temperature had warmed since 1934.
Stalin rewrote history in the USSR. GISS has done the same for the USA.


Nick Stokes – The GISS temperature anomaly, as an example, is estimated to have a two-standard-deviation uncertainty on the order of 0.05°C.
Not the global temperature, mind you, but the temperature anomaly, how it’s changed over the years. And the magnitude of temperature change over the last 100 years is in good agreement between HadCRUT, GISS, NCDC, and so on, despite different coverage, different station sets, etc.
The GISS standard deviation is quite well supported by the number of measurements, as per the law of large numbers. Individual readings have a much higher variance, but when you have thousands upon thousands of readings the error variance of the mean drops accordingly. Assertions to the contrary indicate that (a) you haven't run the numbers, and (b) you are falling prey to a common-sense fallacy.
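The law-of-large-numbers arithmetic invoked above is easy to show directly: for N independent readings each with standard deviation sigma, the mean has standard error sigma / sqrt(N). A sketch with an assumed (hypothetical) per-reading error of 0.5 C and 7,000 readings, checked by simulation:

```python
import random

random.seed(0)
sigma_station = 0.5        # assumed per-reading error in C (hypothetical)
n_readings = 7000

# Theoretical standard error of the mean
se_theory = sigma_station / n_readings ** 0.5

# Monte Carlo check: repeat the 7000-reading average, measure its spread
trials = 300
means = []
for _ in range(trials):
    errs = [random.gauss(0.0, sigma_station) for _ in range(n_readings)]
    means.append(sum(errs) / n_readings)
mu = sum(means) / trials
se_mc = (sum((m - mu) ** 2 for m in means) / trials) ** 0.5

print(f"theory: {se_theory:.4f} C, simulated: {se_mc:.4f} C")
```

Both come out around 0.006 C. The caveat, raised repeatedly in this thread, is that this shrinks precision only: an error shared across the readings (a systematic bias) does not average away, which is why the anomaly and coherence arguments above are doing the real work.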

Physics Major

@Nick Stokes
The little Climate Widget in the sidebar shows a February global temperature anomaly of -0.12 K. Reporting it to the hundredth of a degree implies a claimed accuracy rather better than 0.05 K.


Furthermore, an 'average' (or mean) is a mathematical concept, not a state of nature; and even at that it is meaningless without supporting concepts such as the median, mode, standard deviation, distribution shape, and underlying assumptions. It's pointless to even discuss it.


“Dr. Brown thinks that this is a very nice piece of work, and is precisely the reason that he said that anybody who claims to know the annualized average temperature of the Earth, or the Ocean, to 0.05 K is, as the saying goes, full of [SNIP] up to their eyebrows.”
Does this mean it is OK to unleash my full vocabulary when commenting?
I've been holding back.
[REPLY: No. This is a family blog. -REP]


@Steve Mosher
Hi Steve,
Nobody is saying some form of mean or global temp anomaly assessment/trend isn't useful, or indeed valuable in the overall scheme of things. But there is no reasonable way to infer accuracy (statistically based or otherwise), or a degree of absoluteness/relativity, given such a limited amount of spatial and temporal data. If you then start chopping, editing, and changing the data, you are fecking with your dataset and moving the relative goal posts, statistical or otherwise!
The day I see a media headline that says something like ‘Temp records INFER last week/month/year/decade as PERHAPS being the warmest on record’ is the day that I will bother to read or listen to the appended story!
Here’s a simple list of questions for you or anyone else to fill in……
1) What is the surface area of a thermometer's measuring tip?
2) What would be a reasonable figure for the volume of air affecting the recorded measurement of that thermometer? Shall we say 1 cubic metre? 10 cubic metres? OK, how about a hundred cubic metres? (For the sake of simplicity, we will assume the atmosphere is static, ignoring the fact that some cold or warm air may pass from one thermometer to another!! LOL)
3) Multiply that volume by the number of thermometers actually recorded for any given day (7,000?).
4) Estimate the volume of the earth's lower atmosphere: pick whichever 'layer' you fancy, even if it's just the first 10 metres of the atmosphere. It's a simple equation: the volume of the outer boundary sphere minus the volume of the inner boundary sphere.
5) Now report the result of your thermometric volume measurements as a percentage of the measured subject's actual volume! And the answer is…………….
I’m not gonna bother getting my calculator out but I’ll bet a case of best booze that it’s a very flipping small percentage………….
Now if someone can explain to me how the reported global temp anomaly is valid science, especially after statistical and judgemental adjustments – I’d be grateful to hear it………
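Taking the hypothetical numbers in the list above at face value (100 m^3 of air per thermometer, 7,000 thermometers, just the first 10 m of atmosphere), the arithmetic works out as follows:

```python
import math

R_EARTH = 6.371e6            # Earth radius, m
SHELL_THICKNESS = 10.0       # just the first 10 m of atmosphere
N_THERMOMETERS = 7000
VOL_PER_THERMOMETER = 100.0  # m^3 of air per reading (generous assumption)

# Volume of a thin spherical shell: outer sphere minus inner sphere
shell = (4.0 / 3.0) * math.pi * ((R_EARTH + SHELL_THICKNESS) ** 3 - R_EARTH ** 3)

sampled = N_THERMOMETERS * VOL_PER_THERMOMETER
fraction = sampled / shell
print(f"shell volume: {shell:.3e} m^3")
print(f"sampled:      {sampled:.3e} m^3 ({fraction:.1e} of the shell)")
```

The sampled fraction comes out around one part in ten billion, so the commenter wins the bet; the interpolation debate above is really about whether that tiny sample is representative, not about its size as such.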


If I took a temp measurement at my house and it was 20C, and 10 miles away it was 24C…
…and I guessed that 5 miles away it was 22C… and it was…
…it would still be crap,
because it could have been anything.
You can't guess Arctic temps where you have no idea what they are… and almost all of the so-called warming is in the Arctic… where it's frozen solid right now.

Avfuktare vind

I think that we have much more accurate data than proxies from tree rings. We have all the historical data that tells us how people used to live: at what altitudes, wearing what clothes, etc. We know about frozen or open rivers and oceans. From that we can conclude that the world was much warmer in the MWP and much colder in the LIA. I would also say that such records are probably good to much better than 3 K.