Global annualized temperature – "full of [snip] up to their eyebrows"

Guest Post by Dr. Robert Brown,

Physics Dept. Duke University [elevated from comments]

Dr. Brown mentions “global temperature” several times. I’d like to know what he thinks of this.

Dr. Brown thinks that this is a very nice piece of work, and is precisely the reason that he said that anybody who claims to know the annualized average temperature of the Earth, or the Ocean, to 0.05 K is, as the saying goes, full of [snip] up to their eyebrows.

What I think one can define is an average “Global Temperature” — noting well the quotes — by following some fixed and consistent rule that goes from a set of data to a result. For example, the scheme that is used to go from satellite data to the UAH lower troposphere temperature. This scheme almost certainly does not return “the average Global Temperature of the Earth” in degrees absolute as something that reliably represents the coarse-grain averaged temperature of (say) the lowest 5 kilometers of the air column, especially not the air column as its height varies over an irregular terrain that is itself sometimes higher than 5 kilometers. It does, however, return something that is likely to be close to what this average would be if one could sample and compute it, and one at least hopes that the two would co-vary monotonically most of the time.

The accuracy of the measure is very likely not even 1 K (IMO; others may disagree), where accuracy is |T_{LTT} - T_{TGT}|, the absolute difference between the lower troposphere temperature and the “true global temperature” of the lower troposphere. The various satellites that contribute to the record have (IIRC) a variance on this order, so the data itself is probably not more accurate than that. The “precision” of the data is a distinct quantity: it measures how much variance there is among the data sources themselves, and it can be systematically improved by more data. Accuracy cannot, especially in a situation like this where one is indirectly inferring a quantity that is not exactly the same as what is being measured. More measurements, or more precise ones, do not help; accuracy can only be improved by figuring out the map between the data one is using and the actual quantity one is making claims about.
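The distinction can be made concrete with a toy simulation (all numbers hypothetical): a thermometer with a fixed, unknown bias gets more "precise" as readings accumulate, but no more "accurate":

```python
import random
import statistics

random.seed(42)

TRUE_TEMP = 288.0   # hypothetical "true" temperature, K
BIAS = 1.2          # fixed systematic offset between measurement and target

def reading():
    # each measurement: true value + fixed bias + random instrument noise
    return TRUE_TEMP + BIAS + random.gauss(0.0, 0.5)

for n in (10, 1000, 100000):
    sample = [reading() for _ in range(n)]
    mean = statistics.mean(sample)
    sem = statistics.stdev(sample) / n ** 0.5  # standard error: the "precision"
    print(f"n={n:6d}  mean={mean:8.3f}  precision~{sem:.4f}  "
          f"accuracy error={mean - TRUE_TEMP:+.3f}")
```

The standard error shrinks like 1/sqrt(n), but the accuracy error stays pinned near the bias no matter how many readings are taken.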

Things are not better for (land) surface measurements — they are worse. There the actual data is (again, in my opinion) hopelessly corrupted by confounding phenomena and the measurement errors are profound. Worse, the measurement errors tend to have a variable monotonic bias compared to the mythical “true average surface Global Temperature” one wishes to measure.

One is in trouble from the very beginning. Consider the Moon, which has no atmosphere, so its “global average temperature” must be defined purely in terms of its surface, and even there the definition is not obvious. When one wishes to speak of the surface temperature at a given point, what does one use as a definition? Is it the temperature an actual high precision thermometer would read (say) 1 cm below the surface at that point? 5 mm? 1 mm? 1 meter? All of these would almost certainly yield different results, results that depend on things like the albedo and emissivity of the point on the surface, the heat capacity and thermal conductivity of the surface matter, and the latitude. Is it the “blackbody” temperature of the surface (the inferred temperature of the surface determined by measuring the outgoing full spectrum of radiated light)?

Even inferring the temperature from the latter — probably the one that is most relevant to an airless open system’s average state — is not trivial, because the surface albedo varies, the emissivity varies, and the outgoing radiation from any given point just isn’t a perfect blackbody curve as a result.

How much more difficult is it to measure the Earth’s comparable “surface temperature” at a single point on the surface? For one thing, we don’t do anything of the sort. We don’t place our thermometers 1 meter, 1 cm, 1 mm deep in — what, the soil? The grass or trees? What exactly is the “surface” of a planet largely covered with living plants? We place them in the air some distance above the surface. That distance varies. The surface itself is being heated directly by the sun part of the time, and is radiatively cooling directly to space (in at least some frequencies) all of the time. Its temperature varies by degrees K on a time scale of minutes to hours as clouds pass between the location and the sun, as the sun sets, as it starts to rain. It doesn’t just heat or cool from radiation — it is in tight thermal contact with a complex atmosphere that has a far greater influence on the local temperature than even local variations in insolation.

Yesterday it was unseasonably warm in NC, not because the GHE caused the local temperature to be higher by trapping additional heat but because the air that was flowing over the state came from the warm wet waters of the ocean to the south, so we had a relatively warm rain followed by a nighttime temperature that stayed warm (low overnight of maybe 46 F) because the sky was cloudy. Today it is almost perfectly seasonal — high 50’s with a few scattered clouds, winds out of the WSW still carrying warm moisture from the Gulf and warm air from the south central US, but as the day progresses the wind is going to shift to the NW and it will drop solidly below freezing (30 F) tonight. Tomorrow it will be seasonal but wet, but by tomorrow night the cooler air that has moved in from the north will take it down to 25 F overnight. The variation in local temperature is determined far more by what is going on somewhere else than it is by actual insolation and radiation here.

If a real cold front comes down from Canada (as they frequently do this time of year) we could have daytime highs in the 30’s or low 40’s and nighttime lows down in the low 20s. OTOH, if the wind shifts to the right quarter, the temperature outside could reach highs in the low 80s and lows in the low 50s. We can, and do, have both extremes within a single week.

Clearly surface temperatures are being driven as strongly by the air and moisture flowing over or onto them as they are by the “ideal” picture of radiative energy warming the surface and radiation cooling it. The warming of the surface at any given point isn’t solely responsible for the warming or cooling of the air above it, the temperature of the surface is equally dependent on the temperature of the air as determined by the warming of the surface somewhere else, as determined by the direct warming and cooling of the air itself via radiation, as determined by phase changes of water vapor in the air and on the surface, as determined by factor of ten modulations of insolation as clouds float around over surface and the lower atmosphere alike.

Know the true average surface Global Temperature to within 1K? I don’t even know how one would define a “true” average surface Global Temperature. It was difficult enough for the moon without an atmosphere, assuming one can agree on the particular temperature one is going to “average” and how one is going to perform the average. For the Earth with a complex, wet, atmosphere, there isn’t any possibility of agreeing on a temperature to average! One cannot even measure the air temperature in a way that is not sensitive to where the sun is and what it is doing relative to the measurement apparatus, and the air temperature can easily be in the 40s or 50s while there is snow covering the ground so that the actual surface temperature of the ground is presumably no higher than 32F — depending on the depth one is measuring.

And then oops — we forgot the Oceans, that cover 70% of the surface of the planet.

What do we count as the “temperature” of a piece of the ocean? There is the temperature of the air above the surface of the ocean. In general this temperature differs from the actual temperature of the water itself by on the order of 5-10 K. In most places, the air temperature during the day is often warmer than the temperature of the water, and the air temperature at night is often cooler than the temperature of the water.

Or is it? What exactly is “the temperature of the water”? Is it the temperature of the top 1 mm of the surface, where the temperature is dominated by chemical potential as water molecules are constantly being knocked off into the air, carrying away heat? Is it the temperature 1 cm deep? 10 cm? 1 m? 10 m? 50 m? 100m? 1 km?

Is it the average over a vertical column from the surface to the bottom (where the actual depth of the bottom varies by as much as 10 km)? This will bias the temperature way, way down for deep water and make the global average temperature of the ocean very nearly 4 °C very nearly everywhere, dropping the estimate of the Earth’s average Global Temperature by well over 10 K. Yet if we do anything else, we introduce a completely arbitrary bias into our average. Every value we might use as a depth to average over has consequences that cause large variations in the final value of the average. As anyone who swims knows, it is quite easy for the top meter or so of water to be warm enough to be comfortable while the water underneath that is cold enough to take your breath away.
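To see how strongly the choice of averaging depth matters, here is a toy sketch with a made-up (but qualitatively typical) vertical profile: a warm mixed layer, a linear thermocline, and 4 °C deep water:

```python
# toy profile: 25 C mixed layer to 50 m, linear thermocline to 1000 m, 4 C below
def temp_at(depth_m):
    if depth_m <= 50:
        return 25.0
    if depth_m <= 1000:
        # linear drop from 25 C at 50 m to 4 C at 1000 m
        return 25.0 - (depth_m - 50) * (25.0 - 4.0) / (1000 - 50)
    return 4.0

def column_average(cutoff_m, step=1.0):
    # crude 1-m-step average of the column down to the chosen cutoff
    depths = [d * step for d in range(int(cutoff_m / step))]
    return sum(temp_at(d) for d in depths) / len(depths)

for cutoff in (1, 10, 100, 1000, 4000):
    print(f"average over top {cutoff:5d} m: {column_average(cutoff):6.2f} C")
```

With this invented profile the "ocean temperature" swings from 25 °C (top meter) down to under 7 °C (full 4 km column): the answer is dominated by the arbitrary choice of cutoff.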

Suppose one defines a sea surface temperature SST to go with land surface temperature LST. Any such definition is arbitrary, as arbitrary in its own way as the definition one uses for T_{LT}, or as the temperature one assigns to a particular point on the surface on the basis of a “corrected” or “uncorrected” thermometer, with location biases that can easily exceed several degrees K compared to equally arbitrary definitions of what the thermometer “should” be reading for the unbiased temperature and how that reading is supposed to relate to a “true” temperature for the location. Even then, when one tries to take the actual data for both and turn them into an average global temperature, one has a final problem to overcome. One’s data is (with the possible exception of modern satellite derived data) sparse! Very sparse.

In particular, it is sparse compared to the known and observed granularity of surface temperature variations, for both LST and SST. Furthermore, it has obvious sampling biases. We have lots and lots of measurements where people live. We have very few measurements (per square kilometer of surface area) where people do not live. Surface temperatures can easily vary by 1 K over a kilometer of lateral distance (e.g. at terrain features where one goes up a few hundred meters over a kilometer of grade). They can and do routinely vary by 1 K over distances of order 5-10 kilometers.

I can look at the Weather Underground’s weather map readings from weather stations scattered around Durham at a glance, for example. At the moment I’m typing this there is a 13 F variation from the coldest to the warmest station reading within a 15 km radius of where I’m sitting. Worse, nearly all of these weather station readings are between 50 and 55 F, but there are two outliers. One of them is 46.5 F (in a neighborhood in Chapel Hill), and the other is Durham itself, the “official” reading for Durham (probably downtown somewhere), which is 59.5 F!
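A toy sketch (with station values invented to match the spread described above) shows how much a single "official" station can bias an area average:

```python
import statistics

# hypothetical station readings (F), loosely modeled on the situation above:
# most stations between 50 and 55 F, plus the two outliers mentioned
readings = [50.2, 51.0, 51.8, 52.5, 53.1, 54.0, 54.6, 46.5, 59.5]

official = 59.5  # the single "official" downtown reading
area_mean = statistics.mean(readings)

print(f"official station:     {official:.1f} F")
print(f"mean of all stations: {area_mean:.1f} F")
print(f"coldest-to-warmest spread: {max(readings) - min(readings):.1f} F")
print(f"bias from using the official station alone: {official - area_mean:+.1f} F")
```

With these invented numbers the official station alone runs roughly 7 F warm relative to the mean of everything nearby, and that single value is what gets smeared over a disproportionate area of the grid.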

Guess which one will end up being the temperature used to compute the average surface temperature for Durham today, and assigned to an entirely disproportionate area of the surface of the planet in a global average surface temperature reconstruction?

Incidentally, the temperature outside of my house at this particular moment is 52F. This is a digital electronic thermometer in the shade of the north side of the house, around a meter off of the ground. The air temperature on the other side of the house is almost certainly a few degrees warmer as the house sits on a southwest-facing hill with pavement and green grass absorbing the bright sunlight. The temperature back in the middle of the cypresses behind my house (dense shade all day long, but with decent airflow) would probably be no warmer than 50 F. The temperature a meter over the driveway itself (facing and angled square into the sun, and with the house itself reflecting additional heat and light like a little reflector oven) is probably close to 60 F. I’m guessing there is close to 10F variation between the air flowing over the southwest facing dark roof shingles and the northeast facing dark roof shingles, biased further by loss of heat from my (fairly well insulated) house.

I don’t even know how to compute an average surface temperature for the 1/2 acre plot of land my own house sits on, today, right now, from any single thermometer sampling any single location. It is 50 F, 52 F, 58 F, 55 F, 61 F, depending on just where my thermometer is located. My house is on a long hill (over a km long) that rises to an elevation perhaps 50-100 m higher than my house at the top — we’re in the piedmont in between Durham and Chapel Hill, where Chapel Hill really is up on a hill, or rather a series of hills that stretch past our house. I’d bet a nickel that it is a few degrees different at the top of the hill than it is where my house is today. Today it is windy, so the air is well mixed and the height is probably cooler. On a still night, the colder air tends to settle down in the hollows at the bottoms of hills, so last frost comes earlier up on hilltops or hillsides; Chapel Hill typically has spring a week or so before Durham does, in contradiction of the usual rule that higher locations are cooler.

This is why I am enormously cynical about Argo, SSTs, GISS, and so on as reliable estimates of average Global Temperature. They invariably claim impossible accuracy and impossible precision, and mere common sense suffices to reject those claims. If they disagree, they can come to my house and try to determine what the “correct” average temperature is for my humble half acre, and how it can be inferred from a single thermometer located on the actual property, let alone from a thermometer located in some weather station out in Duke Forest five kilometers away.

That is why I think that we have precisely 33 years of reasonably reliable global temperature data, not in terms of accuracy (which is unknown and perhaps unknowable) but in terms of statistical precision and as the result of a reasonably uniform sampling of the actual globe. The UAH T_{LT} is what it is, is fairly precisely known, and is at least expected to be monotonically related to a “true average surface Global Temperature”. It is therefore good for determining actual trends in global temperature, not so good for making pronouncements about whether or not the temperature now is or is not the warmest that it has been in the Holocene.

Hopefully the issues above make clear just how absurd any such assertion truly is. We don’t know the actual temperature of the globe now, with modern instrumentation and computational methodology, to an accuracy of 1 K in any way that can be compared apples-to-apples to any temperature reconstruction, instrument based or proxy based, from fifty, one hundred, one thousand, or ten thousand years ago. 1 K is the close order of all of the global warming supposedly observed since the invention of the thermometer itself (and hence the start of the direct instrumental record). We cannot compare even “anomalies” across such records — they simply don’t compare, because of confounding variables, as the “Hide the Decline” and “Bristlecone Pine” problems clearly reveal in the hockey stick controversy. One cannot remove the effects of these confounding variables in any defensible way, because one does not know what they are: things (e.g. annual rainfall and the details of local temperature and many other things) are not the same today as they were 100 years ago, and we lack the actual data needed to correct the proxies.

A year with a late frost, for example, can stunt the growth of a tree for a whole year by simply damaging its new leaves or can enhance it by killing off its fruit (leaving more energy for growth that otherwise would have gone into reproduction) completely independent of the actual average temperature for the year.

To conclude, one of many, many problems with modern climate research is that the researchers seem to take their thermal reconstructions far too seriously and assign completely absurd measures of accuracy and precision, with a very few exceptions. In my opinion it is categorically impossible to “correct” for things like the UHI effect — it presupposes a knowledge of the uncorrected temperature that one simply cannot have or reliably infer from the data. The problem becomes greater and greater the further back in time one proceeds, with big jumps (in uncertainty) 250, 200, 100, and 40-odd years ago. The proxy-derived record from more than 250 years ago is uncertain in the extreme, with the thermal record of well over 70% of the Earth’s surface completely inaccessible and with an enormously sparse sampling of highly noisy and confounded proxies elsewhere. To claim accuracy greater than 2-3 K is almost certainly sheer piffle, given that we probably don’t know current “true” global average temperatures within 1 K, and 5 K is more likely.

I’m certain that some paleoclimatologists would disagree with such a pessimistic range. Surely, they might say, if we sample Greenland or Antarctic ice cores we can obtain an accurate proxy of temperatures there 1000 or 2000 years ago. Why aren’t those comparable to the present?

The answer is because we cannot be certain that the Earth’s primary climate drivers distributed its heat the same way then as now. We can clearly see how important e.g. the decadal oscillations are in moving heat around and causing variations in global average temperature. ENSO causes spikes and seems responsible for discrete jumps in global average temperature over the recent (decently thermometric) past that are almost certainly jumps from one Poincaré attractor to another in a complex turbulence model. We don’t even know if there was an ENSO 1000 years ago, or, if there was, whether it was at the same location and had precisely the same dependences on e.g. solar state. As a lovely paper Anthony posted this morning clearly shows, major oceanic currents jump around on millennial timescales that appear connected to millennial scale solar variability and almost certainly modulate the major oscillations themselves in nontrivial ways. It is quite possible for temperatures in the Antarctic to anticorrelate with temperatures in the tropics for hundreds of years and then switch so that they correlate again. When an ocean current is diverted, it can change the way ocean average temperatures (however one might compute them, see above) vary over macroscopic fractions of the Earth’s surface all at once.

To some extent one can control for this by looking at lots of places, but “lots” is in practice highly restricted. Most places simply don’t have a good proxy at all, and the ones that do aren’t always easy to reconstruct accurately over very long time scales, or else they sacrifice all sorts of short-time-scale information to obtain the long-time-scale averages. I think 2-3 K is a generous statement of the probable real error in most reconstructions of global average temperature over 1000 years ago, again presuming one can define an apples-to-apples global average temperature to compare to, which I doubt. Nor can one reliably compare anomalies over such time scales, because of the confounding variables and drift.

This is a hard problem, and calling it settled science is obviously a political statement, not a scientific one. A good scientist would, I truly believe, call this unsettled science, science that is understood far less than physics, chemistry, even biology. It is a place for utter honesty, not egregious claims of impossibly accurate knowledge. In my own utterly personal opinion, informed as well or as badly as chance and a fair bit of effort on my part have thus far informed it, we have 33 years of a reasonably precise and reliable statement of global average temperature, one which is probably not the true average temperature assuming any such thing could be defined in the first place but which is as good as any for the purposes of identifying global warming or cooling trends and mechanisms.

Prior to this we have a jump in uncertainty (in precision, not accuracy) compared to the ground-based thermometric record that is strictly apples-to-oranges compared to the satellite derived averages, with error bars that rapidly grow the further back one goes in the thermometric record. We then have a huge jump in uncertainty (in both precision and accuracy) as we necessarily mount the multiproxy train to still earlier times, where the comparison has unfortunately been between modern era apples, thermometric era oranges, and carefully picked cherries. Our knowledge of global average temperatures becomes largely anecdotal, with uncertainties that are far larger than the observed variation in the instrumental era and larger still than the reliable instrumental era (33 year baseline).

Personally, I think that this is an interesting problem and one well worth studying. It is important to humans in lots of ways; we have only benefitted from our studies of the weather and our ability to predict it is enormously valuable as of today in cash money and avoided loss of life and property. It is, however, high time to admit the uncertainties and get the damn politics out of the science. Global climate is not a “cause”! It is the object of scientific study. For the conclusions of that science to be worth anything at all, they have to be brutally honest — honest in a way that is utterly stripped of bias and that acknowledges to a fault our own ignorance and the difficulty of the problem. Pretending that we know and can measure global average temperatures from a sparse and short instrumental record where it would be daunting to assign an accurate, local average temperature to any given piece of ground based on a dense sampling of temperatures from different locations and environments on that piece of ground does nothing to actually help out the science — any time one claims impossible accuracy for a set of experimentally derived data one is openly inviting false conclusions to be drawn from the analysis. Pretending that we can model what is literally the most difficult problem in computational fluid dynamics we have ever attempted with a handful of relatively simple parametric differential forms and use the results over centennial and greater timescales does nothing for the science, especially when the models, when tested, often fail (and are failing, badly, over the mere 33 years of reliable instrumentation and a uniform definition of at least one of the global average temperatures).

It’s time to stop this, and just start over. And we will. Perhaps not this year, perhaps not next, but within the decade the science will finally start to catch up and put an end to the political foolishness. The problem is that no matter what one can do to proxy reconstructions, no matter how much you can adjust LSTs for UHI and other estimated corrections that somehow always leave things warmer than they arguably should be, no matter what egregious claims are initially made for SSTs based on Argo, the UAH T_{LT} will just keep on trucking, unfutzable, apples to apples to apples. The longer that record gets, the less one can bias an “interpretation” of the record.

In the long run that record will satisfy all properly skeptical scientists, and the “warmist” and “denier” labels will end up being revealed as the pointless political crap that they are. In the long run we might actually start to understand some of the things that contribute to that record, not as hypotheses in models that often fail but in models that actually seem to work, that capture the essential longer time scale phenomena. But that long run might well be centennial in scale — long enough to detect and at least try to predict the millennial variations, something utterly impossible with a 33 year baseline.

rgb


224 Comments
March 5, 2012 1:20 pm

Nick says :

“No, our ability to measure variations in global temperature is unrelated to the fact that we are putting large amounts of CO2 into the air, and CO2 is a greenhouse gas. The earth will heat regardless of our skill at thermometry.”

So this now sounds more like an act of faith … CO2 is a greenhouse gas … it is increasing … the earth will warm. But supposing net feedbacks are negative and the Earth warms by 0.5 degrees C. Should we continue dismantling our industries based on belief alone?

Slartibartfast
March 5, 2012 1:25 pm

The earth will heat regardless of our skill at thermometry.

Or…not. Our skill at thermometry determines how well, if at all, we can determine the magnitude of said heating.
At some point, you have to validate the models with data. If not, why bother taking data?

March 5, 2012 1:30 pm

– Finally the yearly global temperature anomalies are calculated by taking an area weighted average of all the populated grid points in each year. The formula for this is $Weight = cos( $Lat * PI/ 180 ) where $Lat is the value in degrees of the middle of each grid point. All empty grid points are excluded from this average.
Oh, please don’t tell me that GISS is using latitude and longitude based grid cells. Seriously, guys. Didn’t anybody working there actually ever study numerical integration on spherical manifolds? I mean, there are papers on this where people work out decent ways to do it…
Oh, my sweet lord.
You’d think that somebody at Goddard might be smart enough to use an adaptive icosahedral grid instead of a rectangular Mercator projection, with or without the cosine, which does horrible things at the poles (and equally horrible things at the equator).
You’d also think that somebody might learn about splines or kriging and, I dunno, use it in something like this where it clearly matters, instead of assuming some “anomaly range” and smearing out the data accordingly.
One day if I ever have infinite time I’ll have to work through all this myself. The tragic thing is it isn’t all that difficult to do this correctly. A job for a graduate student or two, or even a bright and well directed undergrad. Undergrad computer science students often have to build icosahedral tessellations of a sphere just as an exercise…
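For what it’s worth, the cosine weighting quoted above is easy to sketch. With a made-up, zonally symmetric temperature field (warm equator, cold poles), one can see how badly an unweighted latitude-grid average overweights the poles:

```python
import math

# hypothetical temperature field: warm at the equator, cold at the poles
def temp(lat_deg):
    return 300.0 - 40.0 * math.sin(math.radians(abs(lat_deg)))

# cell centers of a 1-degree latitude grid
lats = [l + 0.5 for l in range(-90, 90)]

# naive average: every latitude band counts the same, though polar bands
# cover far less actual surface area than equatorial bands
naive = sum(temp(l) for l in lats) / len(lats)

# area weighting: band area scales as cos(latitude)
weights = [math.cos(math.radians(l)) for l in lats]
weighted = sum(w * temp(l) for w, l in zip(weights, lats)) / sum(weights)

print(f"unweighted mean: {naive:.2f} K")   # dragged down by the (cold, tiny) poles
print(f"cos-lat mean:    {weighted:.2f} K")
```

With this toy field the unweighted mean comes out several kelvin colder than the area-weighted one, which is the error the cos(lat) factor is there to fix; it says nothing about whether the underlying gridding choice is a good one.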
rgb

LazyTeenager
March 5, 2012 1:41 pm

Physics Major on March 4, 2012 at 3:26 pm said:
Stokes
The little Climate Widget in the sidebar shows a February global temperature anomaly of -0.12 K. This would imply an accuracy somewhat greater than 0.05 K
————-
No it doesn’t, because it is a temperature ANOMALY. It is a change in temperature relative to a baseline. There is a big difference between claiming an accuracy figure for an anomaly and claiming an accuracy figure for global temperature.
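The point is easily illustrated with a toy simulation (all numbers hypothetical): a fixed station bias drops out of the anomaly, because the baseline is built from the same biased instrument, even though the absolute reading remains wrong by the full bias:

```python
import random
import statistics

random.seed(7)

TRUE_BASELINE = 287.0   # hypothetical true climatological mean, K
TRUE_CHANGE = 0.3       # hypothetical true warming since the baseline period
STATION_BIAS = 2.5      # unknown fixed siting/instrument bias of this station

def measure(true_value):
    # biased, noisy reading of the true value
    return true_value + STATION_BIAS + random.gauss(0.0, 0.2)

# baseline built from many readings at the *same* station
baseline = statistics.mean(measure(TRUE_BASELINE) for _ in range(1000))
current = statistics.mean(measure(TRUE_BASELINE + TRUE_CHANGE) for _ in range(1000))

anomaly = current - baseline
print(f"absolute error:   {current - (TRUE_BASELINE + TRUE_CHANGE):+.3f} K")
print(f"anomaly estimate: {anomaly:+.3f} K (true change {TRUE_CHANGE} K)")
```

The cancellation only works for biases that stay fixed between the baseline period and now; a drifting bias (a growing city around the station, say) contaminates the anomaly too.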

LazyTeenager
March 5, 2012 1:48 pm

Robert Brown says
Oh, please don’t tell me that GISS is using latitude and longitude based grid cells. Seriously, guys.
———-
You’re confused. They say how the weighting factors are calculated. They don’t say they are using a particular grid cell.
Please read more carefully before you try to demonstrate your superiority. If you don’t it tends to be embarrassing.

bill
March 5, 2012 1:52 pm

Nick Stokes 12.51:
“No, our ability to measure variations in global temperature is unrelated to the fact that we are putting large amounts of CO2 into the air, and CO2 is a greenhouse gas. The earth will heat regardless of our skill at thermometry”.
But Nick, since we don’t really know what the temperature was, we don’t know by how much it has gone up, do we? Even assuming we’re happy with the theory, all we can say is that at some point we will hit the 3 deg C ‘tipping point’; we can’t say that it’s in any sense imminent (except at a ‘maybe’ level). So to say that at some point the world will get jolly warm if we carry on putting this amount of GHGs into the atmosphere doesn’t necessarily imply policy responses now. If in our ignorance we applied policy solutions now, they might not have practical effects on the future, and might be very disadvantageous over a 50 year scenario. Perhaps the precautionary principle should be recast as ‘do nothing’, or ‘wait and see’?

March 5, 2012 2:03 pm

No, our ability to measure variations in global temperature is unrelated to the fact that we are putting large amounts of CO2 into the air, and CO2 is a greenhouse gas. The earth will heat regardless of our skill at thermometry.
CO_2 is a saturated greenhouse gas in that the atmosphere is already opaque in the CO_2 band up to the top of the troposphere. The CO_2 part of the GHE is strictly determined by the temperature at which the top-of-troposphere CO_2 radiates to space at the point where the atmosphere becomes sufficiently transparent to in-band IR to permit the radiation to escape. The temperature itself can be measured in TOA IR spectroscopy. The only way the GHE will be enhanced is if the CO_2 ceiling lifts so that radiation occurs at still cooler temperatures.
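The “CO_2 ceiling” argument can be sketched numerically. This is a deliberately crude toy: it uses whole-spectrum blackbody flux rather than in-band flux, an assumed mean lapse rate, and invented emission heights, purely to show the direction of the effect:

```python
# toy sketch: the band radiates to space from an effective height h; raising h
# along a fixed lapse rate lowers the emitting temperature and hence the
# outgoing flux, until the surface warms enough to restore balance.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
LAPSE = 6.5e-3     # assumed mean tropospheric lapse rate, K per m
T_SURF = 288.0     # assumed surface temperature, K

def emitting_temperature(h_m):
    return T_SURF - LAPSE * h_m

def band_flux(h_m):
    # blackbody flux at the emitting level (toy: full spectrum, not just the band)
    return SIGMA * emitting_temperature(h_m) ** 4

h0, h1 = 5000.0, 5150.0  # hypothetical lift of the effective emission level
print(f"T_emit at {h0:.0f} m: {emitting_temperature(h0):.1f} K, "
      f"flux {band_flux(h0):.1f} W/m^2")
print(f"T_emit at {h1:.0f} m: {emitting_temperature(h1):.1f} K, "
      f"flux {band_flux(h1):.1f} W/m^2")
print(f"flux deficit from the lift: {band_flux(h0) - band_flux(h1):.2f} W/m^2")
```

A modest lift of the emission level produces a flux deficit of a few W/m^2 in this toy, which is the sense in which the GHE is enhanced only if the ceiling actually rises.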
There is no evidence that I’m aware of that TOA IR spectroscopy is detecting a mean cooling/lifting of the CO_2 band (and hence an enhanced GHE) that is a function of concentration. While the tropopause does move up and down — IIRC up during El Nino, down during La Nina, causing an actual real-time increase in the GHE during El Nino (and hence global warming) and a decrease during La Nina — as in right now, where the UAH anomaly is currently -0.1 — I don’t believe I’ve heard of any solid evidence for it moving in a trended way.
Indeed, what has recently happened is that the stratosphere has become more transparent because of decreased H_2O, which permits GHGs to radiate their energy from further down, where it is warmer, leading to net cooling. We are in a period where the GHE itself is not increasing (or at least, is not increasing much) quite independent of what CO_2 is doing.
While there may be participants in this thread that wish to “deny” that the GHE is a real thing that contributes to the warming of the Earth, I am not one of them. However, a glib reply that the GHE is real and hence we are headed for catastrophe is not good science, and it is not reasoned argumentation.
Warming can have many causes. If you like “denial”, stop denying that some of those causes may be independent of and commensurate with CO_2; stop asserting that anybody who thinks otherwise is an unreasoning fool deserving of pejorative and dismissive labels. Natural climate variation is absolutely capable of explaining a significant fraction of the temperature anomaly. You may think that fraction is negligible. I do not. I think it quite possible that the CO_2 attributable fraction may be “negligible” in the sense that it is less than half of the total, including all feedbacks. I also think it is quite likely that the GISS computations of the surface temperature, whether it is of the temperature “anomaly” or the absolute temperature, are fraught with errors. Using a Mercator “degrees” based grid to cover a sphere, for example. Assuming that temperature anomalies in the relatively small fraction of the Earth’s surface sampled extrapolate to cover the whole thing, for example. How, exactly, would you prove this, back in 1880 when Antarctica was completely unexplored, when the Pacific was a great big black hole (that covers almost half of the Earth) as far as data is concerned, when the Australian Outback was raw frontier?
So the question, my friend, is not whether or not global warming has occurred or is occurring. Nor is it whether or not global cooling has occurred or is occurring. It is what fraction of the observed warming or cooling anomaly (to use your own referent) can be attributed to changes in the concentration of CO_2, what fraction of the changes in CO_2 concentration can reasonably be attributed to “human activity”, what other human activities have contributed to climate change (and how), what are the feedbacks and drivers of all contributors to climate change, and is there good reason, based on evidence and sound evidence supported arguments, to fear “catastrophe”.
At the moment, there is scant evidence of any impending catastrophe.
rgb

LazyTeenager
March 5, 2012 2:08 pm

Robert Brown says
Damn skippy! It is brilliant, and the record they are developing is as important as the work of the rest of the GISS and HADCRUT put together —
————–
Hyperbole, but I agree. And since the satellite data agrees with the surface temperature data it gives us some confidence that the surface data set analyses are ok.
The satellite measurements are indirect, have problems of their own, and are technically demanding. The analysis produces an average temperature just like the surface temperature data sets, so obviously people like Roy Spencer think temperature averages make sense, unlike some people here.

John Whitman
March 5, 2012 2:19 pm

rgb (Robert Brown) said,
“[ . . . ] Curiously, nobody in the CAGW camp ever seems to be cheered by the possibility that we aren’t headed for catastrophe after all. When the UAH temperature fails to actually continue to increase post the 1998 peak and might even be decreasing, this should be great news! The sky may not be falling after all! At the very least, it is something else to understand, a suggestion from nature to stop omitting important variables and focusing “only” on CO_2.”

rgb,
First, thank you for a very readable science post! I hope you post here at Anthony’s place more often.
Indeed, why aren’t scientists who consistently spread CAGW relieved that we are now becoming more and more confident that their original catastrophes cannot occur? They seem to need catastrophic possibilities for their happiness.
Your well stated alternate view, critical of a government (bureaucratic) science institution’s position (GISS), does cause me to reflect on bias in the scientific findings of gov’t institutions like GISS. Lack of balance in those gov’t scientific institutions only seems to serve the alarming message of the IPCC-centric climate science community; it seems never to provide a platform that includes a balance of the views of independent (aka skeptic) thinkers.
John

Andrejs Vanags
March 5, 2012 2:59 pm

Why is the temperature used for the average? We should be using the average of the heat flux, H = sigma*T^4. Then, if desired, an equivalent temperature could be calculated as T_ave = (H_ave/sigma)^(1/4). It is easy to show that if the heat flux has a variation (say, a sine function) over the globe that averages to zero, the average of all global heat fluxes will still be H, but the average of all the temperatures will be T plus an error term. This error will depend on how the heat fluxes are redistributed over the globe. The globe could be just redistributing its heat, neither warming nor cooling, and the average temperature would still give you a false signal, from heat redistribution alone. Over a degree or so it doesn’t matter, but I assume the global temperature is calculated as the average of day and night temperatures, in which case the heat redistribution error could easily account for the 0.6 degrees we are talking about. Since the data is readily available, one should be able to get all the raw temperature data, convert it to heat flux, average it, convert back to a global temperature, do it for all the years of interest, and see if it changes the ‘trend’ at all. Could anyone do that?
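The effect Andrejs describes can be illustrated with a minimal sketch (illustrative temperatures only, not real data): two equal-area regions with the same linear mean temperature give a different "flux-equivalent" temperature once the heat is redistributed unevenly, because T^4 averaging is convex.

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def flux_equivalent_mean(temps_k):
    """Average the radiative fluxes sigma*T^4, then convert back to an
    equivalent temperature: T_eff = (mean(sigma*T^4)/sigma)**0.25."""
    mean_flux = sum(SIGMA * t**4 for t in temps_k) / len(temps_k)
    return (mean_flux / SIGMA) ** 0.25

# Two equal-area regions with the same linear mean (280 K), but with
# the heat distributed uniformly in one case and unevenly in the other.
uniform = [280.0, 280.0]
skewed = [250.0, 310.0]

t_lin = sum(skewed) / len(skewed)              # 280.0 K either way
t_eff_uniform = flux_equivalent_mean(uniform)  # 280.0 K
t_eff_skewed = flux_equivalent_mean(skewed)    # ~284.7 K

# Redistributing heat (same linear mean, same total area) changes the
# flux-equivalent temperature, so the two averages can diverge from
# redistribution alone, with no net warming or cooling.
print(t_lin, t_eff_uniform, t_eff_skewed)
```

The gap (here roughly 4.7 K for a deliberately extreme spread) shrinks as the spatial spread shrinks, which is Andrejs's point: the size of the error depends on how the heat is distributed, not on any real change in the energy budget.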

Reply to  Andrejs Vanags
March 6, 2012 1:55 am

Reply Andrejs:
I have actually done the comparison between T^4 averaging and simple linear averaging using the CRUTEM3 data. You can see the results for the full range here and a detailed look at the range from 1950 to 2011 here. There are some subtle effects.
The coverage in stations is poor until 1950, and this affects the global average. One main reason for this is the lack of stations in the Tropics. The CRU/GISS people argue that the use of anomalies (deltas) compensates for this. However, since 1950 we have a stable set of stations and the trends are very interesting:
1. T^4 and T are the same for the Southern Hemisphere. However the energy flux value (T^4) is higher for the Northern Hemisphere
2. There has been ZERO warming from 1950 in the Southern Hemisphere.
3. All observed temperature rises are in the Northern Hemisphere. NOTE that the GLOBAL: value is just (NH+SH)/2 and adds no new information.
Conclusion: Two effects occur in the NH: A) T^4 values are higher, and B) temperatures have risen since 1950.

Edim
March 5, 2012 3:24 pm

Andrejs, I agree in general. But since the heat transfer at the surface is multi-modal (evaporation, radiation, convection), one cannot convert the temperature data to heat flux. Surface temperature is not enough.
I agree with your point about redistributing heat (energy).

March 5, 2012 3:55 pm

Robert Brown says: March 5, 2012 at 2:03 pm
“CO_2 is a saturated greenhouse gas in that the atmosphere is already opaque in the CO_2 band up to the top of the troposphere.”

This is an ancient controversy, and you are echoing Angstrom 1907. But more accurate measurement showed it to be not so.
“There is no evidence that I’m aware of that TOA IR spectroscopy is detecting a mean cooling/lifting of the CO_2 band (and hence an enhanced GHE) that is functional on concentration.”
There’s no evidence either way – we just don’t have the measurement capacity at the moment.
“Indeed, what has recently happened is that the stratosphere has become more transparent because of decreased H_2O, which permits GHG’s to radiate their energy from further down where it is warmer, leading to net cooling.”
No, the temp gradient in the stratosphere goes the other way. Lower is cooler, and leads to warming.
“Using a Mercator “degrees” based grid to cover a sphere, for example.”
There’s actually nothing wrong with this. They have a noisy signal with spatial correlation. There’s no reason to believe a fancier grid with better spatial resolution would help. Their main grid issue is with the variable number of stations reporting each month, and the problem of empty cells. And for various reasons, it’s undesirable to change the grid from month to month to overcome that.
“So the question, my friend, is not whether or not global warming has occurred or is occurring. Nor is it whether or not global cooling has occurred or is occurring. It is what fraction of the observed warming or cooling anomaly (to use your own referent) can be attributed to changes in the concentration of CO_2, …”
I disagree completely. I think the important question is exactly “whether or not global warming has occurred or is occurring”. Whether there is noise in the temperature, or whether its measurement is defective, simply detracts (if true, disputed) from one way we have to confirm that warming.

John Whitman
March 5, 2012 4:34 pm

Robert Brown says:
March 5, 2012 at 2:03 pm
Indeed, what has recently happened is that the stratosphere has become more transparent because of decreased H_2O, which permits GHG’s to radiate their energy from further down where it is warmer, leading to net cooling. We are in a period where the GHE itself is not increasing (or at least, is not increasing much) quite independent of what CO_2 is doing.

= = = = =
rgb,
I am interested in your comment about lower GHE in recent years due to lower stratospheric water vapor.
Is the mechanism causing decreased water vapor (H2O) in the stratosphere primarily caused by decreased production of water vapor from oxidation of methane (CH4) due to reduced amounts of stratospheric methane?
Or is the decrease in stratospheric water vapor a result of increased photo-dissociation of water vapor in the upper troposphere and/or lower stratosphere, caused by increases in UV in the SSI from recent lower solar cycle activity/strength (low cycle 24 activity in particular)? Note: my info is that the lower (less energetic) end of the UV spectrum in the SSI is the cause of the water vapor photo-dissociation.
Or a combination of both?
Any other mechanisms for reduced stratospheric water vapor?
Thanks.
John

March 5, 2012 4:46 pm

Can I put that post (the Guest Post by Dr. Robert Brown) on a bumper sticker?
Okay … how about as an appendix to Dick Tracy’s “Crimestoppers’ Textbook” ?
Great summation of the issues, the scenario, Dr. Brown.
.

March 5, 2012 4:48 pm

Nick Stokes says on March 5, 2012 at 3:55 pm:

And then the lawyers descend (or is it ‘ascend’?) to dissemble …
.

Werner Brozek
March 5, 2012 5:36 pm

Is GISS more accurate?
I have read that GISS is the only record that is accurate since it adequately considers what happens in the polar regions, unlike other data sets. I have done some “back of the envelope calculations” to see if this is a valid assumption. I challenge any GISS supporter to challenge my assumptions and/or calculations and show that I am way out to lunch. If you cannot do this, I will assume it is the GISS calculations that are out to lunch.
Here are my assumptions and/or calculations: (I will generally work to 2 significant digits.)
1. The surface area of Earth is 5.1 x 10^8 km squared.
2. The RSS data is only good to 82.5 degrees.
3. It is almost exclusively the northern Arctic that is presumably way warmer and not Antarctica. For example, we always read about the northern ice melting and not what the southern areas are gaining in ice.
4. The circumference of Earth is 40,000 km.
5. I will assume the area between 82.5 degrees and 90 degrees can be assumed to be a flat circle so spherical trigonometry is not needed.
6. The area of a circle is pi r squared.
7. The distance between 82.5 degrees and 90.0 degrees is 40,000 x 7.5/360 = 830 km
8. The area in the north polar region above 82.5 degrees is 2.2 x 10^6 km squared.
9. The ratio of the area between the whole earth and the north polar region above 82.5 degrees is 5.1 x 10^8 km squared/2.2 x 10^6 km squared = 230.
10. People wondered if the satellite record for 2010 would be higher than for 1998. Let us compare these two between RSS and GISS.
11. According to GISS, the difference in anomaly was 0.07 degrees C higher for 2010 versus 1998.
12. According to RSS, it was 0.04 degrees C higher for 1998 versus 2010.
13. The net difference between 1998 and 2010 between RSS and GISS is 0.11 degrees C.
14. If we are to assume the only difference between these is due to GISS accurately accounting for what happens above 82.5 degrees, then this area had to be 230 x 0.11 = 25 degrees warmer in 2010 than 1998.
15. If we assume the site at http://ocean.dmi.dk/arctic/meant80n.uk.php can be trusted for temperatures above 80 degrees north, we see very little difference between 1998 and 2010. The 2010 seems slightly warmer, but nothing remotely close to 25 degrees warmer as an average for the whole year.
Readers may disagree with some assumptions I used, but whatever issue anyone may have, does it affect the final conclusion about the lack of superiority of GISS data to any real extent?
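Werner's back-of-the-envelope chain (steps 4-14 above) is easy to reproduce; a minimal sketch using his own assumptions (flat-disc polar cap, 0.11 C net anomaly difference):

```python
import math

EARTH_AREA_KM2 = 5.1e8       # step 1: surface area of Earth
CIRCUMFERENCE_KM = 40_000.0  # step 4: circumference of Earth

# Step 7: arc distance from 82.5 deg latitude to the pole.
r_km = CIRCUMFERENCE_KM * 7.5 / 360.0   # ~833 km

# Steps 5 and 8: treat the polar cap as a flat disc of that radius.
cap_area_km2 = math.pi * r_km**2        # ~2.2e6 km^2

# Step 9: ratio of the whole Earth's area to the cap's area.
ratio = EARTH_AREA_KM2 / cap_area_km2   # ~230

# Step 14: warming the cap alone would have to explain the whole
# 0.11 C net 1998-vs-2010 difference between GISS and RSS.
required_warming_c = ratio * 0.11       # ~25 C

print(round(ratio), round(required_warming_c, 1))
```

The numbers come out close to Werner's (ratio about 234, required warming about 26 C), so the conclusion is insensitive to his two-significant-digit rounding.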

March 5, 2012 5:49 pm

Werner Brozek says: March 5, 2012 at 5:36 pm
“Is GISS more accurate?
I challenge any GISS supporter to challenge my assumptions and/or calculations and show that I am way out to lunch.”

14 is wrong, at least. The main difference between GISS and RSS is that they are measuring different things. One is surface temp, the other is tropospheric. One big downside of current satellite measurement is that it aggregates different levels of the atmosphere.

Werner Brozek
March 5, 2012 6:23 pm

Nick Stokes says:
March 5, 2012 at 5:49 pm
14 is wrong, at least. The main difference between GISS and RSS is that they are measuring different things.

Fair enough. However the relative difference between Hadcrut3 and GISS is also around the same (0.12).

March 5, 2012 7:07 pm

It is quite impossible to measure the average temperature of the earth’s surface, let alone of the earth’s atmosphere, and this article is full of basic errors. Temperature is an INTENSIVE property, so it does not even exist unless it pervades the whole substance that is being measured.
It is theoretically possible to divide the earth’s surface, or even the whole atmosphere, into three-dimensional infinitesimal increments, all of which possess a temperature. Each of them will change continuously with time. If it were possible to sense the temperature of each of these increments and integrate them, one could envisage some sort of average, but since its distribution curve is likely to be skewed there is a choice of several kinds of average. Only when such a system had been in operation for many years would it be possible to judge how the average changes over time.
We obviously do not possess such a system and any claims that we know of an average figure for the temperature of the earth’s surface, or any part of the atmosphere, are false.
The IPCC does not claim to have measured average temperature, but instead places far too much emphasis on what it calls “The Mean Global Temperature Anomaly Record”. This is based on multiple averaging and subtraction of a large and varying number of miscellaneous maximum and minimum measurements in unrepresentative places on the earth’s surface, for which no estimate of the undoubtedly high inaccuracy is provided. It is not a scientifically or statistically acceptable record of the earth’s surface temperature.
Despite all this, this record does seem to be influenced by such natural events as changes in the sun, volcanic eruptions, ocean oscillations, and cosmic rays, and it is also influenced by urbanisation and land change. There is no evidence that it is influenced by emissions of so-called greenhouse gases.

eyesonu
March 5, 2012 7:23 pm

Dr. Brown, your open participation in the CAGW debate is very much appreciated. I’m sure that I speak for many. I expect to see many more with the knowledge and insight such as yours to come forward in the near future. You do not need a supporting ‘consensus’ as you are doing an excellent job without any help. I’m sure you are an inspiration to many that we will soon hear from.
Thank you as I look forward to your posts. A true beacon of reality in the endless propaganda of CAGW.

George M
March 5, 2012 7:31 pm

Nick Stokes says: March 4, 2012 at 1:43 pm
“anybody who claims to know the annualized average temperature of the Earth, or the Ocean, to 0.05 K is, as the saying goes, full of [snip] up to their eyebrows.”
“This is a strawman argument. Who makes such a claim? Not GISS! Nor anyone else that I can think of. Can anyone point to such a calc? What is the number?”
Here is bit of Gistemp anomaly data, collected on March 5 2012
Year Ann.Mean 5yr. Mean
1998 0.58 0.40
1999 0.33 0.43
2000 0.35 0.46
2001 0.48 0.46
2002 0.56 0.49
2003 0.55 0.54
2004 0.48 0.55
2005 0.62 0.56
2006 0.55 0.53
2007 0.58 0.55
2008 0.44 0.55
2009 0.57 0.55
2010 0.63 *
2011 0.51 *
2012 * *
So GISS is taking deg C temperatures in an approximate range of -50 deg C to +40 deg C and quoting anomalies to hundredths of a degree. They are plotted to show a temperature anomaly range from -2 deg C to +0.5 deg C as a presumably physically significant change in temperature. Given the kinds of temperature differences seen daily and the kinds of confounding influences mentioned above, this looks ridiculous. It is taking the data for a ride.
Max Hugoson identifies the mistake above.
March 4, 2012 at 1:40 pm
As I have noted TIME AFTER TIME AFTER TIME…an 86 F day in MN with 60% RH is 38 BTU/Ft^3, and 110 F day in PHX at 10% RH is 33 BTU/Ft^3…
Which is “HOTTER”? HEAT = ENERGY? MN of course, while the temp is lower.
All this temperature twiddling is trying to make do with a total lack of the data really needed to analyze the problem.
George M
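Max Hugoson's MN-vs-PHX comparison quoted above can be spot-checked with standard psychrometrics. A rough sketch follows, assuming his figures are actually BTU per pound of dry air (the usual unit for moist-air enthalpy; the Magnus formula and the 0.240/1061/0.444 coefficients below are textbook approximations, not his original method):

```python
import math

P_ATM_KPA = 101.325  # sea-level pressure

def sat_vapor_pressure_kpa(t_c):
    # Magnus approximation for saturation vapor pressure over water.
    return 0.6112 * math.exp(17.62 * t_c / (243.12 + t_c))

def humidity_ratio(t_c, rh):
    # lb of water vapor per lb of dry air.
    p_v = rh * sat_vapor_pressure_kpa(t_c)
    return 0.622 * p_v / (P_ATM_KPA - p_v)

def moist_air_enthalpy_btu_per_lb(t_f, rh):
    # Standard psychrometric formula, BTU per lb of dry air:
    # h = 0.240*T + W*(1061 + 0.444*T), with T in deg F.
    t_c = (t_f - 32.0) * 5.0 / 9.0
    w = humidity_ratio(t_c, rh)
    return 0.240 * t_f + w * (1061.0 + 0.444 * t_f)

h_mn = moist_air_enthalpy_btu_per_lb(86.0, 0.60)    # ~38 BTU/lb
h_phx = moist_air_enthalpy_btu_per_lb(110.0, 0.10)  # ~32 BTU/lb

# The cooler but more humid Minnesota air carries more energy per
# pound of dry air than the hotter, drier Phoenix air.
print(round(h_mn, 1), round(h_phx, 1))
```

This reproduces roughly the 38-vs-33 split he quotes, supporting the point that temperature alone is a poor proxy for the energy content of surface air.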

John another
March 5, 2012 7:50 pm

I thought I had read all of the responses, but it seems I missed how the best accuracy of global temperature is presumed by the IPCC to be approx. 2 degrees C, while the anomaly of global temperature is measured in tenths of a degree or less. How can I know the change in volume of my gas tank in ounces but not know the actual volume to within pints? How will we know if our abandonment of fossil fuel has accomplished the 2 degree drop the planet supposedly requires to survive evil mankind, if that’s the best we can measure?

March 5, 2012 8:11 pm

John another says: March 5, 2012 at 7:50 pm
“How can I know the change in volume of my gas tank in ounces but not know the actual volume to within pints?”

Measure the flow in the fuel line.

Werner Brozek
March 5, 2012 8:24 pm

RSS for February just came out at woodfortrees. It came in at -0.121 C, so the combined January-February average is -0.09, making it the 26th warmest so far. (UAH was also 26th warmest on its data set after February.) For RSS, the slope has now shown no trend for 15 years and 3 months, since December 1996. (slope = -0.000234717 per year) See
http://www.woodfortrees.org/plot/rss/from:1994/plot/rss/from:1996.9/trend
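The "trend" woodfortrees plots is an ordinary least-squares fit of anomaly against time. A minimal sketch of that computation, run here on a synthetic monthly series with a built-in trend (not the actual RSS data):

```python
def ols_slope_per_year(anomalies, start_year):
    """Ordinary least-squares trend (deg C per year) of a monthly series."""
    times = [start_year + i / 12.0 for i in range(len(anomalies))]
    n = len(anomalies)
    t_mean = sum(times) / n
    y_mean = sum(anomalies) / n
    num = sum((t - t_mean) * (y - y_mean) for t, y in zip(times, anomalies))
    den = sum((t - t_mean) ** 2 for t in times)
    return num / den

# Synthetic series: 183 months (15 years 3 months) with an exact
# built-in trend of 0.01 C per year and no noise, for illustration.
series = [0.01 * (i / 12.0) for i in range(183)]
slope = ols_slope_per_year(series, 1996.9)
print(slope)  # recovers ~0.01 C/yr
```

Fed the real monthly RSS anomalies since December 1996, the same formula produces the tiny negative slope Werner quotes.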