Global annualized temperature – "full of [snip] up to their eyebrows"

Guest Post by Dr. Robert Brown,

Physics Dept., Duke University [elevated from comments]

Dr. Brown mentions “global temperature” several times. I’d like to know what he thinks of this.

Dr. Brown thinks that this is a very nice piece of work, and is precisely the reason that he said that anybody who claims to know the annualized average temperature of the Earth, or the Ocean, to 0.05 K is, as the saying goes, full of [snip] up to their eyebrows.

What I think one can define is an average “Global Temperature” — noting well the quotes — by following some fixed and consistent rule that goes from a set of data to a result. For example, the scheme that is used to go from satellite data to the UAH lower troposphere temperature. This scheme almost certainly does not return “the average Global Temperature of the Earth” in degrees absolute as something that reliably represents the coarse-grain averaged temperature of (say) the lowest 5 kilometers of the air column, especially not the air column as its height varies over an irregular terrain that is itself sometimes higher than 5 kilometers. It does, however, return something that is likely to be close to what this average would be if one could sample and compute it, and one at least hopes that the two would co-vary monotonically most of the time.

The accuracy of the measure is very likely not even 1 K (IMO, others may disagree), where accuracy is |T_{LT} - T_{TGT}| — the absolute difference between the lower troposphere temperature and the “true global temperature” of the lower troposphere. The various satellites that contribute to the temperature have (IIRC) a variance on this order, so the data itself is probably not more accurate than that. The “precision” of the data is distinct — that’s a measure of how much variance there is in the data sources themselves, and it is a quantity that can be systematically improved by more data. Accuracy, especially in a situation like this where one is indirectly inferring a quantity that is not exactly the same as what is being measured, cannot be improved by more or more precise measurements; it can only be improved by figuring out the map between the data one is using and the actual quantity one is making claims about.
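To make the accuracy/precision distinction concrete, here is a minimal Python sketch (every number in it is invented purely for illustration): repeated noisy measurements seen through an instrument or inference chain that carries a fixed, unknown offset. Taking more samples shrinks the statistical scatter (the precision) but leaves the offset (the accuracy) untouched.

```python
import random

random.seed(0)

TRUE_VALUE = 288.0   # hypothetical "true" temperature, K (illustrative only)
BIAS = 1.5           # fixed, unknown bias in the measurement/inference chain, K (illustrative)
NOISE = 0.8          # independent per-measurement error, K (illustrative)

def sample_mean(n):
    """Average n simulated measurements of TRUE_VALUE seen through a biased instrument."""
    readings = [TRUE_VALUE + BIAS + random.gauss(0.0, NOISE) for _ in range(n)]
    return sum(readings) / n

for n in (10, 1000, 100000):
    m = sample_mean(n)
    print(f"n={n:6d}  mean={m:9.3f}  statistical error ~ {NOISE / n ** 0.5:.3f} K"
          f"  |mean - true| = {abs(m - TRUE_VALUE):.3f} K")
# The scatter term falls off like 1/sqrt(n); the ~1.5 K offset from the true
# value does not, and no amount of extra data will reveal or remove it.
```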

Things are not better for (land) surface measurements — they are worse. There the actual data is (again, in my opinion) hopelessly corrupted by confounding phenomena and the measurement errors are profound. Worse, the measurement errors tend to have a variable monotonic bias compared to the mythical “true average surface Global Temperature” one wishes to measure.

One is in trouble from the very beginning. Consider the Moon: it has no atmosphere, so its “global average temperature” can be defined without worrying about an air temperature at all. Even so, when one wishes to speak of the surface temperature at a given point, what does one use as a definition? Is it the temperature an actual high-precision thermometer would read (say) 1 cm below the surface at that point? 5 mm? 1 mm? 1 meter? All of these would almost certainly yield different results, results that depend on things like the albedo and emissivity of the point on the surface, the heat capacity and thermal conductivity of the surface matter, and the latitude. Is it the “blackbody” temperature of the surface (the inferred temperature of the surface determined by measuring the outgoing full spectrum of radiated light)?

Even inferring the temperature from the latter — probably the one that is most relevant to an airless open system’s average state — is not trivial, because the surface albedo varies, the emissivity varies, and the outgoing radiation from any given point just isn’t a perfect blackbody curve as a result.

How much more difficult is it to measure the Earth’s comparable “surface temperature” at a single point on the surface? For one thing, we don’t do anything of the sort. We don’t place our thermometers 1 meter, 1 cm, 1 mm deep in — what, the soil? The grass or trees? What exactly is the “surface” of a planet largely covered with living plants? We place them in the air some distance above the surface. That distance varies. The surface itself is being heated directly by the sun part of the time, and is radiatively cooling directly to space (in at least some frequencies) all of the time. Its temperature varies by degrees K on a time scale of minutes to hours as clouds pass between the location and the sun, as the sun sets, as it starts to rain. It doesn’t just heat or cool from radiation — it is in tight thermal contact with a complex atmosphere that has a far greater influence on the local temperature than even local variations in insolation.

Yesterday it was unseasonably warm in NC, not because the GHE caused the local temperature to be higher by trapping additional heat but because the air that was flowing over the state came from the warm wet waters of the ocean to the south, so we had a relatively warm rain followed by a nighttime temperature that stayed warm (low overnight of maybe 46F) because the sky was cloudy. Today it is almost perfectly seasonal — high 50’s with a few scattered clouds, winds out of the WSW still carrying warm moisture from the Gulf and warm air from the south central US, but as the day progresses the wind is going to shift to the NW and it will go down to a solid freeze (30F) tonight. Tomorrow it will be seasonal but wet, but by tomorrow night the cooler air that has moved in from the north will make it go down to 25F overnight. The variation in local temperature is determined far more by what is going on somewhere else than it is by actual insolation and radiation here.

If a real cold front comes down from Canada (as they frequently do this time of year) we could have daytime highs in the 30’s or low 40’s and nighttime lows down in the low 20s. OTOH, if the wind shifts to the right quarter, we could see highs in the low 80s and lows in the low 50s. We can, and do, have both extremes within a single week.

Clearly surface temperatures are being driven as strongly by the air and moisture flowing over or onto them as they are by the “ideal” picture of radiative energy warming the surface and radiation cooling it. The warming of the surface at any given point isn’t solely responsible for the warming or cooling of the air above it; the temperature of the surface is equally dependent on the temperature of the air as determined by the warming of the surface somewhere else, as determined by the direct warming and cooling of the air itself via radiation, as determined by phase changes of water vapor in the air and on the surface, as determined by factor-of-ten modulations of insolation as clouds float around over the surface and the lower atmosphere alike.

Know the true average surface Global Temperature to within 1K? I don’t even know how one would define a “true” average surface Global Temperature. It was difficult enough for the moon without an atmosphere, assuming one can agree on the particular temperature one is going to “average” and how one is going to perform the average. For the Earth with a complex, wet, atmosphere, there isn’t any possibility of agreeing on a temperature to average! One cannot even measure the air temperature in a way that is not sensitive to where the sun is and what it is doing relative to the measurement apparatus, and the air temperature can easily be in the 40s or 50s while there is snow covering the ground so that the actual surface temperature of the ground is presumably no higher than 32F — depending on the depth one is measuring.

And then oops — we forgot the Oceans, that cover 70% of the surface of the planet.

What do we count as the “temperature” of a piece of the ocean? There is the temperature of the air above the surface of the ocean. In general this temperature differs from the actual temperature of the water itself by on the order of 5-10 K. In most places the air temperature during the day is often warmer than the temperature of the water, and the air temperature at night is often cooler than the temperature of the water.

Or is it? What exactly is “the temperature of the water”? Is it the temperature of the top 1 mm of the surface, where the temperature is dominated by chemical potential as water molecules are constantly being knocked off into the air, carrying away heat? Is it the temperature 1 cm deep? 10 cm? 1 m? 10 m? 50 m? 100m? 1 km?

Is it the average over a vertical column from the surface to the bottom (where the actual depth of the bottom varies by as much as 10 km)? This will bias the temperature way, way down for deep water and make the global average temperature of the ocean very nearly 4 C nearly everywhere, dropping the estimate of the Earth’s average Global Temperature by well over 10 K. Yet if we do anything else, we introduce a completely arbitrary bias into our average. Every value we might use as a depth to average over has consequences that cause large variations in the final value of the average. As anyone who swims knows, it is quite easy for the top meter or so of water to be warm enough to be comfortable while the water underneath that is cold enough to take your breath away.
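As a rough illustration of how strongly the arbitrary choice of averaging depth drives the answer, here is a hedged Python sketch using a toy exponential thermocline profile (the profile shape and every constant in it are invented for illustration, not real ocean data):

```python
import math

def toy_profile(z_m):
    """Crude illustrative ocean temperature (deg C) at depth z metres:
    ~25 C warm surface layer decaying toward ~4 C deep water."""
    return 4.0 + 21.0 * math.exp(-z_m / 150.0)

def column_average(depth_m, dz=1.0):
    """Average toy_profile from the surface down to depth_m."""
    n = int(depth_m / dz)
    return sum(toy_profile((i + 0.5) * dz) for i in range(n)) / n

for d in (1, 10, 100, 1000, 4000):
    print(f"average over top {d:5d} m : {column_average(d):5.2f} C")
# The "temperature of the ocean" slides from ~25 C down toward ~5 C purely as a
# function of the depth one (arbitrarily) chooses to average over.
```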

Even if one defines — arbitrarily, as arbitrary in its own way as the definition that one uses for T_{LT} or the temperature you are going to assign to a particular point on the surface on the basis of a “corrected” or “uncorrected” thermometer with location biases that can easily exceed several degrees K compared to equally arbitrary definitions for what the thermometer “should” be reading for the unbiased temperature and how that temperature is supposed to relate to a “true” temperature for the location — a sea surface temperature SST to go with land surface temperature LST and then tries to take the actual data for both and turn them into an average global temperature, one has a final problem to overcome. One’s data is (with the possible exception of modern satellite derived data) sparse! Very sparse.

In particular, it is sparse compared to the known and observed granularity of surface temperature variations, for both LST and SST. Furthermore, it has obvious sampling biases. We have lots and lots of measurements where people live. We have very few measurements (per square kilometer of surface area) where people do not live. Surface temperatures can easily vary by 1 K over a kilometer in lateral distance (e.g. at terrain features where one goes up a few hundred meters over a kilometer of grade). They can and do routinely vary by 1 K over distances of order 5-10 kilometers.

I can look at, for example, the Weather Underground’s weather map readings from weather stations scattered around Durham at a glance. At the moment I’m typing this there is a 13 F variation from the coldest to the warmest station reading within a 15 km radius of where I’m sitting. Worse, nearly all of these weather station readings are between 50 and 55 F, but there are two outliers. One of them is 46.5 F (in a neighborhood in Chapel Hill), and the other is Durham itself, the “official” reading for Durham (probably downtown somewhere), which is 59.5 F!

Guess which one will end up being the temperature used to compute the average surface temperature for Durham today, and assigned to an entirely disproportionate area of the surface of the planet in a global average surface temperature reconstruction?

Incidentally, the temperature outside of my house at this particular moment is 52F. This is a digital electronic thermometer in the shade of the north side of the house, around a meter off of the ground. The air temperature on the other side of the house is almost certainly a few degrees warmer as the house sits on a southwest-facing hill with pavement and green grass absorbing the bright sunlight. The temperature back in the middle of the cypresses behind my house (dense shade all day long, but with decent airflow) would probably be no warmer than 50 F. The temperature a meter over the driveway itself (facing and angled square into the sun, and with the house itself reflecting additional heat and light like a little reflector oven) is probably close to 60 F. I’m guessing there is close to 10F variation between the air flowing over the southwest facing dark roof shingles and the northeast facing dark roof shingles, biased further by loss of heat from my (fairly well insulated) house.

I don’t even know how to compute an average surface temperature for the 1/2 acre plot of land my own house sits on, today, right now, from any single thermometer sampling any single location. It is 50 F, 52 F, 58 F, 55 F, or 61 F, depending on just where my thermometer is located. My house is on a long hill (over a km long) that rises to an elevation perhaps 50-100 m higher than my house at the top — we’re in the piedmont in between Durham and Chapel Hill, where Chapel Hill really is up on a hill, or rather a series of hills that stretch past our house. I’d bet a nickel that it is a few degrees different at the top of the hill than it is where my house is today. Today it is windy, so the air is well mixed and the height is probably cooler. On a still night, the colder air tends to settle down in the hollows at the bottoms of hills, so last frost comes earlier up on hilltops or hillsides; Chapel Hill typically has spring a week or so before Durham does, in contradiction of the usual rule that higher locations are cooler.
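Just to put a number on the spread quoted in the last paragraph, here is a trivial sketch averaging those five readings (they are the figures listed above; whether an unweighted mean is even the “right” average for the plot is exactly the open question):

```python
readings_f = [50.0, 52.0, 58.0, 55.0, 61.0]   # the plot readings quoted above, deg F

mean_f = sum(readings_f) / len(readings_f)
spread_f = max(readings_f) - min(readings_f)
print(f"mean = {mean_f:.1f} F, spread = {spread_f:.1f} F")   # ~55.2 F mean, 11 F spread
# Any single thermometer on the half acre can miss this naive "plot average" by
# roughly 5 F, and nothing privileges the unweighted mean as the true average.
```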

This is why I am enormously cynical about Argo, SSTs, GISS, and so on as reliable estimates of average Global Temperature. They invariably claim impossible accuracy and impossible precision. Mere common sense suffices to reject such claims. If they disagree, they can come to my house and try to determine what the “correct” average temperature is for my humble half acre, and how it can be inferred from a single thermometer located on the actual property, let alone from a thermometer located in some weather station out in Duke Forest five kilometers away.

That is why I think that we have precisely 33 years of reasonably reliable global temperature data, not in terms of accuracy (which is unknown and perhaps unknowable) but in terms of statistical precision and as the result of a reasonably uniform sampling of the actual globe. The UAH T_{LT} is what it is, is fairly precisely known, and is at least expected to be monotonically related to a “true average surface Global Temperature”. It is therefore good for determining actual trends in global temperature, not so good for making pronouncements about whether or not the temperature now is or is not the warmest that it has been in the Holocene.

Hopefully the issues above make clear just how absurd any such assertion truly is. We don’t know the actual temperature of the globe now, with modern instrumentation and computational methodology, to an accuracy of 1 K in any way that can be compared apples-to-apples to any temperature reconstruction, instrument based or proxy based, from fifty, one hundred, one thousand, or ten thousand years ago. 1 K is on the close order of all of the global warming supposedly observed since the invention of the thermometer itself (and hence the start of the direct instrumental record). We cannot compare even “anomalies” across such records — they simply don’t compare because of confounding variables, as the “Hide the Decline” and “Bristlecone Pine” problems clearly reveal in the hockey stick controversy. One cannot remove the effects of these confounding variables in any defensible way, because one does not know what they are, because things (e.g. annual rainfall and the details of local temperature and many other things) are not the same today as they were 100 years ago, and we lack the actual data needed to correct the proxies.

A year with a late frost, for example, can stunt the growth of a tree for a whole year by simply damaging its new leaves or can enhance it by killing off its fruit (leaving more energy for growth that otherwise would have gone into reproduction) completely independent of the actual average temperature for the year.

To conclude, one of many, many problems with modern climate research is that the researchers seem to take their thermal reconstructions far too seriously and assign completely absurd measures of accuracy and precision, with a very few exceptions. In my opinion it is categorically impossible to “correct” for things like the UHI effect — it presupposes a knowledge of the uncorrected temperature that one simply cannot have or reliably infer from the data. The problem becomes greater and greater the further back in time one proceeds, with big jumps (in uncertainty) 250, 200, 100 and 40-odd years ago. The proxy-derived record from more than 250 years ago is uncertain in the extreme, with the thermal record of well over 70% of the Earth’s surface completely inaccessible and with an enormously sparse sampling of highly noisy and confounded proxies elsewhere. To claim accuracy better than 2-3 K is almost certainly sheer piffle, given that we probably don’t know the current “true” global average temperature to within 1 K, and 5 K is more likely.

I’m certain that some paleoclimatologists would disagree with such a pessimistic range. Surely, they might say, if we sample Greenland or Antarctic ice cores we can obtain an accurate proxy of temperatures there 1000 or 2000 years ago. Why aren’t those comparable to the present?

The answer is because we cannot be certain that the Earth’s primary climate drivers distributed its heat the same way then as now. We can clearly see how important e.g. the decadal oscillations are in moving heat around and causing variations in global average temperature. ENSO causes spikes and seems responsible for discrete jumps in global average temperature over the recent (decently thermometric) past that are almost certainly jumps from one Poincaré attractor to another in a complex turbulence model. We don’t even know if there was an ENSO 1000 years ago, or, if there was, whether it was at the same location and had precisely the same dependences on e.g. solar state. As a lovely paper Anthony posted this morning clearly shows, major oceanic currents jump around on millennial timescales that appear connected to millennial scale solar variability and almost certainly modulate the major oscillations themselves in nontrivial ways. It is quite possible for temperatures in the Antarctic to anticorrelate with temperatures in the tropics for hundreds of years and then switch so that they correlate again. When an ocean current is diverted, it can change the way ocean average temperatures (however one might compute them, see above) vary over macroscopic fractions of the Earth’s surface all at once.

To some extent one can control for this by looking at lots of places, but “lots” is in practice highly restricted. Most places simply don’t have a good proxy at all, and the ones that do aren’t always easy to reconstruct accurately over very long time scales, or they lose all sorts of shorter-time-scale information in order to yield the longer-time-scale averages one can get. I think 2-3 K is a generous statement of the probable real error in most reconstructions of global average temperature from more than 1000 years ago, again presuming one can define an apples-to-apples global average temperature to compare to, which I doubt. Nor can one reliably compare anomalies over such time scales, because of the confounding variables and drift.

This is a hard problem, and calling it settled science is obviously a political statement, not a scientific one. A good scientist would, I truly believe, call this unsettled science, science that is understood far less than physics, chemistry, even biology. It is a place for utter honesty, not egregious claims of impossibly accurate knowledge. In my own utterly personal opinion, informed as well or as badly as chance and a fair bit of effort on my part have thus far informed it, we have 33 years of a reasonably precise and reliable statement of global average temperature, one which is probably not the true average temperature assuming any such thing could be defined in the first place but which is as good as any for the purposes of identifying global warming or cooling trends and mechanisms.

Prior to this we have a jump in uncertainty (in precision, not accuracy) as we move to the ground-based thermometric record, which is strictly apples-to-oranges compared to the satellite-derived averages, with error bars that rapidly grow the further back one goes in the thermometric record. We then have a huge jump in uncertainty (in both precision and accuracy) as we necessarily mount the multiproxy train to still earlier times, where the comparison has unfortunately been between modern era apples, thermometric era oranges, and carefully picked cherries. Our knowledge of global average temperatures becomes largely anecdotal, with uncertainties that are far larger than the observed variation in the instrumental era, and larger still compared to the variation over the reliable instrumental era (33-year baseline).

Personally, I think that this is an interesting problem and one well worth studying. It is important to humans in lots of ways; we have only benefitted from our studies of the weather and our ability to predict it is enormously valuable as of today in cash money and avoided loss of life and property. It is, however, high time to admit the uncertainties and get the damn politics out of the science. Global climate is not a “cause”! It is the object of scientific study. For the conclusions of that science to be worth anything at all, they have to be brutally honest — honest in a way that is utterly stripped of bias and that acknowledges to a fault our own ignorance and the difficulty of the problem. Pretending that we know and can measure global average temperatures from a sparse and short instrumental record where it would be daunting to assign an accurate, local average temperature to any given piece of ground based on a dense sampling of temperatures from different locations and environments on that piece of ground does nothing to actually help out the science — any time one claims impossible accuracy for a set of experimentally derived data one is openly inviting false conclusions to be drawn from the analysis. Pretending that we can model what is literally the most difficult problem in computational fluid dynamics we have ever attempted with a handful of relatively simple parametric differential forms and use the results over centennial and greater timescales does nothing for the science, especially when the models, when tested, often fail (and are failing, badly, over the mere 33 years of reliable instrumentation and a uniform definition of at least one of the global average temperatures).

It’s time to stop this, and just start over. And we will. Perhaps not this year, perhaps not next, but within the decade the science will finally start to catch up and put an end to the political foolishness. The problem is that no matter what one can do to proxy reconstructions, no matter how much you can adjust LSTs for UHI and other estimated corrections that somehow always leave things warmer than they arguably should be, no matter what egregious claims are initially made for SSTs based on Argo, the UAH T_{LT} will just keep on trucking, unfutzable, apples to apples to apples. The longer that record gets, the less one can bias an “interpretation” of the record.

In the long run that record will satisfy all properly skeptical scientists, and the “warmist” and “denier” labels will end up being revealed as the pointless political crap that they are. In the long run we might actually start to understand some of the things that contribute to that record, not as hypotheses in models that often fail but in models that actually seem to work, that capture the essential longer time scale phenomena. But that long run might well be centennial in scale — long enough to detect and at least try to predict the millennial variations, something utterly impossible with a 33 year baseline.

rgb

224 Comments
Nick Stokes
March 4, 2012 12:43 pm

“anybody who claims to know the annualized average temperature of the Earth, or the Ocean, to 0.05 K is, as the saying goes, full of shit up to their eyebrows.”
This is a strawman argument. Who makes such a claim? Not GISS! Nor anyone else that I can think of. Can anyone point to such a calc? What is the number?
Of course, anomalies are widely calculated and published. And they prove to have a lot of consistency. That is because they are the result of “following some fixed and consistent rule that goes from a set of data to a result”. And the temporal variation of that process is much more meaningful than the notion of a global average temperature.

March 4, 2012 12:46 pm

Excellent! Best argument I’ve read on exactly how one can define what “global average temperature” really is.

cui bono
March 4, 2012 12:52 pm

Wow! Thanks Dr. Brown. A lot to chew on here.

Malcolm Miller
March 4, 2012 12:54 pm

Excellent outline of the ‘temperature’ problem. I always ask alarmists, ‘Where do you put the thermometer to measure the Earth’s temperature?’ They have no answer.

Somebody
March 4, 2012 12:54 pm

Physically, one cannot define a ‘global temperature’ for a system which is not in thermal equilibrium. One can approximately define a temperature locally, for quasi-equilibrium. Not a global temperature. Temperature is an intensive quantity, and intensive quantities cannot be added. Extensive quantities (like volume or energy) can be added; intensive ones cannot. You add them, you get meaningless values. One can try to be smart and say that if you divide them by a number (be that the number of measurements or any other number picked arbitrarily) you get back a meaningful value. You don’t. In physics, dividing a physical value like that is called scaling. Scaling a meaningless value gets you another meaningless value, and that’s it. If you have a system full of gas, half of it at 100 C and 100 atm, and half of it at 0 C and 0.001 atm, the AGW pseudoscientists would tell you that it has a global temperature of 50 C. Now almost anybody could calculate for such a simple system that the meaningless average is pretty far from the equilibrium temperature. Even for such a simple system it is obvious that the ‘global temperature’ is a dumb value, with no physical meaning.
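As a rough check on that two-box example, here is a hedged Python sketch that treats both halves as equal volumes of the same ideal gas and mixes them adiabatically (constant heat capacity, no exchange with the walls; all of these simplifications are mine and purely illustrative):

```python
# Energy conservation for an ideal gas (U = n*Cv*T) gives, for two parcels,
#   T_eq = (n1*T1 + n2*T2) / (n1 + n2),  with n_i proportional to P_i/T_i at equal volume.

def equilibrium_temperature(p1_atm, t1_k, p2_atm, t2_k):
    n1 = p1_atm / t1_k          # moles up to a common factor (V/R cancels out)
    n2 = p2_atm / t2_k
    return (n1 * t1_k + n2 * t2_k) / (n1 + n2)

t_eq = equilibrium_temperature(100.0, 373.15, 0.001, 273.15)
t_naive = (373.15 + 273.15) / 2.0
print(f"equilibrium temperature ~ {t_eq - 273.15:.1f} C")            # ~100 C
print(f"naive average of the two temperatures = {t_naive - 273.15:.1f} C")  # 50 C
# The mass-blind "average of two temperatures" lands nowhere near the temperature
# the system would actually relax to, which is the point being made above.
```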

pat
March 4, 2012 12:55 pm

Yup. And this makes the satellite data important. Particularly in proving ground reading fraud.

Sean Houlihane
March 4, 2012 1:02 pm

What is this post about? Seems like a random rant to me…

pat
March 4, 2012 1:03 pm

Perfect

March 4, 2012 1:15 pm

Dr. Brown: You have said well what I have thought for a long time, that to use some “average global temperature” as proof that carbon dioxide is altering the earth’s “average climate” is questionable at the least. Moreover, anecdotal information about proxies for global temperature change is even more questionable. Unlike the physics laboratory, where we try to carefully control variables, temperature measurements are subject to random and systematic errors. We try to eliminate and understand the systematic errors and account for the random errors in our assessment. Your treatise identifies many of the errors that potentially affect the accuracy, and therefore the ability to interpret, the earth’s temperature. The earth’s climate is not contained in a fixed, well controlled laboratory and, worse, does not lend itself to repeating an experiment to verify that we can repeat the results. Computer programs, while useful, need calibration against past data with a realistic assessment of the accuracy and uncertainties, and not just against each other. In a chaotic system such as the climate of the earth, where all the variables are not identified let alone observed, one is led inescapably to conclude, as you do in the last paragraph, that “it is time to stop this and start over”, acting like scientists working together to understand the climate. It is a waste of brilliant minds to work at cross purposes for the sake of controlling the science.

Garry Stotel
March 4, 2012 1:16 pm

Plenty of common sense. A commodity as rare these days as hen’s teeth. Global warming is a phenomenon which exists in a number of dimensions, as noted by Prof. Bob Carter. We may have had a warming streak in the physical-reality world, but what exists in politics, the media and the heads of common people is something else.
I am not worried about the physical-reality world; I am very concerned about the “virtual dimension”, as humans prove, again and again, that groupthink and great lies can hold sway and cause enormous damage. Just look at the 20th century – Eugenics, Fascism, Nazism, Communism, Lysenko, all extremely popular ideas, despite being insane and ultimately causing pain and damage.
Preserving and advancing sane civilization is a restless task, against odds and frequent defeats, and in the absence of any guarantee of success.
Thank you.

DirkH
March 4, 2012 1:18 pm

Nick Stokes says:
March 4, 2012 at 12:43 pm
“Of course, anomalies are widely calculated and published. And they prove to have a lot of consistency. That is because they are the result of “following some fixed and consistent rule that goes from a set of data to a result”. And the temporal variation of that process is much more meaningful than the notion of a global average temperature.”
But the rules of that process change all the time. So the rules are not fixed. For instance, every time they kill a thermometer, they change the rules.

Jimbo
March 4, 2012 1:19 pm

In my opinion it is categorically impossible to “correct” for things like the UHI effect — it presupposes a knowledge of the uncorrected temperature that one simply cannot have or reliably infer from the data.

Exactly! Perhaps they use educated guesses and get it ‘right’ to within a tenth of a degree. / sarc

RACookPE1978
Editor
March 4, 2012 1:28 pm

Couple of points here.
1) Radiative heat loss into space is local. And instantaneous. And completely independent of ANY other temperature of ANY other (otherwise identical) surface at ANY other time at ANY other location with ANY other cloud cover and air mass. ANY yearly “average” heat loss into space is meaningless.
2) Inbound solar radiation gain is completely defined by local cloud cover, local latitude, local air mass (atmosphere thickness). At any given day-of-year, top of atmosphere radiation will be identical at all locations on earth and is completely and accurately predictable, but it will never be any “average” value for the year at any place on earth.
3) Outbound heat loss IS proportional to local temperature. Worse, it is proportional to the degree K raised to the 4th power. Total radiation loss WILL vary tremendously as temperature varies and (local) atmospheric conditions vary. At NO time of year at ANY location will any assumed “average” heat radiation loss be correct.

Max Hugoson
March 4, 2012 1:40 pm

As I have noted TIME AFTER TIME AFTER TIME… an 86 F day in MN with 60% RH is 38 BTU/lb (of dry air), and a 110 F day in PHX at 10% RH is 33 BTU/lb…
Which is “HOTTER”? HEAT = ENERGY? MN of course, while the temp is lower.
I was at a pro-AGW lecture by a retired U of Wisc “meteorology” professor a couple years ago.
When I brought this up, along with the example of putting varying amounts of hot and cold water into 10 styrofoam cups, measuring the temps, averaging the result and comparing it to the result of putting all the contents in ONE insulated container (the AVERAGED TEMP in this case WILL NOT MATCH the temp of the combined contents in the one container!)… he looked (to QUOTE ANOTHER ENGINEER AT THE EVENT) like a “deer in the headlights”… and then finally babbled, “Well, the average temperatures do account for the R.H.”
Ah, like dear (should be removed) dr. Glickspicable… “My words mean what I wish them to mean, nothing more and nothing less.” Reality BENDS for the AGW group!
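For anyone who wants to reproduce the MN-versus-Phoenix comparison, here is a hedged sketch using a standard psychrometric approximation for the specific enthalpy of moist air per pound of dry air (the saturation-pressure fit and constants are textbook approximations, and the assumption that the quoted figures are per pound of dry air is mine):

```python
import math

def sat_pressure_psia(t_f):
    """Approximate saturation vapour pressure of water (psia) via a Magnus-type fit."""
    t_c = (t_f - 32.0) / 1.8
    p_kpa = 0.6112 * math.exp(17.62 * t_c / (243.12 + t_c))
    return p_kpa * 0.145038

def humidity_ratio(t_f, rh, p_psia=14.696):
    """lb of water vapour per lb of dry air at t_f (deg F) and relative humidity rh."""
    pv = rh * sat_pressure_psia(t_f)
    return 0.622 * pv / (p_psia - pv)

def enthalpy_btu_per_lb(t_f, rh):
    """Specific enthalpy of moist air, BTU per lb of dry air (0 F dry-air datum)."""
    w = humidity_ratio(t_f, rh)
    return 0.240 * t_f + w * (1061.0 + 0.444 * t_f)

print("MN,  86 F / 60% RH :", round(enthalpy_btu_per_lb(86.0, 0.60), 1), "BTU/lb dry air")
print("PHX, 110 F / 10% RH:", round(enthalpy_btu_per_lb(110.0, 0.10), 1), "BTU/lb dry air")
# The cooler but more humid Minnesota air carries more energy per pound of dry
# air than the hotter Phoenix air: dry-bulb temperature alone is not heat content.
```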

Steven Mosher
March 4, 2012 1:41 pm

As Nick notes, this is a strawman argument.
Nobody who calculates a global temperature INDEX believes that it is an “average”. Here is a way to think about it that I have found helpful. It’s an estimate of the unobserved temperature at other locations. We collect samples from 7000 locations over land. We average them using a variety of methods. The answer for stations over land is something on the order of 9C (let’s say). Now, what does that mean, what does that represent? It represents our best estimate (minimizes the error) of temperatures at unobserved locations. How well does that work? Well, we can test that: we now have temperatures from 36,000 locations. How good was our estimate? Pretty darn good. Is that number the “average”? No, technically speaking the average is not measurable. Hansen even recognizes that.
I tell you that it’s 20C at my house and 24C 10 miles away. Estimate the temperature 5 miles away. Now, you can argue that this estimate is not the “average”. You can holler that you don’t want to make this estimate. But if you do decide to make an estimate based only on the data you have, what is your best guess? 60C? -100C? If I guessed 22C and then looked and found that it was 22C, what would you conclude about my estimation procedure? Would you conclude that it worked?
Here is another way to think about it. Step outside. It’s 14C in 2012. We have evidence (say documentary evidence) that the LIA was colder than now. What’s that mean? That means that if you had to estimate the temperature in the same spot 300 years ago, you would estimate that it would be… that’s right… colder than it is now. Now, chance being a crazy thing, it might actually have been warmer in that exact spot, but your best estimate, one that minimizes the error, is that it was colder.
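A minimal sketch of the kind of estimate being described, using simple inverse-distance weighting between the two stated readings (the 20C/24C values and the distances come from the comment; the weighting scheme is only an illustration, not any particular group’s actual algorithm):

```python
def idw_estimate(stations, x_miles, power=2.0):
    """Inverse-distance-weighted temperature estimate at position x (1-D toy example).
    Note: would need a guard if x coincided exactly with a station position."""
    weights = [1.0 / abs(x_miles - xi) ** power for xi, _ in stations]
    return sum(w * t for w, (_, t) in zip(weights, stations)) / sum(weights)

stations = [(0.0, 20.0), (10.0, 24.0)]        # (miles, deg C), as in the comment
print(idw_estimate(stations, 5.0))            # 22.0, midway between the two readings
# The estimate minimizes expected error *given only these data*; it says nothing
# about how far the real temperature at mile 5 might sit from 22 C on any given day.
```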

March 4, 2012 1:51 pm

Quite apart from the obvious difficulties in finding the Earth’s “average temperature” at any given time, there is the recent, obvious, blatant corruption of and tampering with the temperature record by “Climate Scientists” in order to massage the results into the form that they desire. In the end we are left with a battle of lies, damned lies and statistics.

March 4, 2012 1:52 pm

Probably our best and most accurate measure of global climate change is the atmospheric concentration of CO2 (not temperature). The reported monthly averages represent background levels and do not include measured event spikes that could be anthropogenic. Global concentrations do not vary significantly with longitude and have a latitude-dependent seasonal variation. The seasonal variation is least around 15S and greatest above the Arctic Ocean (and tracks sea ice area well: max CO2 when ice is at its maximum and minimum when ice is at its minimum; both are responding to global temperature changes). Water vapor and clouds are controlling and distributing atmospheric CO2 as well as temperature. CO2 is a lagging measure and not a controlling force. My analysis of the quantitative anthropogenic contribution to global atmospheric concentrations indicates the lag time is around ten years. That’s probably how long it takes to cycle through the biosphere or the oceans. Click on my name for more details. Other lag times could be associated with other longer cycles such as the periodic upwelling of CO2-saturated deep ocean waters and the deep ocean conveyor belt.

Jimbo
March 4, 2012 1:53 pm

Can anyone know the average temperature last year 1 meter above the ocean surface?

Somebody
March 4, 2012 1:57 pm

I was in a particular place and I measured 12C. At 5 km away it was 25 C. 100 m away it was 22 C. This is a situation pretty often seen in mountains. Just move from a slope over the top on the other side, and see a very different temperature. Move downwards a little bit and see another big difference.
Averaging doesn’t work. Guessing doesn’t work. Interpolation doesn’t work. Reality works. And often shows values wildly different from the calculated ones obtained using unphysical means (that is, statistics and/or dumb interpolations). The ‘estimation’ is easy to falsify. Just pick some measurements outside of the dataset used for the temperature INDEX-GOD, and if the values are different from what the pseudoscience ‘predicts’, the theory is false and has to be thrown away. As Feynman put it, no matter how smart you are, no matter how beautiful your pseudoscience is, if it contradicts experiment, it is wrong. And in the GOD-INDEX case, it does.

k scott denison
March 4, 2012 2:04 pm

mosher
While your argument contains some sense, using only 7,000 measurements for the entire surface of the earth isn’t exactly enough to say we have a measurement every 10 miles, is it? I think your analogy is not only inaccurate but also disingenuous.
I note that you also don’t mention anything relative to UAH as perhaps the best data set we have at the moment, per Dr. Brown’s opinion.
Last, your reference to evidence of temperatures in the LIA being lower than today is also decidedly one-sided. Do we also not have evidence that temperatures in the MWP were higher than today? If so, I don’t see what point you are trying to make.

March 4, 2012 2:07 pm

Mr Mosher, how many of the 36,000 locations provide the air temperature above the oceans at a height of 1 meter above the water surface?

jorgekafkazar
March 4, 2012 2:12 pm

Malcolm Miller says: “Excellent outline of the ‘temperature’ problem. I always ask alarmists, ‘Where do you put the thermometer to measure the Earth’s temperature?’ They have no answer.”
I’ve thought of a place to stick the thermometer, but Al Gore refuses to cooperate.

March 4, 2012 2:14 pm

In my not so humble view, any unitary number that attempts to represent some kind of dynamic process is no better than any other kind of educated guess. That does not mean such numbers are not used and useful; it means we need to take great care that we fully understand the limitations of such a value and of our process of deriving it. All these things are highly relative: relative to our purpose, to our ability to collect data, to the representativeness of those collections, and so on. We need also to keep firmly in mind that any time we take real data and reduce it to anomalies we are making a judgement and producing metadata, not real data. For some purposes the metadata is acceptable; for others, not so much. Always our understanding and analysis must be tempered by a clear measure of the ±errors.
Now, my educated guesses for the distribution of some metal oxide in a defined orebody can be very good, extremely good. That is not because I am so smart but because I just happen to have full control over all the variables. This is something the student of a dynamic system can only dream about achieving.

Frank K.
March 4, 2012 2:18 pm

Nick Stokes:
“That is because they are the result of “following some fixed and consistent rule that goes from a set of data to a result”.”
Please state this rule for us.
“And the temporal variation of that process is much more meaningful than the notion of a global average temperature.”
Please discuss your reasoning for this conjecture.
Thanks.

Scarface
March 4, 2012 2:19 pm

Reading this makes me wonder how they would measure the temperature in a jungle, like the Amazon, or another big one. Is there a good and reliable network of weather stations to measure it?

March 4, 2012 2:20 pm

Frank K.,
I expect that Nick Stokes will back and fill…

Rosco
March 4, 2012 2:21 pm

When temperatures vary from minus 89 C, as claimed for the Russian site in Antarctica, to approximately 58 C during tropical desert summers, who really thinks the “average” has much meaning?
It is the obsession with “averaging” solar radiation down to one quarter over 24 hours that I feel is more concerning, and then using this result to tell us that without greenhouse gases the Earth would be minus 18 C.
Never mind that this result is the outgoing radiation result and hence has little to do with the maximum.
Never mind that this calculation requires about 1000 W/sq m incoming to work.
Never mind that 1000 W/sq m produces a result of about 87 C in Stefan-Boltzmann.
The whole thing has become silly – the Earth is the way it is – and that is because the atmosphere shields us from some pretty powerful solar radiation.
I say again – see the Moon with its maximum daytime temperatures.
As for the “heat hiding in the oceans” argument – why didn’t it do that before when there was some warming – why did it wait till about ’99/’00 to start this party trick??
Of course – silly me – it’s a surprise party and it’s gonna shout “surprise” when it jumps out of hiding and global warming comes roaring back.
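For reference, the figures being argued about fall straight out of the Stefan-Boltzmann law under the usual idealizations; a minimal sketch (perfect blackbody equilibrium, the standard textbook simplifications, purely illustrative):

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def equilibrium_temp_c(absorbed_w_m2, emissivity=1.0):
    """Blackbody equilibrium temperature (deg C) for a given absorbed flux (idealized)."""
    return (absorbed_w_m2 / (emissivity * SIGMA)) ** 0.25 - 273.15

# Solar constant ~1361 W/m^2 spread over the sphere (divide by 4), albedo ~0.3:
print(round(equilibrium_temp_c(1361.0 / 4.0 * 0.7), 1))   # ~ -18 C, the textbook figure
# Full overhead sun, ~1000 W/m^2 absorbed, no averaging over the sphere:
print(round(equilibrium_temp_c(1000.0), 1))               # ~ +91 C, near the ~87 C quoted above
```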

Kasuha
March 4, 2012 2:24 pm

I too used to think the calculation of a global average temperature is nonsense. Surprisingly, I came to the realization that there is quite a lot of sense to it once I learned what the true meaning of a temperature anomaly and of station adjustments is. Now, I have no comments on how exactly it is done in GISS or BEST. I am not going to judge if they are doing it right or not. And I must say that I know that doing it right is extremely difficult. But the important fact is that it definitely is possible to make sense of measurements at any single place if they’re done consistently over a long period of time … and it definitely is possible to put together such measurements done at many different places under different conditions and using different technology, and still separate important data from noise.
Of course, the only reason average temperature anomalies are calculated to thousandths of a degree is because we can. But I have learned that the uncertainty interval around this single precise number is surprisingly small, small enough for the natural variability to be not just visible but actually pretty clear even when this uncertainty is taken into account:
http://www.volny.cz/kasuha/temperatures/tlt_anom_histo.png
My personal opinion is, there is no problem with expressing the average to a thousandth of a degree as long as we keep in mind that even a change in the range of several tenths of a degree is no big deal.

CodeTech
March 4, 2012 2:27 pm

Funny, this is pretty much what I was pointing out a few days ago. If you think that you have a global average temperature, then you are delusional. If you further believe that your global average temperature has any meaning whatsoever (and I say this whenever the indexes go either SCARY UPWARD or ANOMALOUS DOWNWARD), then you are certifiable.
Stokes’ comment above demonstrates this sort of thinking. No, nick, the index is meaningless too. Yes, nick, people actually think it has meaning, because “climate scientists” tell them it does.
In fact, the myth or delusion or whatever of “global average temperature” is the Number One flaw in the entire ridiculous and discredited hypothesis of “global warming”, or “climate change”, or whatever misleading and inaccurate term the alarmists are using this week.

Louis Hissink
March 4, 2012 2:33 pm

It helps to realise that the average global temperature, aggregated as the mean of a group of stations in a defined geographical grid, is the temperature of a spherical plane defined by units of latitude and longitude. Problem is that a spherical plane is an artefact, quite apart from not having any mass, and hence it has no temperature. Classic case of geographers wandering into a physics lab and copying the mathematical process of calculating an average without actually measuring the physical objects… and then believing they have done basic science. Noooooo.

KR
March 4, 2012 2:33 pm

This is an appalling post.
Dr Brown, are you perhaps not familiar with the law of large numbers (http://en.wikipedia.org/wiki/Law_of_large_numbers)?
Over the course of many measurements the estimate of any random variable will converge on its true value. If there is a consistent bias, the convergence will be on the true value of the variable plus its bias. And if looking at anomalies, where the baseline is taken from your estimate, and especially where large scale coherence is seen in the data (http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.187.9839&rep=rep1&type=pdf – coherence of anomalies >50% up to 1200km between stations, as checked against thousands of site-pairs), any consistent bias will cancel out as well.
If you actually run the numbers, you get the same anomaly graph down to only 60-100 randomly chosen stations total, albeit with increasing noise with fewer stations.
Dr. Brown’s post is nothing but noise and arm-waving, devoid of useful content or numeracy.

March 4, 2012 2:34 pm

3) Outbound heat loss IS proportional to local temperature. Worse, it is proportional to the degree K raised to the 4th power. Total radiation loss WILL vary tremendously as temperature varies and (local) atmospheric conditions vary. At NO time of year at ANY location will any assumed “average” heat radiation loss be correct.
No, it’s not. That’s evident in the TOA IR spectroscopy that actually photographs it and graphs it out. Reasons even your (still generally reasonable) points are incorrect in detail include:
a) No part of the Earth’s outbound radiating system is a perfect blackbody.
b) No part of the Earth’s outbound radiating system is radiating at the same approximate imperfect blackbody “temperature” in all bands of the spectrum.
c) No part of the Earth’s outbound radiation fails to be modulated on many, many time scales. Remember, “outbound radiation” in total includes the effects of albedo and much more, and albedo at least is constantly changing with cloud cover and local humidity.
A more correct statement is that outbound heat loss is the result of an integral over a rather complicated spectrum containing lots of structure. In very, very approximate terms one can identify certain bands where the radiation is predominantly “directly” from the “surface” of the Earth — whatever that surface might be at that location — and has a spectrum understandable in terms of thermal radiation at an associated surface temperature. In other bands one can observe what appears very crudely to be thermal radiation coming “directly” from gases at or near the top of the troposphere, at a temperature that is reasonably associated with temperatures there. In other bands radiation is nonlinearly blocked in ways that cannot be associated with a thermal temperature at all.
To the extent that radiation in the e.g. water window or CO_2 bands does indeed seem to follow a BB radiation curve with a given temperature, the integrated power in radiation from that part of the band is proportional to T^4 — for that temperature. But since the overall radiated power comes from your choice of multiple bands at multiple temperatures or (better) from the overall integration of the power spectrum regardless of the presumed temperature of the sources or resonances that contribute, the overall variation with the local conditions isn’t as simple as T^4. Indeed, if one takes the top of troposphere as being roughly constant in temperature, the leading order behavior might even be a constant, as heat loss from CO_2 at the top of the troposphere might actually be largely independent of the particular temperature of the surface underneath and hence represent a more or less constant baseline power loss independent of surface temperature.
But I’m not asserting that this is the best way to view it — I haven’t seen enough of the TOA IR spectra to have a good feel for integrated outbound power either overall or in particular bands. The “right” thing to do is just make the measurements and do the integrals and then try to understand the results as they stand alone, not to try to make a theoretical pronouncement of some particular variation based on an idealized notion of blackbody temperature.
rgb
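To illustrate the point about integrating a structured spectrum rather than quoting a single T^4, here is a hedged sketch that integrates the Planck function over a crude two-band caricature of the TOA spectrum: a “window” band radiating near an assumed surface temperature and everything else radiating near an assumed upper-troposphere temperature. The band edges and both temperatures are invented for illustration only; this is not a radiative-transfer calculation.

```python
import math

H, C, KB, SIGMA = 6.626e-34, 2.998e8, 1.381e-23, 5.670e-8

def band_flux(t_k, nu1_cm, nu2_cm, n=2000):
    """Hemispheric blackbody flux (W/m^2) between wavenumbers nu1..nu2 (cm^-1)."""
    nu1, nu2 = nu1_cm * 100.0, nu2_cm * 100.0            # cm^-1 -> m^-1
    dnu = (nu2 - nu1) / n
    total = 0.0
    for i in range(n):
        nu = nu1 + (i + 0.5) * dnu
        total += 2.0 * H * C * C * nu ** 3 / math.expm1(H * C * nu / (KB * t_k)) * dnu
    return math.pi * total

T_SURF, T_TROP = 288.0, 220.0   # assumed surface / upper-troposphere temperatures, K

window = band_flux(T_SURF, 800.0, 1200.0)   # "window" band, radiating near T_SURF
blocked = band_flux(T_TROP, 1.0, 800.0) + band_flux(T_TROP, 1200.0, 3000.0)  # the rest, near T_TROP
print("two-band OLR caricature:", round(window + blocked, 1), "W/m^2")
print("sigma * T_surf^4       :", round(SIGMA * T_SURF ** 4, 1), "W/m^2")
# Only the window band tracks the surface temperature; the integrated total sits far
# below sigma*T_surf^4 and does not scale as any single T^4.
```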

Edim
March 4, 2012 2:43 pm

I agree with Mosher somewhat, but it should be called a global temperature INDEX and not global temperature, which is misleading. The global temperature indices shouldn’t be adjusted at all. Just take the best stations (no changes, as far as possible from any human influence…). Existing indices are all too “warm”, due to the local warming effects and various adjustments.

Werner Brozek
March 4, 2012 2:45 pm

I am curious as to why only UAH is mentioned and not RSS.
This is especially so since Dr. Spencer says:
“Progress continues on Version 6 of our global temperature dataset. You can anticipate a little cooler anomalies than recently reported, maybe by a few hundredths of a degree, due to a small warming drift we have identified in one of the satellites carrying the AMSU instruments.”

March 4, 2012 2:49 pm

Consider the implication of an obviously much greater variability of temperatures in each regional climate than is shown in the current thermometer and proxy data sets of the so-called climate parameter, annualized GMST. Given that much higher variability, the past 50 years of data do not provide a strong basis even to propose a hypothesis of comparatively significant AGW, much less cAGW or CAGW. The idea that the total Earth-atmosphere system is dominated, on decadal (short) time scales, by an implausible ‘climate’ parameter called annualized GMST responding to variations of just a single trace element in the atmosphere then becomes of minor importance in the balance of climate science.
John

Nick Stokes
March 4, 2012 2:52 pm

Frank K. says: March 4, 2012 at 2:18 pm
“Please state this rule for us.”

No, Frank, I want an answer to my question first. Who are these people who claim to calculate the Earth’s “global annualized temperature” to 0.05K? Seen a graph?
But OK, the rules are in Gistemp. Or even TempLS.

geography lady
March 4, 2012 3:02 pm

After spending a goodly part of my professional career, over 40 years, in air monitoring, which includes temperature and other instrument readings, this article expresses my thoughts precisely. Taking temperature readings and making them explain differences down to fractions of a degree, let alone with any real accuracy and precision, is totally absurd.

Jimbo
March 4, 2012 3:02 pm

O/T kinda
Someone does not like Hansen. Reads too much like speculation though.
Gleick, FBI, Hansen 1988, peer review
http://www.webcommentary.com/php/ShowArticle.php?id=osullivanj&date=120304

AFPhys
March 4, 2012 3:10 pm

Nick Stokes – If the reported surface temperature anomalies are so wonderful, precise, and accurate, please explain why every new iteration ends up adjusting the temperatures that had been reported to have been “average” a half century ago. It even appears they do it on the sly; for sure they give no fanfare to their adjustment of historical data. This does not happen in real science, but does in pseudo-science and amongst politicians. It is clear to everyone that the data services are making it up as they go. Changing history, handwaving arguments, obfuscation, exaggeration, collusion, “explaining away”, hiding, suppressing research, etc. have become the defining hallmarks of “climate science”. It would benefit you to do some soul searching in an attempt to understand why you are so gullible.

GaryM
March 4, 2012 3:13 pm

“This is a strawman argument. Who makes such a claim? Not GISS!. Nor anyone else that I can think of. Can anyone point to such a calc? What is the number?”
Semantics is the last refuge of a CAGW advocate.
From the IPCC AR4:
“Since IPCC’s first report in 1990, assessed projections have suggested global average temperature increases between about 0.15°C and 0.3°C per decade for 1990 to 2005. This can now be compared with observed values of about 0.2°C per decade, strengthening confidence in near-term projections.”
http://www.ipcc.ch/publications_and_data/ar4/wg1/en/spmsspm-projections-of.html
It has been the arrogant assertion of the CAGW community that they can discern the present global average temperature to within tenths of a degree, and make projections with such accuracy. Relabeling “global average temperature” now to some other amorphous term (a la “global warming” to “climate change”) is to be expected, but changes nothing.
Climate scientists don’t know what they think they know. And we would be fools to reorder our economy based on their hubristic proclamations.

Curiousgeorge
March 4, 2012 3:18 pm

I’ll say this. While the entire concept is intellectually interesting to some small number of people, the vast majority of Earth’s inhabitants (including animals and plants) couldn’t care less. We (and I include my plant and animal co-habitants) only care about our immediate environment. Which is as it should be, and is why there are different species around the planet. If everything had to exist in some constant ‘average’ state there would be no state to exist in.

March 4, 2012 3:24 pm

GaryM says: March 4, 2012 at 3:13 pm
“Semantics is the last refuge of a CAGW advocate.”

It’s not semantics. What’s the number? A real question. You won’t find such a number, to 0.05K, in AR4. For all the reasons given in this RGB rant but covered much better in the GISS note.

ferd berple
March 4, 2012 3:25 pm

Frank K. says:
March 4, 2012 at 2:18 pm
Nick Stokes:
“That is because they are the result of “following some fixed and consistent rule that goes from a set of data to a result”.”
Please state this rule for us.
The rule is as follows: When the data says what we expect it to say, we leave it alone. When it doesn’t, we adjust it.
Thus for example, 1934 was the hottest year on record in the US according to GISS, until Steve McIntyre made it public. At which time GISS adjusted 1934 downwards to match expectations that temperature had warmed since 1934.
Stalin rewrote history in the USSR. GISS has done the same for the USA.

KR
March 4, 2012 3:25 pm

Nick Stokes – The GISS temperature anomaly, as an example, is estimated (http://data.giss.nasa.gov/gistemp/2011/) to have a two-standard-deviation uncertainty on the order of 0.05°C.
Not the global temperature, mind you, but the temperature anomaly, how it’s changed over the years. And the magnitude of temperature change over the last 100 years is in good agreement between HadCRUT, GISS, NCDC, and so on, despite different coverage, different station sets, etc.
The GISS standard deviation is quite well supported by the number of measurements, as per the law of large numbers (http://en.wikipedia.org/wiki/Law_of_large_numbers). Individual readings have a much higher variance, but when you have thousands upon thousands of readings the error variance drops accordingly. Assertions to the contrary indicate that (a) you haven’t run the numbers, and (b) you are falling prey to a Common Sense fallacy (http://www.don-lindsay-archive.org/skeptic/arguments.html#commonsense).
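For what it is worth, the sqrt(N) behaviour being invoked here, and the caveat stressed in the head post, are both easy to sketch: averaging many stations shrinks the independent random errors but does nothing to an error the stations share (all numbers below are toy values for illustration only).

```python
import random

random.seed(1)

def mean_anomaly(n_stations, random_err=0.5, shared_bias=0.0, true_anom=0.3):
    """Mean of n simulated station anomalies: independent noise plus a common bias (toy model)."""
    readings = [true_anom + shared_bias + random.gauss(0.0, random_err) for _ in range(n_stations)]
    return sum(readings) / n_stations

for n in (60, 1500, 7000):
    print(f"{n:5d} stations, no shared bias: {mean_anomaly(n):6.3f}"
          f"   (standard error ~ {0.5 / n ** 0.5:.3f})")
print(f" 7000 stations, +0.2 shared bias: {mean_anomaly(7000, shared_bias=0.2):6.3f}")
# Independent errors do average down like 1/sqrt(N); an error common to the stations
# (siting, instrument changes, UHI, ...) survives no matter how many stations there are.
```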

Physics Major
March 4, 2012 3:26 pm

Stokes
The little Climate Widget in the sidebar shows a February global temperature anomaly of -0.12 K. This would imply an accuracy somewhat better than 0.05 K.

Curiousgeorge
March 4, 2012 3:27 pm

Furthermore, an ‘average’ (or mean) is a mathematical concept, not a state of nature; and is meaningless even at that without supporting concepts such as median, mode, SD (distribution shape and assumptions), and so forth. It’s pointless to even discuss it.

u.k.(us)
March 4, 2012 3:34 pm

“Dr. Brown thinks that this is a very nice piece of work, and is precisely the reason that he said that anybody who claims to know the annualized average temperature of the Earth, or the Ocean, to 0.05 K is, as the saying goes, full of [SNIP] up to their eyebrows.”
=============
Does this mean it is ok to unleash my full vocabulary, when commenting.
I’ve been holding back.
[REPLY: No. This is a family blog. -REP]

Kev-in-UK
March 4, 2012 3:35 pm

Mosher
Hi Steve,
Nobody is saying some form of mean or global temp anomaly assessment/trend isn’t useful or indeed valuable in the overall scheme of things. But there is no reasonable way to infer accuracy (statistically based or otherwise) or a degree of absoluteness/relativity given such a limited amount of spatial and temporal data. If you then start chopping, editing and changing the data – you are fecking with your dataset and moving the relative goal posts – statistical or otherwise!
The day I see a media headline that says something like ‘Temp records INFER last week/month/year/decade as PERHAPS being the warmest on record’ is the day that I will bother to read or listen to the appended story!
Here’s a simple list of questions for you or anyone else to fill in……
1) what is the surface area of a thermometer’s measuring tip?
2) what would be a reasonable figure for the volume of air affecting the recorded measurement of that thermometer? Shall we say 1 cubic metre? 10 cubic metres? – ok – how about a hundred cubic metres? (for the sake of simplicity, we will assume all the atmosphere is indeed static, rather than the fact that some cold or warm air may pass from one thermometer to another!! LOL)
3) Multiply that volume by the number of thermometers actually recorded for any given day. (7000?)
4) Estimate the volume of the earth’s lower atmosphere – pick whichever ‘layer’ you fancy – even if it’s just the first 10 metres of the atmosphere! It’s a simple equation – the volume of the outer boundary sphere minus the volume of the inner boundary sphere.
5) Now report the results of your thermometric volume measurements as a percentage of the measured subject’s actual volume! And the answer is…………….
I’m not gonna bother getting my calculator out but I’ll bet a case of best booze that it’s a very flipping small percentage………….
Now if someone can explain to me how the reported global temp anomaly is valid science, especially after statistical and judgemental adjustments – I’d be grateful to hear it………
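For what it’s worth, here is a rough back-of-the-envelope version of that sum in Python – the 100 cubic metres per thermometer and the 7000 thermometers are just the guesses from the list above, so treat the output as illustrative only:

import math

R_earth = 6.371e6            # mean Earth radius, metres
layer_height = 10.0          # the first 10 m of atmosphere, as in step 4
n_thermometers = 7000        # guess from step 3
vol_per_thermometer = 100.0  # cubic metres of air per thermometer, guess from step 2

# volume of the 10 m shell: outer boundary sphere minus inner boundary sphere
shell_volume = (4.0 / 3.0) * math.pi * ((R_earth + layer_height) ** 3 - R_earth ** 3)

sampled_volume = n_thermometers * vol_per_thermometer
fraction = sampled_volume / shell_volume
print("shell volume     ~ %.3e m^3" % shell_volume)
print("sampled volume   ~ %.3e m^3" % sampled_volume)
print("fraction sampled ~ %.1e (about %.1e percent)" % (fraction, fraction * 100))

It works out to something on the order of one part in ten billion of even that thin 10 m shell, which rather makes the point.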

Latitude
March 4, 2012 3:36 pm

If I took a temp measurement at my house and it was 20C and 10 miles away it was 24C…
…and I guessed that 5 miles away it was 22C….and it was
….it would still be crap
because it could have been anything
You can’t guess Arctic temps where you have no idea what it is…and almost all of the so called warming is in the Arctic…..where it’s frozen solid right now

Avfuktare vind
March 4, 2012 3:38 pm

I think that we have much more accurate data than proxies from tree rings. We have all the historical data that tells us how people used to live, at what heights, wearing what clothes, etc. We know about frozen or open rivers and oceans, etc. From that we can conclude that the world was much warmer in the MWP and much colder in the LIA. I would also say that such records are probably good to much better than 3 K.

Dolphinhead
March 4, 2012 4:00 pm

Dr Brown
can you comment on the statement by Max Hugoson wrt RH. This is something that has always concerned me. Is it right that the dry air temperature of the arctic is averaged with the very humid air of the tropics as if they were apples and apples? Does it not take a lot more energy to change the temperature of humid air than dry air?

k scott denison
March 4, 2012 4:02 pm

KR says:
March 4, 2012 at 2:33 pm
This is an appalling post.
Dr Brown, are you perhaps not familiar with the law of large numbers (http://en.wikipedia.org/wiki/Law_of_large_numbers)?
Over the course of many measurements the estimate of any random variable will converge on it’s true value.
=========================
Um, KR, those 7,000 thermometers are, at any instant in time, measuring 7,000 DIFFERENT variables, that is, the temperature, at that instant in time, of 7,000 unique geographic locations. How, exactly, does the law of large numbers apply?
HINT: it doesn’t.

DirkH
March 4, 2012 4:05 pm

Nick Stokes says:
March 4, 2012 at 2:52 pm
“Frank K. says: March 4, 2012 at 2:18 pm
“Please state this rule for us.”
No, Frank, I want an answer to my question first. Who are these people who claim to calculate the Earth’s “global annualized temperature” to 0.05K? Seen a graph?”
The answer is that you have erected a strawman yourself. Read the sentence:
“anybody who claims to know the annualized average temperature of the Earth, or the Ocean, to 0.05 K is, as the saying goes, full of …”
Did Dr. Brown bash James Hansen, or any other GISS employee? No. Can you calm down now?
I am, BTW, very satisfied that you agree with Dr. Brown.

DirkH
March 4, 2012 4:11 pm

KR says:
March 4, 2012 at 3:25 pm
“Individual readings have a much higher variance, but when you have thousands upon thousands of readings the error variance drops accordingly.”
They have 1,500 readings (since the big Dying Of The Thermometers) and the error goes down with the square root of that, BUT that assumes that they culled in a non-systematic way.
Which they haven’t:
Aussie thermometers march north, the rest of the world marches south in GHCN :
http://chiefio.wordpress.com/2009/10/23/gistemp-aussy-fair-go-and-far-gone/

Latitude
March 4, 2012 4:11 pm

Well…..who ya gonna believe
“Based on readings from more than 30,000 measuring stations, the data was issued last week without fanfare by the Met Office and the University of East Anglia Climatic Research Unit. It confirms that the rising trend in world temperatures ended in 1997.”
and it was Gleick that declared “the science is settled”…..
…you see where that got him

Robert Brown
March 4, 2012 4:13 pm

Curses, foiled again.. Continuing….
underestimates the UHI effect by 0.3K. That would make the two curves agree quite nicely, cutting GISS back to a rise of 0.3 K over the same time frame.
Of course, that would provide one with no evidence of a catastrophe, and not much evidence of warming, especially if one allows for any modulation of global temperatures due to e.g. solar magnetic modulation of GCRs and consequent variations in albedo. Indeed, variation of albedo then becomes almost 100% of the actual driver of T_{LT}. Which makes sense, given a saturated CO_2 based GHE and moderate negative feedbacks from the water cycle.
Sooner or later, though, GISS is going to have to face UAH LTT, and deal with the growing discrepancy. And it is pretty obvious which one will win. They are NASA’s own satellites, after all.
rgb

Allen63
March 4, 2012 4:24 pm

A timely article. I agree with the thrust.
“Timely” because I’ve just re-started “from the most basic concepts” myself in attempting to accurately define the true error bars around the Global Surface Temperature Anomalies. My gut feeling is that the error bars are too large to support theories regarding AGW. But, gut isn’t science.
Like the author, I am going beyond “ordinary statistics” (which I think are not wholly applicable as used) and trying to consider “literally every physical thing” that might impact the final accuracy of a calculated Anomaly. So, thanks for this post.

Robert Brown
March 4, 2012 4:25 pm

Damn, I’m going to just give up. Mousepad disease. Looks like I’ve completely lost two hourlong posts.
Grrr.
rgb

Steve from Rockwood
March 4, 2012 4:28 pm

KR says:
March 4, 2012 at 2:33 pm

This is an appalling post.
Dr Brown, are you perhaps not familiar with the law of large numbers (http://en.wikipedia.org/wiki/Law_of_large_numbers)?

KR, I cannot believe that someone could be so stupid as to quote the law of large numbers within the context of global temperatures. Do you know the difference between random and systematic error? Have you ever measured anything? The problem is that any sampling theory falls apart when you start working with such heavily biased, manipulated and badly measured data as global temperatures.
You owe Dr. Brown an apology.

If you actually run the numbers, you get the same anomaly graph down to only 60-100 randomly chosen stations total, albeit with increasing noise with fewer stations.

If this were true we wouldn’t see so much massaging of the temperature record. Unless you’re suggesting a margin of error of +/- 5 degrees. I spent 4 days at a small airport in Northern Ontario that has a weather station. The guy who plows the snow parks his huge snow plow about 5 m from the weather station while it warms up. I’m just wondering if that station points to warmer winters since the airport built the new garage so close to the weather station (and paved the whole area). But I didn’t see that effect discussed in your ridiculous link to the law of large numbers.

1DandyTroll
March 4, 2012 4:32 pm

The average temperature for our lonely planet is about the same as it has always been; the rest is a mere figment of creative statistics, and tremendously creatively manhandled statistics at that.
To dive into the spaghetti logic that is alarmist climatological logic: if they can get the number of polar bears to be twenty to twenty-five thousand, every year since the early 1970s, from less than half of the half they think they knew something about – or, stated differently, from a mere handful of beers, and bears, and a sh*t load of helicopter fuel – what’s to say they don’t treat temperature data the same?
But one data point is just half of a binary set. So let’s further the thing with data from the other pole. How many penguins are there in the world? They’re as endangered, and as accounted for, as the beer bottles in the polar region, right?
Is it WikiAnswers’ reported low of 20 million, or SeaWorld’s high of 70 million birds?
The accuracy of the statistics is as bold as eight million three hundred and thirty-eight thousand seven hundred and eighty individuals of one species, to between 1,540 and 1,855 times two for another. Essentially, nobody could be bothered to count fewer than 2,000 pairs of individuals, but somebody had the time to, apparently, do some pretty “accurate” extrapolation equations.
When it comes to temperatures, as Mr Watts has been kind enough to show the world, nobody seems to have the time to properly and accurately maintain the stations or to properly and accurately observe them; but when it comes to global temperature, for which we can thank the likes of NASA/GISS, HadCRU, WMO, and ultimately the IPCC, too many seem to have too much time to extrapolate between too many fears (of too few or too many, or too low or too high).
According to statistics, the sixty-eight million one hundred and sixty-three thousand four hundred and ninety, plus, of the global populace of penguins can’t be wrong: the Antarctic truly, utterly, and completely unequivocally blows, but probably blows less than (the alarmists’ precautionary principle applied) the hordes of scientific-looking, but obviously evil-looking, humanoids infesting their frozen beach fronts.

Latitude
March 4, 2012 4:39 pm

Robert Brown says:
March 4, 2012 at 4:25 pm
Damn, I’m going to just give up. Mousepad disease. Looks like I’ve completely lost two hourlong posts.
=======================
Alright darn it……I’ve been waiting on you to reply too!….LOL
Force yourself to get into the habit of word…..copy and paste……

March 4, 2012 4:41 pm

KR says: March 4, 2012 at 3:25 pm
“The GISS standard deviation is quite well supported by the number of measurements, as per the law of large numbers ..”
Yes, I’m quite happy with GISS’s handling of anomalies. But this post says:
“anybody who claims to know the annualized average temperature of the Earth, or the Ocean, to 0.05 K is, as the saying goes, full of shit up to their eyebrows.”
And yes, that’s true. But who does? I’m not hearing. What’s the number?
Here’s what GISS says:
“For the global mean, the most trusted models produce a value of roughly 14°C, i.e. 57.2°F, but it may easily be anywhere between 56 and 58°F and regionally, let alone locally, the situation is even worse.”
Well, that’s a number. But it doesn’t sound like a claim of 0.05K accuracy.

Steve from Rockwood
March 4, 2012 4:44 pm

DirkH says:
March 4, 2012 at 4:11 pm

KR says:
March 4, 2012 at 3:25 pm
“Individual readings have a much higher variance, but when you have thousands upon thousands of readings the error variance drops accordingly.”
They have 1,500 readings (since the big Dying Of The Thermometers) and the error goes down with the square root of that, BUT that assumes that they culled in a non-systematic way.
Which they haven’t:
Aussie thermometers march north, the rest of the world marches south in GHCN :
http://chiefio.wordpress.com/2009/10/23/gistemp-aussy-fair-go-and-far-gone/

Only random error goes down with the square root of the number of readings. Systematic error (which tends to be much higher) only drops to its DC bias value (which can be substantial). The systematic errors themselves would have to be random from station to station for them to be likewise reduced by the large number of sampling points. Good luck with that.
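A two-line sketch (synthetic numbers, just to show the distinction) makes this concrete: no matter how many readings you average, a constant bias survives untouched while only the random scatter shrinks.

import numpy as np

rng = np.random.default_rng(1)
n = 100000
true_value = 10.0
bias = 0.3       # an assumed systematic (DC) offset shared by every reading
readings = true_value + bias + rng.normal(0.0, 1.0, size=n)

# The mean converges to true_value + bias, not to true_value;
# only the random part shrinks like 1/sqrt(n).
print("mean of readings:", round(readings.mean(), 3))
print("standard error:  ", round(readings.std() / np.sqrt(n), 4))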
I recall a large diameter hole being drilled at a mine from surface to an underground stope from which they hoped to pump concentrate waste (with concrete, to fill the empty stope and keep the concentrate out of the tailings). Turns out the hole was about 125 m away from its desired target, even though it was accurately surveyed repeatedly with a north-seeking gyro survey probe that boasted an uncertainty of less than 2.5 m over that distance. How could such an accurate instrument that records almost continuously and produces tens of thousands of measurement points be out by so much? You won’t find the answer in the law of large numbers. Possibly in the law of common screw-ups, though I don’t see that in Wikipedia.

Douglas Hoyt
March 4, 2012 4:57 pm

There is insufficient areal coverage before 1957 to calculate the mean temperature for the Southern Hemisphere. We know this because if you calculate temperatures in degrees Kelvin, rather than using anomalies, you will not get a temperature anywhere near 288 K.
From the above, it follows that global temperature changes before 1957 (the IGY year) are not known with any reliability.
Northern Hemisphere temperatures can be calculated back to about 1900. Before that year, there is insufficient areal coverage to deduce how climate was varying.
An honest appraisal of climate change would use absolute temperatures rather than anomalies, which are very deceiving. In fact, using anomalies should be avoided altogether.

KR
March 4, 2012 4:59 pm

Nick Stokes – That’s why I said anomaly
Dr. Brown conflates absolute temperatures (which have the uncertainties you note) with anomalies (which are supported by a very large data set, and have a 2-SD accuracy of 0.05°C). Which is quite the strawman argument, a bait and switch.
As I noted above, if there is a consistent bias, the temperature estimation will have a constant offset of about that bias – and for anomalies that bias will cancel out to zero, as it will be in both the baseline and the time-point of interest. If the bias is consistently changing over time, it will show up in the anomalies – but the only real candidate for that is the urban heat island effect, and as the BEST data shows (along with any number of other analyses, including Fall and Watts 2009), there is no such overall effect on temperature averages.
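A small sketch of what I mean, with toy numbers: a constant station bias drops out of the anomaly entirely, while a bias that drifts over time does not.

import numpy as np

years = np.arange(1950, 2011)
true_temp = 14.0 + 0.01 * (years - 1950)      # an assumed underlying warming
constant_bias = 0.8                           # e.g. a permanently badly sited station
drifting_bias = 0.01 * (years - 1950)         # a bias that grows over time

baseline = slice(0, 30)                       # use 1950-1979 as the anomaly baseline

for name, bias in (("constant bias", constant_bias), ("drifting bias", drifting_bias)):
    measured = true_temp + bias
    anomaly = measured - measured[baseline].mean()
    true_anomaly = true_temp - true_temp[baseline].mean()
    print(name, "-> worst anomaly error:", round(abs(anomaly - true_anomaly).max(), 3), "C")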

A. Scott
March 4, 2012 4:59 pm

[C]alling it settled science is obviously a political statement, not a scientific one. A good scientist would, I truly believe, call this unsettled science, science that is understood far less than physics, chemistry, even biology. It is a place for utter honesty, not egregious claims of impossibly accurate knowledge.
Our knowledge of global average temperatures [is] largely anecdotal, with uncertainties that are far larger than the observed variation in the instrumental era and larger still than the reliable instrumental era (33 year baseline).
It is, however, high time to admit the uncertainties and get the damn politics out of the science. Global climate is not a “cause”! It is the object of scientific study. For the conclusions of that science to be worth anything at all, they have to be brutally honest — honest in a way that is utterly stripped of bias and that acknowledges to a fault our own ignorance and the difficulty of the problem.

Excellent points.
How can the science be “settled” when there is NO reliability in the data?
When the data is continuously manipulated (always creating increasingly warmer averages)?
We have 30 or so years of somewhat decent instrumental record, but even that is plagued with problems, most importantly a serious lack of global coverage. We also have several instrumental records going back to the 1600’s. And then various “reconstructed” temperature records derived from proxies – each localized, and none with a high reliability over shorter time frames.
Many, such as tree rings – the heart of current CAGW “science” – are increasingly proven to be unreliable; having been cherry-picked and constantly manipulated for desired results – and when they failed to accurately represent the current instrumental record, simply discarded in favor of current data while being kept for the historical record.
“Settled” … not remotely.

RACookPE1978
Editor
March 4, 2012 5:06 pm

Robert Brown says:
March 4, 2012 at 4:25 pm
Damn, I’m going to just give up. Mousepad disease. Looks like I’ve completely lost two hourlong posts.

To minimize carbon-based computer keyboard interface errors of large time durations and many characters, may I recommend you work “off-screen” for any answers greater than 45 or 50 lines.
Write your responses at length in MS Word (or even notepad) – as long as you can spell check conveniently.
Regularly save them via that process; then, at the end of your patience – or when you’re through (whichever comes first) “select all” and “paste” (the words) over here in the WordPress dialog box.

RockyRoad
March 4, 2012 5:10 pm

I suspect that temperatures over the earth’s surface have a lognormal distribution–most fall within a fairly narrow range with extremes at both ends caused by elevation or latitude. The cumulative average of a quantity that provides a lognormal data set upon sampling is not the average of the data points. I repeat–NOT the average. Why? The central limit theorem (and how it applies to a normal distribution, but not a lognormal distribution).
So regardless of the “confidence” based on the “law of large numbers” (seen used above by several posters to erroneously justify the status quo), it is equally erroneous to believe that it applies to “average global temperature”. What would more closely (and I emphasize the term “closely”) approximate the true average would be to first properly interpolate temperature from data points to volumes of air (and no, Steve Mosher–just because your “guess” matches an actual data point does not corroborate your “guesstimation”). Then these comparable volumes could be quantitatively averaged to get a “close average”.
It would require geostatistics to make this all possible, but an estimation variance (the precision of the estimate) could be obtained in the process (which is not to be confused with the so-called precision of the temperature data points, although that would be of interest to compare to the “nugget effect” on the variograms).
(All this I explained earlier on the discussion of the Argos data set Willis brought to our attention.)
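A crude toy illustration (nothing like full geostatistics, and entirely made-up numbers) of why the raw station average and a properly weighted spatial average need not agree when the stations are unevenly distributed:

import numpy as np

rng = np.random.default_rng(2)

# Toy planet: temperature falls off with |latitude|; stations cluster in mid-latitudes
station_lats = np.concatenate([rng.uniform(30, 60, 800),     # heavily sampled band
                               rng.uniform(-90, 90, 200)])   # sparse everywhere else
station_temps = 30.0 - 0.5 * np.abs(station_lats)            # the assumed true field

naive_mean = station_temps.mean()

# Area-weighted estimate on a latitude grid (weight proportional to cos(latitude))
lat_grid = np.linspace(-89.5, 89.5, 180)
grid_temps = 30.0 - 0.5 * np.abs(lat_grid)
weights = np.cos(np.radians(lat_grid))
weighted_mean = np.average(grid_temps, weights=weights)

print("naive station mean :", round(naive_mean, 2), "C")
print("area-weighted mean :", round(weighted_mean, 2), "C")

The two differ by several degrees even though both come from the same noiseless “true” field; the gap is entirely down to where the samples sit and how they are weighted.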

Mindbuilder
March 4, 2012 5:37 pm

My initial guess at how to define global temperature would be the temperature if you took every molecule in the volume of air between 1.5 and 2.5 meters above the surface, and instantly transported all those molecules to random locations within a cube of equal volume without changing their speed, energy, etc. Of course you can’t measure such a thing exactly, but I bet you could make a pretty close estimate. Probably well under 1K.

jonathan frodsham
March 4, 2012 5:38 pm

Yes, Yes and Yes. The very idea that M. Mann could accurately measure the average temperature of the earth for the last 1000 years is preposterous beyond belief. What makes it even worse is the fact that so-called scientists actually continue to defend the badly broken stick. Mann could not even measure the average temperature of his back yard for a year; no doubt he would cut down a tree there and proclaim to the world that he has discovered the secret of his back yard temperature, and get a Nobel Peace Prize for it! The mind boggles at how this scam has been going on for so long.
Is it because the majority of the population are really that stupid? I think that the answer is Yes, Yes and Yes.

GaryM
March 4, 2012 5:40 pm

Nick Stokes says:
March 4, 2012 at 3:24 pm
“…You won’t find such a number, to 0.05K, in AR4.”
From the AR4 quote in my comment:
“This can now be compared with observed values of about 0.2°C per decade, strengthening confidence in near-term projections.”
Now I am not a climate scientist, but as far as my limited ability takes me, a change in global average temperature of “about 0.2°C per decade” calculates to “about 0.02°C per year.” No, they don’t use the number 0.05 in that passage, but their claim of accuracy seems to me to be, if anything, somewhat greater.

kakatoa
March 4, 2012 5:53 pm

I find the Global Average Temperature metric to be insufficient for any personal decision making, even if it were accurate. I, and I hope my local and regional leaders, need a local metric for it to be of any value – say Heating and Cooling Degree Hours and Days when it comes down to temperature. For forecasts and projections of future climate in my area, I am interested in these two metrics as well as the number of hours below 32F in the winter.

Jeff Alberts
March 4, 2012 6:04 pm

Wow. My comment provoked that response and this post? I’m honored!

AlexS
March 4, 2012 6:06 pm

I am surprised by those that criticize the piece.
They are the ones erecting a strawman.
They assume that the earth is uniform in its temperature path, be it getting warmer, colder, or staying stable. It isn’t: in some places it is getting colder and in others it is getting hotter.
Let’s suppose that an area somewhere is getting hotter, but is badly sampled – and bad samples are mostly what we have – then we miss that increase in temperature. The same goes in reverse if an area is getting colder but there are no stations.
And this doesn’t even get into altitude, the timing of the temperature readings, and many other variables.

Brian H
March 4, 2012 6:08 pm

Climate Science is deep in “intensive variable denial”.
Take 20 different materials, in samples of various sizes, and determine the density of each. Average the results. Now, determine the average density of the ensemble as a unit. Derive an adjustment coefficient to make them match.
Repeat, 20X, with different sizes of all the samples.
Now, examine the 21 different coefficients, and the averages, and determine what the true average density is.
When complete, and when the answer can be proven correct and the algorithm robust, you can graduate to temperatures. But not before.
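In code, the exercise looks something like this (twenty made-up materials, arbitrary sample sizes):

import numpy as np

rng = np.random.default_rng(3)
densities = rng.uniform(0.5, 20.0, 20)    # g/cm^3 for 20 made-up materials
volumes = rng.uniform(1.0, 100.0, 20)     # cm^3, arbitrary sample sizes

mean_of_densities = densities.mean()                            # naive average of an intensive variable
ensemble_density = (densities * volumes).sum() / volumes.sum()  # total mass / total volume

print("average of the 20 densities:", round(mean_of_densities, 3))
print("density of the ensemble    :", round(ensemble_density, 3))
print("'adjustment coefficient'   :", round(ensemble_density / mean_of_densities, 3))

Re-run it with a different set of sample sizes and the coefficient changes every time, which is exactly the trouble with averaging an intensive variable.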

Robert Brown
March 4, 2012 6:09 pm

And yes, that’s true. But who does? I’m not hearing. What’s the number?
Here’s what GISS says:
“For the global mean, the most trusted models produce a value of roughly 14°C, i.e. 57.2°F, but it may easily be anywhere between 56 and 58°F and regionally, let alone locally, the situation is even worse.”
Well, that’s a number. But it doesn’t sound like a claim of 0.05K accuracy.

OK, so I’ll address this one thing again, even though it was part of a really detailed reply with lots of stuff that got turned into randomness by Ifni a bit earlier.
Visit here:
http://data.giss.nasa.gov/gistemp/graphs_v3/
Look at the first graph.
Look at the right hand side of the first graph.
Look at the error bar (in green) at the right hand side of the first graph.
Note its magnitude — 0.05 K.
Now look at the first graph again.
Look at the left hand side of the first graph.
Look around 1885, where there is another green error bar.
Note its magnitude — 0.1 K.
Note the website. These are the public graphs for GISS for global average temperature, and this is just such a graph, is it not? If I were John Q. Public I might never read through all of the details of just how this graph was computed — I would just look at it, look at those teensy error bars, and go “Gosh damn, CAGW is real. Look, the world has warmed 0.6K in just thirty years! We’ll all bake to death by 2100, especially if this rate of increase gets even bigger because of all of the enormous feedbacks they tell me are completely true, proven, settled science!”
Consider: In 1885, perhaps a dozen people with a scientific education had actually landed on the entire continent of Antarctica. The Pacific Ocean was mostly Aqua Incognita, visited occasionally by whalers, with a handful of missionaries and plantation owners on a few of its islands. Siberia, China, Mongolia, Tibet were nearly devoid of thermometers, certainly compared to today. The Amazon and much of central Africa was dense jungle, and Burton was still being lionized for actually having made it to the headwaters of the Nile. Australia and the Western US were still largely a frontier. Many of the ships that plied the waters of the world still did so under sail. Thermometers of the era were generally no more accurate than 1 K, and were sampled enormously erratically compared to 1985. Yet the precision acknowledged in the temperature estimate for 1885 is only twice that for 1985!
Pardon me, it takes me a minute or two to stop laughing. I mean seriously, do they take us all for idiots?
Apparently so.
So when you assert that 0.05 is not indeed the precision that GISS claims for its contemporary global average temperatures, that turns out not to be the case. It is indeed the precision they so claim. 0.1 is the precision implied on the Wikipedia page for the Earth, which lists its temperature as 287.2 K — we do try to teach our students not to put that decimal down unless you mean it, but of course nobody listens. GISS claims a precision of 0.1 for its global average temperature estimates from 130 years ago! It also claims, by virtue of putting these temperature estimates on a single curve, that these are apples to apples numbers, that the numbers derived from 1880 data mean exactly the same thing as those derived from 2012 data, that the temperatures in question aren’t just precise in a defensible way given certain presumptions about the data but that they are accurate in similar ways across that entire range.
Excuse me, I have to wipe my eyes again. Oh, my aching sides.
What amazes me is that no one actually calls them on this. In my opinion, without having the slightest regard for their methodology, their error estimates for 1885 are absurd. I don’t care, in other words, how the number is derived. The precision claimed fails the mere test of common sense — it isn’t true unless an entire textbook’s worth of assumptions are all true, none of them directly verifiable, all of them begging the question concerning the very phenomenon that is being tested against the numbers.
As I said — and it appears that we might even agree — the correct ballpark for the accuracy of GISS would be 0.5-1 K for its contemporary numbers, and I’d personally guesstimate at least 2-3 times that for numbers out of the nineteenth century, wouldn’t you? As for precision — if there isn’t at least an order of magnitude worse precision for global temperature estimates from the nineteenth century, there is something seriously wrong, quite independent of what you think the sources of the errors might be at the two ends of the scale.
After all, in contemporary weather measurement, we use high precision, precisely calibrated thermometers that typically record temperature with a very high temporal granularity throughout the day and night, in carefully selected locations that still demonstrably suck as Anthony himself and many others have faithfully documented. In the 1880’s the instruments in common use were one to two orders of magnitude less precise, were even more indifferently located, and we have no idea how most of them were sampled by the bored civil servants that took and recorded their temperatures.
I was not attacking GISS per se, mind you, only pointing out that its claimed precision is, in my opinion, absurd on the face of it, independent of methodology. Nor is its accuracy defensible. Nor is its generality across the range of temperatures at hand, not really. Missing Antarctica? We don’t even do that well with Antarctica today, not really.
An honest presentation of errors in GISS would make its error bars far, far larger for the far end of its curves, don’t you think, if you just apply a bit of common sense?
No, now I’ll attack GISS, and indicate why satellite data is so important. Consider the UAH lower troposphere temperature. It is known, both accurately and precisely, apples to apples, from roughly 1979 to the present. It shows almost no statistically significant increase in temperature over that entire period. The temperature anomaly in 1980 was around -0.1 K. The temperature anomaly last month was around -0.1 K. Sure, that cherry-picks end points and a linear fit is probably closer to 0.3 K, but the R value of that fit (without computing it, and assuming point errors on the order of the variance in the data) is going to suck, because the linear trend isn’t very strong compared to no trend at all, noting that the current anomaly on a 31 year average is negative.
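To see how hard an endpoint can yank a least-squares fit around, here is a toy sketch (synthetic monthly anomalies with an assumed 0.1 K/decade underlying trend, not the actual UAH series):

import numpy as np

rng = np.random.default_rng(4)
months = np.arange(396)                        # about 33 years of monthly anomalies
base = (0.10 / 120.0) * months                 # assumed 0.10 K/decade underlying trend
series = base + rng.normal(0.0, 0.15, months.size)

def decadal_trend(y):
    slope_per_month = np.polyfit(months, y, 1)[0]
    return slope_per_month * 120.0             # convert to K per decade

print("no excursion            :", round(decadal_trend(series), 3), "K/decade")

# a 0.5 K cold excursion lasting two years, placed near the end and then near the middle
for start, label in ((370, "cold spell at the end   "), (190, "cold spell in the middle")):
    y = series.copy()
    y[start:start + 24] -= 0.5
    print(label, ":", round(decadal_trend(y), 3), "K/decade")

An excursion parked at the end of the record moves the fitted trend by several hundredths of a K/decade; the same excursion in the middle barely touches it.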
That last fact says it all, compared to a purported ~0.6 K increase in GISS, supposedly precise to 0.05 K.
Now if I had to bet on one of the two producing the correct relative anomaly, which one would I choose? In other words, if I imagine that there is such a thing as a global average temperature — in and of itself a bit dubious — and that the changes in GISS or UAH temperatures are an accurate measure of the changes in the global average temperature, we have this pesky factor of two to deal with. That’s a pretty serious thing. It’s the difference between Catastrophe and “Ho, hum.” 0.2 K/decade is in no way comparable to 0.1 K/decade, maybe, if not a lot less.
Personally, I think there is little doubt that the UAH satellite derived lower troposphere temperature is the more reliable number and the more accurate measure of the global mean temperature. For one thing, it is unbiased by things like the UHI effect and sparse sampling that make the GISS result questionable at best anyway! For another, there simply aren’t as many adjustments one can make for things like instrumentation, and one has a variety of controls (e.g. soundings) that are similarly uncorrupted by UHI effects in ground-level air. Simple pictures are the best — there are too many opportunities in GISS to bias the result intentionally or otherwise.
I suspect that this is one of the reasons that the climate science community is “suddenly” becoming a lot more open to the possibility that CAGW is just plain wrong. GISS and UAH LTT are diverging, and of the two GISS is almost certainly the one that is wrong. Wrong by a rather lot. Which makes one look at the graphs a bit more critically, note (perhaps for the first time) how tiny the error is that they are claiming for nineteenth century global average temperatures and say “bullshit“.
This is clearly wrong. If this is clearly wrong, is the entire curve wrong? What’s going on, here?
For some, this comes as a bit of a revelation. They feel betrayed. How could they have ever been so stupid as to believe that we know the global average temperature in 1885 to 0.1 K on the same basis that we now know it to 0.05K? How could they actually be fooled into thinking that we know the global average temperature at all within a half degree now, let alone to within a few hundredths of a degree? They get quite angry and do things like publicly repudiate the IPCC and AR5, because they can’t get anyone to put the right (or at least reasonable) damn error estimates into the data that the public gets to see, lest the public look at them and go “We’re spending a trillion dollars because that is what passes for ‘settled science’ proving Catastrophic Anthropogenic Global Warming?”
It’s enough to make the peasants reach for their torches and their pitchforks, isn’t it?
Once they pick themselves up off the floor and stop laughing.
Not that there is anything terribly amusing about tens of billions of dollars (racing towards hundreds) swindled out of the public by doing the moral equivalent of yelling “fire” in a theater, where the Earth is one big damn theater with no way out.
rgb

Brian H
March 4, 2012 6:10 pm

Typo: “Now, determine the average density of the ensemble as a unit.”

davidmhoffer
March 4, 2012 6:12 pm

Mr Mosher,
Sure there is a difference between the trend and the temperature itself, point taken. But…
1. The same raw data that is used to calculate the trend is also used to calculate the average temperature of the earth, a notion that Dr. Brown has shown conclusively is nonsense.
2. The average temperature calculated as per above is then used to try to arrive at an energy balance for the earth against an incoming insolation of 240 W/m2. Since we haven’t a clue what the “average” temperature is, trying to determine whether we have any energy imbalance (due to anything, not just CO2) is more nonsense.
3. While the trend from a few thousand thermometers may well be the same as the trend from tens of thousands of thermometers, so what? What does this tell us about energy balance? Nothing! One degree of increase in the arctic means a very different thing than one degree of increase in the tropics (see the sketch after this list). Unless we have actual surface temperatures converted to W/m2 and average those, all we have is a bunch of numbers that have a similar trend, but remain meaningless from the perspective of understanding energy balance.
4. Further to Dr Brown’s point, I grew up in a rather harsh climate. One interesting thing I can attest to is that in spring it is quite possible to get a sunburn because you took your shirt off to cool down, but you left your boots on because you were standing in snow. The temperature measured on a day [like] that would be meaningful…. how?
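Here is the sketch referred to in point 3 – made-up temperatures, just to show the effect of the T^4: averaging temperatures and averaging the equivalent radiative flux do not give the same answer.

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

# Two equal areas: a cold arctic surface and a warm tropical one (made-up values)
t_arctic, t_tropics = 243.0, 303.0   # kelvin

mean_T = (t_arctic + t_tropics) / 2.0
flux_of_mean_T = SIGMA * mean_T ** 4

mean_flux = (SIGMA * t_arctic ** 4 + SIGMA * t_tropics ** 4) / 2.0
T_of_mean_flux = (mean_flux / SIGMA) ** 0.25

print("average temperature :", mean_T, "K  ->", round(flux_of_mean_T, 1), "W/m^2")
print("average flux        :", round(mean_flux, 1), "W/m^2 ->", round(T_of_mean_flux, 1), "K")

And because the flux goes as T^4, one degree at 243 K is worth about 3 W/m^2 while one degree at 303 K is worth about 6 W/m^2 – the arctic degree and the tropical degree are not the same thing energetically (ignoring humidity, which only makes it worse).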

Jeff Alberts
March 4, 2012 6:15 pm

I’ve noticed that the “this” I had asked Dr. Brown about wasn’t carried over in the post. Here it is:
http://www.uoguelph.ca/~rmckitri/research/globaltemp/GlobTemp.JNET.pdf
Essex, et al. 2006: “Does a Global Temperature Exist”, J. Non-Equilibrium Thermodynamics

Abstract
Physical, mathematical and observational grounds are employed to show that there is no physically meaningful global temperature for the Earth in the context of the issue of global warming. While it is always possible to construct statistics for any given set of local temperature data, an infinite range of such statistics is mathematically permissible if physical principles provide no explicit basis for choosing among them. Distinct and equally valid statistical rules can and do show opposite trends when applied to the results of computations from physical models and real data in the atmosphere. A given temperature field can be interpreted as both “warming” and “cooling” simultaneously, making the concept of warming in the context of the issue of global warming physically ill-posed.

Robert Brown
March 4, 2012 6:17 pm

My initial guess at how to define global temperature would be the temperature if you took every molecule in the volume of air between 1.5 and 2.5 meters above the surface, and instantly transported all those molecules to random locations within a cube of equal volume without changing their speed, energy, etc. Of course you can’t measure such a thing exactly, but I bet you could make a pretty close estimate. Probably well under 1K.
ROTFL
But what about the molecules of the actual surface? Should we throw them in? If so, to what depth?
rgb

Ian W
March 4, 2012 6:19 pm

Max Hugoson says:
March 4, 2012 at 1:40 pm
As I have noted TIME AFTER TIME AFTER TIME…an 86 F day in MN with 60% RH is 38 BTU/Ft^3, and 110 F day in PHX at 10% RH is 33 BTU/Ft^3…
Which is “HOTTER”? HEAT = ENERGY? MN of course, while the temp is lower.
I was at a pro-AGW lecture by a retired U of Wisc “meteorology” professor a couple of years ago.
When I brought this up, along with the example of putting varying amounts of hot and cold water into 10 styrofoam cups, measuring the temps, averaging the result and comparing it to the result of putting all the contents in ONE insulated container (the AVERAGED TEMP in this case WILL NOT MATCH the temp of all the varying amounts put into the one container!)…he looked (to QUOTE ANOTHER ENGINEER AT THE EVENT) like a “deer in the headlights”…and then finally babbled, “Well, the average temperatures do account for the R.H.”
Ah, like dear (Should be removed) Dr. Glickspicable…“My words mean what I wish them to mean, nothing more and nothing less.” Reality BENDS for the AGW group!

Max – I fully agree – but you are wasting your time; nobody listens. There seems to be a total lack of understanding of the realities of atmospheric enthalpy, so everyone happily goes off using their elastic metric ‘atmospheric temperature’ and then compounds the errors by averaging…
Your example of MN and AZ matches one I often use of LA and AZ. But nobody cares; they gather with the climate ‘scientists’ under the ‘temperature lamp post’.
More importantly take the case of a day in AZ that _starts_ at 60% RH and 86F, then during the day the temperature rises to 110F in the late afternoon but the RH has dropped to 10%. The actual ‘heat content’ of the air has dropped, but the temperature has increased. Nevertheless, along come the climate ‘scientists’ and _average_ the temperature and talk about the extra heat that has been ‘trapped’ by GHG when the actual heat content of the AZ air has _dropped_ despite the temperature rise.
I think the reason that enthalpy, latent heat and other confounding factors are avoided is that it is so mathematically simple to use the Stefan Boltzmann equations on the back of an envelope and pretend that everything is understood.
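For anyone who wants to check the arithmetic, here is a rough psychrometric sketch using standard textbook approximations (sea-level pressure assumed, so the numbers are illustrative rather than exact):

import math

def moist_air_enthalpy(temp_c, rh_percent, pressure_pa=101325.0):
    """Approximate specific enthalpy of moist air, kJ per kg of dry air."""
    # Saturation vapour pressure (Pa), Magnus-type approximation
    p_sat = 610.94 * math.exp(17.625 * temp_c / (temp_c + 243.04))
    p_vap = (rh_percent / 100.0) * p_sat
    w = 0.622 * p_vap / (pressure_pa - p_vap)     # kg water vapour per kg dry air
    # h = cp_air * T + w * (latent heat + cp_vapour * T)
    return 1.006 * temp_c + w * (2501.0 + 1.86 * temp_c)

def f_to_c(f):
    return (f - 32.0) * 5.0 / 9.0

for label, temp_f, rh in (("MN, 86 F, 60% RH", 86.0, 60.0),
                          ("PHX, 110 F, 10% RH", 110.0, 10.0)):
    print(label, "->", round(moist_air_enthalpy(f_to_c(temp_f), rh), 1), "kJ/kg dry air")

The cooler, more humid air comes out carrying noticeably more energy per kilogram than the hotter, drier air, which is the whole point about enthalpy versus temperature.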

RACookPE1978
Editor
March 4, 2012 6:26 pm

Dr. Glieckenspicable is more user-friendly to the tongue.

Steve from Rockwood
March 4, 2012 6:30 pm

KR says:
March 4, 2012 at 4:59 pm

Nick Stokes – That’s why I said anomaly
Dr. Brown conflates absolute temperatures (which have the uncertainties you note) with anomalies (which are supported by a very large data set, and have a 2-SD accuracy of 0.05°C). Which is quite the strawman argument, a bait and switch.
As I noted above, if there is a consistent bias, the temperature estimation will have a constant offset of about that bias – and for anomalies that bias will cancel out to zero, as it will be in both the baseline and the time-point of interest. If the bias is consistently changing over time, it will show up in the anomalies – but the only real candidate for that is the urban heat island effect, and as the BEST data shows (along with any number of other analyses, including Fall and Watts 2009), there is no such overall effect on temperature averages.

But there is no consistent bias in the temperature estimation. This is the problem. The biases are constantly being changed. And these biases are much greater than 0.05 degrees. Spend some time confirming these arbitrary adjustments and then come back and argue for a 0.05 accuracy in the temperature anomalies.
So the UHI doesn’t show up in the BEST data. This raises red flags already. Are you relying on the power of many other stations not in urban areas to suppress the obvious real effect of UHI? Do they account for UHI and if so what is a typical correction? Do they routinely under-estimate, over-estimate or (within random error) do they get the UHI correction more or less correct? When I drive into town from the country the change in temperature is 3-5 degrees. To suggest there is no DC bias in temperature data is ridiculous.

Nick Stokes
March 4, 2012 6:31 pm

Robert Brown says: March 4, 2012 at 6:09 pm
Dr Brown,
As KR says, you are conflating global average temperatures with anomalies. Most of your arguments address the question of what is an average absolute temperature. And that is beset with many difficulties, as GISS is able to explain in far fewer words than you use.
Anomalies measure the change from some mean. Their utility relies on the fact that they are correlated between all these different situations that you describe. Correlation means that samples are statistically representative of their regions, and an attempt to spatially integrate makes sense.
Now you may if you wish try to argue that the correlation is illusory. But you need to present the evidence. It’s a different argument. And a much-studied subject.

u.k.(us)
March 4, 2012 6:34 pm

Robert Brown says:
March 4, 2012 at 6:09 pm
“Which makes one look at the graphs a bit more critically, note (perhaps for the first time) how tiny the error is that they are claiming for nineteenth century global average temperatures and say “bullshit“.”
=============
[SNIP: Not your decision to make. -REP]

Robert Brown
March 4, 2012 6:44 pm

I am curious as to why only UAH is mentioned and not RSS.
This is especially so since Dr. Spencer says:
“Progress continues on Version 6 of our global temperature dataset. You can anticipate a little cooler anomalies than recently reported, maybe by a few hundredths of a degree, due to a small warming drift we have identified in one of the satellites carrying the AMSU instruments.”

No particular reason — the differences between the two is usually small and they provide a valuable check on one another.
Both of them return numbers that should be warmer than GISS surface numbers but in fact are quite a bit cooler, with significant differences in slope/trend. Indeed, if one assumes that the satellite record is accurate and that the relation between lower troposphere temperature and surface temperature is what it is “supposed” to be according to the models (both dubious, the latter more than the former) the surface warming is much smaller than GISS indicates, with an increasing divergence.
Depending on where and how you fit, UAH gives a trend of less than 0.14 K/decade (dropping at the moment, because a negative anomaly at the end tends to pull best fit down pretty agressively just like a negative anomaly at the beginning tends to push it up). That would correspond to an actual surface temperature anomaly of maybe 0.1 K/decade, instead of the nearly 0.2 K/decade seen in GISS over more or less the same period.
That’s a pretty serious problem — for GISS. Even if it does nothing else, it makes GISS numbers
uncertain by at least the difference, and biases that uncertainty down not up. It makes it very likely that GISS overestimates the warming.
The interesting thing is that it somehow manages to overestimate the warming systematically, so that the two curves have different slope, different trends. That makes no sense at all. The same trend and different absolute temperatures would be understandable. Different trends means one curve or the other (or both!) have time dependent biases or the assumption that they are somehow connected to the same global average is not only wrong, it is wrong on average which again makes no sense whatsoever.
As I’ve already said, I personally consider the UAH/RSS results to be more reliable than the GISS result. They have multiple instruments, several ways to check for and correct for systematic biases by comparing their results, and the two series are themselves at least somewhat independent and yet closely commensurate most of the time. If you simply averaged the two you’d get a result hardly distinguishable from either one at a glance.
Indeed, GISS is up against a rock. If it continues going up while UAH says that the lower troposphere is at least holding its own, if not actually cooling, that rock will break it. Although truthfully, a glance at their error claims for the 19th century makes it clear that it is broken already. You can’t squeeze blood from a turnip, and there are some pretty strict limits on the possible precision of our knowledge of temperatures as one goes back in the instrumental record. At the moment their claims have the look of a bloody turnip.
rgb

Steven Mosher
March 4, 2012 6:49 pm

Steve:
“So the UHI doesn’t show up in the BEST data. This raises red flags already. Are you relying on the power of many other stations not in urban areas to suppress the obvious real effect of UHI? Do they account for UHI and if so what is a typical correction? Do they routinely under-estimate, over-estimate or (within random error) do they get the UHI correction more or less correct? When I drive into town from the country the change in temperature is 3-5 degrees. To suggest there is no DC bias in temperature data is ridiculous.”
That is not exactly true. First, very few people in this debate understand how variable UHI is.
Typically, they draw from very small samples to come up with the notion that UHI is huge: huge in all cases, huge at all times. It’s not. Here are some facts:
1. UHI varies across the urban landscape. That means in some places you find NEGATIVE UHI, in other places you find no UHI, in other places you find mild UHI, and in other places you find large UHI. You really have to understand the last 100 meters.
2. UHI varies by latitude; it’s higher in the NH and lower in the SH.
3. UHI varies by season. It’s present in some seasons and absent in others depending on the area.
4. UHI varies according to the weather. With winds blowing over 7 m/sec it vanishes; in some places a 2 m/sec breeze is all it takes.
So you can find UHI in BEST data; the tough thing is finding pervasive UHI. Several things work against this, most importantly that the fraction of sites that are in large cities is very small. Basically, you are looking for a small signal (UHI is less than 0.1C/decade) and many things can confound that.
There are no adjustments for UHI.
Your anecdote is interesting, but the problem is that studying many sites over long periods of time does not yield a similar result.

Jeremy
March 4, 2012 6:51 pm

Dr Brown,
Please keep these enlightening talks going. It is wonderful to hear words that reflect the views of countless engineers with practical experience of the “real” world and how inadequate models are.
As far as I am concerned, the very concept of a “global average temperature” is completely and totally flawed from any practical perspective. However, it does make a useful simplified ‘concept’ for an excellent thought experiment when describing basic radiative physics at the undergraduate level, as I recall learning in third year physics.
A tragic mistake was made in the 70’s and 80’s when Physics professors dumbed it all down and began teaching non-Physics (humanities) students some of the most basic elements of this science, without the students ever being capable of grasping the REAL complexity behind it. These students grasped very quickly the political & journalistic implications of warming from man-made CO2 without ever realizing they were being presented a mere slice of a dumbed down over-simplified picture of what was in reality an utterly overwhelmingly complex system.
A “little knowledge” is such a dangerous thing!

Frank K.
March 4, 2012 6:56 pm

Nick Stokes says:
March 4, 2012 at 2:52 pm
Frank K. says: March 4, 2012 at 2:18 pm
“Please state this rule for us.”
No, Frank, I want an answer to my question first. Who are these people who claim to calculate the Earth’s “global annualized temperature” to 0.05K? Seen a graph?
But OK, the rules are in Gistemp. Or even TempLS.

Oh – I see. You’ve gone from a “consistent rule” to rules, and more than one set of rules to boot! Very interesting. I thought it was so easy…
You still didn’t address my second question. Here it is again:
“And the temporal variation of that process is much more meaningful than the notion of a global average temperature.”
Please discuss your reasoning for this conjecture.
You also brought up yet another good question…
Nick Stokes says:
March 4, 2012 at 6:31 pm
“Anomalies measure the change from some mean. Their utility relies on the fact that they are correlated between all these different situations that you describe. Correlation means that samples are statistically representative of their regions, and an attempt to spatially integrate makes sense.”
Can you please provide quantitative justification for the good correlation of the temperature anomalies?
Thanks.

Tim Minchin
March 4, 2012 6:56 pm

If the anomaly is currently negative – it implies that ALL EXTRA HEAT is no longer in the system.
And another thing in regard to measurement is reading intervals. Ideally temperature would be read digitally and continuously, and the 24-hour daily heat pattern and total 24-hour heat gain/loss would be easily usable with cloud cover data to determine the insulating effect or otherwise.
Resolution in both space and time is going to be the holy grail of accuracy. At the moment we are working with giant grids and hourly temperatures. Until we can work with fractal chaotic systems we won’t be close to measuring or modelling reality at any great sigma level.

Mindbuilder
March 4, 2012 7:05 pm

Robert Brown wrote:
Mindbuilder wrote:
My initial guess at how to define global temperature would be the temperature if you took every molecule in the volume of air between 1.5 and 2.5 meters above the surface, and instantly transported all those molecules to random locations within a cube of equal volume without changing their speed, energy, etc. Of course you can’t measure such a thing exactly, but I bet you could make a pretty close estimate. Probably well under 1K.
“But what about the molecules of the actual surface? Should we throw them in? If so, to what depth?”
No, the molecules of the surface haven’t been historically measured. Since air temperature records of the past have historically been measured about 2m from the surface, maintaining that height makes for easier comparisons of changes. Of course my proposal could be refined. Maybe 1.5m is a more common average historical height. Maybe you should consider 2m +/- 10cm instead of +/- .5m. You would also have to account for current and historical sampling or lack of sampling in odd places like atop mountain ranges. But again, I expect you could get within +/- .2K of the theoretical value, possibly even with historical records. 0.05K seems optimistic though, even for modern measurements.

March 4, 2012 7:08 pm

Robert Brown says: March 4, 2012 at 6:44 pm
“That would correspond to an actual surface temperature anomaly of maybe 0.1 K/decade, instead of the nearly 0.2 K/decade seen in GISS over more or less the same period.”
The main reason why the satellites show a lower slope in the shorter trends is that they reacted very strongly to the El Nino of 1998. That downweights trend when it sits at the back end in time. But if you compare GISS with UAH from 1985 to 2011, so 1998 is in the middle, then there isn’t much difference.

Nick Stokes
March 4, 2012 7:12 pm

Frank K. says: March 4, 2012 at 6:56 pm
“Can you please provide quantitative justification for the good correlation of the temperature anomalies?”

No. This post is about people who claim something about temperatures are full of something. You can lead with some evidence.

u.k.(us)
March 4, 2012 7:13 pm

u.k.(us) says:
March 4, 2012 at 6:34 pm
Robert Brown says:
March 4, 2012 at 6:09 pm
“Which makes one look at the graphs a bit more critically, note (perhaps for the first time) how tiny the error is that they are claiming for nineteenth century global average temperatures and say “bullshit“.”
=============
[SNIP: Not your decision to make. -REP]
Just for the record, it was only an observation.

March 4, 2012 7:22 pm

Steve from Rockwood says: March 4, 2012 at 6:30 pm
“Spend some time confirming these arbitrary adjustments and then come back and argue for a 0.05 accuracy in the temperature anomalies.”

I have done that. So have others. That is, we did calcs with no adjustments at all. And we got essentially the same answers.

rilfeld
March 4, 2012 7:23 pm

It’s not about average temperatures or average anomalies. It’s about whether the data shouts ‘catastrophe coming!’ clearly enough to justify giving up huge chunks of liberty and wealth to politicians supposedly ‘saving’ us. If it weren’t for the money this argument, if it took place at all, would be in an obscure journal somewhere. In a general sense I think Dr. Brown has it right: the data shouts “We just don’t know”.

KR
March 4, 2012 7:23 pm

Frank K. – “Can you please provide quantitative justification for the good correlation of the temperature anomalies?”
Yes. Read http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.187.9839&rep=rep1&type=pdf – In particular Fig. 3. This shows correlations between “annual mean temperatures for pairs of randomly chosen stations with at least 50 common years in their records”, and shows strong correlations of temperature anomalies, with a correlation >50% up to 1200km distance.
A valley and an adjacent mountain will have very different absolute temperatures, but given the size of weather patterns, they will have closely correlated anomalies from their average temperatures.
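The calculation behind that figure is conceptually simple; a bare-bones version of the recipe (fabricated series, just to show the mechanics) looks like this:

import numpy as np

rng = np.random.default_rng(5)
years = 50

# Two made-up neighbouring stations: a shared regional signal plus independent local noise
regional = rng.normal(0.0, 1.0, years)                          # common year-to-year weather
station_valley = 12.0 + regional + rng.normal(0.0, 0.4, years)  # warm valley site
station_peak = 4.0 + regional + rng.normal(0.0, 0.4, years)     # cold mountain site

# Anomalies: subtract each station's own long-term mean
anom_valley = station_valley - station_valley.mean()
anom_peak = station_peak - station_peak.mean()

print("difference in mean temperature:", round(station_valley.mean() - station_peak.mean(), 1), "K")
print("correlation of their anomalies:", round(np.corrcoef(anom_valley, anom_peak)[0, 1], 2))

Whether real station pairs actually correlate that strongly out to 1200 km is, of course, exactly the empirical question that Fig. 3 of the paper addresses.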

Interstellar Bill
March 4, 2012 7:24 pm

I’ve gone hiking at night when a cold (62F) air-current was going down one side of a hill-street and a warm (77F) air-current 20 feet away going up the other side. What’s their ‘average’?
I’ve hiked up and down night-time temperature inversions with 88F at 1200 feet and 62F at sea level. Which one of those was doing the radiating to space?

KR
March 4, 2012 7:41 pm

Steve from Rockwood – “But there is no consistent bias in the temperature estimation. This is the problem. The biases are constantly being changed. And these biases are much greater than 0.05 degrees. Spend some time confirming these arbitrary adjustments and then come back and argue for a 0.05 accuracy in the temperature anomalies.”
If the biases are constantly changing back and forth, they become another random variable with a mean of zero.
If you don’t trust the adjustments, then calculate the temperatures without them. You will find that you see essentially the same results – that we’re seeing warming at around 0.15-0.16ºC/decade right now.
This is something that I’ve never understood – these complaints about adjustments. For each station these are a compilation of the best information about changes at that station – time of day for observations, changes in siting, in replacement thermometers, etc. They should act to reduce uncertainty. But even with no adjustments whatsoever to the raw data, you see just about the same results. The climate is warming, and quickly, and it correlates to our emissions.
And yet the adjustments are waved about as some kind of reason to ignore the issue entirely. That just doesn’t make any sense…

Frank K.
March 4, 2012 7:44 pm

“Not that there is anything terribly amusing about tens of billions of dollars (racing towards hundreds) swindled out of the public by doing the moral equivalent of yelling “fire” in a theater, where the Earth is one big damn theater with no way out.”
rgb

I nominate this for quote of the week…

Brian H
March 4, 2012 7:45 pm

Robert Brown says:
March 4, 2012 at 6:09 pm
Yet the precision acknowledged in the temperature estimate for 1885 is only twice that for 1985!

rgb

I think you got those two dates flipped. Or you need to insert “error” after “precision” or instead of it. Or SLT.

Steve from Rockwood
March 4, 2012 7:51 pm

Steven Mosher says:
March 4, 2012 at 6:49 pm
——————————————
Thanks for your post Steven. I always enjoy them, sometimes (usually) agree.
You point out the variability of UHI. My gut feeling is that in general UHI increases temperature. I work in small airports across Canada (although mainly in B.C. and Northern Ontario). In the last 20 or so years many of these airports have been upgraded (paved runways, access ways, plus infrastructure). These airports are the main centers for temperature and precipitation measurements. Even the wind socks often don’t work (because the wind is blocked by the new nearby buildings). So I revert to my original point that a global temperature anomaly (whatever that is) cannot be accurate to 0.05 degrees (the point of this post, I think) and that errors in the temperature record cannot be treated as random, as KR attempts to assert.

Frank K.
March 4, 2012 7:52 pm

KR says:
March 4, 2012 at 7:23 pm
Thank you KR. I have read this paper before. Please look at Figure 3 and let me know if you think the correlation coefficients computed for the global surface temperature anomalies are particularly good (especially at high latitudes…). Is a correlation coefficient of 0.5 considered “good”? 0.6? 0.7? Does it matter?
Thanks.
Nick Stokes says:
March 4, 2012 at 7:12 pm
Frank K. says: March 4, 2012 at 6:56 pm
“Can you please provide quantitative justification for the good correlation of the temperature anomalies?”
No. This post is about people who claim something about temperatures are full of something. You can lead with some evidence.

Thanks for the response, Nick. Very helpful, indeed.

March 4, 2012 7:53 pm

As KR says, you are conflating global average temperatures with anomalies. Most of your arguments address the question of what is an average absolute temperature. And that is beset with many difficulties, as GISS is able to explain in far fewer words than you use.
Actually, the bulk of the post was addressing whether one can precisely measure an absolute average temperature that stands in some fixed relationship to a “true” average temperature. It isn’t just about the impossibility of defining an average, or the problem that different definitions can lead to different trends with different signs — the subject of the article that was linked in originally and was relinked just above — it is about being able to realistically extrapolate from a finite, relatively sparse set of thermometric measurements with unknown systematic biases to even one of these definitions and obtain a result as precise as what is being claimed, let alone as accurate.
I own a shotgun that has its front sight off by some fraction of a millimeter to the left. I’m a damn good shot (if I do say so myself:-). When I first bought it, I naturally didn’t notice the problem — who can see that sort of thing by eye? It wasn’t until I started shooting it at known targets that I could tell that the sight was systematically biased, biased so that at close enough range I might not even notice the problem but at any sort of longer range I’d miss the target entirely. Even then, noticing it was dependent on my skill shooting because for a lot of people, the natural variance of their aim would have masked the systematic error. It was also dependent on knowing the target — if my eyesight had been so poor that I couldn’t even see what I was aiming at I might never have detected the bias either.
GISS is in the unenviable position of trying to detect the actual target a gun was aimed at, given an unknown set of biases in the sights of the gun, based on the shot pattern of a load fired at an unknown distance from the target — when you can’t see that target. You can say all you like about the centroid of the shot pattern and how wide it is, but you cannot infer the bias from the shot pattern. Nor can you be certain that the bias is the same from year to year, from target to target. Nor can you be sure that your grand-dad’s hand-loaded blunderbuss produced the same shot distribution with the same biases a hundred years ago, even though you are using data from that gun the same way you are using the data from your current gun and somebody ripped away half of the older targets so you have no idea if any shot landed on the missing parts or not.
UAH/RSS results are a completely distinct way of shooting at a different target, but one that is supposed to be a fairly predictable distance in a predictable direction from the target GISS is trying to infer. Unfortunately, when comparing the results they reveal that GISS is systematically biased in a way that is getting worse over time, like my shotgun missed more distant targets by a larger distance, and worse, that the actual GISS target is on the opposite side of the UAH/RSS target from where the GISS shot pattern was landing.
And we won’t even touch the problem of using the half-missing targets from grand-dad’s handloads and gun as if they are comparable to the targets of the gun GISS is using today. If today’s guns aren’t firing even on the right side of the target and their sights are biased to miss by ever greater amounts as it is, how much more difficult is it to state positive conclusions from an era without UAH/RSS satellite instrumentation!
Note well that GISS has problems with all three aspects of meaning from the numbers. The numbers have no absolute meaning or necessary relevance to an assumed “global average temperature”. At best one hopes for a monotonic, if unknown, relationship, although the numbers themselves are on the wrong side of the only completely independent and arguably much more reliable way of computing a closely related “global average temperature”. GISS makes egregious claims for the “error” associated with these numbers (where in science, error estimates should include error from all sources, not just an estimate of statistical error a.k.a. precision based on assumptions of statistical independence and a lack of bias). Finally, their numbers have what appear to be the wrong trend compared to the only meaningful check we can perform, so one cannot trust even the anomaly whose correct error is being underestimated.
Three strikes and you’re out. And yes, I would continue to say that even with the UAH/RSS data we don’t know the true global temperature for the last thirty-odd years particularly accurately.
Why does that matter? Because underlying the entire argument concerning global warming and cooling, some fraction of which might be anthropogenic, is the zeroth law of thermodynamics, is it not? As a few people have pointed out, temperature is a lousy measure for heat, but perhaps it is the best we can do. Still, in order to make it even a lousy measure for heat energy as it flows through a complex open system, we have to be using the same measure of temperature throughout the range where one asserts that the temperature is known.
UAH/RSS data is precious, because nothing we can imagine doing in the future will change the measure itself. Perhaps there is instrumentation error, sure, but looking at the microwave emission from atmospheric oxygen is as close to a direct measure of the temperature of that oxygen as one can hope to get, and we can sample from the entire globe frequently and in a more or less unbiased manner to get it. We can imagine increasing the precision with more instruments and controlling even more tightly for systematic biases with careful independent measures from soundings, but otherwise in 100 years people will be able to look at UAH/RSS data and — within the modest uncertainties of method and statistical resolution — be able to do a direct and valid comparison of the exact same thing (perhaps with greater precision) and make well-justified inferences about the comparative temperatures of the lower troposphere and, by extension, the surface.
We can do nothing of the kind with the pre-satellite thermal record. Really, we can’t. Not ever. There are biases systematic and random scattered everywhere in the data, and there is no hope of being able to go back in time and detect them and correct for them. If anything, contemporary data is showing us how difficult the task really is as GISS fails!
If GISS cannot even show the same trend as UAH/RSS, on the right side of the data, during the 30 year instrumental era where one can check it against a truly independent measurement that utterly lacks the many, many unknown biases in contemporary thermometry, why exactly should we believe its claims for the temperature in 1880?
Indeed, if we lived in a sane Universe, why exactly aren’t we using UAH/RSS to correct GISS? I mean, here we are, in possession of information that we can use to measure the systematic biases in the GISS algorithm and perhaps correct them, right? According to UAH/RSS and contemporary modeling, lower troposphere temperatures should be warmer than surface temperatures by a few tenths of a degree. Warmer implies upper bound. Surely it isn’t unreasonable to use the fact that the GISS temperature is warmer than the upper bound and has the wrong slope of trend to recalibrate, for example, the UHI and so on to at least get its algorithm to agree across the last 30 years with UAH/RSS. Surely it isn’t unreasonable to use the observed discrepancy as an object lesson about egregious claims for accuracy for the future, and for extending GISS estimates into the past as well.
rgb
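The precision-versus-accuracy point behind the shotgun analogy is easy to see numerically. The sketch below is only an illustration with invented numbers (bias, scatter and sample sizes are all made up): the quoted precision of the "centroid" shrinks with more shots, while the unknown offset of the sights is left completely untouched.

```python
import numpy as np

rng = np.random.default_rng(0)

true_target = 0.0   # the quantity actually wanted (arbitrary units)
bias = 0.8          # unknown systematic offset in the "sights"
spread = 2.0        # random scatter of individual shots (measurements)

for n in (10, 100, 10_000):
    shots = true_target + bias + rng.normal(0.0, spread, size=n)
    centroid = shots.mean()
    claimed_precision = shots.std(ddof=1) / np.sqrt(n)  # standard error of the centroid
    print(f"n={n:6d}  centroid={centroid:+.3f}  claimed precision={claimed_precision:.3f}")

# The claimed precision falls like 1/sqrt(n), but the centroid keeps landing
# about 0.8 away from the true target: extra data never reveals the bias.
```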

March 4, 2012 7:58 pm

Simple challenge for anyone that thinks average air temperatures can be measured to sub one degree C precision. Buy 10 digital thermometers, check their “outdoor” probes and determine their relative accuracy. For example put all the outdoor probes in the same cup of tap water mixed with ice and record what they read.
Then take those 10 thermometers and place them in 10 different locations about your house. It is a climate controlled environment and should be much more consistent than an outdoors environment. Let all the thermometers stabilize for a few hours and then read them. You will likely find variations on the order of 2-6 deg F (1-3 deg C) just within your climate controlled house. Simply raising or lowering the location of a thermometer can give you several degrees of difference in temperature.
Right now at the location of my furnace thermostat (hallway next to the bathroom) it is 72.0 deg F. Thirty feet away in my living room, at the same elevation along the north wall, it is 70.1 deg F. In my bedroom, 12 ft from the thermostat location, it is 71.5 deg F. In my kitchen it is 73 deg F (no cooking for at least 12 hours). In the bathroom, about 12 ft from the thermostat location, it is 71 deg F.
What is the “real temperature” in my home?
If temps vary that much in a climate controlled environment what are the temperature variations over miles of distance and as noted above different surface environments (lakes, trees, open pavement parking lot)? Which one of those temps is the “correct temperature”?
Larry
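Larry's proposed experiment can be mocked up numerically. The sketch below uses invented room temperatures (the ones he quotes plus five made-up extras) and invented probe offsets; the point is only that the resulting average is a well-defined number which answers a narrower question than "what is the temperature of the house".

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented numbers: the five quoted room readings plus five more rooms,
# each probe with its own small calibration offset and reading noise.
room_temps = np.array([72.0, 70.1, 71.5, 73.0, 71.0,
                       72.4, 69.8, 71.9, 72.8, 70.5])   # deg F
calibration_offset = rng.normal(0.0, 0.3, size=10)       # probe-to-probe bias
reading_noise = rng.normal(0.0, 0.1, size=10)

readings = room_temps + calibration_offset + reading_noise
print(f"spread of readings: {readings.max() - readings.min():.1f} deg F")
print(f"mean of readings:   {readings.mean():.1f} deg F")
# The mean is perfectly well defined, but it is the average of these ten
# spots, not "the" temperature of the house.
```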

Mooloo
March 4, 2012 8:02 pm

My issue with using the “anomaly” method to wave away the issues is that it assumes all our measurements have the same variation. They don’t. One set of anomaly measurements will be measuring “temperature” in quite a different way from the next one.
What you need to do to get a hockey stick is find one set of “anomaly” measurements with little natural variability. Then you add that to another set of measurements with high variability which happens to be trending up at the time you want. Hey presto.
So when Stephen Mosher says: “Is that number the “average”. no, technically speaking the average is not measurable. Hansen even recognizes that.” I think he is likely wrong. Yes, Hansen knows that his data is not actually recording global temperature. But he will allow it to be stitched to another series in a way that assumes both of them are measuring the same thing, when they aren’t.
It’s not what the Team say that matters, it’s what they do with the data. On the face of it they accept that they aren’t measuring an actual global temperature. But they will stitch up a hockey stick by adding together data sets that measure entirely different things. But because they are all called “temperature proxies” people let it slide by.
In fact each of those proxies is measuring something entirely different, and they have no right to be fitted together.
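Mooloo's splicing concern can be shown with a toy: a smoothed, low-variance segment joined to a noisier segment that happens to trend upward. The series below are entirely synthetic and stand for no real proxy or instrumental record; they only illustrate how different the variability of two joined pieces can be.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "proxy" segment: heavily damped variability around zero.
proxy = rng.normal(0.0, 0.05, size=900)

# Synthetic "instrumental" segment: larger variability plus an upward trend.
t = np.arange(100)
instrumental = 0.008 * t + rng.normal(0.0, 0.2, size=100)

spliced = np.concatenate([proxy, instrumental])
print("proxy segment std:       ", round(float(proxy.std()), 3))
print("instrumental segment std:", round(float(instrumental.std()), 3))
print("spliced length:          ", spliced.size)
# Joined end to end the two pieces read as one record, even though they
# have quite different variability and need not measure the same thing.
```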

March 4, 2012 8:03 pm

Robert Brown says
“But that long run might well be centennial in scale — long enough to detect and at least try to predict the millennial variations, something utterly impossible with a 33 year baseline.”
Physicists have a good deal of trouble with the complexity of the messy real world. Robert might consult his colleague’s book “Disrupted networks from physics to climate science” – West and Scafetta – for a review of the problem and how to deal with it using power spectrum and wavelet analysis of the various time series. However scientists dealing with climate can hardly say to their departments, grant givers or to the politicians running the IPCC “That’s a really interesting question, I’ll give you an answer in 100 years.” The main point he makes is true, and that is that the climate “team” have grossly exaggerated the certainty of their so-called projections, which they allow and indeed encourage the world to consider as predictions with a high degree of confidence.
Robert likes the satellite record but it is too short to be yet of much use. He also points out the uncertainties and problems of the land record. I again suggest that for purposes of discussion the best thing to do is to simply, for convenience, arbitrarily define global temperature trends to be the trends and changes shown by the Hadley SST data, which goes back to 1850 and can be extended back by carefully analysing various proxies. This suggestion is supported by the following considerations.
1. Oceans cover about 70% of the surface.
2. Because of the thermal inertia of water – short term noise is smoothed out.
3. All the questions re UHI, changes in land use local topographic effects etc are simply sidestepped.
4. Perhaps most importantly – what we really need to measure is the enthalpy of the system – the land measurements do not capture this aspect because the relative humidity at the time of temperature measurement is ignored. In water the temperature changes are a good measure of relative enthalpy changes.
5. It is very clear that the most direct means to short term and decadal length predictions is through the study of the interactions of the atmospheric systems, ocean currents and temperature regimes – PDO, ENSO, SOI, AMO, AO, etc. – and the SST is a major measure of these systems. Certainly the SST data has its own problems but these are much less than those of the land data.
What does the SST data show? The 5 year moving SST temperature average shows that the warming trend peaked in 2003, and a simple regression analysis shows a nine year global SST cooling trend since then. The data shows warming from 1900 – 1940, cooling from 1940 to about 1975, and warming from 1975 – 2003. CO2 levels rose monotonically during this entire period. There has been no net warming since 1997 – 15 years with CO2 up 7.9% and no net warming. Anthropogenic CO2 has some effect but our knowledge of the natural drivers is still so poor that we cannot accurately estimate what the anthropogenic CO2 contribution is. Since 2003 CO2 has risen further and yet the global temperature trend is negative. This is obviously a short period on which to base predictions, but all statistical analyses of particular time series must be interpreted in conjunction with other ongoing events. In the context of declining solar magnetic field strength and activity – possibly to the extent of a Maunder minimum – and the negative phase of the Pacific Decadal Oscillation, a global 20 – 30 year cooling spell is more likely than a warming trend.
This last simple, empirically based statement is about as good as we can do right now. We might add that, as Lindzen has shown, the humidity and cloud feedbacks are necessarily negative or we wouldn’t be here to discuss the matter; we can therefore say with some confidence that catastrophic warming is very unlikely in the foreseeable future and this whole CO2 scare is completely unnecessary and indeed economically nuts.
For the next 30–40 years we might suggest that damaging cooling is a distinct possibility. Some thought might be given to how to deal with this by increasing agricultural productivity and stockpiling foodstocks against possible shorter growing seasons, late frosts and more droughts in a generally less humid world, with the greater climate variability that a cooler world would bring.

TRM
March 4, 2012 8:12 pm

On a silly note “Malcolm Miller says: March 4, 2012 at 12:54 pm
Excellent outline of the ‘temperature’ problem. I always ask alarmists, ‘Where do you put the thermometer to measure the Earth’s temperature?’ They have no answer.”
I suggest a rectal temp would be the most accurate so Cleavland it is! 🙂
On a more serious note I’ve always sided with Dr Dyson on the whole global average temp thing. It doesn’t exist and is pretty meaningless.

March 4, 2012 8:20 pm

Frank K. says:March 4, 2012 at 7:52 pm
“Thanks for the response, Nick. Very helpful, indeed.”

Well, Frank, in a spirit of helpfulness…
Here is a plot of monthly readings in 2011. You can make it average over any subset of months that you choose. It is a global map, colored by the temperatures at individual stations. There is no spatial averaging. The colors grade linearly from one station to another.
The thing to see is that there are large correlated regions. Big regional patterns. Anomalies move together. ConUS is the main exception. This might just be to do with the station quality issues that Anthony raises – I don’t know.
You can see the same thing here with trends.

KR
March 4, 2012 8:21 pm

Robert Brown – And yet you continue to conflate the accuracy of temperature anomalies (0.05C, looking at a data set of highly correlated stations) with absolute temperatures.
To be quite frank, I do not care what the absolute temperature accuracy is – but I care quite a bit about how the temperature is changing (which is the anomaly from the baseline). To which point you have made no significant arguments.
I would agree that UAH and RSS are extremely useful measures, primarily of stratospheric and tropospheric temperatures – although they have different sensitivities to variations such as volcanic activity and ENSO than surface temperatures (http://iopscience.iop.org/1748-9326/6/4/044022).
But they show much the same story as the surface measures in terms of trends: http://www.woodfortrees.org/plot/uah/mean:60/plot/rss/mean:60/plot/uah/trend/plot/rss/trend – 0.135C/decade for UAH/RSS, or for GISS/HadCRUT3: http://www.woodfortrees.org/plot/hadcrut3vgl/from:1970/mean:60/plot/gistemp/from:1979/mean:60/plot/hadcrut3vgl/from:1979/trend/plot/gistemp-dts/from:1979/trend – 0.198/0.146C/decade. Considering that the tropospheric satellite signal is somewhat mixed with the stratospheric (cooling) signal, that indicates fairly good agreement. HadCRUT3 fails to include polar data, extrapolating from average global anomaly rather than nearby polar anomaly like GISS, so (personal opinion) I trust GISS more in that respect.
But you know, that really doesn’t matter. Even if you don’t trust surface temperature records, we are (according to the satellite temps as well) seeing considerable warming, considerable changes from what we have seen previously in terms of temperature. And that means changes – changes to crop productivity, sea temperature, rainfall, Hadley cell precipitation, snowpack for water supplies, on and on.
And your unwarranted criticisms of the surface temperature records are (IMO) a mere side-show, prestidigitation to distract from the issue. And, to repeat, completely unsupportable.

KR
March 4, 2012 8:40 pm

My apologies – my previous comment had a bad link, poor construction on my part:
It should be http://www.woodfortrees.org/plot/hadcrut3vgl/from:1970/mean:60/plot/gistemp-dts/from:1979/mean:60/plot/hadcrut3vgl/from:1979/trend/plot/gistemp-dts/from:1979/trend for GISS/HadCRUT3 trends, or perhaps http://www.woodfortrees.org/plot/hadcrut3vgl/from:1970/mean:60/plot/gistemp/from:1979/mean:60/plot/hadcrut3vgl/from:1979/trend/plot/gistemp/from:1979/trend – I had inadvertently mixed GISS global mean and GISS extrapolated global mean.
The latter link is probably more relevant to this discussion; no extrapolation. Trends are 0.146/0.156C/decade for HadCRUT3/GISS, respectively. Which is certainly not trivial in terms of change.
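For anyone wondering where per-decade figures like the "0.146/0.156C/decade" above come from: trend lines of this kind are, as far as I know, ordinary least-squares fits to the monthly anomaly series, with the slope then expressed per decade. A minimal version, run here on a synthetic series (a built-in 0.15 C/decade trend plus noise) rather than on any real dataset:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic monthly anomalies: 0.15 C/decade trend plus month-to-month noise.
t_years = np.arange(12 * 33) / 12.0
anomaly = 0.015 * t_years + rng.normal(0.0, 0.12, size=t_years.size)

slope_per_year, intercept = np.polyfit(t_years, anomaly, 1)
print(f"fitted trend: {10 * slope_per_year:.3f} C/decade")
```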

March 4, 2012 8:41 pm

Robert Brown says: March 4, 2012 at 7:53 pm
“GISS is in the unenviable position of trying to detect the actual target a gun was aimed at, given an unknown set of biases in the sights of the gun, based on the shot pattern of a load fired at an unknown distance from the target — when you can’t see that target.”

No, that’s the point of anomalies. You don’t need to quantify the systematic biases. You’re measuring the change.
“Nor can you be certain that the bias is the same from year to year, from target to target. “
Does the misalignment of your gun vary like that? Of course, one can never be certain, but the chance is much improved.

Frank K.
March 4, 2012 8:43 pm

Nick Stokes says:
March 4, 2012 at 8:20 pm
“The thing to see is that there are large correlated regions. Big regional patterns. Anomalies move together.”
How are you inferring correlation from a map of anomaly “averages”? Just wondering… Have you computed correlation coefficients for recent data between random stations like Hansen did in his 1987 paper? That would be interesting.

Frank K.
March 4, 2012 8:45 pm

“And that means changes – changes to crop productivity, sea temperature, rainfall, Hadley cell precipitation, snowpack for water supplies, on and on.”
Oh brother…[sigh]

March 4, 2012 8:46 pm

Dr. Brown I am still with you. As I understand it, global temperature in an open thermodynamic system like the earth does not measure the heat energy of the globe, and the stored heat energy, if it were known, may not change the global climate in sync with the temperature changes. So if climate is changing from stored heat energy, then connecting very uncertain global temperature changes to climate changes, when the heat energy is itself inferred from uncertainties in the temperature measurements, as you have presented above, borders on a leap of faith.

Reed Coray
March 4, 2012 8:48 pm

Robert Brown says: March 4, 2012 at 4:25 pm. Damn, I’m going to just give up. Mousepad disease. Looks like I’ve completely lost two hourlong posts. Grrr.
I, too, have lost long posts–probably to the benefit of the world. As other commenters have pointed out, one way to avoid this problem is to enter your comment into a word processor and then copy the text into the “Leave a Reply” box. However, you want to be careful when doing this as I’ve seen funny translations of the fonts, paragraph spacings, etc.

March 4, 2012 8:49 pm

WoodForTrees.org just got the January 2012 (2012.08) update for HADCRUT3: 0.218°C.
The HADCRUT3gl trend since 2001 has dropped to near -0.7°C (-1.26°F) per century.
See http://www.woodfortrees.org/plot/hadcrut3gl/from:1980/to:2012.08/plot/hadcrut3gl/from:1980/to:2001/trend/plot/hadcrut3gl/from:2001/to:2012.08/trend

Reed Coray
March 4, 2012 8:51 pm

Dr. Brown. What happened to Duke when they played North Carolina? My sister is an avid Duke fan and I had to suffer her disposition during the game. Please tell coach K to win all remaining games; it makes my life easier.

Brian Macker
March 4, 2012 8:55 pm

@KR,
“Dr Brown, are you perhaps not familiar with the law of large numbers
… devoid of useful content or numeracy.”
LOL, the law of large numbers only works for a repeatable experiment like rolling dice, and deals with probability. How do you propose on a single day (or year) to get an accurate picture of temperature using this law? With dice each roll is presumed identical. No such presumption is possible in taking the temperature at any, or any collection of locations.
Over what timescale do you expect your temperature measurements to converge to a precise measurement, and a measurement of what? By the time you get any large quantity of measurements the entire system has moved on. Also, any measurements at a single location aren’t truly repeatable.
Why on earth would you expect a chaotic system like the weather to converge on anything?
You are a silly silly man. Luckily you are anonymous so you don’t have to live with the reputation of your foolishness.
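On the law-of-large-numbers point, here is a quick sketch of the distinction being drawn (all numbers invented): averaging repeated draws of the same quantity converges on that quantity, while averaging single readings from many different places converges only on the mean of whatever places happened to be sampled.

```python
import numpy as np

rng = np.random.default_rng(4)

# Repeated draws of the *same* random quantity: the dice mean converges to 3.5.
for n in (10, 1_000, 100_000):
    print("dice mean, n =", n, "->", round(float(rng.integers(1, 7, size=n).mean()), 3))

# Single readings from many *different* locations, each with its own true mean:
# the average converges, but only to the mean of the sampled locations.
station_means = rng.uniform(-10.0, 30.0, size=500)     # invented local temperatures
readings = station_means + rng.normal(0.0, 0.5, size=500)
print("network average:", round(float(readings.mean()), 2), "deg C")
# More stations tighten this number, but it remains an average of *these*
# locations; it says nothing about places that were never sampled.
```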

KR
March 4, 2012 8:58 pm

Frank K.“Please look at Figure 3 and let me know if you think the correlation coefficients computed for the global surface temperature anomalies are particularly good (especially at high latitudes…). Is a correlation coefficient of 0.5 considered “good”? 0.6? 0.7? Does it matter?”
The correlations are quite strong, and I will note that (a) variances from the correlation are both positive and negative (hence evening out variances) and (b) most stations are much less than 1200km from each other. And hence the correlation is considerably higher than 0.5 for the vast majority of stations.
This correlation distance is mainly an issue at the poles, where there are fewer stations. GISS extrapolates polar values from near-polar stations (which seems reasonable, given the polar amplification of any temperature change), while HadCRUT3 extrapolates from the global average anomaly. HadCRUT4, which will be released shortly, looks to have more Siberian and near-polar stations – and shows greater warming as a result of including more relevant data.
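A sketch of the correlation-of-anomalies idea under discussion, with invented numbers: two stations whose absolute temperatures differ by 8 C but which share a regional signal will show strongly correlated anomalies, and the correlation depends only on the ratio of shared signal to local noise.

```python
import numpy as np

rng = np.random.default_rng(5)

months = 360
regional = np.cumsum(rng.normal(0.0, 0.1, size=months))   # slow shared wander

station_a = 14.0 + regional + rng.normal(0.0, 0.3, size=months)  # valley site
station_b = 6.0 + regional + rng.normal(0.0, 0.3, size=months)   # hilltop site

anom_a = station_a - station_a.mean()
anom_b = station_b - station_b.mean()
print(f"correlation of anomalies: r = {np.corrcoef(anom_a, anom_b)[0, 1]:.2f}")
# The 8 C offset between the sites drops out of the anomalies entirely;
# what remains is the shared regional signal plus independent local noise.
```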

George E. Smith
March 4, 2012 9:03 pm

Well I think Nick Stokes said it best; about following a set of rules to get a consistent result.
Such is GISSTemp, or HADcruD.
They are good representations of GISSTemp and HADcruD. It is the Temperature of the earth, that they have nothing to do with.
I would differ with Professor Brown, in that the only “global Temperature” it makes any sense to try and observe is the “surface Temperature”. That would be the ocean water surface Temperature for around 73% of the Total area, and land surface Temperature for the 27% or so that isn’t ocean.
Those are the surfaces, which actually emit, the primary surface LWIR radiation, or directly heat the atmosphere above via conduction or other thermal processes.
And the only way to accurately make such measurements is to comply with the Nyquist Sampling Theorem, which applies to ALL sampled Data Systems. Neither GISSTemp nor HADcruD do that, so neither correctly gathers global Temperature data. But as Nick implies, they give very consistent values for GISSTemp or HADcruD; whatever the purpose of those observations is.
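On the Nyquist point: the classic symptom of undersampling is aliasing, where a fast cycle masquerades as a slow one. A minimal, purely illustrative example (nothing to do with any actual station network): sample a clean 24-hour temperature swing once every 25 hours and a spurious ~25-day oscillation appears in the sampled values.

```python
import numpy as np

# Pure 24-hour cycle, +/-10 degrees, sampled once every 25 hours for 60 days.
sample_times_h = np.arange(0, 24 * 60, 25)
samples = 10.0 * np.sin(2 * np.pi * sample_times_h / 24.0)

# Successive samples advance the phase by only 1/24 of a cycle, so the
# sampled values trace out one full (spurious) oscillation every 24 samples,
# i.e. every 600 hours -- a ~25-day "signal" that is purely an artifact.
print(np.round(samples[:12], 2))
```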

March 4, 2012 9:05 pm

KR says:
“…we are (according to the satellite temps as well) seeing considerable warming, considerable changes from what we have seen previously in terms of temperature. And that means changes – changes to crop productivity, sea temperature, rainfall, Hadley cell precipitation, snowpack for water supplies, on and on.”
Yadda, yadda. As a matter of fact, agricultural productivity is increasing in lock step with increasing CO2. Global warming is entirely beneficial, and causes more precipitation – a good thing, no?
Your “on and on” and “considerable” are emotional, unquantified terms that have no place in scientific discussions. The plain fact is that empirical observations support the [unfalsified] hypothesis that CO2 is both harmless and beneficial.
Nothing unusual is occurring. Nothing! Natural climate variability is the same as always. If and when global temperatures begin to accelerate, wake me. Until then, all you’re doing is pseudo-scientific hand waving.

KR
March 4, 2012 9:05 pm

Frank K. – Regarding changes:
Changes will cost $. Larger changes will cost more $$$. In my region we’ve seen (over the last 30 years) a drop in precipitation, a rise in temperature, a shift in growth zones (viable plant species), and average flowering dates moving ~9-10 days earlier.
You think that comes for free?

Bennett
March 4, 2012 9:17 pm

Curiousgeorge says:
March 4, 2012 at 3:18 pm
Brilliant!
In other news…
“full of [SNIP] up to their eyebrows.”
Oh lord, my mind just quails at the thought of which four letter word is actually being represented by [SNIP].
Dear Moderators, please continue to protect my eyes from the written version of words I think or hear on a daily basis.
You must think you’re saving my soul or something.
Pass.
Bennett Dawson
[REPLY: We didn’t do it for you. This is a family blog. Think of the children! -REP]

KR
March 4, 2012 9:24 pm

Sigh… – OK, ignore what I said about any “consequences”. Opinions will differ there.
Back to the post:
Dr. Brown is conflating absolute temperatures with anomalies (strawman argument), which include baseline offsets as part of the data. Anomalies have extremely strong correlations over considerable distances, making them quite measurable, and reducing uncertainties with more data to the extent that a 2-STD variance of 0.05C is supported by the numbers. RSS and UAH data give much the same trends as the surface records, indicating mutual support for the observed trends.
Dr. Brown has talked quite a lot, but it appears (IMO) nothing but a distraction from the observed trends. His claims of uncertainty are not supported by the data.

Frank K.
March 4, 2012 9:59 pm

KR says:
March 4, 2012 at 8:58 pm
“The correlations are quite strong, and I will note that (a) variances from the correlation are both positive and negative (hence evening out variances) and (b) most stations are much less than 1200km from each other. And hence the correlation is considerably higher than 0.5 for the vast majority of stations.”
Please define “strong”. 0.5 is strong? Just wondering…
Thanks.

Werner Brozek
March 4, 2012 10:03 pm

For certain time scales, the slopes can often be very similar between different data sets, but over the last 15 years, there are often huge differences. And it is not a satellite versus land issue. See:
http://www.woodfortrees.org/plot/hadcrut3gl/from:1997.08/trend/plot/uah/from:1997.08/trend/plot/rss/from:1997.08/trend/plot/gistemp/from:1997.08/trend/plot/hadsst2gl/from:1997.08/trend

u.k.(us)
March 4, 2012 10:15 pm

Whilst we are fighting among ourselves, may I remind those that think an opportunity has presented itself:

Someone embedded it, once.

McComberBoy
March 4, 2012 10:21 pm

Stephen Mosher says: “I tell you that its 20C at my house and 24C 10 miles away. Estimate the temperature at 5 miles away. Now, you can argue that this estimate is not the “average”. You can hollar that you dont want to make this estimate. But, if do decide to make an estimate based only on the data you have what is your best guess? 60C? -100C?. If I guessed 22C and then I looked and found that it was 22C, what would you conclude about my estimation procedure? would you conclude that it worked?
Here is another way to think about it. Step outside. It’s 14C in 2012. We have evidence ( say documentary evidence ) that the LIA was colder than now. What’s that mean? That means that if you had to estimate the temperature in the same spot 300 years ago, you would estimate that it would be…………thats right…… colder than it is now. Now, chance being a crazy thing, it might actually be warmer in that exact spot, but your best estimate, one that minimizes the error, is that it was colder.”
Mosh,
Sometimes you hit the nail on the head, other times you just smash your finger(s). Today it’s fingers. All of them. Eight times in two paragraphs you champion the cause of estimating temperatures based on guesses. Isn’t that Dr. Brown’s complaint about the way that this data is presented to the public? Explain to us all again how we’re going to get even one degree of accuracy from guesses and extrapolations. Your guess about whether the temperature was 22C is still a guess, even if it turned out to be right. It could easily be that there is a 5,000 foot peak at the mid-point between you and your imaginary point ten miles away. Then your guess would still be a guess, but it would also be **** because it would be wrong and a guess at the same time. Stop trying to defend the indefensible.

CodeTech
March 4, 2012 10:23 pm

Amazingly, they’re still at it…
Now the target has moved. Instead of trying to justify the obvious inaccuracy of the mythical “global temperature” they’re defending the accuracy of the equally meaningless “anomaly”.
Well, here’s a tip for free: your “anomaly” is STILL only an anomaly for the fraction of the planet being measured. Very small fraction. Microscopic fraction, actually. And yeah, it’s easy for me to say this, living in a geographical area that has experienced some of the coolest summers in decades that somehow, magically, show up as warming.
Anomaly, my [snip]. You’re dealing with fantasy numbers, and if you don’t know it then you really ARE delusional.

Markus Fitzhenry
March 4, 2012 11:34 pm

[REPLY: We didn’t do it for you. This is a family blog. Think of the children! -REP]
Thanks for keeping that [snip] evil away from my soul [REP].

pwl
March 4, 2012 11:41 pm

“There may be many ways to define a global temperature, but there needn’t be a unique way to make many of them useful. http://www.realclimate.org/index.php/archives/2007/03/does-a-global-temperature-exist/ ” – Daniel Packman, NCAR.
How do the comments on this RealClimate.org article, the paper they reference and the other articles they link to fit into this post?

David Cage
March 4, 2012 11:42 pm

The climate scientists are treated as being experts in their field. They are by no means the most skilled or qualified in it. Averaged across all the skills involved that may be true, but if you take each specific skill involved it is a false assumption. It is clear from the Climategate data that the climate models are just a tiny step above pure amateur work, ignoring half the required information on the pattern and behaviour of the natural-world part of the CO2 cycle.
Without clear-cut proof that man’s CO2 is anything more than a tiny add-on to a system that is self-balancing, they have made a bland and unreasonable assumption based on near total ignorance.
I am told by statisticians from the marketing field that this is equally true in the methods and conclusions of their statistical work. The most serious accusation in this area is that since the results have a strong regional bias there is no justification whatever for taking averages at all.
Similarly in the data acquisition the claim for the results to be scientific when they are heavily relying on a statistical distribution rather than accurate results is unsound.
In a small suburban garden I have five thermometers which read the same, when placed at the same point, to 0.1 of a degree plus or minus 0.1, but when placed around the garden give a five degree or more variation. There is no one point where a single thermometer can give this average reading consistently to 0.5 degree. What is the true value for measurements covering 100,000 sq km?
What trust should we have in peer review when those who pointed this out within the profession had their grants removed and left to join engineering over a decade ago?

RACookPE1978
Editor
March 4, 2012 11:44 pm

Hmmmn. Hansen. Published in 1987, last data date was 1985.
In that paper, he admits that selecting a 1200 km extrapolation range was “arbitrary”….. Got about a 0.5 correlation with his data.
And nobody, in all of the hundreds of billions spent worldwide since 1985 on his precious “climate change mythconceptions” has gotten any newer, and more accurate results other than James Hansen?
Have you looked at Figure 7? Hansen’s own temperature changes ARE latitude dependent: his plotted trends for 0-24 degrees, 24-44, 44-64, and 64-90 are very, very different across all years. Each latitude differs in trend, in peaks, and in degree-of-difference of its delta T. In Figure 8, his own data for Boxes 15 (south west US- Mexico) and Box 9 (Europe) are different (flat-lines almost!) than for the “regions” of the latitude bands they lay in.
But Hansen claims “his” NASA-GISS values DO generate a valid single number for the temperature difference. WUWT?
Quoting his paper on methodology:
“3. SPATIAL AVERAGING: BIAS METHOD
Our principal objective is to estimate the temperature change of large regions. We would like to incorporate the information from all of the relevant available station records. The essence of the method which we use is shown schematically in Figure 5 for two nearby stations, for which we want the best estimate of the temperature change in their mutual locale. We calculate the mean of both records for the period in common, and adjust the entire second record (T2) by the difference (bias) δT. The mean of the resulting temperature records is the estimated temperature change as a function of time. The zero point of the temperature scale is arbitrary.”
Later, he goes on to “match” northern hemisphere stations with southern hemisphere stations – though he admits the southern hemisphere stations are lacking area coverage (over 80 percent of the southern hemisphere is ocean or Antarctic), date coverage (the number of years measured is much, much lower in the south), and spatial coverage (most regions of the south are much less covered than any of the north but Siberia and desert China). Nonetheless, he matches northern and southern stations by latitude and longitude and length of record. Ignoring coastlines, altitude, local climates, and development. WUWT?
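The "bias method" quoted above is simple enough to sketch in a few lines. This is only a two-station toy with invented numbers, following the description in the quote (common-period means, offset the second record, average); the actual GISS code presumably handles missing data, weighting and many stations.

```python
import numpy as np

def combine_bias_method(t1, t2):
    """Combine two overlapping records as described in the quote: offset the
    second by the difference of the common-period means, then average."""
    t1, t2 = np.asarray(t1, float), np.asarray(t2, float)
    delta = t1.mean() - t2.mean()        # the "bias" between the two records
    return (t1 + (t2 + delta)) / 2.0     # combined record; zero point arbitrary

# Invented example: two nearby records 3 degrees apart with identical
# year-to-year changes -- the combination reproduces those changes exactly.
a = np.array([10.0, 10.4, 9.8, 10.9, 11.2])
b = a - 3.0
print(combine_bias_method(a, b))
```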

Kasuha
March 4, 2012 11:46 pm

The thing started when certain people said that the world is warming up, and certain other people said the world is not warming up. For every place that has warmed up you can find another place that has cooled down, so how to decide? Some clever people came along and figured out a method for finding an answer – the global temperature anomaly. Yes, when calculating it you give up on both location and actual temperature, but in the end it gives an answer to the original question.
There is no place on the earth which maintains the average temperature anomaly. So you may say that the number is disconnected from reality. But it is not; the only thing that can be said about it is that this connection is unbalanced – quite tight in one direction and quite vague in the other direction. It is just a certain number obtained by processing real world measurements which gives us answers to a certain kind of question.
I believe it is quite settled nowadays that the world has on average warmed up somewhat in the past 50-something years. That is what the anomaly value tells us, and there are few other ways to settle disputes on that.
The problem is that people don’t understand the value and either underestimate or overestimate it.
For example, it is wrong to put a straight line through anomaly values and assume these values will continue to follow that straight line in the future.
It is wrong to say that if the anomaly value went up 0.1°C then the whole world has warmed up 0.1°C … or that half of the world warmed up 0.2°C and half has cooled down 0.1°C, etc. That number does not tell us anything of the sort.
But it is also wrong to say the anomaly value is complete nonsense.

Brian H
March 4, 2012 11:53 pm

McComberBoy says:
March 4, 2012 at 10:21 pm

It could easily be that there is a 5,000 foot peak at the mid-point and your imaginary point at ten miles away. Then your guess would still be a guess, but it would also be **** because it would be wrong and a guess at the same time. Stop trying to defend the indefensible.

Heh. Happens that the GISS temp for a high plateau lake in the Peruvian Andes is the midpoint average between a coastal station and another in the Amazon. (There is a station available on site, but the average is “good enough for government work”.)
:p

Brian H
March 5, 2012 12:04 am

David Cage says:
March 4, 2012 at 11:42 pm
The climate scientists are treated as being experts in their field. They are by no means the most skilled or qualified in it. On average across all the skills involved it may be true, but if you take each specific skill involved it is a false assumption.

they have made a bland and unreasonable assumption based on near total ignorance.
I am told by statisticians from the marketing field that this is equally true in the methods and conclusions of their statistical work.

Applies to virtually every contributing science from physics to hydrology to math. The Aus. skeptic Dr. Bob Carter (?) in a talk said there are about 100 specialties needed to master climate science, and no one person can be on top of more than one or two.
My sub-moniker for the Hokey Team is “Jackasses of All Sciences, Masters of None”.

Markus Fitzhenry
March 5, 2012 12:07 am

KR says: March 4, 2012 at 9:24 pm
“Anomalies have extremely strong correlations over considerable distances, making them quite measurable, and reducing uncertainties with more data to the extent that a 2-STD variance of 0.05C is supported by the numbers. Dr. Brown has talked quite a lot, but it appears (IMO) nothing but a distraction from the observed trends. His claims of uncertainty are not supported by the data.”
I’ll tell you what the data supports: nothing. It is a blindingly stupid situation where the debate is now not only about the uncertainty as to the likelihood of a wrong hypothesis being used in determining a ‘global’ mean, and about incorrectly placed and monitored stations, but also about whether the anomaly is 0.0C or 0.5C or 1.0C.
Who cares! My flesh and blood and the natural world can tolerate from -10C to 50C. Actually, those who live in cooler climates should be up in arms about the attempt to slow down their heating up since the Little Ice Age.
Just how balanced is this debate? Do we really still believe there is a catastrophe around the corner? Only somebody who is deluded wouldn’t think the catastrophic part of the debate is over. Next we will do the models. Then we’ll do the insolation bazingas, then we’ll do CO2 radiativity, then we’ll correct with unbiased data the screwed up sets of coupled non-Markovian Navier-Stokes equations, with the right drivers and feedbacks. Then we’ll kiss this debate goodbye.
From my perspective it seems to be madness to think there are not benefits from a warming world. Opportunities to be plundered and good times to be had. Dr Brown is eminently correct. The physics principles of AGW are screwy.
“All of the sites would almost certainly yield different results, results that depend on things like the albedo and emissivity of the point on the surface, the heat capacity and thermal conductivity of the surface matter, the latitude. Is it the “blackbody” temperature of the surface (the inferred temperature of the surface determined by measuring the outgoing full spectrum of radiated light)?”
Would you bet that the uncertainties are within a 2-STD variance of 0.05C? I’ll put up a large one, and say your numbers do not support anything like that certainty.

gallopingcamel
March 5, 2012 12:18 am

Reed Coray,
I became a supporter of the “Tarheels” in 1982. Eight years later I started working at the Duke university “Free Electron Laser Laboratory”. For the next twelve years my colleagues used to put light blue crying towels on my desk every time Duke beat UNC at basketball.
From 1990 to 2002 when I retired the Dukies had the better of it. The 1991-1992 “Back to Back” years were particularly difficult.
Then the improbable Duke win in Chapel Hill on February 8 when UNC had a ten point lead with only two minutes remaining. The crying towels were saturated.
What a change a few weeks can bring. UNC finally discovering how to defend the 3-ball effectively on the way to dominating Duke in the Cameron Indoor Stadium. At last the crying towels are dark blue!
I may be off topic here, so let me point out that Richard Lindzen’s address to the UK House of Parliament had an excellent slide that pointed out the foolishness of hyperventilating about a few tenths of a degree variation in the global average temperature, given that temperatures vary by huge amounts in most locations from day to night and from season to season:
http://diggingintheclay.wordpress.com/2012/02/27/lindzen-at-the-house-of-commons/
Take a look at slide #15.

Gary Mount
March 5, 2012 12:48 am

There may be a solution to calculating an Earth average temperature.
But first let me use a simple example that may demonstrate the folly of calculating an average temperature.
Let’s take two 1 meter cubes, where one cube is a solid mass of iron and the other cube is a volume of gas, let’s say of oxygen. On second thought, let both volumes be a solid metal, but of different metals from each other. The mass of each cube should be different if the metals are each a single element, and not a mixture that may make the two cubes equal in mass. If the temperatures of the two cubes are identical, the average temperature is easy to calculate. What about when the temperatures are different? Let’s make it as simple as possible and set the temperature of each object to be the same throughout its cube (homogeneous temperature), keeping in mind the criterion that the two cubes have different temperatures from each other.
Do you add the two temperatures together then divide by two? What would be the purpose of such an answer? The answers would be the same whether the cubes were gold, lead or aluminum, or if the cubes were of identical material, or one was a volume of gas of any mixture.
What is the average temperature of a simple object such as the human body? We often take the temperature of the “core”, and we usually use only specific places where we stick the thermometer. The average temperature requires measuring temperatures throughout the extremities, the exterior surfaces, and interiors of these extremities, as well as the core temperature. And then what do you do with these measurements?
In real world applications the mass is an important quantity when dealing with temperatures, unless, like in the human body temperature example, you only need the measurement of one location, to check to see if it is anomalous. Does the patient have a fever?
Does the earth have a fever? Many locations show a nearly constant climate temperature, by which I mean no change in trends of long term temperatures at weather stations where the measurements have been made over many decades. Of course other stations have changes, and apparently in a recent dataset, 30 percent of stations show a cooling trend. Keep in mind we are talking about the current way of calculating average temperatures.
Getting to the solution: Measuring temperature allows one to calculate how much energy is contained in a mass. You have to calculate the mass as well to carry out the energy calculations. The atmosphere is problematic because the mass of a volume of air is different depending on how much moisture it contains amongst other difficulties. Cold air is also denser than hot air, which one quickly learns when taking flying lessons such as I have done. Your take off distances are much shorter at sea level in cold weather than say Denver airport on a hot summer day. At the Denver airport you may have to wait for the temperature to drop, or offload some baggage or an excess passenger or two.
So let’s say that the solution is to calculate how much energy the surface of the earth contains, and track this quantity over a period of time, to see what we get, to see if this information is useful and see if it correlates with whatever (CO2, the sun, etc.).
What about potential energy? Part of the energy calculations should also contain that part of energy that may be transformed into potential energy, such as when a mass of water is transported high into the sky, or what about ice?
Calculating total global surface energy requires quantifying the energy contained in the ocean, lakes, earth solid surface, perhaps even objects on the surface such as trees, as well as the atmosphere. I leave this work as an exercise for the reader 😉 .
So my conclusion is, the way we are doing it now isn’t all that bad, but only when long times are considered, because of the noisy and highly inaccurate measurements that are currently being made. Once again I want to emphasize, long time frames, to account for the PDO and the other variances over decadal and other time frames. The Urban Heat Island Effect also might be tackled some time in the future when real climate scientists, and their enabling politicians want the true answer (or skeptical institutes get more funding), by doing research on this topic, which apparently no one has bothered to do yet. Thermal and fluid dynamics computer models come to mind as an aspect to explore the UHI effect. I understand that about 200 billion dollars world wide has been spent on Climate Change research, poorly done in my opinion. In fact, I think Climate Change science as has been practiced so far is one of the shoddiest sciences I have ever seen. It is like no other science that I have ever come across. Mind you, there are some really good Climate Scientists doing quality work, and I think you can guess who I mean.
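The two-cube question near the top of the comment above can be put in rough numbers. The sketch below uses standard handbook densities and specific heats for iron and aluminium and two arbitrary temperatures; it shows the simple average of the two temperatures differing from the heat-capacity-weighted temperature (the temperature the pair would settle at if brought into contact with no losses).

```python
# 1 m^3 cubes, each at a uniform temperature (handbook property values).
rho_iron, c_iron = 7870.0, 450.0      # kg/m^3, J/(kg K)
rho_alu,  c_alu  = 2700.0, 900.0

T_iron, T_alu = 20.0, 60.0            # deg C, arbitrary example temperatures

naive_average = (T_iron + T_alu) / 2.0

C_iron = rho_iron * c_iron            # heat capacity of the iron cube, J/K
C_alu  = rho_alu * c_alu              # heat capacity of the aluminium cube, J/K
weighted = (C_iron * T_iron + C_alu * T_alu) / (C_iron + C_alu)

print("simple average of the two temperatures:", naive_average, "C")      # 40.0
print("heat-capacity-weighted temperature:    ", round(weighted, 1), "C")  # ~36.3
```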

March 5, 2012 1:12 am

Reply Nick Stokes: The CRU statement on errors is

How accurate are the hemispheric and global averages?
Annual values are approximately accurate to +/- 0.05°C (two standard errors) for the period since 1951. They are about four times as uncertain during the 1850s, with the accuracy improving gradually between 1860 and 1950 except for temporary deteriorations during data-sparse, wartime intervals. Estimating accuracy is far from a trivial task as the individual grid-boxes are not independent of each other and the accuracy of each grid-box time series varies through time (although the variance adjustment has reduced this influence to a large extent). The issue is discussed extensively by Folland et al. (2001a,b) and Jones et al. (1997). Both Folland et al. (2001a,b) references extend discussion to the estimate of accuracy of trends in the global and hemispheric series, including the additional uncertainties related to homogeneity corrections.

John Marshall
March 5, 2012 2:28 am

Excellent post.
The Hadley Centre claim that their temperature record is the world’s longest. This is a questionable claim. Early thermometers were of questionable quality and accuracy. Temperatures were not routinely taken hourly but when the owners thought about it. Days could go by between readings in some cases. It was not until the Stevenson Screen that some sort of standardisation happened. There are still siting problems with stations.
So we have a collection of data observed in a haphazard fashion from poorly spaced stations over 30% of the planet to produce the ‘average’ temperature. Satellites at least cover the whole planet and produce a better result. Better but not necessarily the correct one.

Agnostic
March 5, 2012 2:33 am

I have followed this debate for some time and the objections that Dr Brown points out are not new to me. Nor is the strong argument made by Nick Stokes that in fact looking at temperature anomalies is a satisfactory way around instrument and even siting bias. You are not actually recording exact temperature, for all the good reasons Dr Brown makes plain; you are recording the daily change in temperature – that the next day is colder or warmer than the previous and by how much. I have no problem with that providing a meaningfully accurate indication of trend, which is the important thing.
What I do have a problem with is how meaningful an average daily temperature really is and by extension the usefulness of measuring climate change by a change in average global temperature. Are the days getting hotter? Or are the days staying the same but the nights are getting hotter? Or are the days getting much hotter and the nights slightly cooler?
Or what about the mean temperature? Is it getting hotter earlier in the day and cooling later in the evening but the maximum temperature is unaffected?
…And many other questions relating to diurnal changes and mean temperature. And even then, what real significance does this have on physical climate? Does it make it rain more or less, become windier or less windy?
Some of these questions were addressed by Dr Lindzen in his presentation to UK parliament, and in Dr Christy’s work, and the result of which simply does not support being ‘alarmed’ enough to warrant the enormous and rapid changes and rushed investment being proposed or being carried out.
I also agree with Dr Brown that the most useful measurement of global temperature, in so far as that has any meaning for climate, is via the satellite record.
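The question above about what a "daily mean" actually captures can be made concrete. The two invented diurnal cycles below share the same maximum and minimum, hence the same (Tmax+Tmin)/2 "mean", yet their true time-averaged temperatures differ by roughly 2 C, which is exactly the kind of shape information a min/max record throws away.

```python
import numpy as np

hours = np.linspace(0.0, 24.0, 241)

# Two invented diurnal cycles with (nearly) identical Tmax and Tmin.
broad_day  = 15.0 + 5.0 * np.sin(2 * np.pi * (hours - 9.0) / 24.0)       # gentle swing
spiked_day = 10.0 + 10.0 * np.exp(-0.5 * ((hours - 15.0) / 3.0) ** 2)    # brief hot afternoon

for name, temps in (("broad", broad_day), ("spiked", spiked_day)):
    midrange = (temps.max() + temps.min()) / 2.0
    time_avg = temps.mean()   # average over the uniform hourly grid
    print(f"{name:6s}: (Tmax+Tmin)/2 = {midrange:5.2f} C, time-average = {time_avg:5.2f} C")
```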

Mindert Eiting
March 5, 2012 2:55 am

Suppose there was a perfect temporal agreement between ‘true’ thermometer readings. Apart from a local constant and measurement error all thermometers would show the same time series. In that case it would not matter where the thermometers reside. Neither would it matter at what places we drop thermometers and where we introduce new ones. However, the earth is heterogeneous in this respect: whereas at some places temperatures increase, at others they decrease. Because of this phenomenon the surface record can be manipulated. For the manipulation we may use the fact that for a single station the time series must show a cyclical pattern. The temperatures cannot increase or decrease indefinitely. If you want a warming world, drop those stations from the record that showed increasing temperatures for a number of recent years. This way of dropping stations is not random with respect to temporal pattern, to be distinguished from non-random drop-out with respect to location, which need not be a problem.
Is there a drop-out problem in the surface record? According to the GHCN base we had, world-wide, 9644 stations on duty at the end of 1969. In the epoch 1970-99 we observe that 9434 dropped out. Included were 2918 new stations. Therefore, in thirty years the surface record almost completely changed. It should be shown that this change was random with respect to pattern. As far as I know this was never shown, and I think it cannot be shown, because my own research indicates that the correlations between station time series increased in the course of time. Non-random change with respect to pattern may explain the divergence of the surface and satellite record.
Compared with the bias problem, the error problem is simple. Split the set of stations in a certain region into two subsets using a random number generator. Compute over time the correlation (r) between the subset means. Compute also over time the variance (v) of the complete set. The error variance (e) can be found as e = v(1-r)/(1+r).
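The split-half estimator at the end of the comment above checks out on a simple synthetic test (stations = shared signal + independent noise), reading v as the variance over time of the full-network mean. All numbers below are invented and the test is only a sanity check of the formula, not of any real station network.

```python
import numpy as np

rng = np.random.default_rng(6)

n_stations, n_months, noise_sd = 100, 600, 0.8
signal = np.cumsum(rng.normal(0.0, 0.05, size=n_months))            # shared regional signal
stations = signal + rng.normal(0.0, noise_sd, size=(n_stations, n_months))

idx = rng.permutation(n_stations)
half_a = stations[idx[: n_stations // 2]].mean(axis=0)
half_b = stations[idx[n_stations // 2:]].mean(axis=0)
full_mean = stations.mean(axis=0)

r = np.corrcoef(half_a, half_b)[0, 1]        # correlation of the two subset means
v = full_mean.var(ddof=1)                    # variance over time of the full mean
e = v * (1 - r) / (1 + r)                    # the estimator quoted above

print("estimated error variance of the network mean:", round(float(e), 5))
print("true value, noise_sd**2 / n_stations:        ", noise_sd**2 / n_stations)
```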

CodeTech
March 5, 2012 3:17 am

Kasuha says:
March 4, 2012 at 11:46 pm

I believe it is quite settled nowadays that world has on average warmed up somewhat in the past 50 something years. That’s what the anomaly value tells us and there are few other ways how to settle disputes on that.

Not surprisingly, I strongly disagree: it’s not settled. My parents, who are in their mid 70s, and their peer group, some of whom are in their 90s, say there is NO difference in weather since their youth. No difference in local climate, which plants grow here, etc. I think what is settled is that people with an agenda have fiddled with past records… because I distinctly remember in the 1970s hearing about how the past was all so much warmer, and it was getting coooooolder.
I’ll take anecdotal evidence over pretty much anything instrumental before, say, 1979… or more reasonably, since the late 90s when people started putting the Internet Microscope onto the climate alarmists.
Because, you see, I don’t trust them, they have never given me a reason to trust them, and most of their actions tell me that I should not trust them. So I don’t.

LazyTeenager
March 5, 2012 3:47 am

I got fed up reading all this verbiage. It’s a big, fat strawman argument.
State the obvious 50 million times and make a bogus claim that the other guy doesn’t understand the obvious. And then hope LT can’t see through the confusing verbal fog. Fat chance.
You may as well claim that you can’t measure the temperature in your house, oh he actually did do that, so when you turn on the heater you can’t possibly know for sure that it’s warmer.
So you guys can try the experiment, turn on your heater and measure the temperature in each room of the house. If you claim it’s getting warmer, this “physicist” is going to accuse you of being full of shit.

Gras Albert
March 5, 2012 3:52 am

KR (and Nick Stokes, as usual) are being disingenuous at best, economical with the truth at worst

If you don’t trust the adjustments, then calculate the temperatures without them. You will find that you see essentially the same results – that we’re seeing warming at around 0.15-0.16ºC/decade right now.

HadCruT3 since 1997, no warming, no correlation with CO2

To be quite frank, I do not care what the absolute temperature accuracy is – but I care quite a bit about how the temperature is changing (which is the anomaly from the baseline). To which point you have made no significant arguments.

RSS pre & post 1998 El Nino, same trend?
Climate Science & Trends

Stephen Richards
March 5, 2012 4:16 am

Robert
Many years ago I made this very same argument at RealClimate. I got a very similar, but more rude, reply from them as you have from Stokes and Mosher. People from non-engineering and/or non-physics ‘-ologies’ have not had the necessary tutelage to understand their lack of knowledge in those fields. You always get the “strawman” argument back because they can’t understand measuring techniques and their limitations.

Markus Fitzhenry
March 5, 2012 4:31 am

Gras Albert says:
March 5, 2012 at 3:52 am
KR (and Nick Stokes, as usual) are being disingenuous at best, economical with the truth at worst
”If you don’t trust the adjustments, then calculate the temperatures without them. You will find that you see essentially the same results – that we’re seeing warming at around 0.15-0.16ºC/decade right now.”
————————————————-
Without knowing the correct drivers, no matter what they are measuring, can a prediction be made from it? Svensmark thinks he knows a driver and has correlated cosmic rays with temperature.
Look at this:
http://calderup.files.wordpress.com/2012/03/101.jpg
Cosmic ray intensity is in red and upside down, so that 1991 was a minimum, not a maximum. Fewer cosmic rays mean a warmer world, and the cosmic rays vary with the solar cycle. The blue curve shows the global mean temperature of the mid-troposphere as measured with balloons and collated by the UK Met Office (HadAT2).
In the upper panel the temperatures roughly follow the solar cycle. The match is much better when well-known effects of other natural disturbances (El Niño, North Atlantic Oscillation, big volcanoes) are removed, together with an upward trend of 0.14 deg. C per decade. The trend may be partly due to man-made greenhouse gases, but the magnitude of their contribution is debatable.
From 2000 to 2011 mid-tropospheric temperatures have remained pretty level, like those of the surface, despite the continuing increase in the gases – in “flat” contradiction to the warming predicted by the Intergovernmental Panel on Climate Change. Meanwhile the Sun is lazy, cosmic ray counts are high and the oceans are cooling.
‘Reference: Svensmark, H. and Friis-Christensen, E., “Reply to Lockwood and Fröhlich – The persistent role of the Sun in climate forcing”, Danish National Space Center Scientific Report 3/2007.’
Knowing that during the 20th century solar activity increased in magnitude to a so-called grand maximum, and that it is probable this high level of solar activity is at or near its end, would you be gutsy enough to predict warming up to 2030?

Vince Causey
March 5, 2012 5:14 am

I’m with Dr Brown on this. He has made the argument that demolishes GISS with absolute clarity and faultless logic. Indeed, the people at GISS are exposed as fools, in that they actually believe the nonsense they spout. The people here defending GISS sound like politicians caught with their hands in the cookie jar. Guys, your responses sound ridiculous – you are just turning people off.
Excellent work Dr Brown. This is the sort of thing that makes WUWT the best climate science blog in the world.

markx
March 5, 2012 5:41 am

Re “Global temperature”
The simple fact that the ‘measurers’ feel the need to adjust for past errors, add more stations, eliminate suspect stations, add higher resolution satellite instruments, add (more and more) Argo diving buoys, etc, etc, etc, to the picture, says everything about their own faith in the current state of accuracy.

beng
March 5, 2012 5:54 am

****
Robert Brown says:
March 4, 2012 at 2:34 pm
A more correct statement is that outbound heat loss is the result of an integral over a rather complicated spectrum containing lots of structure. In very, very approximate terms one can identify certain bands where the radiation is predominantly “directly” from the “surface” of the Earth — whatever that surface might be at that location — and has a spectrum understandable in terms of thermal radiation at an associated surface temperature. In other bands one can observe what appears very crudely to be thermal radiation coming “directly” from gases at or near the top of the troposphere, at a temperature that is reasonably associated with temperatures there. In other bands radiation is nonlinearly blocked in ways that cannot be associated with a thermal temperature at all.
****
Exactly. I don’t see why people have trouble w/this. One can obviously “see” the GH effect by just looking at the outbound IR spectrum from above the earth.
A picture (w/understanding) is truly worth a thousand words in this case.

David
March 5, 2012 6:17 am

Steven Mosher says:
March 4, 2012 at 1:41 pm
As nick notes this is a strawman argument….
===================================================
I think you and Nick miss the point, and you both are in fact the ones making “strawman” arguments. Dr Brown made your point within the body of the article.
“That is why I think that we have precisely 33 years of reasonably reliable global temperature data, not in terms of accuracy (which is unknown and perhaps unknowable) but in terms of statistical precision and as the result of a reasonably uniform sampling of the actual globe. The UAH is what it is, is fairly precisely known, and is at least expected to be monotonically related to a “true average surface Global Temperature”. It is therefore good for determining actual trends in global temperature, not so good for making pronouncements about whether or not the temperature now is or is not the warmest that it has been in the Holocene.”
Yet he did also point out some obvious difficulties even with average anomalies…”We cannot compare even “anomalies” across such records — they simply don’t compare because of confounding variables, as the “Hide the Decline” and “Bristlecone Pine” problems clearly reveal in the hockey stick controversy. One cannot remove the effects of these confounding variables in any defensible way because one does not know what they are because things (e.g. annual rainfall and the details of local temperature and many other things) are not the same today as they were 100 years ago, and we lack the actual data needed to correct the proxies.”
So Mr Mosher, neither you nor Nick addressed the climate reconstructions from the past and the wild “unprecedented, hottest in a thousand years, we are all going to die” CAGW claims, which are refuted here. One could clearly add that by constant adjustments of areas, dropping some thermometers and adding a few new ones here and there, as the article says here…”I don’t even know how to compute an average surface temperature for the 1/2 acre plot of land my own house sits on, today, right now, from any single thermometer sampling any single location. It is 50F, 52 F, 58 F, 55F, 61 F, depending on just where my thermometer is located,” and via lowering the past and raising the present ( http://www.real-science.com/corruption-temperature-record ), especially in areas of sparse readings where those few readings adjust a large land mass ( http://www.real-science.com/new-giss-data-set-heating-arctic ), as well as by likely underestimating the UHI adjustments (read McKitrick’s paper), you could easily get a database that shows 0.1 to 0.25 degrees C more warming than there actually is.
So the real strawman here was your argument and Nick’s, as neither of you addressed what was actually written.

wsbriggs
March 5, 2012 7:18 am

Agnostic says:
March 5, 2012 at 2:33 am
Without a measure of relative humidity, one would have to believe that 30 C in Houston with 80% RH is the same as 30 C in Phoenix with 15% RH, so your discussion of temps is dead on. Since the peak and trough of the temps are recorded without reference to time, or the time gets lost later, the whole process is bogus. When a weak cold front with following dry air pulls into Houston, the total heat drops rapidly, even when the temp barely changes.
Anthony’s little USB logger, mounted on bicycles and using GPS, would be a great method of profiling temperatures and humidity across urban areas. If the readings were gathered into a database it would provide an interesting picture of the local weather. As a motorcyclist in Phoenix, I know first hand the effect of riding into a citrus orchard on a pleasant night… jackets are called for.

Steve McIntyre
March 5, 2012 7:28 am

Your article is a reminder of the originality and importance of using satellite measurements of microwave emission from atmospheric oxygen to determine tropospheric temperatures – a development for which Christy and Spencer should receive the highest honors from the climate scientific community.
However, neither is an AGU fellow, though AGU honors as fellows many climate scientists of modest credentials. http://www.agu.org/about/honors/fellows/alphaall.php#C

Blade
March 5, 2012 7:40 am

Steven Mosher [March 4, 2012 at 1:41 pm] says:
“I tell you that its 20C at my house and 24C 10 miles away. Estimate the temperature at 5 miles away. Now, you can argue that this estimate is not the “average”. You can hollar that you dont want to make this estimate.”

Why do that? Why estimate anything, ever? There is nothing wrong with having no data for a location, that would be the purely scientific thing to do. Backfilling empty records should be considered a high crime, and I could construct a hundred scenarios why you personally would not want this done to you in some way. For the life of me I cannot understand why all these people commenting on these blogs who are trying to portray themselves as intelligent scientists fail to question this. There appears to be no place these days that honors the critical importance of the evidentiary trail, the chain of custody or whatever we want to call it. I see a clear lack of scientific purity and integrity at work.

Steven Mosher [March 4, 2012 at 6:49 pm] says:
“First very few people in this debate understand how variable UHI is. Typically, they draw from very small samples to come up with the notion that UHI us huge: huge in all cases, huge at all times. It’s not. Here are some facts:
1. UHI varies across the urban landscape. That means in some places you find NEGATIVE UHI and in other places you find no UHI and in other places you find mild UHI and in other places you find large UHI. You really have to understand the last 100 meters.
2. UHI varies by Latitude; Its higher in the NH and lower in the SH.
3. UHI varies by season. Its present in some seasons and absent in others depending on the area.
4. UHI varies according to the weather. With winds blowing over 7m/sec it vanishes. in some places a 2m/sec breeze is all it takes.
So you can find UHI in BEST data, the tough thing is finding pervasive UHI. Several things work against this. most importantly the fraction of sites that re in large cities is very small.
Basically, you are looking for a small signal (UHI is less than .1C decade ) and many things can confound that.
There are no adjustments for UHI.
Your anecdote is interesting, but the problem is that studying many sites over long periods of time does not yield a similiar result.”

Anecdotes are more than interesting, they are evidence. When you say: “Basically, you are looking for a small signal (UHI is less than .1C decade ) and many things can confound that”, you are doing what you always do, saying that an actual real-life effect gets lost in the sauce, the sandpapered homogenized averages. All this tells me is that the primary data collection process is completely corrupt. Compounding this, they’re taking this questionable data and then inputting it into very questionable models. This is only a slight variation on the game of telephone most of us experienced as children, but for some of us the lesson appears to be completely lost.
Anyway, who said anything about adjusting (i.e., altering actual data) for UHI? That would be doing exactly the same thing that Hansen is doing, which is corrupting the primary raw data record. No-one should be suggesting that. It sounds like a perfect strawman to beat up. What should be learned from the Surface Stations Project is that there is a real problem with the location of stations, and much more thought needs to be put into the locations of next-generation equipment. Moreover, UHI does need to be acknowledged, without this ‘barely detectable’ nonsense. Then it needs to be annotated clearly on output graphs as an ‘NB’ where applicable. It’s true that the overall UHI effect may never be properly ascertained, thanks mostly to the averaging of averages of averages, but at some point in the far-off future there will be measuring stations everywhere, and the people of that time, looking at a realtime plot of temps with immense UHI variation at 100-foot resolution, will laugh at the statistical contortions occurring today. Hopefully by that time the concept of shoveling all the data into a blender and hitting ‘puree’ will have died a well-deserved death.
Regardless of Steve’s above laundry list of reasons to disregard UHI, let’s just remember that there actually is a UHI effect: warmth radiating from heat sinks made of concrete, asphalt, metal, etc. All these materials have differing rates and magnitudes of absorption and radiation, so it is a potpourri of signals. But one thing is certain: when all those materials occur in the same place the effect is strong, and Steve’s list becomes completely irrelevant. You don’t need IR night-vision gear to know this, although I suspect we may need to issue it to the warmies and lukewarmies sometime soon. When I am standing at 2am on the concrete jungle that is the Las Vegas strip, there is an enormous difference compared to when I am a mile or three away in the desert. This is a human-detectable variation, which means at least 5° to 10° F or much more. It is similar, but obviously not quite as extreme, in Los Angeles and NYC versus their suburbs. It matters not what latitude or hemisphere, and forget the ‘negative’ UHI idea; just build a Las Vegas or Dubai and bring your thermometer. Concrete, metal, asphalt, etc., versus sand, dirt, trees. Sheesh, there is no need to complicate this!
IMHO, when lukewarmers or warmie trolls or BEST try to play down station placement or UHI and say that ‘it’s okay since it is lost in the averages’, what I hear them say is: ‘empirical evidence be damned, we know what we’re doing, trust us, we’re here to help.’ Many of us have seen this before; it’s called ‘close enough for government work’.

David
March 5, 2012 7:58 am

Robert Brown says:
March 4, 2012 at 4:25 pm
Damn, I’m going to just give up. Mousepad disease. Looks like I’ve completely lost two hourlong posts.
Grrr.
==========================
Yep, got to remember to copy and save any long posts at several points. It must be frustrating; it is expected that someone like William Connolley would write comments that ignore entire sections of your article which directly address his criticism, but to have Nick Stokes, and especially Steven Mosher, do the same is worse.

Theo Goodwin
March 5, 2012 8:03 am

Robert Brown says:
March 4, 2012 at 2:34 pm
The “right” thing to do is just make the measurements and do the integrals and then try to understand the results as they stand alone, not to try to make a theoretical pronouncement of some particular variation based on an idealized notion of blackbody temperature.
What an absolute thrill it is to have a genuine scientist point out in luxurious detail the scandalous degree to which AGW fails to address the details, whether “a priori” or empirical, fails to connect with observable fact, and fails to break free from its “a priori” assumptions. There can be no doubt that climate science is in its infancy and that AGW proponents go to great extremes to hide that fact.

RockyRoad
March 5, 2012 8:04 am

Blade says:
March 5, 2012 at 7:40 am

Steven Mosher [March 4, 2012 at 1:41 pm] says:
“I tell you that its 20C at my house and 24C 10 miles away. Estimate the temperature at 5 miles away. Now, you can argue that this estimate is not the “average”. You can hollar that you dont want to make this estimate.”
Why do that? Why estimate anything, ever? There is nothing wrong with having no data for a location, that would be the purely scientific thing to do. Backfilling empty records should be considered a high crime, and I could construct a hundred scenarios why you personally would not want this done to you in some way.

You are absolutely correct, Blade. This should be shouted from the rooftops. Were it so, the whole charade of AGW would collapse that much sooner.

beng
March 5, 2012 8:14 am

****
DirkH says:
March 4, 2012 at 1:18 pm
Nick Stokes says:
March 4, 2012 at 12:43 pm
“Of course, anomalies are widely calculated and published. And they prove to have a lot of consistency. That is because they are the result of “following some fixed and consistent rule that goes from a set of data to a result”. And the temporal variation of that process is much more meaningful than the notion of a global average temperature.”
But the rules of that process change all the time. So the rules are not fixed. For instance, every time they kill a thermometer, they change the rules.

****
Thanks, Dirk, you responded to this ridiculous statement early on. Take a snapshot of the surroundings of most any “official” site (mostly airports) & see how much urbanization has occurred over even a few yrs, let alone decades. One can argue that such changes are small area-wise, but guess where most of the sites are located — right in the middle of where the changes are! Stokes & Mosher shoot themselves in the foot repeatedly.

Somebody
March 5, 2012 9:05 am

The law of large numbers was mentioned. I’ve also seen the central limit theorem invoked with the same line of reasoning in connection with the AGW pseudo-science.
Before invoking those, please look carefully at when they apply, how, and what the proof is.
For example, for both of the above, independence is an important assumption. It usually appears in the proof. Now, let’s check that spatially and temporally. I say that there is a correlation, both spatial and temporal, or else one could not define temperature even locally (there would be no quasi-equilibrium in which to define it), which would be a bad thing. The heat equation couldn’t work, to mention one consequence. Another assumption is that the random variables have the same distribution. Do they really? Let’s check that for a point at the equator versus a point at the North Pole during polar night. We notice that they have different means. Really. It’s hot at the equator and cold at the North Pole. Check the real data. The expected value is not the same at different points on Earth. If you compare different points on Earth you might also learn that they have different variances. So no, the variables do not have the same distribution. So the proofs of the law of large numbers and the central limit theorem are not valid for the temperatures on the Earth. So, unless the AGW pseudo-scientists provide a proof that does not rely on independence and/or identical distributions (in the sense of equal means and/or variances and so on), the law of large numbers/central limit theorem does not apply to temperatures (unless you apply them, for example, to measuring the same physical temperature many times, which is another, very different thing).
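A minimal simulation sketch of the independence point, in Python (the station count and the 0.7/0.3 split are made-up illustrative numbers, not anything from a real data set): when a component of the error is shared across stations, the variance of the mean does not fall off as 1/N the way the textbook i.i.d. result would suggest.

import numpy as np

rng = np.random.default_rng(0)
n_stations, n_trials = 100, 5000

# i.i.d. case: variance of the mean of N stations falls as 1/N
iid = rng.normal(0.0, 1.0, size=(n_trials, n_stations))
print(iid.mean(axis=1).var())          # ~0.01, i.e. 1/100

# Correlated case: a component shared by all stations never averages out
shared = rng.normal(0.0, 1.0, size=(n_trials, 1))
local = rng.normal(0.0, 1.0, size=(n_trials, n_stations))
corr = 0.7 * shared + 0.3 * local      # every station sees the same 'shared' term
print(corr.mean(axis=1).var())         # ~0.49, nowhere near 1/N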

Dolphinhead
March 5, 2012 9:16 am

I have long been of the opinion that the concept of a global average temperature is mostly meaningless. There seems to be strong support for that opinion expressed on this blog. How ludicrous that we continue to spend billions on climate models that attempt to predict this meaningless metric. The world is truly [snipped]

Nick Stokes
March 5, 2012 9:25 am

Clive Best says: March 5, 2012 at 1:12 am
“Reply Nick Stokes: The CRU statement on errors is…”

Yes, but errors in what? Averages of what? Read carefully, and you’ll find it is average anomaly.
Again, if you think someone is claiming to know the average temperature to within 0.05C, ask (as I will ask): what is the figure? What is that average temperature? You won’t find such a claimed number at CRU.
The difference is not semantic. This post has described all sorts of well-known difficulties of measuring an average temperature. That’s why no-one does it. Calculating anomalies, and averaging that, overcomes most of the problems.

Reply to  Nick Stokes
March 5, 2012 11:20 am

Stokes:
“Yes, but errors in what? Averages of what? Read carefully, and you’ll find it is average anomaly.”
Yes – I know they are quoting 0.05 degrees C as the error on the annual “average anomaly”. Let’s look for a moment at exactly how all these anomalies are calculated:
– The anomaly at one single station is the measured temperature per month minus the monthly “normals” for that station. (The normals are calculated by averaging measured temperatures for the station from 1960-1999.)
– Next, all “anomalies” are binned into a 5×5 degree grid for each month. All station anomalies within the same grid point are averaged together. The assumption here is that within one grid point (~300,000 km^2) the anomaly is constant. Note also the systematic problems: 1) many grid points are actually empty – especially for early years; 2) the distribution of stations with latitude is highly asymmetric, with over 80 percent of all stations outside the tropics.
– The monthly grid time series is then converted to an annual series by averaging the grid points over each 12-month period. The result is a grid series (36, 72, 160), i.e. 160 years of data.
– Finally, the yearly global temperature anomalies are calculated by taking an area-weighted average of all the populated grid points in each year. The formula for this is $Weight = cos($Lat * PI/180), where $Lat is the value in degrees of the middle of each grid point. All empty grid points are excluded from this average.
The systematic errors are the main problem. You can view all the individual station data here. There are clearly coverage problems with the early data. I repeated exactly the same calculation as above but changed the way the normals are calculated: instead of calculating a “per station” monthly average, I calculated a “per grid point” monthly average. All other steps are the same. The new result is shown here.
I conclude that there are probably systematic errors on temperature anomalies before 1900 of about 0.5 degrees C. Deriving one annual global temperature anomaly is in any case based on the premise that there is a single global effect (CO2 forcing) which can then be identified by subtracting two large numbers.
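A minimal Python sketch of the final area-weighting step described above (toy random numbers stand in for the gridded anomalies; the array shapes and the NaN-for-empty-cell convention are assumptions for illustration, not CRU code):

import numpy as np

rng = np.random.default_rng(1)
n_years = 160
# Toy annual anomaly grid: (years, 36 latitude bands, 72 longitude bands), NaN = empty
grid = np.full((n_years, 36, 72), np.nan)
grid[:, 10:30, :] = rng.normal(0.0, 0.5, (n_years, 20, 72))   # pretend only mid-latitudes report

# $Weight = cos($Lat * PI/180), with $Lat the latitude of the middle of each grid point
lat_mid = np.arange(-87.5, 90.0, 5.0)
w = np.cos(np.deg2rad(lat_mid))[None, :, None]

populated = ~np.isnan(grid)
annual = np.where(populated, grid * w, 0.0).sum(axis=(1, 2)) / \
         np.where(populated, w, 0.0).sum(axis=(1, 2))
print(annual[:5])   # area-weighted global anomaly for the first five toy years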

Slartibartfast
March 5, 2012 10:01 am

When you take the difference of two random variables, is the result more or less uncertain than either of the two numbers you are differencing? It depends on the nature of the error component, I suppose. If the assumption is that the majority of the error is a fixed bias, then the result has less uncertainty. Otherwise, not so much.
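A small numeric illustration of the point (the 0.5 and 0.1 sigmas are made up for the sketch): if the dominant error is a bias shared by the reading and the baseline, it cancels in the difference, while independent errors add in quadrature.

import numpy as np

rng = np.random.default_rng(2)
n = 100_000
bias = rng.normal(0.0, 0.5, n)      # systematic error shared by reading and baseline
eA = rng.normal(0.0, 0.1, n)        # independent error in the reading
eB = rng.normal(0.0, 0.1, n)        # independent error in the baseline

T = 288.0 + bias + eA               # measured temperature
base = 287.5 + bias + eB            # measured baseline, carrying the SAME bias
print(np.std(T))                    # ~0.51: dominated by the shared bias
print(np.std(T - base))             # ~0.14: bias cancels, sqrt(0.1^2 + 0.1^2) remains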

Edim
March 5, 2012 10:33 am

No meat grinders are necessary. Take only the absolutely top quality stations (no changes, moves, as far as possible from human influence (even rural influence)…), from all continents. Since it’s a temperature INDEX and not a temperature, grids are unnecessary. No adjustments! Make the process open and transparent. Plot all the single stations together (spaghetti) to see what it looks like for a start. Then do the analysis.

March 5, 2012 11:08 am

Dr. Brown. What happened to Duke when they played North Carolina? My sister is an avid Duke fan and I had to suffer her disposition during the game. Please tell coach K to win all remaining games; it makes my life easier.
As a long time Duke fan, beating Carolina wouldn’t be so sweet if it weren’t for the annoying fact that they sometimes beat us.
But hey, we made Roy cry…;-) Mike mans up a little better when Duke loses.
And there is still the tournament. And maybe the tournament after that. Duke is a team that still hasn’t quite come together to be as good as they can be — they have the talent and sometimes brilliance, but they don’t have the consistency and the go-to player who will not let them lose, so far. They could win the ACC tournament or go home in the first game, damn near ditto in the NCAA. I’m not seeing them do six in a row for the latter, but they could do four, and then anything can happen.
rgb
(Apologies for an OT reply, but hey, there is climate change and then there is important stuff like Duke Basketball…;-)

bill
March 5, 2012 11:32 am

Once you face the (fairly obvious) fact that we don’t know the global temp for the last couple of hundred years in any meaningful way, then climate science and its scary forecasts collapse. Admittedly, taking into account all the technical, political, spatial and historic reasons why the global climate record is imperfect, we could probably agree we have a rough idea of global temperature. However, what use is a rough idea to people who claim that calamity starts at 3 degrees away from the average (which, remember, we don’t actually know) and who crow over fractions of degrees as proof of the fulfillment of their prophecies? Their argument of certainty rests on uncertainty. Not only does that uncertainty invalidate their certainty, it renders risible the policy solutions which might save us from ‘certain’ disaster.

LucVC
March 5, 2012 11:58 am

Not to be posted
Robert Brown says:
March 4, 2012 at 6:09 pm
is worthy of turning into an article too, in the same vein as Willis’s question of where they got that accuracy in Argo. This seems even more absurd.

Wayne Delbeke
March 5, 2012 12:17 pm

Dr Brown: I have enjoyed your posts as you clearly have an open and enquiring mind.
I just listened to one of your Canadian colleagues, Dr. W. R. Peltier of the University of Toronto, berate the scientists who wrote an “opposing” article in the WSJ. He repeatedly and with considerable vehemence called them deniers on our public radio system, on a program called Quirks and Quarks – with a very warmist host, Bob MacDonald. http://www.cbc.ca/quirks/
Interesting on this page was notification of the shutting down of the Canadian weather station at Eureka, Nunavut, Canada – which is a bit sad given it is a good arctic weather station. There were excellent discussions with this person on WUWT about their efforts to acquire good data and how wind direction affected their temperature readings. A very rational and good discussion, as compared to Dr. Peltier’s repeated use of the word “denier” as an epithet in his interview when discussing his fellow scientists who wrote the WSJ article.
Very unprofessional, considering he was belittling them on national radio that is heard not only in Canada but in a good part of the USA. I was embarrassed for him and his fellow warming scientists, but I suppose when facts fail you, throwing epithets is the only option left to the uneducated.
Sadly, this was related to his winning an award with a 1 million dollar prize associated with it:
“Dr. Richard Peltier, University Professor of Physics at the University of Toronto and founding director of U of T’s Centre for Global Change Science, is this year’s winner of Canada’s highest prize for science, the Gerhard Herzberg Canada Gold Medal for Science and Engineering.”
What bothers me even more – I am an Engineer, and in our association (APEGGA, the Association of Professional Engineers, Geologists and Geophysicists of Alberta) there has been considerable rational debate on the issue of global warming without this type of nasty attribution in our letters to the editor and other articles (at least in the ones I have read).
I am embarrassed to see the word “Engineering” in the name of the award that he received, as given the way he used the words “deniers” in his interview, a Professional Engineer might be subject to disciplinary action for making this kind of accusation against his peers.
I am in total shock that such a person would make such strong statements, although it may be that I have been sensitized by my own biases.
Listen here: http://www.cbc.ca/quirks/media/2011-2012/qq-2012-03-03_05.mp3
Sadly, he will use his 1 million dollar award to hire graduate students and postdoctoral fellows to prove out his holistic earth “MODEL” to “make projections”. In other words, it appears he wants them to go look for data that will support his conclusions and “TUNE” his models to match reality, as opposed to the real science of analyzing data and developing a conclusion.
As far as Dr. Peltier is concerned, it seems, he considers the science settled. He is a modeler. And we all know about GIGO. So he is really a garbage collector. He needs to take the garbage out… so we can get back to science.
He wants to develop models to project/predict climate 100 years out.
The interview sounds fairly reasonable until he gets to the denier comments, except where he claims the “ensemble of independent models” is very accurate. Another theory of averages – average the models and get an accurate result. Amazing. You can make bad data good simply by averaging.
He really goes on about forcings versus feedbacks. He also wants to “decarbonize” the economy.
But perhaps I am overreacting.
It would be nice to have some third party comments on his interview.
Too bad. Kind of sad.

Frank K.
March 5, 2012 12:24 pm

Well gosh – nobody answered my question. Is a correlation coefficient of 0.5 considered “good” or “strong”? What about 0.6? 0.7? 0.8? How good is good enough?
Did anyone read Hansen’s 1987 paper?
Once we’ve established the “goodness” of the station correlations, we can then talk about the “reference station method” that they use…

Wayne Delbeke
March 5, 2012 12:33 pm

Shoot, auto correct nailed what I posted several times. APEGGA is the Association of Professional Engineers Geologists and Geophysicists of Alberta (6th paragraph)
“discussion” should be “discussing” 3rd paragraph.
7th paragraph – duplicate words – “way he used the words “deniers” in his he used in his interview” should read “way he used the words “deniers” in his interview.”
“models to project/predict client 100 years out.” – “client” was supposed to be “climate”.
My apologies for the typos. Very unprofessional of ME not to have proof read this better before running out the door this morning.
Apologies.

March 5, 2012 12:36 pm

Stephen Richards says:
March 5, 2012 at 4:16 am
Robert
Many years ago I made this very same argument at RealClimate. I got a very similar, but more rude, reply from them as you have from Stokes and Mosher. People from non-engineering and/or non-physics disciplines have not had the necessary tutelage to understand their lack of knowledge in those fields. You always get the “strawman” argument back because they can’t understand measuring techniques and their limitations.

This is perhaps one of the reasons the average person on the street understands the limitations of these measurements much better than these “climate scientists” do: in their everyday jobs and activities they are constantly confronted by measurement errors and the difficulty of making accurate measurements.
The machinist quickly learns that metal moves around a lot due to temperature changes that are not obvious at all. You measure a part just as the lunch whistle blows, and when you come back from lunch you re-measure it and find it is a different size, because it cooled while you were eating. The steel worker learns that a beam will fit easily into a location in the morning but after a few hours has to be forcibly coaxed into position because it is 1/8 of an inch longer due to a hardly noticeable temperature change. Shooters learn that their rifle shoots to a different point of aim after the first couple of shots have warmed the cold bore, and that if the cartridges are left out in the sun during a match they will shoot high, as the powder burns faster when the cartridge case is warmer, producing higher pressures. Photographers see changes in exposure that the human eye totally ignored but the camera meter detected, and the resulting images are obviously different, yet at the time the photographer was completely unaware a thin cloud had blocked a few percent of the sunlight. The housewife discovers her favorite bread recipe does not bake properly at Denver’s altitude due to the lower pressure, and she needs to change how she prepares bread dough. Mechanics discover that certain auto parts are not really round and it makes a difference where and how you measure them (pistons are actually oval shaped), etc.
People with real jobs run into measurement issues every day at far coarser resolution than the scientists are claiming, and once the issue is pointed out to the person on the street, it seems simply unreasonable that such precision can be achieved given the highly questionable data they are working with.
Larry
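A quick plausibility check of the steel-beam example above (my own assumed numbers: a 10 m beam, a typical expansion coefficient for structural steel, and a 25 K morning-to-afternoon swing):

alpha = 12e-6        # /K, typical linear expansion coefficient for structural steel
L = 10.0             # m, assumed beam length
dT = 25.0            # K, assumed temperature swing

dL = alpha * L * dT        # metres
print(dL * 1000)           # ~3 mm
print(dL / 0.0254)         # ~0.12 inch, i.e. roughly the 1/8 inch in the comment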

Nick Stokes
March 5, 2012 12:51 pm

bill says: March 5, 2012 at 11:32 am
“Once you face the (fairly obvious) fact that we don’t know the global temp for the last couple of hundred years in any meaningful way, then climate science and its scary forecasts collapse.”

No, our ability to measure variations in global temperature is unrelated to the fact that we are putting large amounts of CO2 into the air, and CO2 is a greenhouse gas. The earth will heat regardless of our skill at thermometry.

Edim
March 5, 2012 1:05 pm

“No, our ability to measure variations in global temperature is unrelated to the fact that we are putting large amounts of CO2 into the air, and CO2 is a greenhouse gas. The earth will heat regardless of our skill at thermometry.”
Will it also heat regardless of our skill at physics?

Robert Brown
March 5, 2012 1:15 pm

Your article is a reminder of the originality and importance of using satellite measurements of microwave emission from atmospheric oxygen to determine tropospheric temperatures – a development for which Christy and Spencer should receive the highest honors from the climate scientific community.
However, neither is an AGU fellow, though AGU honors as fellows many climate scientists of modest credentials. http://www.agu.org/about/honors/fellows/alphaall.php#C

Damn skippy! It is brilliant, and the record they are developing is as important as the work of the rest of the GISS and HADCRUT put together — an objective, completely independent more or less direct measure of global temperature that cannot be easily tweaked, twiddled, cherrypicked, trended, detrended, or corrected on any sort of heuristic basis. What corrections there are (for e.g. drift in the detectors) are made on the basis of direct evidence for the need, well supported by multiple independent instruments.
As I said, in 100 years people will still do microwave soundings from above TOA and derive troposphere temperatures that can be directly and accurately compared to their work. Try doing that in 100 years with GISS or HADCRUT. The data sources (where in the case of HADCRUT, last time I looked nobody but the elite few knew exactly what they were or could access them) will be different, locations that survive will have different instrumentation, the physical environment of those locations will continue to change and introduce noise and drift that cannot be corrected for as it cannot be separated from the signal one wishes to measure. A temperature anomaly doesn’t come with a fractionated causal label.
I don’t, actually, think that GISS is completely useless. I just don’t think it is as useful or conclusive as it is alleged to be regarding predicting CAGW. For one thing, I think the error estimates are egregious, as I’ve now pointed out several times.
I notice, however, that Nick Stokes still hasn’t addressed the believability of error estimates only a factor of 2 greater for 1885 than they are for today, or the GISS elephant in the room — the very models that predict CAGW also predict that the lower troposphere should warm more, and faster, than the ground. The actual numbers go the other way, really quite substantially. Which is wrong — the GCMs (in which case why should we believe them when they predict catastrophe when they can’t even get the relative slope of atmospheric warming right) — or GISS, which might be systematically exaggerating the growth of the surface anomaly? No winners either way — and they could both be wrong. I doubt that the UAH numbers are wrong, though.
I am also quite unconvinced that the El Nino spike has anything whatsoever to do with the overall trend in the data, any more than Mt Pinatubo cooling does. Yes, the various climate oscillations are important modulators, as are natural events, as is solar state. On what grounds, then, do you decide what is signal and what is noise? Either include everything, correct for nothing, and let the data speak for itself, or open up the field to empirically fit all the plausible hypothesized causes and see what works best to explain the data. Stop playing the omitted variable game, whether or not it is really “fraud”.
The latter is anathema to the CAGW crowd, however, because the more possible causes one admits for observed warming trends, the less warming that remains to be explained by increased CO_2 and the lower the chances of catastrophe. Curiously, nobody in the CAGW camp ever seems to be cheered by the possibility that we aren’t headed for catastrophe after all. When the UAH temperature fails to actually continue to increase post the 1998 peak and might even be decreasing, this should be great news! The sky may not be falling after all! At the very least, it is something else to understand, a suggestion from nature to stop omitting important variables and focusing “only” on CO_2.
After all, if solar state is a more important control variable than “just” variable direct TOA insolation, then suddenly the solar state in the latter part of the 20th century becomes relevant, because the sun was in a Grand Maximum that at least one researcher alleges, on the basis of proxy evidence, was the most intense and prolonged such event in around 10,000 years — the last such event might have been associated with the actual end of the ice age and helped initiate the Holocene. A plausible physical mechanism for this has been proposed (modulation of albedo), and there is a fair bit of corroborating evidence, including the direct evidence that the Earth’s albedo has increased measurably over the last fifteen years, largely due to changes in cloud formation patterns.
This sounds like it might be actual science. There is an observation — solar state and the Earth’s climate have varied in a correlated way over thousands of years, in a way that is too strong to be explained by just variation in the surface brightness of the sun. There are papers on this that present convincing data. There is a hypothesis — the solar magnetic field modulates the Galactic Cosmic Rays that impact the Earth’s atmosphere. There is more evidence, again totally convincing, that cosmic-ray derived neutron rates countervary with solar activity and so on, in addition to variations in various radioactive atmospheric components that allow us to trace this back over historical time via proxies. There is a plausible explanatory hypothesis — that particle cascades through supersaturated air can nucleate clouds. This hypothesis has some direct evidence, although questions remain. There is the indirect evidence that as the Sun’s magnetic activity has recently sharply diminished, the Earth’s albedo has increased, and that the increase has been due to increased cloud formation. Finally, there is secondary evidence that may be connected but that (as yet) lacks a complete explanation — during that same interval stratospheric H_2O has dropped by roughly twice as much as the albedo has increased.
A change in albedo of 0.01 is equivalent to roughly 1 K. If the Earth’s bond albedo went from 0.30 to 0.31, one would (all things being equal) expect the global average temperature to drop by roughly 1 K. The observed variation has been easily of this magnitude (the earthshine-derived increase was around 6%, or 0.02, and the albedo is currently holding close to constant, retaining the increase). This could cause the Earth to gradually cool by as much as 2 degrees K over 20-30 years if the GCR modulation hypothesis is correct and the sun remains magnetically quiescent — much as was observed during the Maunder Minimum and associated Little Ice Age.
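For illustration, a quick check of the 0.01-in-albedo ≈ 1 K figure using the standard effective-temperature formula (a sketch with nominal values S ≈ 1361 W/m^2 and a = 0.30, not anyone’s production code):

SIGMA = 5.670e-8     # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1361.0           # solar constant, W m^-2 (nominal)

def t_eff(albedo):
    # Equilibrium temperature of the Earth: T = (S*(1-a)/(4*sigma))^(1/4)
    return (S * (1.0 - albedo) / (4.0 * SIGMA)) ** 0.25

print(t_eff(0.30))                  # ~254.6 K
print(t_eff(0.30) - t_eff(0.31))    # ~0.9 K cooler for +0.01 in albedo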
This is not rogue science! It is quite legitimate. It deserves to be taken quite seriously, not suppressed as contrary to “the cause”, especially not without bothering to even wait for experiments and evidence to be undertaken to help support or undermine it. One can understand every step of the physics every bit as well as one can understand the GHE — arguably better as it is higher in the food chain of solar energy flow than the GHE. It is already known that modulating nucleation aerosols modulates cloud formation rates, albedo, and global temperature — it simply provides a secondary solar-modulated pathway of modulating the nucleation process, one that we may well not fully understand yet but that evidence suggests exists. Given that this has profound implications for our planet’s climatological future and that a 2 K decrease in average global temperature might well be catastrophic, perhaps we should work on this and not dismiss the entire hypothesis with a sniff.
Other confounding causes are the phases of e.g. the PDO (beloved by Spencer) and/or NAO, changes in ocean currents. In other words, one might well be able to explain at least some fraction of the warming and cooling trends of the last thirty years without recourse to CO_2 and the GHE. We aren’t close to untangling the morass of cause and effect on patterns of warming and cooling here, although there is tantalizing evidence from the previous warmings of the Arctic in the period following 1920 that coupling between the NAO and currents split off from the Gulf Stream were major factors. Even the change in cloud cover and albedo could have a completely different cause from GCR modulation, because in spite of its importance as a GHG our understanding of water in the atmosphere is really rather poor.
In time, satellite-derived data will let us answer most if not all of these questions (including the ones involving the albedo and modulation of the actual GHE itself as visible in TOA IR spectra, which IMO is the only thing that ultimately really matters regarding power flow through the Earth as an open system), and answer them far more soundly than ground based thermometric data and with far less opportunity for systematic biases (intentional or not) to produce misleading results. Christy and Spencer absolutely deserve recognition for their contribution, and Spencer deserves further recognition for his steadfast refusal to buy into the CAGW hypothesis as “proven” in spite of considerable pressure to do so, but I don’t know that they will get it. They represent a major embarrassment for GISS and the GCMs and the UAH/RSS data substantially weakens the case for “catastrophe”.
rgb

March 5, 2012 1:20 pm

Nick says :

“No, our ability to measure variations in global temperature is unrelated to the fact that we are putting large amounts of CO2 into the air, and CO2 is a greenhouse gas. The earth will heat regardless of our skill at thermometry.”

So this now sounds more like an act of faith… CO2 is a greenhouse gas… it is increasing… the earth will warm. But suppose net feedbacks are negative and the Earth warms by 0.5 degrees C: should we continue dismantling our industries based just on belief?

Slartibartfast
March 5, 2012 1:25 pm

The earth will heat regardless of our skill at thermometry.

Or…not. Our skill at thermometry determines how well, if at all, we can determine the magnitude of said heating.
At some point, you have to validate the models with data. If not, why bother taking data?

Robert Brown
March 5, 2012 1:30 pm

– Finally the yearly global temperature anomalies are calculated by taking an area weighted average of all the populated grid points in each year. The formula for this is $Weight = cos( $Lat * PI/ 180 ) where $Lat is the value in degrees of the middle of each grid point. All empty grid points are excluded from this average.
Oh, please don’t tell me that GISS is using latitude and longitude based grid cells. Seriously, guys. Didn’t anybody working there actually ever study numerical integration on spherical manifolds? I mean, there are papers on this where people work out decent ways to do it…
Oh, my sweet lord.
You’d think that somebody at Goddard might be smart enough to use an adaptive icosahedral grid instead of a rectangular Mercator projection, with or without the cosine, which does horrible things at the poles (and equally horrible things at the equator).
You’d also think that somebody might learn about splines or kriging and, I dunno, use it in something like this where it clearly matters, instead of assuming some “anomaly range” and smearing out the data accordingly.
One day, if I ever have infinite time, I’ll have to work through all this myself. The tragic thing is it isn’t all that difficult to do this correctly. A job for a graduate student or two, or even a bright and well directed undergrad. Undergrad computer science students often have to build icosahedral tessellations of a sphere just as an exercise…
rgb

LazyTeenager
March 5, 2012 1:41 pm

Physics Major on March 4, 2012 at 3:26 pm said:
Stokes
The little Climate Widget in the sidebar shows a February global temperature anomaly of -0.12 K. This would imply an accuracy somewhat greater than 0.05 K
————-
No it doesn’t, because it is a temperature ANOMALY. It is a change in temperature relative to a baseline. There is a big difference between claiming an accuracy figure for an anomaly and claiming an accuracy figure for global temperature.

LazyTeenager
March 5, 2012 1:48 pm

Robert Brown says
Oh, please don’t tell me that GISS is using latitude and longitude based grid cells. Seriously, guys.
———-
You’re confused. They say how the weighting factors are calculated. They don’t say they are using a particular grid cell.
Please read more carefully before you try to demonstrate your superiority. If you don’t it tends to be embarrassing.

bill
March 5, 2012 1:52 pm

Nick Stokes @ 12.51:
“No, our ability to measure variations in global temperature is unrelated to the fact that we are putting large amounts of CO2 into the air, and CO2 is a greenhouse gas. The earth will heat regardless of our skill at thermometry”.
But Nick, since we don’t really know what the temperature was, we don’t know by how much it has gone up, do we? Even assuming we’re happy with the theory, all we can say is that at some point we will hit the 3 deg C ‘tipping point’; we can’t say that it’s in any sense imminent (except at a ‘maybe’ level). So to say that at some point the world will get jolly warm if we carry on putting this amount of GHGs into the atmosphere doesn’t necessarily imply policy responses now. If, in our ignorance, we applied policy solutions now, they might not have practical effects on the future, and might be very disadvantageous over a 50 year scenario. Perhaps the precautionary principle should be recast as ‘do nothing’, or ‘wait and see’?

Robert Brown
March 5, 2012 2:03 pm

No, our ability to measure variations in global temperature is unrelated to the fact that we are putting large amounts of CO2 into the air, and CO2 is a greenhouse gas. The earth will heat regardless of our skill at thermometry.
CO_2 is a saturated greenhouse gas in that the atmosphere is already opaque in the CO_2 band up to the top of the troposphere. The CO_2 part of the GHE is strictly determined by the temperature at which the top-of-troposphere CO_2 radiates to space at the point where the atmosphere becomes sufficiently transparent to in-band IR to permit the radiation to escape. The temperature itself can be measured in TOA IR spectroscopy. The only way the GHE will be enhanced is if the CO_2 ceiling lifts so that radiation occurs at still cooler temperatures.
There is no evidence that I’m aware of that TOA IR spectroscopy is detecting a mean cooling/lifting of the CO_2 band (and hence an enhanced GHE) as a function of concentration. While the tropopause does move up and down — IIRC up during El Nino, down during La Nina, causing an actual real-time increase in the GHE during El Nino (and hence global warming) and a decrease during La Nina — as in right now, where the UAH anomaly is currently -0.1 — I don’t believe I’ve heard of any solid evidence for it moving in a trended way.
Indeed, what has recently happened is that the stratosphere has become more transparent because of decreased H_2O, which permits GHG’s to radiate their energy from further down where it is warmer, leading to net cooling. We are in a period where the GHE itself is not increasing (or at least, is not increasing much) quite independent of what CO_2 is doing.
While there may be participants in this thread that wish to “deny” that the GHE is a real thing that contributes to the warming of the Earth, I am not one of them. However, a glib reply that the GHE is real and hence we are headed for catastrophe is not good science, and it is not reasoned argumentation.
Warming can have many causes. If you like “denial”, stop denying that some of those causes may be independent of and commensurate with CO_2; stop asserting that anybody who thinks otherwise is an unreasoning fool deserving of pejorative and dismissive labels. Natural climate variation is absolutely capable of explaining a significant fraction of the temperature anomaly. You may think that fraction is negligible. I do not. I think it quite possible that the CO_2 attributable fraction may be “negligible” in the sense that it is less than half of the total, including all feedbacks. I also think it is quite likely that the GISS computations of the surface temperature, whether it is of the temperature “anomaly” or the absolute temperature, are fraught with errors. Using a Mercator “degrees” based grid to cover a sphere, for example. Assuming that temperature anomalies in the relatively small fraction of the Earth’s surface sampled extrapolate to cover the whole thing, for example. How, exactly, would you prove this, back in 1880 when Antarctica was completely unexplored, when the Pacific was a great big black hole (that covers almost half of the Earth) as far as data is concerned, when the Australian Outback was raw frontier?
So the question, my friend, is not whether or not global warming has occurred or is occurring. Nor is it whether or not global cooling has occurred or is occurring. It is what fraction of the observed warming or cooling anomaly (to use your own referent) can be attributed to changes in the concentration of CO_2, what fraction of the changes in CO_2 concentration can reasonably be attributed to “human activity”, what other human activities have contributed to climate change (and how), what are the feedbacks and drivers of all contributors to climate change, and whether there is good reason, based on evidence and sound, evidence-supported arguments, to fear “catastrophe”.
At the moment, there is scant evidence of any impending catastrophe.
rgb

LazyTeenager
March 5, 2012 2:08 pm

Robert Brown says
Damn skippy! It is brilliant, and the record they are developing is as important as the work of the rest of the GISS and HADCRUT put together —
————–
Hyperbole, but I agree. And since the satellite data agrees with the surface temperature data it gives us some confidence that the surface data set analyses are ok.
The satellite measurements are indirect measurements, have problems of their own, and are technically demanding. The analysis produces an average temperature just like the surface temperature data sets, so obviously people like Roy Spencer think temperature averages make sense, unlike some people here.

March 5, 2012 2:19 pm

rgb (Robert Brown) said,
“[ . . . ] Curiously, nobody in the CAGW camp ever seems to be cheered by the possibility that we aren’t headed for catastrophe after all. When the UAH temperature fails to actually continue to increase post the 1998 peak and might even be decreasing, this should be great news! The sky may not be falling after all! At the very least, it is something else to understand, a suggestion from nature to stop omitting important variables and focusing “only” on CO_2.”

rgb,
First, thank you for a very readable science post! I hope you post here at Anthony’s place more often.
Indeed, why aren’t the scientists who consistently spread CAGW relieved that we are now becoming more and more confident that their original catastrophes cannot occur? They seem to need catastrophic possibilities for their happiness.
Your well stated alternate view, critical of a government (bureaucratic) science institution’s position (GISS), does cause me to reflect on bias in the scientific findings of gov’t institutions like GISS. Lack of balance in those gov’t scientific institutions only seems to serve the alarming message of the IPCC-centric climate science community; it seems never to provide a platform that includes a balance of the views of independent (aka skeptic) thinkers.
John

Andrejs Vanags
March 5, 2012 2:59 pm

Why is the temperature used for the average? We should be using the average of heat flux, H = sigma*T^4. Then, if desired, an equivalent temperature could be calculated as Tave = (H_ave/sigma)^1/4. It is easy to show that if the heat flux has a variation (let’s say a sine function) over the globe with an average of zero, the average of all the global heat fluxes will still be H, but the average of all the temperatures will be T plus an error term. This error depends on how the heat fluxes are redistributed over the globe. The globe could be just redistributing its heat, NOT warming NOR cooling, and the average temperature would still give you a false signal, from heat redistribution alone. Over a degree or so it doesn’t matter, but I assume the global temperature is calculated as the average of day and night temperatures, in which case the heat redistribution error could easily account for the 0.6 degrees we are talking about. Since the data is readily available, one should be able to get all the raw temperature data, convert it to heat flux, average it, convert back to a global temperature, do it for all the years of interest, and see if it changes the ‘trend’ at all. Could anyone do that?
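A minimal Python sketch of the comparison being asked for (made-up numbers, not real station data; a Gaussian spread of 15 K stands in for day/night and latitude variation):

import numpy as np

SIGMA = 5.670e-8                            # Stefan-Boltzmann constant
rng = np.random.default_rng(3)
T = 288.0 + rng.normal(0.0, 15.0, 100_000)  # kelvin; the spread is purely illustrative

T_linear = T.mean()                                   # simple average of temperatures
T_flux = (np.mean(SIGMA * T ** 4) / SIGMA) ** 0.25    # average the flux, convert back

print(T_linear)             # ~288.0 K
print(T_flux)               # ~289.2 K: flux averaging weights the warm spots more
print(T_flux - T_linear)    # the 'redistribution' discrepancy described above, here ~1 K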

Reply to  Andrejs Vanags
March 6, 2012 1:55 am

Reply to Andrejs:
I have actually done the comparison between T^4 averaging and simple linear averaging using the CRUTEM3 data. You can see the result for the full range here, and a detailed look at the range from 1950 till 2011 here. There are some subtle effects.
The coverage in stations is poor until 1950 and this affects the global average. One main reason for this is the lack of stations in the Tropics. The CRU/GISS people argue that the anomalies (deltas)… However, since 1950 we have a stable set of stations and the trends are very interesting:
1. T^4 and T averaging give the same result for the Southern Hemisphere. However, the energy flux value (T^4) is higher for the Northern Hemisphere.
2. There has been ZERO warming since 1950 in the Southern Hemisphere.
3. All observed temperature rises are in the Northern Hemisphere. NOTE that the GLOBAL value is just (NH+SH)/2 and adds no new information.
Conclusion: Two effects occur in the NH: 1) T^4 values are higher; 2) temperatures have risen since 1950.

Edim
March 5, 2012 3:24 pm

Andrejs, I agree in general. But since the heat transfer at the surface is multi-modal (evaporation, radiation, convection), one cannot convert the temperature data to heat flux. Surface temperature is not enough.
I agree with your point about redistributing heat (energy).

Nick Stokes
March 5, 2012 3:55 pm

Robert Brown says: March 5, 2012 at 2:03 pm
“CO_2 is a saturated greenhouse gas in that the atmosphere is already opaque in the CO_2 band up to the top of the troposphere.”

This is an ancient controversy, and you are echoing Angstrom 1907. But more accurate measurement showed it to be not so.
“There is no evidence that I’m aware of that TOA IR spectroscopy is detecting a mean cooling/lifting of the CO_2 band (and hence an enhanced GHE) that is functional on concentration.”
There’s no evidence either way – we just don’t have the measurement capacity at the moment.
“Indeed, what has recently happened is that the stratosphere has become more transparent because of decreased H_2O, which permits GHG’s to radiate their energy from further down where it is warmer, leading to net cooling.”
No, the temp gradient in the stratosphere goes the other way. Lower is cooler, and leads to warming.
“Using a Mercator “degrees” based grid to cover a sphere, for example.”
There’s actually nothing wrong with this. They have a noisy signal with spatial correlation. There’s no reason to believe a fancier grid with better spatial resolution would help. Their main grid issue is with the variable number of stations reporting each month, and the problem of empty cells. And for various reasons, it’s undesirable to change the grid from month to month to overcome that.
“So the question, my friend, is not whether or not global warming has occurred or is occurring. Nor is it whether or not global cooling has occurred or is occurring. It is what fraction of the observed warming or cooling anomaly (to use your own referent) can be attributed to changes in the concentration of CO_2, …”
I disagree completely. I think the important question is exactly “whether or not global warming has occurred or is occurring”. Whether there is noise in the temperature, or whether its measurement is defective, simply detracts (if true, disputed) from one way we have to confirm that warming.

March 5, 2012 4:34 pm

Robert Brown says:
March 5, 2012 at 2:03 pm
Indeed, what has recently happened is that the stratosphere has become more transparent because of decreased H_2O, which permits GHG’s to radiate their energy from further down where it is warmer, leading to net cooling. We are in a period where the GHE itself is not increasing (or at least, is not increasing much) quite independent of what CO_2 is doing.

= = = = =
rgb,
I am interested in your comment about lower GHE in recent years due to lower stratospheric water vapor.
Is the decreased water vapor (H2O) in the stratosphere primarily caused by reduced production of water vapor from the oxidation of methane (CH4), due to reduced amounts of stratospheric methane?
Or is the decrease in stratospheric water vapor a result of increased photo-dissociation of water vapor in the upper troposphere and/or lower stratosphere, caused by increases in UV in the SSI associated with recent lower solar cycle activity/strength (low cycle 24 activity in particular)? Note: my information is that the lower (less energetic) end of the UV spectrum in the SSI is the cause of the water vapor photo-dissociation.
Or a combination of both?
Are there any other mechanisms for reduced stratospheric water vapor?
Thanks.
John

March 5, 2012 4:46 pm

Can I put that post (the Guest Post by Dr. Robert Brown) on a bumper sticker?
Okay … how about as an appendix to Dick Tracy’s “Crimestoppers’ Textbook” ?
Great summation of the issues, the scenario, Dr. Brown.
.

March 5, 2012 4:48 pm

Nick Stokes says on March 5, 2012 at 3:55 pm:

And then the lawyers descend (or is it ‘ascend’?) to dissemble …
.

Werner Brozek
March 5, 2012 5:36 pm

Is GISS more accurate?
I have read that GISS is the only record that is accurate, since it adequately considers what happens in the polar regions, unlike other data sets. I have done some back-of-the-envelope calculations to see if this is a valid assumption. I challenge any GISS supporter to dispute my assumptions and/or calculations and show that I am way out to lunch. If you cannot do this, I will assume it is the GISS calculations that are out to lunch.
Here are my assumptions and/or calculations: (I will generally work to 2 significant digits.)
1. The surface area of Earth is 5.1 x 10^8 km squared.
2. The RSS data is only good to 82.5 degrees.
3. It is almost exclusively the northern Arctic that is presumably way warmer and not Antarctica. For example, we always read about the northern ice melting and not what the southern areas are gaining in ice.
4. The circumference of Earth is 40,000 km.
5. I will treat the area between 82.5 degrees and 90 degrees as a flat circle, so spherical trigonometry is not needed.
6. The area of a circle is pi r squared.
7. The distance between 82.5 degrees and 90.0 degrees is 40,000 x 7.5/360 = 830 km
8. The area in the north polar region above 82.5 degrees is 2.2 x 10^6 km squared.
9. The ratio of the area between the whole earth and the north polar region above 82.5 degrees is 5.1 x 10^8 km squared/2.2 x 10^6 km squared = 230.
10. People wondered if the satellite record for 2010 would be higher than for 1998. Let us compare these two between RSS and GISS.
11. According to GISS, the difference in anomaly was 0.07 degrees C higher for 2010 versus 1998.
12. According to RSS, it was 0.04 degrees C higher for 1998 versus 2010.
13. The net difference between 1998 and 2010 between RSS and GISS is 0.11 degrees C.
14. If we are to assume the only difference between these is due to GISS accurately accounting for what happens above 82.5 degrees, then this area had to be 230 x 0.11 = 25 degrees warmer in 2010 than 1998.
15. If we assume the site at http://ocean.dmi.dk/arctic/meant80n.uk.php can be trusted for temperatures above 80 degrees north, we see very little difference between 1998 and 2010. The 2010 seems slightly warmer, but nothing remotely close to 25 degrees warmer as an average for the whole year.
Readers may disagree with some assumptions I used, but whatever issue anyone may have, does it affect the final conclusion about the lack of superiority of GISS data to any real extent?
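To make the arithmetic easy to check, here is the same back-of-the-envelope calculation as a short script, using only the assumptions listed above plus the exact spherical-cap area for comparison:

```python
import math

# Back-of-the-envelope check of the polar-cap argument above (assumptions
# as stated: Earth area 5.1e8 km^2, circumference 40,000 km, GISS-RSS
# difference of 0.11 C attributed entirely to the cap above 82.5 N).
earth_area = 5.1e8                           # km^2
cap_radius = 40_000 * 7.5 / 360              # ~830 km, flat-circle approximation
cap_flat   = math.pi * cap_radius**2         # ~2.2e6 km^2

R = 6371.0                                   # km, mean Earth radius
cap_exact  = 2 * math.pi * R**2 * (1 - math.sin(math.radians(82.5)))

for name, area in [("flat circle", cap_flat), ("spherical cap", cap_exact)]:
    ratio = earth_area / area
    print(f"{name}: area = {area:.2e} km^2, ratio = {ratio:.0f}, "
          f"implied cap warming = {ratio * 0.11:.0f} C")
```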

March 5, 2012 5:49 pm

Werner Brozek says: March 5, 2012 at 5:36 pm
“Is GISS more accurate?
I challenge any GISS supporter to challenge my assumptions and/or calculations and show that I am way out to lunch.”

14 is wrong, at least. The main difference between GISS and RSS is that they are measuring different things. One is surface temp, the other is tropospheric. One big downside of current satellite measurement is that it aggregates different levels of the atmosphere.

Werner Brozek
March 5, 2012 6:23 pm

Nick Stokes says:
March 5, 2012 at 5:49 pm
14 is wrong, at least. The main difference between GISS and RSS is that they are measuring different things.

Fair enough. However, the relative difference between HadCRUT3 and GISS is also around the same (0.12).

Vincent Gray
March 5, 2012 7:07 pm

It is quite impossible to measure the average temperature of the earth’s surface, let alone of the earth’s atmosphere, and this article is full of basic errors. Temperature is an INTENSIVE property, so it does not even exist unless it pervades the whole substance that is being measured.
It is theoretically possible to divide the earth’s surface, or even the whole atmosphere, into three-dimensional infinitesimal increments, all of which possess temperature. Each of them will change continuously with time. If it were possible to sense the temperature of each of these increments and integrate them, it is possible to envisage some sort of average, but since the distribution is likely to be skewed there is a choice of several kinds of average. Only when such a system had been in operation for many years would it be possible to judge how the average changes over time.
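As a toy illustration of that last point (made-up numbers, nothing more), different kinds of average of the same skewed sample need not agree:

```python
import numpy as np

# Made-up, skewed "temperature" sample (most values cool, with a warm tail),
# purely to illustrate that the choice of average matters for skewed data.
rng = np.random.default_rng(1)
sample_K = 270.0 + rng.gamma(shape=2.0, scale=8.0, size=100_000)

print("arithmetic mean :", round(float(sample_K.mean()), 2), "K")
print("median          :", round(float(np.median(sample_K)), 2), "K")
# The effective radiating temperature (fourth root of the mean of T^4) is
# yet another kind of average, and differs again:
print("(mean T^4)^(1/4):", round(float(np.mean(sample_K**4) ** 0.25), 2), "K")
```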
We obviously do not possess such a system and any claims that we know of an average figure for the temperature of the earth’s surface, or any part of the atmosphere, are false.
The IPCC does not claim to have measured average temperature, but instead they place far too much emphasis on what they call “The Mean Global Temperature Anomaly Record”. This is based on multiple averaging and subtraction of a large and varying number of miscellaneous maximum and minimum measurements in unrepresentative places on the earth’s surface, for which no estimate of the undoubtedly high inaccuracy is provided. It is not a scientifically or statistically acceptable record of the earth’s surface temperature.
Despite all this, this record does seem to be influenced by such natural events as changes in the sun, volcanic eruptions, ocean oscillations, and cosmic rays, and it is also influenced by urbanisation and land change. There is no evidence that it is influenced by emissions of so-called greenhouse gases.

eyesonu
March 5, 2012 7:23 pm

Dr. Brown, your open participation in the CAGW debate is very much appreciated; I’m sure I speak for many. I expect to see many more with knowledge and insight such as yours come forward in the near future. You do not need a supporting ‘consensus’, as you are doing an excellent job without any help. I’m sure you are an inspiration to many that we will soon hear from.
Thank you as I look forward to your posts. A true beacon of reality in the endless propaganda of CAGW.

George M
March 5, 2012 7:31 pm

Nick Stokes March 4, 2012 1:43 pm
“anybody who claims to know the annualized average temperature of the Earth, or the Ocean, to 0.05 K is, as the saying goes, full of [snip] up to their eyebrows.”
This is a strawman argument. Who makes such a claim? Not GISS! Nor anyone else that I can think of. Can anyone point to such a calc? What is the number?”
Here is a bit of GISTEMP anomaly data, collected on March 5, 2012:
Year Ann.Mean 5yr. Mean
1998 0.58 0.40
1999 0.33 0.43
2000 0.35 0.46
2001 0.48 0.46
2002 0.56 0.49
2003 0.55 0.54
2004 0.48 0.55
2005 0.62 0.56
2006 0.55 0.53
2007 0.58 0.55
2008 0.44 0.55
2009 0.57 0.55
2010 0.63 *
2011 0.51 *
2012 * *
So GISS is taking deg C temperatures in an approximate range of -50 deg C to +40 deg C to hundredths of a degree. They are plotted to show a temperature anomaly range of from -2 deg C to +0.5 deg C as a presumably physically significant change in temperature. Given the kinds of temperature differences seen daily, and the kinds of confounding influences mentioned above, this looks ridiculous. It is taking the data for a ride.
Max Hugoson identifies the mistake above.
March 4, 2012 at 1:40 pm
As I have noted TIME AFTER TIME AFTER TIME… an 86 F day in MN with 60% RH is 38 BTU/ft^3, and a 110 F day in PHX at 10% RH is 33 BTU/ft^3…
Which is “HOTTER”? HEAT = ENERGY. MN of course, even while the temp is lower.
All this temperature twiddling is trying to make do with a total lack of the data really needed to analyze the problem.
George M
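The qualitative point behind those numbers, that a cooler but more humid parcel of air can hold more energy than a hotter, drier one, can be checked with a standard moist-air enthalpy approximation. A rough sketch follows (constants from textbook psychrometrics; results are approximate and are per kilogram of dry air rather than per cubic foot):

```python
import math

def moist_air_enthalpy(T_c, rh, pressure_kpa=101.325):
    """Approximate specific enthalpy of moist air, kJ per kg of dry air.
    T_c: dry-bulb temperature in Celsius; rh: relative humidity (0-1).
    Uses the Magnus approximation for saturation vapour pressure."""
    p_sat = 0.6112 * math.exp(17.62 * T_c / (243.12 + T_c))   # kPa
    p_v = rh * p_sat
    w = 0.622 * p_v / (pressure_kpa - p_v)                     # kg water / kg dry air
    return 1.006 * T_c + w * (2501.0 + 1.86 * T_c)

# 86 F = 30 C at 60% RH (Minnesota) vs 110 F = 43.3 C at 10% RH (Phoenix)
mn  = moist_air_enthalpy(30.0, 0.60)
phx = moist_air_enthalpy(43.3, 0.10)
print(f"MN  (30.0 C, 60% RH): {mn:5.1f} kJ/kg dry air")
print(f"PHX (43.3 C, 10% RH): {phx:5.1f} kJ/kg dry air")
# The cooler but more humid Minnesota air carries more energy per kilogram.
```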

John another
March 5, 2012 7:50 pm

I thought I had read all of the responses, but it seems I missed how the best accuracy of global temperature is presumed by the IPCC to be approx. 2 degrees C while the anomaly of global temperature is measured in tenths of a degree or less. How can I know the change in volume of my gas tank in ounces but not know the actual volume to within pints? How will we know if our abandonment of fossil fuel has accomplished the required 2 degree drop the planet requires to survive evil mankind if that’s the best we can measure?

March 5, 2012 8:11 pm

John another says: March 5, 2012 at 7:50 pm
“How can I know the change in volume of my gas tank in ounces but not know the actual volume to within pints?”

Measure the flow in the fuel line.

Werner Brozek
March 5, 2012 8:24 pm

RSS for February just came out at woodfortrees. It came in at -0.121 C. So the combined January-February average is -0.09, placing it 26th warmest so far. (UAH was also 26th warmest on its data set after February.) For RSS, it has now been 15 years and 3 months, since December 1996, that the slope has shown no trend (slope = -0.000234717 per year). See
http://www.woodfortrees.org/plot/rss/from:1994/plot/rss/from:1996.9/trend
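The “slope” woodfortrees reports is just an ordinary least-squares trend fitted to the monthly anomalies. A minimal sketch of that calculation, with made-up numbers standing in for the actual RSS series:

```python
import numpy as np

# Minimal sketch of the trend calculation: an ordinary least-squares fit of
# anomaly = a + b*t, with b reported in degrees per year.  The anomaly
# values here are made up; substitute the real RSS monthly series to
# reproduce the quoted slope.
months = np.arange(15 * 12 + 3)                 # Dec 1996 .. Feb 2012, monthly
t_years = months / 12.0
rng = np.random.default_rng(42)
anomaly = rng.normal(0.0, 0.15, size=t_years.size)   # flat series plus noise

slope_per_year, intercept = np.polyfit(t_years, anomaly, 1)
print(f"OLS trend: {slope_per_year:+.6f} deg C per year")
```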

John another
March 5, 2012 8:37 pm

Nick Stokes says: March 5, 2012 at 8:11 pm
John another says: March 5, 2012 at 7:50 pm
“How can I know the change in volume of my gas tank in ounces but not know the actual volume to within pints?”
Measure the flow in the fuel line.
If the flow meter can only measure in pints how does it measure flow rate to less than an ounce? Same instrument for different metrics.
But right now I’m much more interested in your response to both parts of George M says: March 5, 2012 at 7:31 pm immediately above my hokey post.

AlexS
March 5, 2012 8:58 pm

“I tell you that its 20C at my house and 24C 10 miles away. Estimate the temperature at 5 miles away”
This part of a Steven Mosher post shows the problem. If the Station is 10 miles away what will be the “correlation”? Good? Bad?

March 5, 2012 8:59 pm

John another says: March 5, 2012 at 8:37 pm
“If the flow meter can only measure in pints how does it measure flow rate to less than an ounce? Same instrument for different metrics.
But right now I’m much more interested in your response to both parts of George M says”

Well, at the gas station you pay to the nearest cent. That’s a pretty good indicator of volume change.
I’ve already responded frequently to people who show lists of anomalies. That’s different, and its validity is based on correlation. If you want to dispute that, you need to cite some statistics.

John another
March 5, 2012 9:25 pm

Nick Stokes says: March 5, 2012 at 8:59 pm
So the pump can tell me to the nearest penny what went into my tank. With the same flow meter on the line out I should be able to tell (to the nearest penny) what’s in the tank.
Now about that difference in energy content of a cubic foot of air that George referred to with Max Hugoson’s discussion of Minn. and AZ temps and humidities? Sounds like temp by itself is a pretty useless metric for energy.

John another
March 5, 2012 9:36 pm

Nick
Are you saying that George M’s list of GISS temperature anomalies, which reports in hundredths of a degree, is not GISS temperatures?

March 5, 2012 9:54 pm

” With the same flow meter on the line out I should be able to tell (to the nearest penny) whats in the tank.”
No, that’s just another measure of change. In fact, you probably really don’t know to the nearest pint what’s in your tank. Even though you paid to the nearest penny.
George correctly labelled his table as GISTEMP anomaly data. They are not “globalized annual temperature”.
As to the energy content of air, there are only so many things you can usefully object to. GISS is measuring temperature anomaly. That’s all they claim. If you want total energy content, you’ll have to work with humidity too.
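The level-versus-change distinction can at least be put in a toy model (entirely made-up numbers, not a claim about the real station networks): give every station a large but constant bias, and the averaged absolute temperature comes out wrong while the averaged anomaly still tracks the change.

```python
import numpy as np

# Toy model (made-up numbers): every station reads the local truth plus a
# large but *constant* bias, e.g. from siting.  The averaged absolute
# temperature is then badly off, while the averaged anomaly still recovers
# the underlying change.
rng = np.random.default_rng(7)
n_stations, n_years = 500, 30
bias = rng.normal(1.5, 2.0, size=n_stations)     # shared warm bias, fixed in time
true_level = 14.0                                # "true" global mean, deg C
trend = 0.02                                     # deg C per year
years = np.arange(n_years)

truth = true_level + trend * years
readings = truth[None, :] + bias[:, None] + rng.normal(0.0, 0.5, (n_stations, n_years))

abs_error = readings.mean() - truth.mean()
anomalies = readings - readings[:, :10].mean(axis=1, keepdims=True)
recovered_trend = np.polyfit(years, anomalies.mean(axis=0), 1)[0]

print("error in absolute level:", round(float(abs_error), 2), "deg C")
print("recovered trend:        ", round(float(recovered_trend), 4), "deg C per year")
```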

John another
March 5, 2012 10:44 pm

Nick, if I measure pennies in and pennies out, I know (to the penny) what is in the tank at any given time.
You are measuring with the exact same instruments the global temperature and the anomaly of said temperature. If the accuracy of the baseline is 1 degree either way how does one know if a change of one tenth of a degree is a change in real temperature?
“As to the energy content of air, there’s only so many things you can usefully object to. GISS is measuring temperature anomaly. That’s all they claim.”
Please report that immediately to the media.
Thanks Nick for your responses but I’ll look elsewhere for a little less convoluted education. I really am interested in the truth.

Alan D McIntire
March 6, 2012 5:51 am

k scott denison says:
March 4, 2012 at 2:04 pm
mosher
“While your argument contains some sense, using only 7,000 measurements for the entire surface of the earth isn’t exactly enough to say we have a measurement every 10 miles, is it? I think your analogy is not only inaccurate but also disingenuous.”
Earth has a surface area of 197 million square miles. Land makes up about 30% of this, or 59 million square miles. 59,000,000 / 7,000 measuring sites = 1 measuring site for every 8,400 square miles. Most of those sites are in relatively urbanized areas.

March 6, 2012 6:09 am

I am interested in your comment about lower GHE in recent years due to lower stratospheric water vapor.
I don’t know. The measurements are still fairly recent, and I haven’t read of any explanation. If I had to guess, increased cloud formation in the troposphere is causing more moisture to precipitate out before it gets to the stratosphere, creating a cooling negative feedback.
rgb

March 6, 2012 6:13 am

Wayne Delbeke says: March 5, 2012 at 12:17 pm
Interesting on [abovementioned] page was notification of the shutting down of the Canadian weather station at Eureka, Nunavut, Canada, which is a bit sad given it is a good arctic weather station.
Attention please, someone! I seem to remember Eureka was already being vastly overused to show the temperature of great swathes of the far north? And if it goes, does that give GISS this year’s heist?
If I’ve mis-remembered, apologies.

March 6, 2012 7:13 am

Measure the flow in the fuel line.
This isn’t even a strained metaphor for GISS, it is just false. GISS is trying to measure levels in the tank, not flow in versus flow out.
Which you just agreed could not be done. We cannot, or at least have not been able to, actually measure and integrate TOA outgoing power versus incoming power with sufficient precision to actually measure the “flow in the fuel line”, the ins versus the outs. Although to be honest, that’s ultimately the only way to really resolve the issue: directly measure, and study, the full TOA spectrum over a considerable amount of time to high precision.
We can, on the other hand, measure the flow in pretty accurately, and we can also measure the direct reflection of the flow in pretty accurately. The flow in has reduced. What the mechanism is of the reduction one can argue about, but the direct measurements on insolation and albedo are harder to argue with.
The flow out is the eternal problem, of course. As I said, IMO there is no question that there is a GHE that helps warm the atmosphere. There is no convincing evidence that the GHE is as sensitive to CO_2 concentration as the IPCC claims, and such evidence as there is is both dubious (MBH, the earlier part of GISS) and suffers from the assumption that CO_2 is the only important factor in late 20th century warming in spite of the fact that the natural variability of the climate is sufficient to explain at the very least an unknown fraction of it. Finally, the arguments for strong feedback from e.g. H_2O that will multiply any CO_2 based warming by factors of 3 to 5 are weak.
So the mere fact of warming is not sufficient to conclude either that there is a catastrophe looming or that the bulk of the warming is anthropogenic in origin. Nor are simulations based on GCMs that neglect variables that almost certainly have significant influence on the climate (such as the ones that were determining the natural variability in the climate that MBH attempted to erase with their hockey stick analysis).
Nick, the assertions of CAGW are serious business. You are talking about diverting not just tens, but hundreds of billions of dollars, money that can be put to use in many ways. You assert that the science is sound (at least, that’s what I think you are doing) but if I were to ask you to explain why stratospheric H_2O is decreasing, why the Earth was (apparently) warmer at the Holocene Optimum than it is now and was nearly as warm during the Medieval Optimum, why the Earth’s temperature appears to be strongly correlated with solar state in ways that cannot be explained by just variations in surface brightness, why the GCMs predict more atmospheric warming than surface warming but the satellite data confounds this, why the GCMs don’t work very well to forecast or hindcast, why the Earth’s cloud albedo appears to be increasing as we move out of the Grand Solar Maximum of the 20th century — could you provide explanations for all of these things?
Are you not just making a complex modern version of the religious argument known as Pascal’s Wager? It’s definitely warming, and we might be responsible for some or even all of it, and if a bunch of unproven assumptions are true it might lead us to a climate catastrophe, so we should invest enormous amounts of resources that could all be used to make the world better in many other ways in things that even the proponents acknowledge will not prevent the catastrophe — if they are right about it occurring in the first place? With, perhaps, the added sauce of a general agreement that some of those measures are (in your opinion) things that are good things to do anyway…
My own belief is that the data shows that there is little reason to panic. The higher levels of feedback required for catastrophe are simply implausible in a generally stable climatic system, and if one asserts that the system is not stable, then one is back having to deal with the natural variability issue once again. There is time to a) get the science right; and b) let other science catch up to where one can fix the problem (if it eventually proves that we do have a problem) far more cheaply than we can today.
One of the things I’ve worked quite a bit on is supercomputing. Physicists have always loved supercomputers. Back in the old days, if you wanted to get a big computation done you needed e.g. a Cray to do it. The only problem was that a Cray Y-MP might cost ten or twelve million dollars, plus a few million a year to keep it housed and fed and to pay its priesthood. To get time on a Cray you had to apply to the priests and if they found your offerings good you were granted a budget of so and so many hours.
Of course physicists love to publish papers, so many grants were written and lots of work was done at very great expense on these enormously expensive machines. But in the meantime, Moore’s Law was at work.
Moore’s Law, once beowulf-style homemade supercomputers were invented, was pretty brutal — I’ve worked out the economics of it (and published it in various places to the only people that matter) several times. Given a five year grant and a doubling time of 18 months in CPU speed, when should you invest your money to get the most work done? If you spend it all in year 1, it takes you 5 years to do some task (say). If you take a holiday for 18 months, and buy in then, you get three and a half years with twice the speed, and actually get 7 years worth of work done in five. If you wait three years and buy in then, you get 2 years worth of work with four times the speed for 8 units of work in five years. If you wait until four and a half years and buy systems that are 8 times faster, you almost match five years worth of work in half a year, and match the 8 if you can run just a bit longer.
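The arithmetic behind that thought experiment is easy to reproduce; a small sketch under the same assumptions (a five-year grant, an eighteen-month doubling time):

```python
# Work done in a fixed five-year grant if you wait `delay` years before
# buying hardware whose speed doubles every 18 months (assumptions above).
GRANT_YEARS = 5.0
DOUBLING_YEARS = 1.5

def work_units(delay_years):
    speedup = 2 ** (delay_years / DOUBLING_YEARS)
    return (GRANT_YEARS - delay_years) * speedup

for delay in (0.0, 1.5, 3.0, 4.5):
    print(f"buy after {delay:.1f} yr: {work_units(delay):.1f} units of year-one work")
```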
We’ve ridden that curve to the point where stuff I did with hundreds of hours of Cray time at great expense I could now do on the laptop I’m typing this on in the time it takes to type this reply, to where I could reproduce years worth of work I did years after that (with a compute cluster) in months with a very small compute cluster with modern CPUs. Huge amounts of research that were done at great expense in the 70s, 80s, 90s, and 2000s could now be done at very little expense now.
How “burning” was our need to do the work back then? Not terribly. Most of the results of work on the Crays of the world were not terribly useful in the sense that they had a direct payback to society. On the other hand, it wasn’t very expensive (not compared to fighting wars and other real expenses) and there were a lot of indirect benefits, including supporting the best research and educational system in the world that is directly responsible for the bulk of our social and economic wealth. But still, a brutally honest cost benefit analysis would in many cases have told people seeking grants to come back in three years with a proposal 1/4 the size to accomplish the same thing, a bit later.
There were many cases where I (acting as a consultant) advised just that, especially when a few million dollars was being allocated to build a custom hand-made computer based on custom ASICs where anybody could see that over the counter computers would be able to match its eventual projected design speed by the time it was finished.
We are doing exactly the same thing with respect to CAGW, only on even shakier ground since the catastrophic bit in that hasn’t been proven at all, and the anthropogenic bit is at most a fraction of the total. We can invest our money heavily now as if it is an “emergency” with the net effect of making ourselves poor and not solving the problem, or we can invest our money a lot more reasonably now in the science and technology that might both objectively — non-politically — study the climate and eventually become real remedies should they ever be needed.
In the meantime, take heart! There has been basically no warming since 1998, and the current solar cycle is going to be the lowest in 130 years, from the looks of it. The next one is forecast (with a fair bit of uncertainty, I admit) to be even weaker. The last time this happened the climate cooled. Are you willing to tell me that you are certain that the temperature is about to just shoot right up and rejoin the various predictions that were made a decade ago for how hot it was supposed to be outside right now? Or is there just a bare smidgen of a chance that — those predictions were wrong, because they left out the effect of several major drivers of the climate and put it all on CO_2.
But no, somehow the planet’s refusal to actually cooperate by getting warmer as predicted seems to anger those that believe in the prediction. The discovery that things are more complex (in good ways!) should come as good news, but it never seems to actually do so.
Why is that?
rgb

Frank K.
March 6, 2012 7:51 am

Nick Stokes says:
March 5, 2012 at 8:59 pm
I’ve already responded frequently to people how show lists of anomalies. That’s different, and it’s validity is based on correlation. If you want to dispute that, you need to cite some statistics.
There you go again, Nick. Can you PLEASE back up your claim that “its validity is based on correlation”? What correlation? How are you computing the correlation coefficients for the anomalies (the Hansen paper does NOT show this, of course)? KR claims “strong correlation”. What does this mean? How strong is “strong”? Until you can be more precise about your claims, they’re just a bunch of scientific-sounding mumbo jumbo…

March 6, 2012 8:34 am

Robert Brown
Thanks for your recent series of posts, particularly your post at 3/5 1:15 pm, which is clearly brilliant since it so clearly expresses my own views on the main climate drivers pretty much exactly.
You may be interested to know that Svalgaard is leading a group to improve the proxy record of solar wind changes back through the Holocene. Clearly, when this record is established we should be able to use wavelet analysis to pick out the key frequencies in the GCR (and then albedo) record and relate these to the proxy temperature record. See
http://www.leif.org/research/Svalgaard_ISSI_Proposal_Base.pdf

March 6, 2012 1:30 pm

You may be interested to know that Svalgaard is leading a group to improve the proxy record of solar wind changes back through the Holocene.
Svalgaard may turn out to be right or he may turn out to be wrong. However, at this particular instant in the history of the world nobody knows for sure which one it will turn out to be.
Not Svalgaard himself. Not Mann. Not Jones. Not Nick. Not K.R. Not you. Not me.
That’s why they call it scientific research, an ongoing process of discovery.
There are systematic correlations already visible in the paleoclimate record. There are extremely strong systematic correlations from the last 500 years where we begin to have moderately accurate and scientific observations (which had to wait for the scientific era and the Enlightenment). Svalgaard has a hypothesis that could explain the correlations. It is physically plausible, supported by evidence already, but has missing detail (like nearly everything in climate research). However, it is a bone simple hypothesis, actually much simpler than the GHE, and one that is very definitely falsifiable and to a reasonable extent verifiable as well. The connection between weak solar magnetic field and higher levels of GCR impacts on the upper atmosphere is, I think, at this point beyond question. The only question is whether or not higher GCR rates can modulate nucleation of clouds. If the answer is yes, then the hypothesis has considerable explanatory power. If it is no, we are still left with the original correlations that must then be explained in some other way!
What isn’t an option is to pretend that the correlations themselves don’t exist, and don’t represent a serious problem with the CO_2-is-king theory of GW. That problem is there no matter what, because one of the fundamental assumptions of AGW is that the 20th century was “normal” so that the warming observed in it was “abnormal” and required explanation, and the only explanation given normal conditions was CO_2.
However, if the 20th century was abnormal — if, for example, the sun was in a 11,000 year Grand Maximum of solar activity for much of the century (or even a much more moderate 1000-2000 year Grand Maximum) then if the data themselves do not lie, that solar activity must at least be considered and modelled as a possible cause for at least part of the 20th century warming.
This is anathema to the IPCC and its ARs. They do not want to admit even the possibility of a confounding co-factor to the warming. There have been numerous open complaints about this, including complaints from AR reviewers. When e.g. Soon and Baliunas publish work that suggests a solar influence, the climategate letters clearly indicate the extremes at least the hockey team is willing to go to to discredit them. When Svalgaard emerges with a highly coherent hypothesis with supporting experimental data, they trip over themselves to dismiss it.
After all, the more of the 20th century warming that was natural, the less that is left over to attribute to CO_2 (anthropogenic or not, since if the 20th century warming was natural, a large part of the additional CO_2 was almost certainly released from the warming ocean, not from anthropogenic sources).
This is anathema to the political agenda of the collaboration between the IPCC, the rabid anti-civilization “green” environmentalists that want to create what amounts to a new secular religion, and normal people that dislike and distrust the multinational global energy industry for the very sound reason that it manipulates world politics to its economic advantage as standard operating procedure. Without a monolithic, human cause for all of the observed warming, how can one justify spending billions and billions of dollars regulating CO_2 emissions? Without the further assertion that CO_2 is leading us to a global catastrophe how can one justify the massive redistribution of wealth associated with Carbon Taxes?
I personally think that a lot of the things being done to address the real problems that do exist with fuel based energy resources are good things. But I’m not prepared to lie or mislead myself or the general public in order to coerce agreement with my opinions. Willis, for example, clearly disagrees with me about the importance and probable future history of solar power, but I wouldn’t lie to Willis and try to scare him with threats of global catastrophe to get him to politically support the continued development of solar energy. He has to make up his own mind about that, given his own best analysis of the situation. There is room for people to disagree and vote as they wish based on a fair appraisal of risk and benefit.
What I don’t like is de facto deceiving the public about an entire raft of things that would weaken the case of the CAGW enthusiasts. The warming is exaggerated at every turn. The past thermal history of the world is falsely reconstructed and, when corrected, the corrections are never made as public as the original mistakes (e.g. the Hockey Stick). Politicians with vested interests manipulate public opinion with pictures of polar bears on melting icebergs, without telling you that it is midsummer and the icebergs in question are a few meters from shore and that the bears are playing on the icebergs as they do every summer and aren’t in any danger whatsoever. Problems with the GCMs, their failure to either predict future temperatures or even predict the correct relationship between surface warming and lower troposphere warming are carefully elided from any public presentation of the terrible dangers of CAGW that those same GCMs are the primary evidence for. Error bars that are openly absurd are attached to e.g. the GISS reconstructions because only with small errors can one be certain that the alleged warming has even taken place.
If all of this was honestly and publicly published, not in the back of AR4 or AR5, but on national news where normal humans could see it, it is true that CAGW might not get the political support it needs to convince people to make the sacrifices it demands. This is just tough! The whole point of democracy is that the people themselves need to make informed choices, and then live with the consequences of those choices. Misinforming them to manipulate the choices so that they come out to what you think they should be, even “for their own good”, is shameful.
rgb

kakatoa
March 6, 2012 2:07 pm

Dr. Brown,
Loved the analogy of Cray’s (supercomputers) and technological development (with the costs and benefits over time) and the effort to cure the theoretical CO2 problem. In honor of Chippewa Falls, home of the original Cray Research center, I would like to offer you a virtual Leinenkugel draft beer from the same city. If you had a tip jar I would make the virtual draft more lifelike.

March 6, 2012 2:19 pm

As there are not enough reliable climate data yet, and the reliable data we do have have limited validity for calculating climate trends, what, if anything, is there still to do for “climate science”?
As climate is a long-term activity by definition and “consistent and reliable” data-gathering is still in its infancy, in my opinion all climate science could do right now is concentrate on collecting data for many, many years to come and be very, very patient.
Being very patient is clearly very difficult for ambitious, career-oriented and politically motivated scientists. So if you are in that province, don’t choose climate science.
If you still keep going on with validating “settled” climate science as it is being sold to the public right now, remember where you position yourself as a scientist… there are lies, damned lies, statistics and, worse still… there is “settled climate science”… Maybe you can earn some money with it, or even some authority with the public, but that’s where you stand as a scientist. Because right now, in my opinion, you have nothing really substantial to work with, nothing… So the only thing you could do is trickery with numbers. That’s your only show.
Well, for that I would prefer, say, Joseph Madachy, or Sam Loyd, or Martin Gardner, to name but a few… much more fun… and very proficient at their trade…

March 6, 2012 3:47 pm

Robert Brown says: March 6, 2012 at 7:13 am
“This isn’t even a strained metaphor for GISS, it is just false. GISS is trying to measure levels in the tank, not flow in versus flow out.”

I don’t think it is a particularly good metaphor; I didn’t choose it. John a’s criticism that a different instrument is used has some merit. But not this. I’ve quoted GISS on the levels in the tank:
“For the global mean, the most trusted models produce a value of roughly 14°C, i.e. 57.2°F, but it may easily be anywhere between 56 and 58°F and regionally, let alone locally, the situation is even worse.”
They’re not even trying to measure global annualized temperature; they have to model. And there’s no claim of accuracy there.
I know the issue of AGW is serious. It’s potentially a critical problem, which is why governments might choose to spend a lot of money to alleviate or avoid it. You try to shift the burden of proof to one side, but the costs of not acting may also be large.
The fact is that we’ve burnt 350 GT C and increased the C in the air by about 30%. That has had a modest effect.
But there are 3000 Gt that we could easily burn, and without contrary action, probably will over the next century. Possibly over half of that in the next fifty years. Now worries about the effect of that change to the atmosphere are very well justified. And if we want to make a collective decision to attempt to control our fate there, we’ll have to get a move on.
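Those round numbers can be sanity-checked with the usual approximate conversions (about 2.13 GtC per ppm of CO2, and roughly half of emitted carbon remaining airborne; neither figure is exact):

```python
# Rough sanity check of the round numbers above (all figures approximate):
# about 2.13 GtC corresponds to 1 ppm of atmospheric CO2, and roughly half
# of emitted carbon has historically remained airborne.
GTC_PER_PPM = 2.13
AIRBORNE_FRACTION = 0.5        # approximate, and not guaranteed to stay constant
PREINDUSTRIAL_PPM = 280.0

def added_ppm(gtc_emitted):
    return gtc_emitted * AIRBORNE_FRACTION / GTC_PER_PPM

print("350 GtC  ->", round(added_ppm(350)), "ppm added,",
      round(100 * added_ppm(350) / PREINDUSTRIAL_PPM), "% above preindustrial")
print("3000 GtC ->", round(added_ppm(3000)), "ppm added (if the fraction held)")
```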

March 6, 2012 3:50 pm

Robert Brown – Re your last post Here! Here! but are you confusing Svalgaard with Svensmark perhaps?

March 7, 2012 9:13 am

The fact is that we’ve burnt 350 GT C and increased the C in the air by about 30%. That has had a modest effect.
But there are 3000 Gt that we could easily burn, and without contrary action, probably will over the next century. Possibly over half of that in the next fifty years. Now worries about the effect of that change to the atmosphere are very well justified. And if we want to make a collective decision to attempt to control our fate there, we’ll have to get a move on.

Now you are starting to sound reasonable. Let’s continue. It has had a “modest” effect, but if I asked you to positively quantify that effect, you couldn’t do so within a factor of three because the observed warming is not only due to the CO_2 — we have ample evidence that global temperature is multifactorial and that there is a range of variation sufficient to explain as much as 100% of the warming observed since the LIA in terms of natural, non-anthropogenic drivers. Since we cannot predict what past temperatures were, and cannot predict what the current temperature should be, we cannot tell what fraction of the current temperature known as an “anomaly” or not is due to natural, non-anthropogenic factors.
Your fears are predicated upon observed variation in a quantity that is not known to better than around 1% and that has many factors that modulate its actual value. You don’t know what those factors are, or how they contribute (fractionally) to the total. Those factors are complex and their dynamical contribution has many time scales, some of them as long as millennial (there are clear thousand year signals in the holocene thermal record, probably tied to some combination of solar variation and the equilibration time for the ocean). The variations that concern you are a few tenths of a percent of the total, unknown value, that you think you know because of a complex fit of data from a tiny fraction of the Earth’s surface.
Yet the modest degree of uncertainty you are finally expressing is nearly completely absent in the AR reports. It is there in the research, to be sure, but it is somehow always omitted from the final reports. They are always certain that CO_2 is a problem, and without exception exaggerate its projected consequences by pure multiplication of its projected effect by a fudge factor since by itself it would not lead to any sort of catastrophe.
This is real money we are talking about. It affects people’s lives right now. It is causing a certain amount of human misery as we try to deal with CO_2 in a panic, under the pretext that we are certain that human civilization as we know it will end (or be profoundly damaged) if we don’t.
The data itself suggest that in fact we don’t need to “get a move on”. Events are moving along on their own in ways that will ultimately ameliorate the CO_2 “problem”, if any such problem emerges from the uncertainty of climate science once the current horrendous level of confirmation biased research and alarmism passes. I’m all for research into renewable energy, simply because human civilization cannot persist indefinitely on non-renewable energy; we are gifted with a planet that has enough readily available fossil fuel resources to bootstrap the process of advancing to a steady state civilization but that doesn’t mean we should squander those resources by utilizing them to their limit. Some of them, like oil, are “black gold” in the form of raw material for organic chemistry and manufacturing, and all of them are scarce enough to want to preserve them for millennial time scales (for our children, as it were).
I fully expect research and development into renewable (and non-carbon fuel based) energy resources to yield not only cost-effective alternatives to carbon based energy resources, but cost advantageous resources, long before CO_2 emerges as a critical problem (if in fact it eventually works out that it is). I also expect that the development of these resources will cost only a few billions of dollars a year at the outside compared to the tens of billions on up that active control of carbon now with immature technology is costing.
Sometimes the right thing to do really is wait and see, and work at a modest scale on things that are interesting and useful in their own right in the meantime against the small chance that CAGW is a true hypothesis. Right now the data seems to suggest that it is not, that the huge positive feedbacks proposed by Hansen and others are egregious and that the real feedback is probably neutral to negative. It also suggests that the “A” in CAGW is the smaller part of the observed 20th century warming.
rgb

March 7, 2012 9:15 am

Robert Brown – Re your last post Here! Here! but are you confusing Svalgaard with Svensmark perhaps?
Probably. I suck at names. It took me a dozen rounds of reading their paper for me to actually remember Nikolov and Zeller’s names well enough to not write them as N&Z.
Sorry.
rgb

Brian H
April 3, 2012 12:11 am

Norman Page says:
March 6, 2012 at 3:50 pm
Robert Brown – Re your last post Here! Here! but are you confusing Svalgaard with Svensmark perhaps?

rgb’s not the only confused one! It’s “Hear! Hear!”, a traditional cry of support in the British Parliament, short for “Hear the man!”
:p

Brian H
April 3, 2012 12:13 am

Norman;
but I absolutely support the sentiment! I’ve saved every one of his posts in this thread to permanent storage.