Global annualized temperature – "full of [snip] up to their eyebrows"

Guest Post by Dr. Robert Brown,

Physics Dept., Duke University [elevated from comments]

Dr. Brown mentions “global temperature” several times. I’d like to know what he thinks of this.

Dr. Brown thinks that this is a very nice piece of work, and it is precisely the reason that he said that anybody who claims to know the annualized average temperature of the Earth, or the Ocean, to 0.05 K is, as the saying goes, full of [snip] up to their eyebrows.

What I think one can define is an average “Global Temperature” — noting well the quotes — by following some fixed and consistent rule that goes from a set of data to a result. For example, the scheme that is used to go from satellite data to the UAH lower troposphere temperature. This scheme almost certainly does not return “the average Global Temperature of the Earth” in degrees absolute as something that reliably represents the coarse-grain averaged temperature of (say) the lowest 5 kilometers of the air column, especially not the air column as its height varies over an irregular terrain that is itself sometimes higher than 5 kilometers. It does, however, return something that is likely to be close to what this average would be if one could sample and compute it, and one at least hopes that the two would co-vary monotonically most of the time.
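
For concreteness, here is a minimal sketch (in Python) of what one such "fixed and consistent rule" might look like: not the UAH algorithm (which involves microwave channel weighting functions), but a simple cosine-latitude area-weighted mean over a hypothetical gridded field. Every number in it is invented for illustration.

```python
import numpy as np

def area_weighted_mean(temps, lats_deg):
    """One 'fixed and consistent rule': cosine-latitude weighted mean of a
    gridded field. temps has shape (n_lat, n_lon); lats_deg lists the
    grid-cell center latitudes in degrees."""
    w = np.cos(np.deg2rad(lats_deg))   # cell area shrinks toward the poles
    zonal = temps.mean(axis=1)         # mean around each latitude band
    return np.sum(w * zonal) / np.sum(w)

# Hypothetical field on a 36 x 72 grid: warm tropics, cold poles, plus noise.
lats = np.linspace(-87.5, 87.5, 36)
rng = np.random.default_rng(0)
field = 30.0 * np.cos(np.deg2rad(lats))[:, None] - 15.0 + rng.normal(0, 2, (36, 72))

print(area_weighted_mean(field, lats))  # one number, produced by one fixed rule
```

The point is only that the rule is definite and repeatable; whether the number it returns is "the" average temperature of anything is exactly the question at issue.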

The accuracy of the measure is very likely not even 1 K (IMO; others may disagree), where accuracy is |T_{LTT} - T_{TGT}| — the absolute difference between the lower troposphere temperature and the “true global temperature” of the lower troposphere. The various satellites that contribute to the temperature record have (IIRC) a variance on this order, so the data itself is probably not more accurate than that. The “precision” of the data is distinct — that is a measure of how much variance there is in the data sources themselves, and it is a quantity that can be systematically improved by more data. Accuracy is not: in a situation like this, where one is indirectly inferring a quantity that is not exactly the same as what is being measured, accuracy cannot be improved by more or more precise measurements; it can only be improved by figuring out the map between the data one is using and the actual quantity one is making claims about.
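
A hedged numerical sketch of that distinction: averaging more and more noisy readings shrinks the statistical spread (precision) like 1/sqrt(N), but does nothing at all to a shared systematic offset (accuracy). The 0.5 K bias and 1 K noise below are invented numbers, chosen only to make the point visible.

```python
import numpy as np

rng = np.random.default_rng(42)
true_T = 288.0   # a hypothetical "true" temperature, K
bias   = 0.5     # a hypothetical systematic offset shared by every reading, K
sigma  = 1.0     # random per-reading noise, K

for n in (10, 1_000, 100_000):
    readings = true_T + bias + rng.normal(0.0, sigma, n)
    est = readings.mean()
    print(f"n={n:>6}: std.err={sigma/np.sqrt(n):.4f} K, "
          f"error vs truth={est - true_T:+.3f} K")
# The standard error (precision) falls like 1/sqrt(n); the error vs truth
# (accuracy) converges to the 0.5 K bias and never improves.
```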

Things are not better for (land) surface measurements — they are worse. There the actual data is (again, in my opinion) hopelessly corrupted by confounding phenomena and the measurement errors are profound. Worse, the measurement errors tend to have a variable monotonic bias compared to the mythical “true average surface Global Temperature” one wishes to measure.

One is in trouble from the very beginning. The Moon has no atmosphere, so consider how its “global average temperature” might be defined before one even worries about measuring its temperature at all. When one wishes to speak of the surface temperature at a given point, what does one use as a definition? Is it the temperature an actual high precision thermometer would read (say) 1 cm below the surface at that point? 5 mm? 1 mm? 1 meter? All of these would almost certainly yield different results, results that depend on things like the albedo and emissivity of the point on the surface, the heat capacity and thermal conductivity of the surface matter, and the latitude. Is it the “blackbody” temperature of the surface (the inferred temperature of the surface determined by measuring the outgoing full spectrum of radiated light)?

Even inferring the temperature from the latter — probably the one that is most relevant to an airless open system’s average state — is not trivial, because the surface albedo varies, the emissivity varies, and the outgoing radiation from any given point just isn’t a perfect blackbody curve as a result.
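
To see how much the emissivity assumption alone matters, consider inverting the Stefan-Boltzmann relation F = εσT⁴ for T. A small sketch, with an illustrative flux (the flux value and emissivities are assumptions, not measurements):

```python
import numpy as np

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def inferred_T(flux, emissivity):
    """Temperature inferred from measured outgoing flux, F = eps*sigma*T^4."""
    return (flux / (emissivity * SIGMA)) ** 0.25

flux = 390.0  # W/m^2, roughly what a ~288 K blackbody emits
for eps in (1.00, 0.98, 0.95, 0.90):
    print(f"assumed emissivity {eps:.2f} -> inferred T = {inferred_T(flux, eps):.1f} K")
# A few percent of uncertainty in emissivity is already a few K in inferred T.
```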

How much more difficult is it to measure the Earth’s comparable “surface temperature” at a single point on the surface? For one thing, we don’t do anything of the sort. We don’t place our thermometers 1 meter, 1 cm, 1 mm deep in — what, the soil? The grass or trees? What exactly is the “surface” of a planet largely covered with living plants? We place them in the air some distance above the surface. That distance varies. The surface itself is being heated directly by the sun part of the time, and is radiatively cooling directly to space (in at least some frequencies) all of the time. Its temperature varies by degrees K on a time scale of minutes to hours as clouds pass between the location and the sun, as the sun sets, as it starts to rain. It doesn’t just heat or cool from radiation — it is in tight thermal contact with a complex atmosphere that has a far greater influence on the local temperature than even local variations in insolation.

Yesterday it was unseasonably warm in NC, not because the GHE caused the local temperature to be higher by trapping additional heat but because the air that was flowing over the state came from the warm wet waters of the ocean to the south, so we had a relatively warm rain followed by a nighttime temperature that stayed warm (low overnight of maybe 46F) because the sky was cloudy. Today it is almost perfectly seasonal — high 50’s with a few scattered clouds, winds out of the WSW still carrying warm moisture from the Gulf and warm air from the south central US, but as the day progresses the wind is going to shift to the NW and it will go down to a solid freeze (30F) tonight. Tomorrow it will be seasonal but wet, but by tomorrow night the cooler air that has moved in from the north will make it go down to 25F overnight. The variation in local temperature is determined far more by what is going on somewhere else than it is by actual insolation and radiation here.

If a real cold front comes down from Canada (as they frequently do this time of year) we could have daytime highs in the 30’s or low 40’s and nighttime lows down in the low 20s. OTOH, if the wind shifts to the right quarter, the temperature outside could reach highs in the low 80s and lows in the low 50s. We can, and do, have both extremes within a single week.

Clearly surface temperatures are being driven as strongly by the air and moisture flowing over or onto them as they are by the “ideal” picture of radiative energy warming the surface and radiation cooling it. The warming of the surface at any given point isn’t solely responsible for the warming or cooling of the air above it, the temperature of the surface is equally dependent on the temperature of the air as determined by the warming of the surface somewhere else, as determined by the direct warming and cooling of the air itself via radiation, as determined by phase changes of water vapor in the air and on the surface, as determined by factor of ten modulations of insolation as clouds float around over surface and the lower atmosphere alike.

Know the true average surface Global Temperature to within 1K? I don’t even know how one would define a “true” average surface Global Temperature. It was difficult enough for the moon without an atmosphere, assuming one can agree on the particular temperature one is going to “average” and how one is going to perform the average. For the Earth with a complex, wet, atmosphere, there isn’t any possibility of agreeing on a temperature to average! One cannot even measure the air temperature in a way that is not sensitive to where the sun is and what it is doing relative to the measurement apparatus, and the air temperature can easily be in the 40s or 50s while there is snow covering the ground so that the actual surface temperature of the ground is presumably no higher than 32F — depending on the depth one is measuring.

And then oops — we forgot the Oceans, that cover 70% of the surface of the planet.

What do we count as the “temperature” of a piece of the ocean? There is the temperature of the air above the surface of the ocean. In general this temperature differs from the actual temperature of the water itself by order of 5-10K. The air temperature during the day is often warmer than the temperature of the water, in most places. The air temperature at night is often cooler than the temperature of the water.

Or is it? What exactly is “the temperature of the water”? Is it the temperature of the top 1 mm of the surface, where the temperature is dominated by chemical potential as water molecules are constantly being knocked off into the air, carrying away heat? Is it the temperature 1 cm deep? 10 cm? 1 m? 10 m? 50 m? 100m? 1 km?

Is it the average over a vertical column from the surface to the bottom (where the actual depth of the bottom varies by as much as 10 km)? This will bias the temperature way, way down for deep water and make the global average temperature of the ocean very nearly 4 C (the temperature of deep water) very nearly everywhere, dropping the estimate of the Earth’s average Global Temperature by well over 10 K. Yet if we do anything else, we introduce a completely arbitrary bias into our average. Every value we might use as a depth to average over has consequences that cause large variations in the final value of the average. As anyone who swims knows, it is quite easy for the top meter or so of water to be warm enough to be comfortable while the water underneath that is cold enough to take your breath away.
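
A sketch of how strongly the answer depends on the (arbitrary) averaging depth, using an invented but qualitatively typical profile: a warm mixed layer over a thermocline decaying to deep water near 4 C. The profile parameters are assumptions, not data.

```python
import numpy as np

def T_profile(z_m):
    """Invented ocean temperature (deg C) vs depth: 25 C surface water
    decaying over a ~150 m thermocline to ~4 C deep water."""
    return 4.0 + 21.0 * np.exp(-z_m / 150.0)

for depth in (1, 10, 100, 1_000, 4_000):
    z = np.linspace(0.0, depth, 10_000)
    mean_T = T_profile(z).mean()   # column average over the top "depth" meters
    print(f"column average over top {depth:>5} m: {mean_T:5.1f} C")
# ~25 C if you average the top meter; under 5 C for a 4 km column.
```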

Even if one defines a sea surface temperature SST to go with land surface temperature LST — arbitrarily, as arbitrary in its own way as the definition that one uses for T_{LT}, or as the temperature one assigns to a particular point on the surface on the basis of a “corrected” or “uncorrected” thermometer with location biases that can easily exceed several degrees K compared to equally arbitrary definitions of what the thermometer “should” be reading for the unbiased temperature and how that temperature is supposed to relate to a “true” temperature for the location — and then tries to take the actual data for both and turn them into an average global temperature, one has a final problem to overcome. One’s data is (with the possible exception of modern satellite derived data) sparse! Very sparse.

In particular, it is sparse compared to the known and observed granularity of surface temperature variations, for both LST and SST. Furthermore, it has obvious sampling biases. We have lots and lots of measurements where people live. We have very few measurements (per square kilometer of surface area) where people do not live. Surface temperatures can easily vary by 1 K over a kilometer in lateral distance (e.g. at terrain features where one goes up a few hundred meters over a kilometer of grade). They can and do routinely vary by 1 K over distances of order 5-10 kilometers.
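
The siting-bias point can be sketched numerically: invent a temperature field over made-up terrain, then sample it only where "stations" are likely to be (low, habitable ground). The sampled mean is off simply because of where the thermometers are, not because of anything the climate did. All numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
elev = rng.uniform(0.0, 2_000.0, n)                # invented terrain, meters
# Invented field: ~6.5 K/km lapse rate plus ~1 K of local scatter.
temps = 15.0 - 6.5e-3 * elev + rng.normal(0.0, 1.0, n)

# "Stations" preferentially sited at low, habitable elevations:
p = np.exp(-elev / 500.0)
stations = rng.choice(n, size=500, replace=False, p=p / p.sum())

print(f"true field mean:      {temps.mean():.2f} C")
print(f"station-sampled mean: {temps[stations].mean():.2f} C  (siting bias only)")
```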

I can look at the Weather Underground’s weather map readings from weather stations scattered around Durham at a glance, for example. At the moment I’m typing this there is a 13 F variation from the coldest to the warmest station reading within a 15 km radius of where I’m sitting. Worse, nearly all of these weather station readings are between 50 and 55 F, but there are two outliers. One of them is 46.5 F (in a neighborhood in Chapel Hill), and the other is Durham itself, the “official” reading for Durham (probably downtown somewhere), which is 59.5 F!

Guess which one will end up being the temperature used to compute the average surface temperature for Durham today, and assigned to an entirely disproportionate area of the surface of the planet in a global average surface temperature reconstruction?

Incidentally, the temperature outside of my house at this particular moment is 52F. This is a digital electronic thermometer in the shade of the north side of the house, around a meter off of the ground. The air temperature on the other side of the house is almost certainly a few degrees warmer as the house sits on a southwest-facing hill with pavement and green grass absorbing the bright sunlight. The temperature back in the middle of the cypresses behind my house (dense shade all day long, but with decent airflow) would probably be no warmer than 50 F. The temperature a meter over the driveway itself (facing and angled square into the sun, and with the house itself reflecting additional heat and light like a little reflector oven) is probably close to 60 F. I’m guessing there is close to 10F variation between the air flowing over the southwest facing dark roof shingles and the northeast facing dark roof shingles, biased further by loss of heat from my (fairly well insulated) house.

I don’t even know how to compute an average surface temperature for the 1/2 acre plot of land my own house sits on, today, right now, from any single thermometer sampling any single location. It is 50 F, 52 F, 58 F, 55 F, or 61 F, depending on just where my thermometer is located. My house is on a long hill (over a km long) that rises to an elevation perhaps 50-100 m higher than my house at the top — we’re in the piedmont in between Durham and Chapel Hill, where Chapel Hill really is up on a hill, or rather a series of hills that stretch past our house. I’d bet a nickel that it is a few degrees different at the top of the hill than it is where my house is today. Today it is windy, so the air is well mixed and the height is probably cooler. On a still night, the colder air tends to settle down in the hollows at the bottoms of hills, so last frost comes earlier up on hilltops or hillsides; Chapel Hill typically has spring a week or so before Durham does, in contradiction of the usual rule that higher locations are cooler.

This is why I am enormously cynical about Argo, SSTs, GISS, and so on as reliable estimates of average Global Temperature. They invariably claim impossible accuracy and impossible precision; mere common sense suffices to reject such claims. If they disagree, they can come to my house and try to determine what the “correct” average temperature is for my humble half acre, and how it can be inferred from a single thermometer located on the actual property, let alone from a thermometer located in some weather station out in Duke Forest five kilometers away.

That is why I think that we have precisely 33 years of reasonably reliable global temperature data, not in terms of accuracy (which is unknown and perhaps unknowable) but in terms of statistical precision and as the result of a reasonably uniform sampling of the actual globe. The UAH T_{LT} is what it is, is fairly precisely known, and is at least expected to be monotonically related to a “true average surface Global Temperature”. It is therefore good for determining actual trends in global temperature, not so good for making pronouncements about whether or not the temperature now is or is not the warmest that it has been in the Holocene.
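
The reason a record can be good for determining trends while its absolute accuracy is unknown is easy to sketch: a least-squares trend is completely insensitive to any constant offset in the baseline. The data below are synthetic (the 0.014 K/yr slope and noise level are invented, not UAH's).

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.arange(33 * 12) / 12.0                      # a 33-year monthly record, years
anoms = 0.014 * t + rng.normal(0.0, 0.2, t.size)   # invented trend + noise, K

def ols_slope(y, t):
    """Slope of the least-squares line through (t, y)."""
    A = np.vstack([t, np.ones_like(t)]).T
    return np.linalg.lstsq(A, y, rcond=None)[0][0]

for offset in (0.0, 1.0, 5.0):                     # unknown absolute baseline, K
    print(f"baseline offset {offset:3.1f} K -> fitted trend "
          f"{ols_slope(anoms + offset, t):.4f} K/yr")
# The fitted trend is identical for every offset: trends survive an unknown
# (but constant) absolute calibration; the absolute level does not.
```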

Hopefully the issues above make it clear just how absurd any such assertion truly is. We don’t know the actual temperature of the globe now, with modern instrumentation and computational methodology, to an accuracy of 1 K in any way that can be compared apples-to-apples to any temperature reconstruction, instrument based or proxy based, from fifty, one hundred, one thousand, or ten thousand years ago. 1 K is the close order of all of the global warming supposedly observed since the invention of the thermometer itself (and hence the start of the direct instrumental record). We cannot compare even “anomalies” across such records — they simply don’t compare because of confounding variables, as the “Hide the Decline” and “Bristlecone Pine” problems clearly reveal in the hockey stick controversy. One cannot remove the effects of these confounding variables in any defensible way, because one does not know what they are: things (e.g. annual rainfall and the details of local temperature and many other things) are not the same today as they were 100 years ago, and we lack the actual data needed to correct the proxies.

A year with a late frost, for example, can stunt the growth of a tree for a whole year by simply damaging its new leaves or can enhance it by killing off its fruit (leaving more energy for growth that otherwise would have gone into reproduction) completely independent of the actual average temperature for the year.

To conclude, one of many, many problems with modern climate research is that the researchers seem to take their thermal reconstructions far too seriously and assign completely absurd measures of accuracy and precision, with a very few exceptions. In my opinion it is categorically impossible to “correct” for things like the UHI effect — it presupposes a knowledge of the uncorrected temperature that one simply cannot have or reliably infer from the data. The problem becomes greater and greater the further back in time one proceeds, with big jumps (in uncertainty) 250, 200, 100 and 40 odd years ago. The proxy-derived record from more than 250 years ago is uncertain in the extreme, with the thermal record of well over 70% of the Earth’s surface completely inaccessible and with an enormously sparse sampling of highly noisy and confounded proxies elsewhere. To claim accuracy greater than 2-3 K is almost certainly sheer piffle, given that we probably don’t know current “true” global average temperatures within 1 K, and 5K is more likely.

I’m certain that some paleoclimatologists would disagree with such a pessimistic range. Surely, they might say, if we sample Greenland or Antarctic ice cores we can obtain an accurate proxy of temperatures there 1000 or 2000 years ago. Why aren’t those comparable to the present?

The answer is because we cannot be certain that the Earth’s primary climate drivers distributed its heat the same way then as now. We can clearly see how important e.g. the decadal oscillations are in moving heat around and causing variations in global average temperature. ENSO causes spikes and seems responsible for discrete jumps in global average temperature over the recent (decently thermometric) past that are almost certainly jumps from one Poincaré attractor to another in a complex turbulence model. We don’t even know if there was an ENSO 1000 years ago, or, if there was, whether it was at the same location and had precisely the same dependences on e.g. solar state. As a lovely paper Anthony posted this morning clearly shows, major oceanic currents jump around on millennial timescales that appear connected to millennial scale solar variability and almost certainly modulate the major oscillations themselves in nontrivial ways. It is quite possible for temperatures in the Antarctic to anticorrelate with temperatures in the tropics for hundreds of years and then switch so that they correlate again. When an ocean current is diverted, it can change the way ocean average temperatures (however one might compute them, see above) vary over macroscopic fractions of the Earth’s surface all at once.

To some extent one can control for this by looking at lots of places, but “lots” is in practice highly restricted. Most places simply don’t have a good proxy at all, and the ones that do aren’t always easy to accurately reconstruct over very long time scales, or they sacrifice all sorts of shorter-time-scale information to yield the longer-time-scale averages one can get. I think 2-3 K is a generous statement of the probable real error in most reconstructions for global average temperature over 1000 years ago, again presuming one can define an apples-to-apples global average temperature to compare to, which I doubt. Nor can one reliably compare anomalies over such time scales, because of the confounding variables and drift.

This is a hard problem, and calling it settled science is obviously a political statement, not a scientific one. A good scientist would, I truly believe, call this unsettled science, science that is understood far less than physics, chemistry, even biology. It is a place for utter honesty, not egregious claims of impossibly accurate knowledge. In my own utterly personal opinion, informed as well or as badly as chance and a fair bit of effort on my part have thus far informed it, we have 33 years of a reasonably precise and reliable statement of global average temperature, one which is probably not the true average temperature assuming any such thing could be defined in the first place but which is as good as any for the purposes of identifying global warming or cooling trends and mechanisms.

Prior to this we have a jump in uncertainty (in precision, not accuracy) compared to the ground-based thermometric record that is strictly apples-to-oranges compared to the satellite derived averages, with error bars that rapidly grow the further back one goes in the thermometric record. We then have a huge jump in uncertainty (in both precision and accuracy) as we necessarily mount the multiproxy train to still earlier times, where the comparison has unfortunately been between modern era apples, thermometric era oranges, and carefully picked cherries. Our knowledge of global average temperatures becomes largely anecdotal, with uncertainties that are far larger than the observed variation in the instrumental era and larger still than the reliable instrumental era (33 year baseline).

Personally, I think that this is an interesting problem and one well worth studying. It is important to humans in lots of ways; we have only benefitted from our studies of the weather and our ability to predict it is enormously valuable as of today in cash money and avoided loss of life and property. It is, however, high time to admit the uncertainties and get the damn politics out of the science. Global climate is not a “cause”! It is the object of scientific study. For the conclusions of that science to be worth anything at all, they have to be brutally honest — honest in a way that is utterly stripped of bias and that acknowledges to a fault our own ignorance and the difficulty of the problem. Pretending that we know and can measure global average temperatures from a sparse and short instrumental record where it would be daunting to assign an accurate, local average temperature to any given piece of ground based on a dense sampling of temperatures from different locations and environments on that piece of ground does nothing to actually help out the science — any time one claims impossible accuracy for a set of experimentally derived data one is openly inviting false conclusions to be drawn from the analysis. Pretending that we can model what is literally the most difficult problem in computational fluid dynamics we have ever attempted with a handful of relatively simple parametric differential forms and use the results over centennial and greater timescales does nothing for the science, especially when the models, when tested, often fail (and are failing, badly, over the mere 33 years of reliable instrumentation and a uniform definition of at least one of the global average temperatures).

It’s time to stop this, and just start over. And we will. Perhaps not this year, perhaps not next, but within the decade the science will finally start to catch up and put an end to the political foolishness. The problem is that no matter what one can do to proxy reconstructions, no matter how much you can adjust LSTs for UHI and other estimated corrections that somehow always leave things warmer than they arguably should be, no matter what egregious claims are initially made for SSTs based on Argo, the UAH T_{LT} will just keep on trucking, unfutzable, apples to apples to apples. The longer that record gets, the less one can bias an “interpretation” of the record.

In the long run that record will satisfy all properly skeptical scientists, and the “warmist” and “denier” labels will end up being revealed as the pointless political crap that they are. In the long run we might actually start to understand some of the things that contribute to that record, not as hypotheses in models that often fail but in models that actually seem to work, that capture the essential longer time scale phenomena. But that long run might well be centennial in scale — long enough to detect and at least try to predict the millennial variations, something utterly impossible with a 33 year baseline.

rgb

Comments
John another
March 5, 2012 8:37 pm

Nick Stokes says: March 5, 2012 at 8:11 pm
John another says: March 5, 2012 at 7:50 pm
“How can I know the change in volume of my gas tank in ounces but not know the actual volume to within pints?”
Measure the flow in the fuel line.
If the flow meter can only measure in pints how does it measure flow rate to less than an ounce? Same instrument for different metrics.
But right now I’m much more interested in your response to both parts of George M says: March 5, 2012 at 7:31 pm immediately above my hokey post.

AlexS
March 5, 2012 8:58 pm

“I tell you that its 20C at my house and 24C 10 miles away. Estimate the temperature at 5 miles away”
This part of a Steven Mosher post shows the problem. If the Station is 10 miles away what will be the “correlation”? Good? Bad?

Nick Stokes
March 5, 2012 8:59 pm

John another says: March 5, 2012 at 8:37 pm
“If the flow meter can only measure in pints how does it measure flow rate to less than an ounce? Same instrument for different metrics.
But right now I’m much more interested in your response to both parts of George M says”

Well, at the gas station you pay to the nearest cent. That’s a pretty good indicator of volume change.
I’ve already responded frequently to people who show lists of anomalies. That’s different, and its validity is based on correlation. If you want to dispute that, you need to cite some statistics.

John another
March 5, 2012 9:25 pm

Nick Stokes says: March 5, 2012 at 8:59 pm
So the pump can tell me to the nearest penny what went into my tank. With the same flow meter on the line out I should be able to tell (to the nearest penny) what’s in the tank.
Now about that difference in energy content of a cubic foot of air that George referred to with Max Hugoson’s discussion of Minn. and AZ temps and humidities? Sounds like temp by itself is a pretty useless metric for energy.

John another
March 5, 2012 9:36 pm

Nick
Are you saying that George M’s list of GISS temperature anomalies, reported in hundredths of a degree, are not GISS temperatures?

Nick Stokes
March 5, 2012 9:54 pm

” With the same flow meter on the line out I should be able to tell (to the nearest penny) whats in the tank.”
No, that’s just another measure of change. In fact, you probably really don’t know to the nearest pint what’s in your tank. Even though you paid to the nearest penny.
George correctly labelled his table as Gistemp anomaly data. They are not “globalized annual temperature”.
As to the energy content of air, there’s only so many things you can usefully object to. GISS is measuring temperature anomaly. That’s all they claim. If you want total energy content, you’ll have to work with humidity too.
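
[A minimal sketch of the anomaly logic being described here: a constant, unknown station bias cancels when readings are referenced to the station's own baseline period. It does nothing, of course, for biases that change over time, which is the objection raised in the post. All numbers are invented.]

```python
import numpy as np

rng = np.random.default_rng(3)
years = np.arange(1951, 2011)
true_T = 14.0 + 0.01 * (years - 1951)        # invented slow warming, deg C
bias = 1.3                                   # unknown constant station bias
readings = true_T + bias + rng.normal(0.0, 0.1, years.size)

baseline = readings[(years >= 1951) & (years <= 1980)].mean()
anomaly = readings - baseline                # the constant bias cancels here

print(f"error in absolute level: {readings.mean() - true_T.mean():+.2f} C")
print(f"2010 anomaly:            {anomaly[-1]:+.2f} C  (bias-free up to noise)")
```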

John another
March 5, 2012 10:44 pm

Nick, if I measure pennies in and pennies out, I know (to the penny) what is in the tank at any given time.
You are measuring with the exact same instruments the global temperature and the anomaly of said temperature. If the accuracy of the baseline is 1 degree either way how does one know if a change of one tenth of a degree is a change in real temperature?
“As to the energy content of air, there’s only so many things you can usefully object to. GISS is measuring temperature anomaly. That’s all they claim.”
Please report that immediately to the media.
Thanks Nick for your responses but I’ll look elsewhere for a little less convoluted education. I really am interested in the truth.

Alan D McIntire
March 6, 2012 5:51 am

k scott denison says:
March 4, 2012 at 2:04 pm
mosher
“While your argument contains some sense, using only 7,000 measurements for the entire surface of the earth isn’t exactly enough to say we have a measurement every 10 miles, is it? I think your analogy is not only inaccurate but also disingenuous.”
Earth has a surface area of 197 million square miles. Land makes up about 30% of this, or 59 million square miles. 59,000,000 / 7,000 measuring sites = 1 measuring site for every 8,400 square miles. Most of those sites are in relatively urbanized areas.

Robert Brown
March 6, 2012 6:09 am

I am interested in your comment about lower GHE in recent years due to lower stratospheric water vapor.
I don’t know. The measurements are still fairly recent, and I haven’t read of any explanation. If I had to guess, I would guess that increased cloud formation in the troposphere is causing more moisture to precipitate out before it gets to the stratosphere, creating a cooling negative feedback.
rgb

March 6, 2012 6:13 am

Wayne Delbeke says: March 5, 2012 at 12:17 pm
Interesting on [abovementioned] page was notification of the shutting down of the Canadian weather station at Eureka, Nunavut, Canada – which is a bit sad given it is a good arctic weather station.
Attention please, someone! I seem to remember Eureka was already being vastly overused to show the temperature of great swathes of the far north. And if it goes, does that give GISS this year’s heist?
If I’ve mis-remembered, apologies.

Robert Brown
March 6, 2012 7:13 am

Measure the flow in the fuel line.
This isn’t even a strained metaphor for GISS, it is just false. GISS is trying to measure levels in the tank, not flow in versus flow out.
Which you just agreed could not be done. We cannot, or at least have not been able to, actually measure and integrate TOA outgoing power versus incoming power with sufficient precision to actually measure the “flow in the fuel line” — ins versus outs. Although to be honest, that’s ultimately the only way to really resolve the issue — directly measure, and study, the full TOA spectrum over a considerable amount of time to high precision.
We can, on the other hand, measure the flow in pretty accurately, and we can also measure the direct reflection of the flow in pretty accurately. The flow in has reduced. What the mechanism is of the reduction one can argue about, but the direct measurements on insolation and albedo are harder to argue with.
The flow out is the eternal problem, of course. As I said, IMO there is no question that there is a GHE that helps warm the atmosphere. There is no convincing evidence that the GHE is as sensitive to CO_2 concentration as the IPCC claims, and such evidence as there is is both dubious (MBH, the earlier part of GISS) and suffers from the assumption that CO_2 is the only important factor in late 20th century warming in spite of the fact that the natural variability of the climate is sufficient to explain at the very least an unknown fraction of it. Finally, the arguments for strong feedback from e.g. H_2O that will multiply any CO_2 based warming by factors of 3 to 5 are weak.
So the mere fact of warming is not sufficient to conclude either that there is a catastrophe looming or that the bulk of the warming is anthropogenic in origin. Nor are simulations based on GCMs that neglect variables that almost certainly have significant influence on the climate (such as the ones that were determining the natural variability in the climate that MBH attempted to erase with their hockey stick analysis).
Nick, the assertions of CAGW are serious business. You are talking about diverting not just tens, but hundreds of billions of dollars, money that can be put to use in many ways. You assert that the science is sound (at least, that’s what I think you are doing) but if I were to ask you to explain why stratospheric H_2O is decreasing, why the Earth was (apparently) warmer at the Holocene Optimum than it is now and was nearly as warm during the Medieval Optimum, why the Earth’s temperature appears to be strongly correlated with solar state in ways that cannot be explained by just variations in surface brightness, why the GCMs predict more atmospheric warming than surface warming but the satellite data confounds this, why the GCMs don’t work very well to forecast or hindcast, why the Earth’s cloud albedo appears to be increasing as we move out of the Grand Solar Maximum of the 20th century — could you provide explanations for all of these things?
Are you not just making a complex modern version of the religious argument known as Pascal’s Wager? It’s definitely warming, and we might be responsible for some or even all of it, and if a bunch of unproven assumptions are true it might lead us to a climate catastrophe, so we should invest enormous amounts of resources that could all be used to make the world better in many other ways in things that even the proponents acknowledge will not prevent the catastrophe — if they are right about it occurring in the first place? With, perhaps, the added sauce of a general agreement that some of those measures are (in your opinion) things that are good things to do anyway…
My own belief is that the data shows that there is little reason to panic — the higher levels of feedback required for catastrophe are simply implausible in a generally stable climatic system, and if one asserts that the system is not stable, then one is back having to deal with the natural variability issue once again. There is time to a) get the science right; and b) let other science catch up to where one can fix the problem (if it eventually proves that we do have a problem) far more cheaply than we can today.
One of the things I’ve worked quite a bit on is supercomputing. Physicists have always loved supercomputers. Back in the old days, if you wanted to get a big computation done you needed e.g. a Cray to do it. The only problem was that a Cray Y-MP might cost ten or twelve million dollars, plus a few million a year to keep it housed and fed and to pay its priesthood. To get time on a Cray you had to apply to the priests and if they found your offerings good you were granted a budget of so and so many hours.
Of course physicists love to publish papers, so many grants were written and lots of work was done at very great expense on these enormously expensive machines. But in the meantime, Moore’s Law was at work.
Moore’s Law, once beowulf-style homemade supercomputers were invented, was pretty brutal — I’ve worked out the economics of it (and published it in various places to the only people that matter) several times. Given a five year grant and a doubling time of 18 months in CPU speed, when should you invest your money to get the most work done? If you spend it all in year 1, it takes you 5 years to do some task (say). If you take a holiday for 18 months, and buy in then, you get three and a half years with twice the speed, and actually get 7 years worth of work done in five. If you wait three years and buy in then, you get 2 years worth of work with four times the speed for 8 units of work in five years. If you wait until four and a half years and buy systems that are 8 times faster, you almost match five years worth of work in half a year, and match the 8 if you can run just a bit longer.
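The buy-in arithmetic above is easy to check; a sketch using the figures given in that paragraph (18-month doubling time, 5-year grant):

```python
# Work done under a 5-year grant if you buy hardware after waiting w years,
# with CPU speed doubling every 18 months (work in year-0 machine-years).
for wait in (0.0, 1.5, 3.0, 4.5):
    speedup = 2.0 ** (wait / 1.5)
    work = (5.0 - wait) * speedup
    print(f"buy at year {wait:3.1f}: speed x{speedup:3.0f}, work = {work:3.1f} units")
# -> 5.0, 7.0, 8.0, 4.0 units: waiting 3 years gets 8 years' worth of year-0
#    work done; waiting 4.5 years nearly matches the spend-immediately case.
```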
We’ve ridden that curve to the point where stuff I did with hundreds of hours of Cray time at great expense I could now do on the laptop I’m typing this on in the time it takes to type this reply, to where I could reproduce years worth of work I did years after that (with a compute cluster) in months with a very small compute cluster with modern CPUs. Huge amounts of research that were done at great expense in the 70s, 80s, 90s, and 2000s could now be done at very little expense now.
How “burning” was our need to do the work back then? Not terribly. Most of the results of work on the Crays of the world were not terribly useful in the sense that they had a direct payback to society. On the other hand, it wasn’t very expensive (not compared to fighting wars and other real expenses) and there were a lot of indirect benefits, including supporting the best research and educational system in the world that is directly responsible for the bulk of our social and economic wealth. But still, a brutally honest cost benefit analysis would in many cases have told people seeking grants to come back in three years with a proposal 1/4 the size to accomplish the same thing, a bit later.
There were many cases where I (acting as a consultant) advised just that, especially when a few million dollars was being allocated to build a custom hand-made computer based on custom ASICs where anybody could see that over the counter computers would be able to match its eventual projected design speed by the time it was finished.
We are doing exactly the same thing with respect to CAGW, only on even shakier ground since the catastrophic bit in that hasn’t been proven at all, and the anthropogenic bit is at most a fraction of the total. We can invest our money heavily now as if it is an “emergency” with the net effect of making ourselves poor and not solving the problem, or we can invest our money a lot more reasonably now in the science and technology that might both objectively — non-politically — study the climate and eventually become real remedies should they ever be needed.
In the meantime, take heart! There has been basically no warming since 1998, and the current solar cycle is going to be the lowest in 130 years, from the looks of it. The next one is forecast (with a fair bit of uncertainty, I admit) to be even weaker. The last time this happened the climate cooled. Are you willing to tell me that you are certain that the temperature is about to just shoot right up and rejoin the various predictions that were made a decade ago for how hot it was supposed to be outside right now? Or is there just a bare smidgen of a chance that those predictions were wrong, because they left out the effect of several major drivers of the climate and put it all on CO_2?
But no, somehow the planet’s refusal to actually cooperate by getting warmer as predicted seems to anger those that believe in the prediction. The discovery that things are more complex (in good ways!) should come as good news, but it never seems to actually do so.
Why is that?
rgb

Frank K.
March 6, 2012 7:51 am

Nick Stokes says:
March 5, 2012 at 8:59 pm
I’ve already responded frequently to people how show lists of anomalies. That’s different, and it’s validity is based on correlation. If you want to dispute that, you need to cite some statistics.
There you go again, Nick. Can you PLEASE back up your claim that “its validity is based on correlation”? What correlation? How are you computing the correlation coefficients for the anomalies (the Hansen paper does NOT show this, of course)? KR claims “strong correlation”. What does this mean? How strong is “strong”? Until you can be more precise about your claims, they’re just a bunch of scientific-sounding mumbo jumbo…

Norman Page
March 6, 2012 8:34 am

Robert Brown
Thanks for your recent series of posts – particularly your post at 3/5 1:15 pm, which is clearly brilliant since it so clearly expresses my own views on the main climate drivers pretty much exactly.
You may be interested to know that Svalgaard is leading a group to improve the proxy record of solar wind changes back through the Holocene. Clearly, when this record is established we should be able to use wavelet analysis to pick out the key frequencies in the GCR (and then albedo) record and relate these to the proxy temperature record. See
http://www.leif.org/research/Svalgaard_ISSI_Proposal_Base.pdf

Robert Brown
March 6, 2012 1:30 pm

You may be interested to know that Svalgaard is leading a group to improve the proxy record of solar wind changes back through the Holocene.
Svalgaard may turn out to be right or he may turn out to be wrong. However, at this particular instant in the history of the world nobody knows for sure which one it will turn out to be.
Not Svalgaard himself. Not Mann. Not Jones. Not Nick. Not K.R. Not you. Not me.
That’s why they call it scientific research, an ongoing process of discovery.
There are systematic correlations already visible in the paleoclimate record. There are extremely strong systematic correlations from the last 500 years where we begin to have moderately accurate and scientific observations (which had to wait for the scientific era and the Enlightenment). Svalgaard has a hypothesis that could explain the correlations. It is physically plausible, supported by evidence already, but has missing detail (like nearly everything in climate research). However, it is a bone simple hypothesis, actually much simpler than the GHE, and one that is very definitely falsifiable and to a reasonable extent verifiable as well. The connection between weak solar magnetic field and higher levels of GCR impacts on the upper atmosphere is, I think, at this point beyond question. The only question is whether or not higher GCR rates can modulate nucleation of clouds. If the answer is yes, then the hypothesis has considerable explanatory power. If it is no, we are still left with the original correlations that must then be explained in some other way!
What isn’t an option is to pretend that the correlations themselves don’t exist, and don’t represent a serious problem with the CO_2-is-king theory of GW. That problem is there no matter what, because one of the fundamental assumptions of AGW is that the 20th century was “normal” so that the warming observed in it was “abnormal” and required explanation, and the only explanation given normal conditions was CO_2.
However, if the 20th century was abnormal (if, for example, the sun was in an 11,000 year Grand Maximum of solar activity for much of the century, or even a much more moderate 1000-2000 year Grand Maximum), then, if the data themselves do not lie, that solar activity must at least be considered and modelled as a possible cause for at least part of the 20th century warming.
This is anathema to the IPCC and its ARs. They do not want to admit even the possibility of a confounding co-factor to the warming. There have been numerous open complaints about this, including complaints from AR reviewers. When e.g. Soon and Baliunas publish work that suggests a solar influence, the climategate letters clearly indicate the extremes to which at least the hockey team is willing to go to discredit them. When Svalgaard emerges with a highly coherent hypothesis with supporting experimental data, they trip over themselves to dismiss it.
After all, the more of the 20th century warming that was natural, the less that is left over to attribute to CO_2 (anthropogenic or not, since if the 20th century warming was natural, a large part of the additional CO_2 was almost certainly released from the warming ocean, not from anthropogenic sources).
This is anathema to the political agenda of the collaboration between the IPCC, the rabid anti-civilization “green” environmentalists that want to create what amounts to a new secular religion, and normal people that dislike and distrust the multinational global energy industry for the very sound reason that it manipulates world politics to its economic advantage as standard operating procedure. Without a monolithic, human cause for all of the observed warming, how can one justify spending billions and billions of dollars regulating CO_2 emissions? Without the further assertion that CO_2 is leading us to a global catastrophe how can one justify the massive redistribution of wealth associated with Carbon Taxes?
I personally think that a lot of the things being done to address the real problems that do exist with fuel based energy resources are good things. But I’m not prepared to lie or mislead myself or the general public in order to coerce agreement with my opinions. Willis, for example, clearly disagrees with me about the importance and probable future history of solar power, but I wouldn’t lie to Willis and try to scare him with threats of global catastrophe to get him to politically support the continued development of solar energy. He has to make up his own mind about that, given his own best analysis of the situation. There is room for people to disagree and vote as they wish based on a fair appraisal of risk and benefit.
What I don’t like is de facto deceiving the public about an entire raft of things that would weaken the case of the CAGW enthusiasts. The warming is exaggerated at every turn. The past thermal history of the world is falsely reconstructed and, when corrected, the corrections are never made as public as the original mistakes (e.g. the Hockey Stick). Politicians with vested interests manipulate public opinion with pictures of polar bears on melting icebergs, without telling you that it is midsummer and the icebergs in question are a few meters from shore and that the bears are playing on the icebergs as they do every summer and aren’t in any danger whatsoever. Problems with the GCMs, their failure to either predict future temperatures or even predict the correct relationship between surface warming and lower troposphere warming are carefully elided from any public presentation of the terrible dangers of CAGW that those same GCMs are the primary evidence for. Error bars that are openly absurd are attached to e.g. the GISS reconstructions because only with small errors can one be certain that the alleged warming has even taken place.
If all of this was honestly and publicly published, not in the back of AR4 or AR5, but on national news where normal humans could see it, it is true that CAGW might not get the political support it needs to convince people to make the sacrifices it demands. This is just tough! The whole point of democracy is that the people themselves need to make informed choices, and then live with the consequences of those choices. Misinforming them to manipulate the choices so that they come out to what you think they should be, even “for their own good”, is shameful.
rgb

kakatoa
March 6, 2012 2:07 pm

Dr. Brown,
Loved the analogy of Crays (supercomputers) and technological development (with the costs and benefits over time) and the effort to cure the theoretical CO2 problem. In honor of Chippewa Falls, home of the original Cray Research center, I would like to offer you a virtual Leinenkugel draft beer from the same city. If you had a tip jar I would make the virtual draft more lifelike.

Jurgen
March 6, 2012 2:19 pm

As for climate: there are not enough reliable data yet, and the reliable data we do have have limited validity for calculating climate trends, so what, if anything, is there still to do for “climate science”?
As climate is a long-term activity by definition and “consistent and reliable” data-gathering is still in its infancy, in my opinion all climate science could do right now is mainly concentrate on collecting data for many many years to come and be very very patient.
Being very patient is clearly very difficult for ambitious career oriented and politically motivated scientists. So if you are in that province don’t choose climate science.
If you still keep going on validating “settled” climate science as it is being sold to the public right now, remember where you position yourself as a scientist… there are lies, damned lies, statistics and, worse still… there is “settled climate science”… Maybe you can earn some money with it, or even some authority with the public, but that’s where you stand as a scientist. Because right now, in my opinion, you have nothing really substantial to work with, nothing… So the only thing you can do is trickery with numbers. That’s your only show.
Well, for that I would prefer, say, Joseph Madachy, or Sam Loyd, or Martin Gardner, to name but a few… much more fun… and very proficient at their trade…

Nick Stokes
March 6, 2012 3:47 pm

Robert Brown says: March 6, 2012 at 7:13 am
“This isn’t even a strained metaphor for GISS, it is just false. GISS is trying to measure levels in the tank, not flow in versus flow out.”

I don’t think it is a particularly good metaphor; I didn’t choose it. John a’s criticism that a different instrument is used has some merit. But not this. I’ve quoted GISS on the levels in the tank:
“For the global mean, the most trusted models produce a value of roughly 14°C, i.e. 57.2°F, but it may easily be anywhere between 56 and 58°F and regionally, let alone locally, the situation is even worse.”
They’re not even trying to measure global annualized temperature; they have to model. And there’s no claim of accuracy there.
I know the issue of AGW is serious. It’s potentially a critical problem, which is why governments might choose to spend a lot of money to alleviate or avoid it. You try to shift the burden of proof to one side, but the costs of not acting may also be large.
The fact is that we’ve burnt 350 Gt C and increased the C in the air by about 30%. That has had a modest effect.
But there are 3000 Gt that we could easily burn, and without contrary action, probably will over the next century. Possibly over half of that in the next fifty years. Now worries about the effect of that change to the atmosphere are very well justified. And if we want to make a collective decision to attempt to control our fate there, we’ll have to get a move on.

Norman Page
March 6, 2012 3:50 pm

Robert Brown – Re your last post Here! Here! but are you confusing Svalgaard with Svensmark perhaps?

Robert Brown
March 7, 2012 9:13 am

The fact is that we’ve burnt 350 Gt C and increased the C in the air by about 30%. That has had a modest effect.
But there are 3000 Gt that we could easily burn, and without contrary action, probably will over the next century. Possibly over half of that in the next fifty years. Now worries about the effect of that change to the atmosphere are very well justified. And if we want to make a collective decision to attempt to control our fate there, we’ll have to get a move on.

Now you are starting to sound reasonable. Let’s continue. It has had a “modest” effect, but if I asked you to positively quantify that effect, you couldn’t do so within a factor of three because the observed warming is not only due to the CO_2 — we have ample evidence that global temperature is multifactorial and that there is a range of variation sufficient to explain as much as 100% of the warming observed since the LIA in terms of natural, non-anthropogenic drivers. Since we cannot predict what past temperatures were, and cannot predict what the current temperature should be, we cannot tell what fraction of the current temperature known as an “anomaly” or not is due to natural, non-anthropogenic factors.
Your fears are predicated upon observed variation in a quantity that is not known to better than around 1% and that has many factors that modulate its actual value. You don’t know what those factors are, or how they contribute (fractionally) to the total. Those factors are complex and their dynamical contribution has many time scales, some of them as long as millennial (there are clear thousand year signals in the Holocene thermal record, probably tied to some combination of solar variation and the equilibration time for the ocean). The variations that concern you are a few tenths of a percent of the total, unknown value, that you think you know because of a complex fit of data from a tiny fraction of the Earth’s surface.
Yet the modest degree of uncertainty you are finally expressing is nearly completely absent in the AR reports. It is there in the research, to be sure, but it is somehow always omitted from the final reports. They are always certain that CO_2 is a problem, and without exception exaggerate its projected consequences by pure multiplication of its projected effect by a fudge factor since by itself it would not lead to any sort of catastrophe.
This is real money we are talking about. It affects people’s lives right now. It is causing a certain amount of human misery as we try to deal with CO_2 in a panic under the pretext that we are certain that human civilization as we know it will end (or be profoundly damaged) if we don’t.
The data itself suggest that in fact we don’t need to “get a move on”. Events are moving along on their own in ways that will ultimately ameliorate the CO_2 “problem”, if any such problem emerges from the uncertainty of climate science once the current horrendous level of confirmation biased research and alarmism passes. I’m all for research into renewable energy, simply because human civilization cannot persist indefinitely on non-renewable energy; we are gifted with a planet that has enough readily available fossil fuel resources to bootstrap the process of advancing to a steady state civilization but that doesn’t mean we should squander those resources by utilizing them to their limit. Some of them, like oil, are “black gold” in the form of raw material for organic chemistry and manufacturing, and all of them are scarce enough to want to preserve them for millennial time scales (for our children, as it were).
I fully expect research and development into renewable (and non-carbon fuel based) energy resources to yield not only cost-effective alternatives to carbon based energy resources, but cost advantageous resources, long before CO_2 emerges as a critical problem (if in fact it eventually works out that it is). I also expect that the development of these resources will cost only a few billions of dollars a year at the outside compared to the tens of billions on up that active control of carbon now with immature technology is costing.
Sometimes the right thing to do really is wait and see, and work at a modest scale on things that are interesting and useful in their own right in the meantime against the small chance that CAGW is a true hypothesis. Right now the data seems to suggest that it is not, that the huge positive feedbacks proposed by Hansen and others are egregious and that the real feedback is probably neutral to negative. It also suggests that the “A” in CAGW is the smaller part of the observed 20th century warming.
rgb

Robert Brown
March 7, 2012 9:15 am

Robert Brown – Re your last post Here! Here! but are you confusing Svalgaard with Svensmark perhaps?
Probably. I suck at names. It took me a dozen rounds of reading their paper to actually remember Nikolov and Zeller’s names well enough to not write them as N&Z.
Sorry.
rgb

Brian H
April 3, 2012 12:11 am

Norman Page says:
March 6, 2012 at 3:50 pm
Robert Brown – Re your last post Here! Here! but are you confusing Svalgaard with Svensmark perhaps?

rgb’s not the only confused one! It’s “Hear! Hear!”, a traditional cry of support in the British Parliament, short for “Hear the man!”
:p

Brian H
April 3, 2012 12:13 am

Norman;
but I absolutely support the sentiment! I’ve saved every one of his posts in this thread to permanent storage.
