Guest Post by Dr. Robert Brown,
Physics Dept. Duke University [elevated from comments]
Dr. Brown mentions “global temperature” several times. I’d like to know what he thinks of this.
Dr. Brown thinks that this is a very nice piece of work, and is precisely the reason that he said that anybody who claims to know the annualized average temperature of the Earth, or the Ocean, to 0.05 K is, as the saying goes, full of [snip] up to their eyebrows.
What I think one can define is an average “Global Temperature” — noting well the quotes — by following some fixed and consistent rule that goes from a set of data to a result. For example, the scheme that is used to go from satellite data to the UAH lower troposphere temperature. This scheme almost certainly does not return “the average Global Temperature of the Earth” in degrees absolute as something that reliably represents the coarse-grain averaged temperature of (say) the lowest 5 kilometers of the air column, especially not the air column as its height varies over an irregular terrain that is itself sometimes higher than 5 kilometers. It does, however, return something that is likely to be close to what this average would be if one could sample and compute it, and one at least hopes that the two would co-vary monotonically most of the time.
The accuracy of the measure is very likely not even 1 K (in my opinion; others may disagree), where accuracy means the absolute difference between the lower troposphere temperature estimate and the "true global temperature" of the lower troposphere. The various satellites that contribute to the temperature record have (IIRC) a variance on this order, so the data itself is probably no more accurate than that. The "precision" of the data is distinct: it is a measure of how much variance there is among the data sources themselves, and it is a quantity that can be systematically improved by more data. Accuracy cannot, especially in a situation like this where one is indirectly inferring a quantity that is not exactly the same as what is being measured; it can only be improved by figuring out the map between the data one is using and the actual quantity one is making claims about.
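The accuracy/precision distinction can be made concrete with a toy simulation (all numbers hypothetical: a "true" value of 288 K, a systematic instrument bias of 1.5 K, random scatter of 0.8 K per reading). Averaging more readings shrinks the scatter of the mean roughly like 1/sqrt(n), but the mean converges to the biased value, never to the truth:

```python
import random

random.seed(42)

TRUE_VALUE = 288.0   # hypothetical "true" temperature, K
BIAS = 1.5           # hypothetical systematic offset of the inference, K
NOISE = 0.8          # hypothetical random scatter per reading, K

def sample_mean(n):
    """Mean of n readings from a biased, noisy instrument."""
    readings = [TRUE_VALUE + BIAS + random.gauss(0, NOISE) for _ in range(n)]
    return sum(readings) / n

for n in (10, 1000, 100000):
    m = sample_mean(n)
    # scatter of the mean shrinks with n; the offset from TRUE_VALUE does not
    print(f"n={n:6d}  mean={m:.3f}  error vs truth={m - TRUE_VALUE:+.3f}")
```

Precision improves with every additional reading; the residual error settles at the bias, which no amount of further sampling can remove.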
Things are not better for (land) surface measurements — they are worse. There the actual data is (again, in my opinion) hopelessly corrupted by confounding phenomena and the measurement errors are profound. Worse, the measurement errors tend to have a variable monotonic bias compared to the mythical “true average surface Global Temperature” one wishes to measure.
One is in trouble from the very beginning. The Moon has no atmosphere, so its "global average temperature" can be defined without worrying about measuring an air temperature at all. Even so, when one wishes to speak of the surface temperature at a given point, what does one use as a definition? Is it the temperature an actual high-precision thermometer would read (say) 1 cm below the surface at that point? 5 mm? 1 mm? 1 meter? All of these would almost certainly yield different results, results that depend on things like the albedo and emissivity of the point on the surface, the heat capacity and thermal conductivity of the surface matter, and the latitude. Or is it the "blackbody" temperature of the surface (the inferred temperature of the surface determined by measuring the outgoing full spectrum of radiated light)?
Even inferring the temperature from the latter — probably the one that is most relevant to an airless open system’s average state — is not trivial, because the surface albedo varies, the emissivity varies, and the outgoing radiation from any given point just isn’t a perfect blackbody curve as a result.
How much more difficult is it to measure the Earth’s comparable “surface temperature” at a single point on the surface? For one thing, we don’t do anything of the sort. We don’t place our thermometers 1 meter, 1 cm, 1 mm deep in — what, the soil? The grass or trees? What exactly is the “surface” of a planet largely covered with living plants? We place them in the air some distance above the surface. That distance varies. The surface itself is being heated directly by the sun part of the time, and is radiatively cooling directly to space (in at least some frequencies) all of the time. Its temperature varies by degrees K on a time scale of minutes to hours as clouds pass between the location and the sun, as the sun sets, as it starts to rain. It doesn’t just heat or cool from radiation — it is in tight thermal contact with a complex atmosphere that has a far greater influence on the local temperature than even local variations in insolation.
Yesterday it was unseasonably warm in NC, not because the GHE caused the local temperature to be higher by trapping additional heat but because the air that was flowing over the state came from the warm wet waters of the ocean to the south, so we had a relatively warm rain followed by a nighttime temperature that stayed warm (low overnight of maybe 46F) because the sky was cloudy. Today it is almost perfectly seasonal: high 50s with a few scattered clouds, winds out of the WSW still carrying warm moisture from the Gulf and warm air from the south central US, but as the day progresses the wind is going to shift to the NW and it will go down to a solid freeze (30F) tonight. Tomorrow it will be seasonal but wet, but by tomorrow night the cooler air that has moved in from the north will make it go down to 25F overnight. The variation in local temperature is determined far more by what is going on somewhere else than it is by actual insolation and radiation here.
If a real cold front comes down from Canada (as they frequently do this time of year) we could have daytime highs in the 30s or low 40s and nighttime lows down in the low 20s. OTOH, if the wind shifts to the right quarter, we could see highs in the low 80s and lows in the low 50s. We can, and do, have both extremes within a single week.
Clearly surface temperatures are being driven as strongly by the air and moisture flowing over or onto them as they are by the "ideal" picture of radiative energy warming the surface and radiation cooling it. The warming of the surface at any given point isn't solely responsible for the warming or cooling of the air above it; the temperature of the surface is equally dependent on the temperature of the air as determined by the warming of the surface somewhere else, as determined by the direct warming and cooling of the air itself via radiation, as determined by phase changes of water vapor in the air and on the surface, as determined by factor-of-ten modulations of insolation as clouds float around over the surface and the lower atmosphere alike.
Know the true average surface Global Temperature to within 1K? I don’t even know how one would define a “true” average surface Global Temperature. It was difficult enough for the moon without an atmosphere, assuming one can agree on the particular temperature one is going to “average” and how one is going to perform the average. For the Earth with a complex, wet, atmosphere, there isn’t any possibility of agreeing on a temperature to average! One cannot even measure the air temperature in a way that is not sensitive to where the sun is and what it is doing relative to the measurement apparatus, and the air temperature can easily be in the 40s or 50s while there is snow covering the ground so that the actual surface temperature of the ground is presumably no higher than 32F — depending on the depth one is measuring.
And then oops — we forgot the Oceans, that cover 70% of the surface of the planet.
What do we count as the "temperature" of a piece of the ocean? There is the temperature of the air above the surface of the ocean. In general this differs from the actual temperature of the water itself by on the order of 5-10 K. In most places, the air temperature during the day is often warmer than the temperature of the water, and at night often cooler.
Or is it? What exactly is “the temperature of the water”? Is it the temperature of the top 1 mm of the surface, where the temperature is dominated by chemical potential as water molecules are constantly being knocked off into the air, carrying away heat? Is it the temperature 1 cm deep? 10 cm? 1 m? 10 m? 50 m? 100m? 1 km?
Is it the average over a vertical column from the surface to the bottom (where the actual depth of the bottom varies by as much as 10 km)? This will bias the temperature way, way down for deep water and make the global average temperature of the ocean very nearly 4 C nearly everywhere, dropping the estimate of the Earth's average Global Temperature by well over 10 K. Yet if we do anything else, we introduce a completely arbitrary bias into our average. Every value we might use as a depth to average over has consequences that cause large variations in the final value of the average. As anyone who swims knows, it is quite easy for the top meter or so of water to be warm enough to be comfortable while the water underneath it is cold enough to take your breath away.
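The sensitivity to the chosen averaging depth can be sketched with a toy vertical profile (purely illustrative numbers: a 22 C mixed layer to 50 m, a thermocline falling to 4 C by 500 m, and near-4 C abyssal water below):

```python
def toy_profile(z):
    """Toy ocean temperature (deg C) at depth z meters. Illustrative only:
    warm mixed layer, sharp thermocline, near-4 C deep water."""
    if z <= 50:
        return 22.0
    if z <= 500:
        return 22.0 - (z - 50) / 450 * 18.0   # linear fall through thermocline
    return 4.0

def column_average(depth, dz=1.0):
    """Average temperature over the top `depth` meters of the column."""
    n = int(depth / dz)
    return sum(toy_profile((i + 0.5) * dz) for i in range(n)) / n

for d in (1, 10, 100, 1000, 4000):
    print(f"average over top {d:5d} m: {column_average(d):5.1f} C")
```

Averaging the top meter gives the balmy mixed-layer value; averaging the full 4 km column gives something close to 4 C. The "ocean temperature" is whatever depth convention you pick.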
Even if one arbitrarily defines a sea surface temperature (SST) to go with a land surface temperature (LST), a definition as arbitrary in its own way as the temperature one assigns to a particular point on the land surface on the basis of a "corrected" or "uncorrected" thermometer with location biases that can easily exceed several degrees K, relative to equally arbitrary definitions of what the thermometer "should" read and how that reading is supposed to relate to a "true" temperature for the location, and then tries to take the actual data for both and turn them into an average global temperature, one has a final problem to overcome. One's data is (with the possible exception of modern satellite-derived data) sparse! Very sparse.
In particular, it is sparse compared to the known and observed granularity of surface temperature variations, for both LST and SST. Furthermore, it has obvious sampling biases. We have lots and lots of measurements where people live. We have very few measurements (per square kilometer of surface area) where people do not live. Surface temperatures can easily vary by 1 K over a kilometer of lateral distance (e.g. at terrain features where one climbs a few hundred meters over a kilometer of grade), and they routinely vary by 1 K over distances of 5-10 kilometers.
I can look at the Weather Underground's map of readings from weather stations scattered around Durham at a glance, for example. At the moment I'm typing this there is a 13 F variation from the coldest to the warmest station reading within a 15 km radius of where I'm sitting. Worse, nearly all of these readings are between 50 and 55 F, but there are two outliers. One of them is 46.5 F (in a neighborhood in Chapel Hill), and the other is Durham itself, the "official" reading for Durham (probably downtown somewhere), which is 59.5 F!
Guess which one will end up being the temperature used to compute the average surface temperature for Durham today, and assigned to an entirely disproportionate area of the surface of the planet in a global average surface temperature reconstruction?
Incidentally, the temperature outside of my house at this particular moment is 52F. This is a digital electronic thermometer in the shade of the north side of the house, around a meter off of the ground. The air temperature on the other side of the house is almost certainly a few degrees warmer as the house sits on a southwest-facing hill with pavement and green grass absorbing the bright sunlight. The temperature back in the middle of the cypresses behind my house (dense shade all day long, but with decent airflow) would probably be no warmer than 50 F. The temperature a meter over the driveway itself (facing and angled square into the sun, and with the house itself reflecting additional heat and light like a little reflector oven) is probably close to 60 F. I’m guessing there is close to 10F variation between the air flowing over the southwest facing dark roof shingles and the northeast facing dark roof shingles, biased further by loss of heat from my (fairly well insulated) house.
I don’t even know how to compute an average surface temperature for the 1/2 acre plot of land my own house sits on, today, right now, from any single thermometer sampling any single location. It is 50F, 52 F, 58 F, 55F, 61 F, depending on just where my thermometer is located. My house is on a long hill (over a km long) that rises to an elevation perhaps 50-100 m higher than my house at the top — we’re in the piedmont in between Durham and Chapel Hill, where Chapel Hill really is up on a hill, or rather a series of hills that stretch past our house. I’d bet a nickel that it is a few degrees different at the top of the hill than it is where my house is today. Today it is windy, so the air is well mixed and the height is probably cooler. On a still night, the colder air tends to settle down in the hollows at the bottoms of hills, so last frost comes earlier up on hilltops or hillsides; Chapel Hill typically has spring a week or so before Durham does, in contradiction of the usual rule that higher locations are cooler.
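To put a toy number on the point, here are the five readings mentioned above assigned to hypothetical spots on the plot (the location labels are invented for illustration). Dropping any single "station" moves the plot's "average" by more than a degree:

```python
# The five hypothetical readings mentioned above (deg F), by invented location
readings = {
    "north shade": 50.0,
    "east side": 52.0,
    "south lawn": 58.0,
    "hilltop": 55.0,
    "driveway": 61.0,
}

mean_all = sum(readings.values()) / len(readings)
print(f"all five stations: {mean_all:.1f} F")

# Drop any single station and the "average" for the plot moves:
for name in readings:
    rest = [v for k, v in readings.items() if k != name]
    print(f"without {name:12s}: {sum(rest) / len(rest):.1f} F")
```

If five thermometers on half an acre can move the average this much, a single station standing in for a grid cell of thousands of square kilometers is doing a great deal of silent extrapolation.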
This is why I am enormously cynical about Argo, SSTs, GISS, and so on as reliable estimates of average Global Temperature. They invariably claim impossible accuracy and impossible precision. Mere common sense suffices to reject their claim otherwise. If they disagree, they can come to my house and try to determine what the “correct” average temperature is for my humble half acre, and how it can be inferred from a single thermometer located on the actual property, let alone from a thermometer located in some weather station out in Duke Forest five kilometers away.
That is why I think that we have precisely 33 years of reasonably reliable global temperature data, not in terms of accuracy (which is unknown and perhaps unknowable) but in terms of statistical precision and as the result of a reasonably uniform sampling of the actual globe. The UAH is what it is, is fairly precisely known, and is at least expected to be monotonically related to a “true average surface Global Temperature”. It is therefore good for determining actual trends in global temperature, not so good for making pronouncements about whether or not the temperature now is or is not the warmest that it has been in the Holocene.
Hopefully the issues above make clear just how absurd any such assertion truly is. We don't know the actual temperature of the globe now, with modern instrumentation and computational methodology, to an accuracy of 1 K in any way that can be compared apples-to-apples to any temperature reconstruction, instrument based or proxy based, from fifty, one hundred, one thousand, or ten thousand years ago. 1 K is the close order of all of the global warming supposedly observed since the invention of the thermometer itself (and hence the start of the direct instrumental record). We cannot compare even "anomalies" across such records; they simply don't compare, because of confounding variables, as the "Hide the Decline" and "Bristlecone Pine" problems in the hockey stick controversy clearly reveal. One cannot remove the effects of these confounding variables in any defensible way, because one does not know what they are: things (e.g. annual rainfall and the details of local temperature, among much else) are not the same today as they were 100 years ago, and we lack the actual data needed to correct the proxies.
A year with a late frost, for example, can stunt the growth of a tree for a whole year by simply damaging its new leaves or can enhance it by killing off its fruit (leaving more energy for growth that otherwise would have gone into reproduction) completely independent of the actual average temperature for the year.
To conclude, one of many, many problems with modern climate research is that the researchers seem to take their thermal reconstructions far too seriously and assign completely absurd measures of accuracy and precision, with a very few exceptions. In my opinion it is categorically impossible to “correct” for things like the UHI effect — it presupposes a knowledge of the uncorrected temperature that one simply cannot have or reliably infer from the data. The problem becomes greater and greater the further back in time one proceeds, with big jumps (in uncertainty) 250, 200, 100 and 40 odd years ago. The proxy-derived record from more than 250 years ago is uncertain in the extreme, with the thermal record of well over 70% of the Earth’s surface completely inaccessible and with an enormously sparse sampling of highly noisy and confounded proxies elsewhere. To claim accuracy greater than 2-3 K is almost certainly sheer piffle, given that we probably don’t know current “true” global average temperatures within 1 K, and 5K is more likely.
I’m certain that some paleoclimatologists would disagree with such a pessimistic range. Surely, they might say, if we sample Greenland or Antarctic ice cores we can obtain an accurate proxy of temperatures there 1000 or 2000 years ago. Why aren’t those comparable to the present?
The answer is that we cannot be certain the Earth's primary climate drivers distributed its heat the same way then as now. We can clearly see how important e.g. the decadal oscillations are in moving heat around and causing variations in global average temperature. ENSO causes spikes, and seems responsible for discrete jumps in global average temperature over the recent (decently thermometric) past that are almost certainly jumps from one Poincaré attractor to another in a complex turbulence model. We don't even know whether there was an ENSO 1000 years ago, or, if there was, whether it sat in the same location and had precisely the same dependences on e.g. the solar state. As a lovely paper Anthony posted this morning clearly shows, major oceanic currents jump around on millennial timescales that appear connected to millennial-scale solar variability, and almost certainly modulate the major oscillations themselves in nontrivial ways. It is quite possible for temperatures in the Antarctic to anticorrelate with temperatures in the tropics for hundreds of years and then switch so that they correlate again. When an ocean current is diverted, it can change the way ocean average temperatures (however one might compute them, see above) vary over macroscopic fractions of the Earth's surface all at once.
To some extent one can control for this by looking at lots of places, but "lots" is in practice highly restricted. Most places simply don't have a good proxy at all, and the ones that do aren't always easy to reconstruct accurately over very long time scales, or else sacrifice all sorts of shorter-time-scale information in order to obtain the longer-time-scale averages one can get. I think 2-3 K is a generous statement of the probable real error in most reconstructions of global average temperature from over 1000 years ago, again presuming one can define an apples-to-apples global average temperature to compare to, which I doubt. Nor can one reliably compare anomalies over such time scales, because of the confounding variables and drift.
This is a hard problem, and calling it settled science is obviously a political statement, not a scientific one. A good scientist would, I truly believe, call this unsettled science, science that is understood far less than physics, chemistry, even biology. It is a place for utter honesty, not egregious claims of impossibly accurate knowledge. In my own utterly personal opinion, informed as well or as badly as chance and a fair bit of effort on my part have thus far informed it, we have 33 years of a reasonably precise and reliable statement of global average temperature, one which is probably not the true average temperature assuming any such thing could be defined in the first place but which is as good as any for the purposes of identifying global warming or cooling trends and mechanisms.
Prior to this we have a jump in uncertainty (in precision, not accuracy) compared to the ground-based thermometric record that is strictly apples-to-oranges compared to the satellite derived averages, with error bars that rapidly grow the further back one goes in the thermometric record. We then have a huge jump in uncertainty (in both precision and accuracy) as we necessarily mount the multiproxy train to still earlier times, where the comparison has unfortunately been between modern era apples, thermometric era oranges, and carefully picked cherries. Our knowledge of global average temperatures becomes largely anecdotal, with uncertainties that are far larger than the observed variation in the instrumental era and larger still than the reliable instrumental era (33 year baseline).
Personally, I think that this is an interesting problem and one well worth studying. It is important to humans in lots of ways; we have only benefitted from our studies of the weather and our ability to predict it is enormously valuable as of today in cash money and avoided loss of life and property. It is, however, high time to admit the uncertainties and get the damn politics out of the science. Global climate is not a “cause”! It is the object of scientific study. For the conclusions of that science to be worth anything at all, they have to be brutally honest — honest in a way that is utterly stripped of bias and that acknowledges to a fault our own ignorance and the difficulty of the problem. Pretending that we know and can measure global average temperatures from a sparse and short instrumental record where it would be daunting to assign an accurate, local average temperature to any given piece of ground based on a dense sampling of temperatures from different locations and environments on that piece of ground does nothing to actually help out the science — any time one claims impossible accuracy for a set of experimentally derived data one is openly inviting false conclusions to be drawn from the analysis. Pretending that we can model what is literally the most difficult problem in computational fluid dynamics we have ever attempted with a handful of relatively simple parametric differential forms and use the results over centennial and greater timescales does nothing for the science, especially when the models, when tested, often fail (and are failing, badly, over the mere 33 years of reliable instrumentation and a uniform definition of at least one of the global average temperatures).
It’s time to stop this, and just start over. And we will. Perhaps not this year, perhaps not next, but within the decade the science will finally start to catch up and put an end to the political foolishness. The problem is that no matter what one can do to proxy reconstructions, no matter how much you can adjust LSTs for UHI and other estimated corrections that somehow always leave things warmer than they arguably should be, no matter what egregious claims are initially made for SSTs based on Argo, the UAH will just keep on trucking, unfutzable, apples to apples to apples. The longer that record gets, the less one can bias an “interpretation” of the record.
In the long run that record will satisfy all properly skeptical scientists, and the “warmist” and “denier” labels will end up being revealed as the pointless political crap that they are. In the long run we might actually start to understand some of the things that contribute to that record, not as hypotheses in models that often fail but in models that actually seem to work, that capture the essential longer time scale phenomena. But that long run might well be centennial in scale — long enough to detect and at least try to predict the millennial variations, something utterly impossible with a 33 year baseline.
rgb
KR says:
March 4, 2012 at 8:58 pm
“The correlations are quite strong, and I will note that (a) variances from the correlation are both positive and negative (hence evening out variances) and (b) most stations are much less than 1200km from each other. And hence the correlation is considerably higher than 0.5 for the vast majority of stations.”
Please define “strong”. 0.5 is strong? Just wondering…
Thanks.
For certain time scales, the slopes can often be very similar between different data sets, but over the last 15 years, there are often huge differences. And it is not a satellite versus land issue. See:
http://www.woodfortrees.org/plot/hadcrut3gl/from:1997.08/trend/plot/uah/from:1997.08/trend/plot/rss/from:1997.08/trend/plot/gistemp/from:1997.08/trend/plot/hadsst2gl/from:1997.08/trend
Whilst we are fighting among ourselves, may I remind those that think an opportunity has presented itself:
Stephen Mosher says: "I tell you that it's 20C at my house and 24C 10 miles away. Estimate the temperature 5 miles away. Now, you can argue that this estimate is not the "average". You can holler that you don't want to make this estimate. But if you do decide to make an estimate based only on the data you have, what is your best guess? 60C? -100C? If I guessed 22C and then I looked and found that it was 22C, what would you conclude about my estimation procedure? Would you conclude that it worked?
Here is another way to think about it. Step outside. It's 14C in 2012. We have evidence (say, documentary evidence) that the LIA was colder than now. What's that mean? It means that if you had to estimate the temperature in the same spot 300 years ago, you would estimate that it was... that's right... colder than it is now. Now, chance being a crazy thing, it might actually have been warmer in that exact spot, but your best estimate, the one that minimizes the error, is that it was colder."
Mosh,
Sometimes you hit the nail on the head; other times you just smash your finger(s). Today it's fingers. All of them. Eight times in two paragraphs you champion the cause of estimating temperatures based on guesses. Isn't that exactly Dr. Brown's complaint about the way this data is presented to the public? Explain to us all again how we're going to get even one-degree accuracy from guesses and extrapolations. Your guess about whether the temperature was 22C is still a guess, even if it turned out to be right. There could easily be a 5,000-foot peak at the mid-point between your house and your imaginary point ten miles away. Then your guess would still be a guess, but it would also be **** because it would be wrong and a guess at the same time. Stop trying to defend the indefensible.
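For what it's worth, the estimation procedure Mosher describes is just linear interpolation between two station readings, and a minimal sketch (using his numbers) makes the hidden assumption explicit: the guess minimizes expected error only if temperature varies smoothly between the stations, which terrain can easily break.

```python
def interpolate(x0, t0, x1, t1, x):
    """Linear interpolation between two station readings at positions
    x0 and x1. This is a best guess only under the (often false)
    assumption that temperature varies smoothly with distance."""
    return t0 + (t1 - t0) * (x - x0) / (x1 - x0)

# Mosher's example: 20C at 0 miles, 24C at 10 miles, estimate at 5 miles
print(interpolate(0, 20.0, 10, 24.0, 5))  # 22.0
```

The code reproduces the 22C guess; it says nothing at all about whether 22C is right, which is the entire dispute.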
Amazingly, they’re still at it…
Now the target has moved. Instead of trying to justify the obvious inaccuracy of the mythical “global temperature” they’re defending the accuracy of the equally meaningless “anomaly”.
Well, here’s a tip for free: your “anomaly” is STILL only an anomaly for the fraction of the planet being measured. Very small fraction. Microscopic fraction, actually. And yeah, it’s easy for me to say this, living in a geographical area that has experienced some of the coolest summers in decades that somehow, magically, show up as warming.
Anomaly, my [snip]. You’re dealing with fantasy numbers, and if you don’t know it then you really ARE delusional.
[REPLY: We didn’t do it for you. This is a family blog. Think of the children! -REP]
Thanks for keeping that [snip] evil away from my soul [REP].
“There may be many ways to define a global temperature, but there needn’t be a unique way to make many of them useful. http://www.realclimate.org/index.php/archives/2007/03/does-a-global-temperature-exist/ ” – Daniel Packman, NCAR.
How do the comments on this RealClimate.org article, the paper they reference, and the other articles they link to fit into this post?
Climate scientists are treated as experts in their field, yet they are by no means the most skilled or qualified in each of its parts. Averaged across all the skills involved the claim may hold, but taken skill by skill it is a false assumption. It is clear from the Climategate data that the climate models are just a tiny step above pure amateur work, ignoring half the required information on the pattern and behaviour of the natural-world part of the CO2 cycle.
Without clear-cut proof that man's CO2 is anything more than a tiny add-on to a self-balancing system, they have made a bland and unreasonable assumption based on near-total ignorance.
I am told by statisticians from the marketing field that this is equally true of the methods and conclusions of their statistical work. The most serious accusation in this area is that, since the results have a strong regional bias, there is no justification whatever for taking averages at all.
Similarly, in the data acquisition, the claim that the results are scientific is unsound when they rely heavily on a statistical distribution rather than on accurate measurements.
In a small suburban garden I have five thermometers which, placed at the same point, agree to within plus or minus 0.1 of a degree, but which, placed around the garden, give a variation of five degrees or more. There is no single point at which one thermometer can give this average reading consistently to 0.5 degree. What, then, is the true value for measurements covering 100,000 sq km?
What trust should we have in peer review when those within the profession who pointed this out had their grants removed, and left to join engineering over a decade ago?
Hmmmn. Hansen. Published in 1987, last data date was 1985.
In that paper, he admits that selecting a 1200 km extrapolation range was "arbitrary". He got about a 0.5 correlation with his data.
And in all of the hundreds of billions spent worldwide since 1985 on his precious "climate change mythconceptions", has nobody gotten any newer, more accurate results than James Hansen's?
Have you looked at Figure 7? Hansen’s own temperature changes ARE latitude dependent: his plotted trends for 0-24 degrees, 24-44, 44-64, and 64-90 are very, very different across all years. Each latitude differs in trend, in peaks, and in degree-of-difference of its delta T. In Figure 8, his own data for Boxes 15 (south west US- Mexico) and Box 9 (Europe) are different (flat-lines almost!) than for the “regions” of the latitude bands they lay in.
But Hansen claims “his” NASA-GISS values DO generate a valid single number for the temperature difference. WUWT?
Quoting his paper on methodology:
“3. SPATIAL AVERAGING: BIAS METHOD
Our principal objective is to estimate the temperature change of large regions. We would like to incorporate the information from all of the relevant available station records. The essence of the method which we use is shown schematically in Figure 5 for two nearby stations, for which we want the best estimate of the temperature change in their mutual locale. We calculate the mean of both records for the period in common, and adjust the entire second record (T2) by the difference (bias) δT. The mean of the resulting temperature records is the estimated temperature change as a function of time. The zero point of the temperature scale is arbitrary.”
Later, he goes on to "match" northern hemisphere stations with southern hemisphere stations, though he admits the southern hemisphere stations are lacking in area coverage (over 80 percent of the southern hemisphere is ocean or Antarctic), date coverage (the number of years measured is much, much lower in the south), and spatial coverage (most regions of the south are much less covered than any region of the north except Siberia and desert China). Nonetheless, he matches northern and southern stations by latitude, longitude, and length of record, ignoring coastlines, altitude, local climates, and development. WUWT?
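The bias method quoted above is simple enough to sketch. This is a minimal, hypothetical rendering (station records as {year: temperature} dicts are my own representation, not Hansen's): offset the second record so its mean over the common period matches the first, then average the two.

```python
def bias_combine(t1, t2):
    """Combine two overlapping station records per the bias method quoted
    above: shift the second record by the difference of the means over the
    common period, then average. Records are {year: temperature} dicts."""
    common = sorted(set(t1) & set(t2))
    if not common:
        raise ValueError("no overlap between records")
    # the "bias" delta-T: difference of means over the common period
    delta = (sum(t1[y] for y in common) / len(common)
             - sum(t2[y] for y in common) / len(common))
    adjusted = {y: v + delta for y, v in t2.items()}
    combined = {}
    for y in sorted(set(t1) | set(adjusted)):
        vals = [r[y] for r in (t1, adjusted) if y in r]
        combined[y] = sum(vals) / len(vals)
    return combined
```

Note what the method buys and what it hides: the zero point really is arbitrary (only changes survive), and the combined record silently inherits whatever siting and coverage biases the individual stations carry.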
The things started when certain people said that world is warming up, and certain other people said the world is not warming up. For every place that has warmed up you can find another place that has cooled down so how to decide? Some clever people must have come and figured out the method how to find an answer – global temperature anomaly. Yes, when calculating that you give up on both location and actual temperature, but in the end it gives us the answer to the original question.
There is no place on the earth that maintains the average temperature anomaly, so you may say that the number is disconnected from reality. But it is not; the only thing that can be said about it is that this connection is unbalanced: quite tight in one direction and quite vague in the other. It is just a certain number, obtained by processing real-world measurements, which answers a certain kind of question.
I believe it is quite settled nowadays that the world has, on average, warmed up somewhat over the past fifty-odd years. That is what the anomaly value tells us, and there are few other ways to settle disputes about it.
The problem is that people don’t understand the number and either underestimate or overestimate what it can tell them.
For example, it is wrong to put a straight line through the anomaly values and assume they will continue to follow that straight line in the future.
It is wrong to say that if the anomaly value went up 0.1°C then the whole world has warmed up 0.1°C, or that half of the world warmed up 0.3°C while half cooled down 0.1°C, etc. etc. The number tells us nothing of the sort.
But it is also wrong to say the anomaly value is a complete nonsense.
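Kasuha’s point that the single number hides the spatial pattern can be shown with a toy calculation (the four equal-area regions and their anomalies below are invented for illustration):

```python
# Two hypothetical worlds, each split into four equal-area regions.
# Regional anomalies in degrees C; both average to the same global anomaly.
world_a = [0.1, 0.1, 0.1, 0.1]        # uniform mild warming everywhere
world_b = [0.8, -0.2, -0.1, -0.1]     # strong warming in one region only

global_a = sum(world_a) / len(world_a)
global_b = sum(world_b) / len(world_b)
# Both come out to 0.1 (within float noise): the global number
# cannot distinguish the two very different worlds.
```

So the anomaly answers “did the average move?” and nothing more, which is consistent with both halves of Kasuha’s comment.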
Heh. It happens that the GISS temperature for a high-plateau lake in the Peruvian Andes is the midpoint average between a coastal station and another in the Amazon. (There is a station available on site, but the average is “good enough for government work”.)
:p
This applies to virtually every contributing science, from physics to hydrology to math. The Australian skeptic Dr. Bob Carter (?) said in a talk that there are about 100 specialties needed to master climate science, and no one person can be on top of more than one or two.
My sub-moniker for the Hokey Team is “Jackasses of All Sciences, Masters of None”.
KR says: March 4, 2012 at 9:24 pm
“Anomalies have extremely strong correlations over considerable distances, making them quite measurable, and reducing uncertainties with more data to the extent that a 2-STD variance of 0.05C is supported by the numbers. Dr. Brown has talked quite a lot, but it appears (IMO) nothing but a distraction from the observed trends. His claims of uncertainty are not supported by the data.”
I’ll tell you what the data supports: nothing. It is a blindingly stupid situation where the debate is now not only about the uncertainty in the likelihood of a wrong hypothesis used in determining a ‘global’ mean, and not only about incorrectly placed and monitored stations, but also about whether the anomaly is 0.0C or 0.5C or 1.0C.
Who cares! My flesh and blood, and the natural world, can tolerate anything from -10C to 50C. Actually, those who live in cooler climates should be up in arms about the attempt to slow down their warming since the Little Ice Age.
Just how balanced is this debate? Do we really still believe there is a catastrophe around the corner? Only somebody who is deluded wouldn’t think the catastrophic part of the debate is over. Next we will do the models. Then we’ll do the insolation bazingas, then CO2 radiativity, then we’ll correct, with unbiased data, the screwed-up sets of coupled non-Markovian Navier-Stokes equations, with the right drivers and feedbacks. Then we’ll kiss this debate goodbye.
From my perspective it seems madness to think there are no benefits from a warming world: opportunities to be plundered and good times to be had. Dr Brown is eminently correct. The physics principles of AGW are screwy.
“All of the sites would almost certainly yield different results, results that depend on things like the albedo and emissivity of the point on the surface, the heat capacity and thermal conductivity of the surface matter, the latitude. Is it the ‘blackbody’ temperature of the surface (the inferred temperature of the surface determined by measuring the outgoing full spectrum of radiated light)?”
Would you bet that the uncertainties are within a 2-STD variance of 0.05C? I’ll put up a large one and say your numbers do not support anything like that certainty.
Reed Coray,
I became a supporter of the “Tarheels” in 1982. Eight years later I started working at the Duke University Free Electron Laser Laboratory. For the next twelve years my colleagues would put light blue crying towels on my desk every time Duke beat UNC at basketball.
From 1990 to 2002 when I retired the Dukies had the better of it. The 1991-1992 “Back to Back” years were particularly difficult.
Then came the improbable Duke win in Chapel Hill on February 8, when UNC had a ten-point lead with only two minutes remaining. The crying towels were saturated.
What a change a few weeks can bring. UNC finally discovered how to defend the 3-ball effectively on the way to dominating Duke at Cameron Indoor Stadium. At last the crying towels are dark blue!
I may be off topic here, so let me point out that Richard Lindzen’s address to the UK House of Parliament included an excellent slide on the foolishness of hyperventilating about a few tenths of a degree of variation in the global average temperature, given that temperatures in most locations vary by huge amounts from day to night and from season to season:
http://diggingintheclay.wordpress.com/2012/02/27/lindzen-at-the-house-of-commons/
Take a look at slide #15.
There may be a solution to calculating an Earth average temperature.
But first let me use a simple example that may demonstrate the folly of calculating an average temperature.
Let’s take two 1-meter cubes, where one cube is a solid mass of iron and the other is a volume of gas, say oxygen. On second thought, let both be solid metal, but of different metals. The mass of each cube will be different if each metal is a single element and not a mixture chosen to make the two cubes equal in mass. If the temperatures of the two cubes are identical, the average temperature is easy to calculate. What about when the temperatures are different? Let’s make it as simple as possible and set the temperature of each cube to be uniform throughout (a homogeneous temperature), keeping in mind that the two cubes have different temperatures from each other.
Do you add the two temperatures together and divide by two? What would be the purpose of such an answer? The answer would be the same whether the cubes were gold, lead, or aluminum, whether they were of identical material, or whether one was a volume of gas of any mixture.
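To put rough numbers on the cube example: a plain average of the two temperatures ignores thermal mass, whereas the temperature the pair would actually settle to if placed in contact (ignoring losses) is weighted by mass times specific heat. The densities and specific heats below are approximate handbook values, and the scenario is hypothetical:

```python
# Two 1 m^3 cubes at different uniform temperatures (hypothetical scenario).
# rho in kg/m^3, c in J/(kg*K): approximate handbook values.
iron = {"rho": 7870.0, "c": 449.0, "T": 300.0}   # temperatures in K
alum = {"rho": 2700.0, "c": 897.0, "T": 350.0}

naive_avg = (iron["T"] + alum["T"]) / 2.0

# Equilibrium temperature if the cubes exchange heat with no losses:
# each cube is weighted by its heat capacity m*c, so energy is conserved.
C_iron = iron["rho"] * 1.0 * iron["c"]   # J/K for a 1 m^3 cube
C_alum = alum["rho"] * 1.0 * alum["c"]
T_eq = (C_iron * iron["T"] + C_alum * alum["T"]) / (C_iron + C_alum)
# The two numbers differ because the cubes store different
# amounts of heat per kelvin.
```

Here the naive average is 325 K, while the energy-conserving equilibrium comes out close to 320 K, which is exactly the commenter’s point: the plain average of two temperatures answers no physical question.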
What is the average temperature of a simple object such as the human body? We often take the temperature of the “core”, and we usually use only specific places where we stick the thermometer. The average temperature requires measuring temperatures throughout the extremities, the exterior surfaces, and interiors of these extremities, as well as the core temperature. And then what do you do with these measurements?
In real-world applications the mass is an important quantity when dealing with temperatures, unless, as in the human body example, you only need the measurement at one location to check whether it is anomalous. Does the patient have a fever?
Does the earth have a fever? Many locations show a nearly constant climate temperature, by which I mean no change in the long-term temperature trend at weather stations where measurements have been made over many decades. Of course other stations show changes, and apparently in a recent dataset 30 percent of stations show a cooling trend. Keep in mind we are talking about the current way of calculating average temperatures.
Getting to the solution: measuring temperature allows one to calculate how much energy is contained in a mass, but you have to know the mass as well to carry out the energy calculation. The atmosphere is problematic because the mass of a volume of air differs depending on how much moisture it contains, among other difficulties. Cold air is also denser than hot air, as one quickly learns when taking flying lessons, as I have done: takeoff distances are much shorter at sea level in cold weather than at, say, Denver’s airport on a hot summer day, where you may have to wait for the temperature to drop, or offload some baggage or an excess passenger or two.
So let’s say the solution is to calculate how much energy the surface of the earth contains and track that quantity over a period of time, to see what we get, whether the information is useful, and whether it correlates with anything (CO2, the sun, etc.).
What about potential energy? The energy calculation should also include energy that may be transformed into potential energy, such as when a mass of water is transported high into the sky. And what about ice?
Calculating total global surface energy requires quantifying the energy contained in the ocean, lakes, earth solid surface, perhaps even objects on the surface such as trees, as well as the atmosphere. I leave this work as an exercise for the reader 😉 .
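As a toy version of the exercise left to the reader, one can sum sensible heat (m·c·T) and gravitational potential energy (m·g·h) over a few reservoirs. Every number below is invented purely to show the bookkeeping, not to estimate the real Earth:

```python
# Hypothetical energy inventory over a few "reservoirs" (all numbers invented).
# Each entry: mass in kg, specific heat in J/(kg*K), temperature in K,
# and a mean height in m for the potential-energy term.
G = 9.81  # m/s^2

reservoirs = [
    {"name": "ocean mixed layer", "m": 5.0e19, "c": 3990.0, "T": 290.0, "h": 0.0},
    {"name": "lower atmosphere",  "m": 4.0e18, "c": 1005.0, "T": 260.0, "h": 3000.0},
    {"name": "lake water",        "m": 2.0e17, "c": 4186.0, "T": 283.0, "h": 200.0},
]

sensible = sum(r["m"] * r["c"] * r["T"] for r in reservoirs)   # J
potential = sum(r["m"] * G * r["h"] for r in reservoirs)       # J
total = sensible + potential

# Tracking `total` over time, rather than an unweighted mean of the three
# temperatures, is the bookkeeping the commenter is suggesting.
```

The design point is that each reservoir enters weighted by its mass and heat capacity, so a small hot reservoir cannot masquerade as a large cool one the way it can in a plain temperature average.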
So my conclusion is that the way we are doing it now isn’t all that bad, but only when long time frames are considered, because of the noisy and highly inaccurate measurements currently being made. Once again I want to emphasize long time frames, to account for the PDO and the other variances over decadal and longer scales. The Urban Heat Island effect might also be tackled some time in the future, when real climate scientists and their enabling politicians want the true answer (or skeptical institutes get more funding) and do research on the topic, which apparently no one has bothered to do yet. Thermal and fluid dynamics computer models come to mind as one way to explore the UHI effect. I understand that about 200 billion dollars worldwide has been spent on Climate Change research, poorly done in my opinion. In fact, I think Climate Change science as practiced so far is one of the shoddiest sciences I have ever seen; it is like no other science I have ever come across. Mind you, there are some really good climate scientists doing quality work, and I think you can guess who I mean.
Reply Nick Stokes: The CRU statement on errors is
Excellent post.
The Hadley Centre claims that their temperature record is the world’s longest. This is a questionable claim: early thermometers were of questionable quality and accuracy, temperatures were not routinely taken hourly but whenever the owners thought of it, and days could go by between readings in some cases. It was not until the Stevenson screen that some sort of standardisation happened. There are still siting problems with stations.
So we have a collection of data observed in a haphazard fashion from poorly spaced stations over 30% of the planet to produce the ‘average’ temperature. Satellites at least cover the whole planet and produce a better result. Better, but not necessarily the correct one.
I have followed this debate for some time, and the objections that Dr Brown points out are not new to me. Nor is the strong argument made by Nick Stokes that looking at temperature anomalies is in fact a satisfactory way around instrument and even siting bias. You are not actually recording the exact temperature, for all the good reasons Dr Brown makes plain; you are recording the daily change in temperature: that the next day is colder or warmer than the previous one, and by how much. I have no problem with that providing a meaningfully accurate indication of trend, which is the important thing.
What I do have a problem with is how meaningful an average daily temperature really is and by extension the usefulness of measuring climate change by a change in average global temperature. Are the days getting hotter? Or are the days staying the same but the nights are getting hotter? Or are the days getting much hotter and the nights slightly cooler?
Or what about the mean temperature? Is it getting hotter earlier in the day and cooling later in the evening but the maximum temperature is unaffected?
…And many other questions relating to diurnal changes and mean temperature. And even then, what real significance does this have for the physical climate? Does it make it rain more or less, make it windier or less windy?
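The commenter’s question about what “mean temperature” means is easy to illustrate: the conventional daily mean is usually (Tmax + Tmin)/2, which can differ noticeably from the time-integrated average when warming and cooling are asymmetric over the day. The hourly profile below is invented for illustration:

```python
# Hourly temperatures (degrees C) for one hypothetical day with a sharp
# afternoon peak and a long mild night -- numbers invented for illustration.
temps = [8, 8, 7, 7, 7, 8, 10, 13, 16, 19, 21, 23,
         24, 25, 25, 24, 22, 19, 16, 14, 12, 11, 10, 9]

midrange = (max(temps) + min(temps)) / 2.0    # the usual "daily mean"
true_mean = sum(temps) / len(temps)           # time-integrated average
# The two disagree by about a degree for this profile, so a trend in
# one need not imply an equal trend in the other.
```

For this made-up day the midrange is 16.0 while the time-integrated mean is about 14.9, so a change in, say, night-time cooling could move one measure without moving the other, which is exactly the ambiguity the comment raises.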
Some of these questions were addressed by Dr Lindzen in his presentation to the UK parliament, and in Dr Christy’s work, and the results simply do not support being ‘alarmed’ enough to warrant the enormous, rapid changes and rushed investment being proposed or carried out.
I also agree with Dr Brown that the most useful measurement of global temperature, in so far as that has any meaning for climate, is via the satellite record.
Suppose there was perfect temporal agreement between ‘true’ thermometer readings. Apart from a local constant and measurement error, all thermometers would show the same time series. In that case it would not matter where the thermometers reside, nor would it matter at which places we drop thermometers and where we introduce new ones. However, the earth is heterogeneous in this respect: whereas at some places temperatures increase, at others they decrease. Because of this the surface record can be manipulated. For the manipulation we may use the fact that for a single station the time series must show a cyclical pattern: the temperatures cannot increase or decrease indefinitely. If you want a warming world, drop from the record those stations that showed increasing temperatures for a number of recent years. This way of dropping stations is not random with respect to temporal pattern, to be distinguished from non-random drop-out with respect to location, which need not be a problem.
Is there a drop-out problem in the surface record? According to the GHCN database, at the end of 1969 we had 9644 stations on duty worldwide. Over the epoch 1970-99, 9434 of them dropped out, and 2918 new stations were included. Therefore, in thirty years the surface record almost completely changed. It should be shown that this change was random with respect to pattern. As far as I know this was never shown, and I think it cannot be shown, because my own research tells me that the correlations between station time series increased over time. A non-random change with respect to pattern may explain the divergence of the surface and satellite records.
Compared with the bias problem, the error problem is simple. Split the set of stations in a certain region into two subsets using a random number generator. Compute over time the correlation (r) between the subset means. Compute also over time the variance (v) of the complete-set mean. The error variance (e) can then be found as e = v(1-r)/(1+r).
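The split-half recipe in the last paragraph can be sketched with synthetic data (a shared regional signal plus independent station noise, all made up); under these assumptions the estimate e = v(1-r)/(1+r) should land within the same order of magnitude as the true noise variance of the full-set mean:

```python
import random

random.seed(0)

# Synthetic data: a shared regional "signal" plus independent station noise.
# (Entirely invented; real station series would replace these.)
years = 50
signal = [random.gauss(0.0, 1.0) for _ in range(years)]
stations = [[s + random.gauss(0.0, 0.5) for s in signal] for _ in range(40)]

def mean(xs):
    return sum(xs) / len(xs)

def pearson(x, y):
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Random split into two subsets, as the comment prescribes.
random.shuffle(stations)
half_a, half_b = stations[:20], stations[20:]
mean_a = [mean([st[y] for st in half_a]) for y in range(years)]
mean_b = [mean([st[y] for st in half_b]) for y in range(years)]
full = [mean([st[y] for st in stations]) for y in range(years)]

r = pearson(mean_a, mean_b)                        # correlation of subset means
m = mean(full)
v = sum((t - m) ** 2 for t in full) / (years - 1)  # variance of full-set mean
e = v * (1 - r) / (1 + r)                          # error-variance estimate
```

With 40 stations whose noise variance is 0.25, the true error variance of the full-set mean is 0.25/40 ≈ 0.006, and e should come out in that neighbourhood, though a 50-year sample leaves it fairly noisy.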
Kasuha says:
March 4, 2012 at 11:46 pm
Not surprisingly, I strongly disagree: it’s not settled. My parents, who are in their mid-70s, and their peer group, some of whom are in their 90s, say there is NO difference in the weather since their youth. No difference in the local climate, which plants grow here, etc. I think what is settled is that people with an agenda have fiddled with the past records, because I distinctly remember in the 1970s hearing about how the past was all so much warmer, and it was getting coooooolder.
I’ll take anecdotal evidence over pretty much anything instrumental before, say, 1979… or more reasonably, since the late 90s when people started putting the Internet Microscope onto the climate alarmists.
Because, you see, I don’t trust them, they have never given me a reason to trust them, and most of their actions tell me that I should not trust them. So I don’t.
I got fed up reading all this verbiage. It’s a big, fat strawman argument.
State the obvious 50 million times and make a bogus claim that the other guy doesn’t understand the obvious. And then hope LT can’t see through the confusing verbal fog. Fat chance.
You may as well claim that you can’t measure the temperature in your house (oh, he actually did do that), so when you turn on the heater you can’t possibly know for sure that it’s warmer.
So you guys can try the experiment, turn on your heater and measure the temperature in each room of the house. If you claim it’s getting warmer, this “physicist” is going to accuse you of being full of shit.
KR (and Nick Stokes, as usual) are being disingenuous at best, economical with the truth at worst.
HadCRUT3 since 1997: no warming, no correlation with CO2
RSS pre & post 1998 El Nino: same trend?
Climate Science & Trends
Robert
Many years ago I made this very same argument at RealClimate. I got a very similar, but ruder, reply from them than you have from Stokes and Mosher. People from non-engineering and non-physics -ologies have not had the necessary tutelage to understand their lack of knowledge in those fields. You always get the “strawman” argument back, because they can’t understand measurement techniques and their limitations.
Gras Albert says:
March 5, 2012 at 3:52 am
KR (and Nick Stokes, as usual) are being disingenuous at best, economical with the truth at worst
“If you don’t trust the adjustments, then calculate the temperatures without them. You will find that you see essentially the same results – that we’re seeing warming at around 0.15-0.16ºC/decade right now.”
————————————————-
Without knowing the correct drivers, can a prediction be made from whatever they are measuring, no matter what it is? Svensmark thinks he knows a driver and has correlated cosmic rays with temperature.
Look at this:
http://calderup.files.wordpress.com/2012/03/101.jpg
Cosmic ray intensity is in red and upside down, so that 1991 was a minimum, not a maximum. Fewer cosmic rays mean a warmer world, and the cosmic rays vary with the solar cycle. The blue curve shows the global mean temperature of the mid-troposphere as measured with balloons and collated by the UK Met Office (HadAT2).
In the upper panel the temperatures roughly follow the solar cycle. The match is much better when well-known effects of other natural disturbances (El Niño, North Atlantic Oscillation, big volcanoes) are removed, together with an upward trend of 0.14 deg. C per decade. The trend may be partly due to man-made greenhouse gases, but the magnitude of their contribution is debatable.
From 2000 to 2011 mid-tropospheric temperatures have remained pretty level, like those of the surface, despite the continuing increase in the gases – in “flat” contradiction to the warming predicted by the Intergovernmental Panel on Climate Change. Meanwhile the Sun is lazy, cosmic ray counts are high and the oceans are cooling.
‘Reference: Svensmark, H. and Friis-Christensen, E., “Reply to Lockwood and Fröhlich – The persistent role of the Sun in climate forcing”, Danish National Space Center Scientific Report 3/2007.’
Knowing that solar activity increased during the 20th century to a so-called grand maximum, and that this high level of solar activity is probably at or near its end, would you be gutsy enough to predict warming up to 2030?
I’m with Dr Brown on this. He has made the argument that demolishes GISS with absolute clarity and faultless logic. Indeed, the people at GISS are exposed as fools, in that they actually believe the nonsense they spout. The people here defending GISS sound like politicians caught with their hands in the cookie jar. Guys, your responses sound ridiculous; you are just turning people off.
Excellent work Dr Brown. This is the sort of thing that makes WUWT the best climate science blog in the world.