Guest Post by Dr. Robert Brown,
Physics Dept. Duke University [elevated from comments]
Dr. Brown mentions “global temperature” several times. I’d like to know what he thinks of this.
Dr. Brown thinks that this is a very nice piece of work, and that it is precisely the reason he said that anybody who claims to know the annualized average temperature of the Earth, or the Ocean, to 0.05 K is, as the saying goes, full of [snip] up to their eyebrows.
What I think one can define is an average “Global Temperature” — noting well the quotes — by following some fixed and consistent rule that goes from a set of data to a result. For example, the scheme that is used to go from satellite data to the UAH lower troposphere temperature. This scheme almost certainly does not return “the average Global Temperature of the Earth” in degrees absolute as something that reliably represents the coarse-grain averaged temperature of (say) the lowest 5 kilometers of the air column, especially not the air column as its height varies over an irregular terrain that is itself sometimes higher than 5 kilometers. It does, however, return something that is likely to be close to what this average would be if one could sample and compute it, and one at least hopes that the two would co-vary monotonically most of the time.
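As a purely illustrative sketch of what a “fixed and consistent rule from data to result” means (this is not the UAH algorithm; the grid, weighting, and numbers below are invented for the example): take whatever gridded values you have, weight each one by the area it represents, and always report the single number that falls out.

```python
import numpy as np

def toy_global_index(band_temps_k, lats_deg):
    """Hypothetical 'rule': area-weighted mean of zonal-band temperatures.

    band_temps_k : one mean temperature (K) per latitude band
    lats_deg     : the center latitude of each band

    The cos(lat) weight approximates the relative area of each band; the
    point is only that the same rule is applied the same way every time.
    """
    weights = np.cos(np.deg2rad(lats_deg))
    return np.sum(weights * band_temps_k) / np.sum(weights)

# Made-up band temperatures: warm tropics, cold poles.
lats = np.arange(-87.5, 90.0, 5.0)
temps = 300.0 - 40.0 * np.sin(np.deg2rad(np.abs(lats)))
print(round(toy_global_index(temps, lats), 2), "K")
```

Whether that single number is “the” global temperature is exactly the question; what the fixed rule buys you is repeatability.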
The accuracy of the measure is very likely not even 1 K (IMO; others may disagree), where accuracy means the absolute difference between the reported lower troposphere temperature and the “true global temperature” of the lower troposphere. The various satellites that contribute to the temperature have (IIRC) a variance on this order, so the data itself is probably not more accurate than that. The “precision” of the data is distinct — that is a measure of how much variance there is in the data sources themselves, and it is a quantity that can be systematically improved by more data. Accuracy, especially in a situation like this where one is indirectly inferring a quantity that is not exactly the same as what is being measured, cannot be improved by more or more precise measurements; it can only be improved by figuring out the map between the data one is using and the actual quantity one is making claims about.
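A minimal numerical sketch of the distinction (hypothetical instrument, made-up numbers): averaging more measurements shrinks the statistical spread, i.e. the precision improves, but does nothing to a fixed systematic bias, i.e. the accuracy does not.

```python
import numpy as np

# Hypothetical illustration only: a "true" quantity, an instrument with a
# fixed systematic bias, and random measurement noise.
rng = np.random.default_rng(0)
true_value = 288.0          # pretend "true" average temperature, K
bias = 0.7                  # fixed systematic bias of the instrument, K
noise_sd = 0.5              # random (statistical) scatter per measurement, K

for n in (10, 1000, 100000):
    samples = true_value + bias + rng.normal(0.0, noise_sd, size=n)
    mean = samples.mean()
    precision = samples.std(ddof=1) / np.sqrt(n)   # shrinks as n grows
    accuracy_error = mean - true_value             # stays near the bias
    print(f"n={n:6d}  mean={mean:8.3f}  precision={precision:.4f}  "
          f"error vs truth={accuracy_error:+.3f}")
```

The standard error collapses as n grows while the difference from the true value stays parked near the 0.7 K bias; more data buys precision, not accuracy.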
Things are not better for (land) surface measurements — they are worse. There the actual data is (again, in my opinion) hopelessly corrupted by confounding phenomena and the measurement errors are profound. Worse, the measurement errors tend to have a variable monotonic bias compared to the mythical “true average surface Global Temperature” one wishes to measure.
One is in trouble from the very beginning. The Moon has no atmosphere, so defining its “global average temperature” involves no air temperature at all — and yet even there the definition is not obvious. When one wishes to speak of the surface temperature at a given point, what does one use as a definition? Is it the temperature an actual high precision thermometer would read (say) 1 cm below the surface at that point? 5 mm? 1 mm? 1 meter? All of these would almost certainly yield different results, results that depend on things like the albedo and emissivity of that point on the surface, the heat capacity and thermal conductivity of the surface matter, and the latitude. Is it the “blackbody” temperature of the surface (the inferred temperature of the surface determined by measuring the outgoing full spectrum of radiated light)?
Even inferring the temperature from the latter — probably the one that is most relevant to an airless open system’s average state — is not trivial, because the surface albedo varies, the emissivity varies, and the outgoing radiation from any given point just isn’t a perfect blackbody curve as a result.
How much more difficult is it to measure the Earth’s comparable “surface temperature” at a single point on the surface? For one thing, we don’t do anything of the sort. We don’t place our thermometers 1 meter, 1 cm, 1 mm deep in — what, the soil? The grass or trees? What exactly is the “surface” of a planet largely covered with living plants? We place them in the air some distance above the surface. That distance varies. The surface itself is being heated directly by the sun part of the time, and is radiatively cooling directly to space (in at least some frequencies) all of the time. Its temperature varies by degrees K on a time scale of minutes to hours as clouds pass between the location and the sun, as the sun sets, as it starts to rain. It doesn’t just heat or cool from radiation — it is in tight thermal contact with a complex atmosphere that has a far greater influence on the local temperature than even local variations in insolation.
Yesterday it was unseasonably warm in NC, not because the GHE caused the local temperature to be higher by trapping additional heat but because the air that was flowing over the state came from the warm wet waters of the ocean to the south, so we had a relatively warm rain followed by a nighttime temperature that stayed warm (low overnight of maybe 46 F) because the sky was cloudy. Today it is almost perfectly seasonal — high 50s with a few scattered clouds, winds out of the WSW still carrying warm moisture from the Gulf and warm air from the south central US — but as the day progresses the wind is going to shift to the NW and it will drop to a solid freeze (30 F) tonight. Tomorrow it will be seasonal but wet, but by tomorrow night the cooler air that has moved in from the north will take it down to 25 F overnight. The variation in local temperature is determined far more by what is going on somewhere else than it is by actual insolation and radiation here.
If a real cold front comes down from Canada (as they frequently do this time of year) we could have daytime highs in the 30s or low 40s and nighttime lows down in the low 20s. OTOH, if the wind shifts to the right quarter, the temperature outside could reach a high in the low 80s and a low in the 50s. We can, and do, have both extremes within a single week.
Clearly surface temperatures are being driven as strongly by the air and moisture flowing over or onto them as they are by the “ideal” picture of radiative energy warming the surface and radiation cooling it. The warming of the surface at any given point isn’t solely responsible for the warming or cooling of the air above it; the temperature of the surface is equally dependent on the temperature of the air as determined by the warming of the surface somewhere else, as determined by the direct warming and cooling of the air itself via radiation, as determined by phase changes of water vapor in the air and on the surface, as determined by factor-of-ten modulations of insolation as clouds float around over the surface and the lower atmosphere alike.
Know the true average surface Global Temperature to within 1K? I don’t even know how one would define a “true” average surface Global Temperature. It was difficult enough for the moon without an atmosphere, assuming one can agree on the particular temperature one is going to “average” and how one is going to perform the average. For the Earth with a complex, wet, atmosphere, there isn’t any possibility of agreeing on a temperature to average! One cannot even measure the air temperature in a way that is not sensitive to where the sun is and what it is doing relative to the measurement apparatus, and the air temperature can easily be in the 40s or 50s while there is snow covering the ground so that the actual surface temperature of the ground is presumably no higher than 32F — depending on the depth one is measuring.
And then oops — we forgot the Oceans, that cover 70% of the surface of the planet.
What do we count as the “temperature” of a piece of the ocean? There is the temperature of the air above the surface of the ocean. In general this temperature differs from the actual temperature of the water itself by on the order of 5-10 K. The air temperature during the day is often warmer than the temperature of the water, in most places. The air temperature at night is often cooler than the temperature of the water.
Or is it? What exactly is “the temperature of the water”? Is it the temperature of the top 1 mm of the surface, where the temperature is dominated by chemical potential as water molecules are constantly being knocked off into the air, carrying away heat? Is it the temperature 1 cm deep? 10 cm? 1 m? 10 m? 50 m? 100m? 1 km?
Is it the average over a vertical column from the surface to the bottom (where the actual depth of the bottom varies by as much as 10 km)? This will bias the temperature way, way down for deep water and make the global average temperature of the ocean very nearly 4 °C very nearly everywhere, dropping the estimate of the Earth’s average Global Temperature by well over 10 K. Yet if we do anything else, we introduce a completely arbitrary bias into our average. Every value we might use as a depth to average over has consequences that cause large variations in the final value of the average. As anyone who swims knows, it is quite easy for the top meter or so of water to be warm enough to be comfortable while the water underneath that is cold enough to take your breath away.
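A toy version of that depth-averaging arithmetic, with an invented temperature profile (the layer boundaries and temperatures below are illustrative, not measurements): because the warm mixed layer is thin compared to the cold abyss, the full-column average lands within a degree or two of 4 °C no matter what the surface is doing.

```python
# Toy temperature-vs-depth profile for a 4 km deep water column (illustrative
# numbers only, not observations): warm near the surface, cold at depth.
profile = [
    (0,    20,   22.0),   # (top_m, bottom_m, mean temp C) warm mixed layer
    (20,   200,  15.0),   # seasonal thermocline
    (200,  1000,  8.0),   # main thermocline
    (1000, 4000,  3.5),   # deep water
]

def column_average(max_depth_m):
    """Depth-weighted mean temperature from the surface down to max_depth_m."""
    total, weight = 0.0, 0.0
    for top, bottom, temp in profile:
        thickness = max(0.0, min(bottom, max_depth_m) - top)
        total += temp * thickness
        weight += thickness
    return total / weight

for depth in (1, 10, 100, 1000, 4000):
    print(f"average over top {depth:5d} m: {column_average(depth):5.1f} C")
```

Every choice of cutoff depth gives a different, equally defensible “ocean temperature”.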
Even if one defines a sea surface temperature (SST) to go with the land surface temperature (LST) — arbitrarily, as arbitrary in its own way as the definition one uses for the temperature one assigns to a particular point on the surface on the basis of a “corrected” or “uncorrected” thermometer, with location biases that can easily exceed several degrees K compared to equally arbitrary definitions of what the thermometer “should” be reading for the unbiased temperature and how that temperature is supposed to relate to a “true” temperature for the location — and then tries to take the actual data for both and turn them into an average global temperature, one has a final problem to overcome. One’s data is (with the possible exception of modern satellite-derived data) sparse! Very sparse.
In particular, it is sparse compared to the known and observed granularity of surface temperature variations, for both LST and SST. Furthermore, it has obvious sampling biases. We have lots and lots of measurements where people live. We have very few measurements (per square kilometer of surface area) where people do not live. Surface temperatures can easily vary by 1 K over a kilometer in lateral distance (e.g. at terrain features where one goes up a few hundred meters over a kilometer of grade). They can and do routinely vary by 1 K over distances of order 5-10 kilometers.
I can look at the Weather Underground’s map of readings from weather stations scattered around Durham at a glance, for example. At the moment I’m typing this there is a 13 F variation from the coldest to the warmest station reading within a 15 km radius of where I’m sitting. Worse, nearly all of these weather station readings are between 50 and 55 F, but there are two outliers. One of them is 46.5 F (in a neighborhood in Chapel Hill), and the other is Durham itself, the “official” reading for Durham (probably downtown somewhere), which is 59.5 F!
Guess which one will end up being the temperature used to compute the average surface temperature for Durham today, and assigned to an entirely disproportionate area of the surface of the planet in a global average surface temperature reconstruction?
Incidentally, the temperature outside of my house at this particular moment is 52F. This is a digital electronic thermometer in the shade of the north side of the house, around a meter off of the ground. The air temperature on the other side of the house is almost certainly a few degrees warmer as the house sits on a southwest-facing hill with pavement and green grass absorbing the bright sunlight. The temperature back in the middle of the cypresses behind my house (dense shade all day long, but with decent airflow) would probably be no warmer than 50 F. The temperature a meter over the driveway itself (facing and angled square into the sun, and with the house itself reflecting additional heat and light like a little reflector oven) is probably close to 60 F. I’m guessing there is close to 10F variation between the air flowing over the southwest facing dark roof shingles and the northeast facing dark roof shingles, biased further by loss of heat from my (fairly well insulated) house.
I don’t even know how to compute an average surface temperature for the 1/2 acre plot of land my own house sits on, today, right now, from any single thermometer sampling any single location. It is 50 F, 52 F, 58 F, 55 F, 61 F, depending on just where my thermometer is located. My house is on a long hill (over a km long) that rises to an elevation perhaps 50-100 m higher than my house at the top — we’re in the piedmont in between Durham and Chapel Hill, where Chapel Hill really is up on a hill, or rather a series of hills that stretch past our house. I’d bet a nickel that it is a few degrees different at the top of the hill than it is where my house is today. Today it is windy, so the air is well mixed and the height is probably cooler. On a still night, the colder air tends to settle down in the hollows at the bottoms of hills, so last frost comes earlier up on hilltops or hillsides; Chapel Hill typically has spring a week or so before Durham does, in contradiction of the usual rule that higher locations are cooler.
This is why I am enormously cynical about Argo, SSTs, GISS, and so on as reliable estimates of average Global Temperature. They invariably claim impossible accuracy and impossible precision. Mere common sense suffices to reject those claims. If they disagree, they can come to my house and try to determine what the “correct” average temperature is for my humble half acre, and how it can be inferred from a single thermometer located on the actual property, let alone from a thermometer located in some weather station out in Duke Forest five kilometers away.
That is why I think that we have precisely 33 years of reasonably reliable global temperature data, not in terms of accuracy (which is unknown and perhaps unknowable) but in terms of statistical precision and as the result of a reasonably uniform sampling of the actual globe. The UAH is what it is, is fairly precisely known, and is at least expected to be monotonically related to a “true average surface Global Temperature”. It is therefore good for determining actual trends in global temperature, not so good for making pronouncements about whether or not the temperature now is or is not the warmest that it has been in the Holocene.
Hopefully the issues above make it clear just how absurd any such assertion truly is. We don’t know the actual temperature of the globe now, with modern instrumentation and computational methodology, to an accuracy of 1 K in any way that can be compared apples-to-apples to any temperature reconstruction, instrument based or proxy based, from fifty, one hundred, one thousand, or ten thousand years ago. 1 K is the close order of all of the global warming supposedly observed since the invention of the thermometer itself (and hence the start of the direct instrumental record). We cannot compare even “anomalies” across such records — they simply don’t compare because of confounding variables, as the “Hide the Decline” and “Bristlecone Pine” problems clearly reveal in the hockey stick controversy. One cannot remove the effects of these confounding variables in any defensible way, because one does not know what they are, because things (e.g. annual rainfall and the details of local temperature and many other things) are not the same today as they were 100 years ago, and we lack the actual data needed to correct the proxies.
A year with a late frost, for example, can stunt the growth of a tree for a whole year by simply damaging its new leaves or can enhance it by killing off its fruit (leaving more energy for growth that otherwise would have gone into reproduction) completely independent of the actual average temperature for the year.
To conclude, one of many, many problems with modern climate research is that the researchers seem to take their thermal reconstructions far too seriously and assign completely absurd measures of accuracy and precision, with a very few exceptions. In my opinion it is categorically impossible to “correct” for things like the UHI effect — it presupposes a knowledge of the uncorrected temperature that one simply cannot have or reliably infer from the data. The problem becomes greater and greater the further back in time one proceeds, with big jumps (in uncertainty) 250, 200, 100 and 40 odd years ago. The proxy-derived record from more than 250 years ago is uncertain in the extreme, with the thermal record of well over 70% of the Earth’s surface completely inaccessible and with an enormously sparse sampling of highly noisy and confounded proxies elsewhere. To claim accuracy greater than 2-3 K is almost certainly sheer piffle, given that we probably don’t know current “true” global average temperatures within 1 K, and 5K is more likely.
I’m certain that some paleoclimatologists would disagree with such a pessimistic range. Surely, they might say, if we sample Greenland or Antarctic ice cores we can obtain an accurate proxy of temperatures there 1000 or 2000 years ago. Why aren’t those comparable to the present?
The answer is because we cannot be certain that the Earth’s primary climate drivers distributed its heat the same way then as now. We can clearly see how important e.g. the decadal oscillations are in moving heat around and causing variations in global average temperature. ENSO causes spikes and seems responsible for discrete jumps in global average temperature over the recent (decently thermometric) past that are almost certainly jumps from one Poincaré attractor to another in a complex turbulence model. We don’t even know if there was an ENSO 1000 years ago, or, if there was, whether it was at the same location and had precisely the same dependences on e.g. solar state. As a lovely paper Anthony posted this morning clearly shows, major oceanic currents jump around on millennial timescales that appear connected to millennial-scale solar variability and almost certainly modulate the major oscillations themselves in nontrivial ways. It is quite possible for temperatures in the Antarctic to anticorrelate with temperatures in the tropics for hundreds of years and then switch so that they correlate again. When an ocean current is diverted, it can change the way ocean average temperatures (however one might compute them, see above) vary over macroscopic fractions of the Earth’s surface all at once.
To some extent one can control for this by looking at lots of places, but “lots” is in practice highly restricted. Most places simply don’t have a good proxy at all, and the ones that do aren’t always easy to accurately reconstruct over very long time scales, or they lose all sorts of information at shorter time scales in order to obtain the longer time scale averages one can get. I think 2-3 K is a generous statement of the probable real error in most reconstructions for global average temperature over 1000 years ago, again presuming one can define an apples-to-apples global average temperature to compare to, which I doubt. Nor can one reliably compare anomalies over such time scales, because of the confounding variables and drift.
This is a hard problem, and calling it settled science is obviously a political statement, not a scientific one. A good scientist would, I truly believe, call this unsettled science, science that is understood far less than physics, chemistry, even biology. It is a place for utter honesty, not egregious claims of impossibly accurate knowledge. In my own utterly personal opinion, informed as well or as badly as chance and a fair bit of effort on my part have thus far informed it, we have 33 years of a reasonably precise and reliable statement of global average temperature, one which is probably not the true average temperature assuming any such thing could be defined in the first place but which is as good as any for the purposes of identifying global warming or cooling trends and mechanisms.
Prior to this we have a jump in uncertainty (in precision, not accuracy) compared to the ground-based thermometric record that is strictly apples-to-oranges compared to the satellite derived averages, with error bars that rapidly grow the further back one goes in the thermometric record. We then have a huge jump in uncertainty (in both precision and accuracy) as we necessarily mount the multiproxy train to still earlier times, where the comparison has unfortunately been between modern era apples, thermometric era oranges, and carefully picked cherries. Our knowledge of global average temperatures becomes largely anecdotal, with uncertainties that are far larger than the observed variation in the instrumental era and larger still than the reliable instrumental era (33 year baseline).
Personally, I think that this is an interesting problem and one well worth studying. It is important to humans in lots of ways; we have only benefitted from our studies of the weather and our ability to predict it is enormously valuable as of today in cash money and avoided loss of life and property. It is, however, high time to admit the uncertainties and get the damn politics out of the science. Global climate is not a “cause”! It is the object of scientific study. For the conclusions of that science to be worth anything at all, they have to be brutally honest — honest in a way that is utterly stripped of bias and that acknowledges to a fault our own ignorance and the difficulty of the problem. Pretending that we know and can measure global average temperatures from a sparse and short instrumental record where it would be daunting to assign an accurate, local average temperature to any given piece of ground based on a dense sampling of temperatures from different locations and environments on that piece of ground does nothing to actually help out the science — any time one claims impossible accuracy for a set of experimentally derived data one is openly inviting false conclusions to be drawn from the analysis. Pretending that we can model what is literally the most difficult problem in computational fluid dynamics we have ever attempted with a handful of relatively simple parametric differential forms and use the results over centennial and greater timescales does nothing for the science, especially when the models, when tested, often fail (and are failing, badly, over the mere 33 years of reliable instrumentation and a uniform definition of at least one of the global average temperatures).
It’s time to stop this, and just start over. And we will. Perhaps not this year, perhaps not next, but within the decade the science will finally start to catch up and put an end to the political foolishness. The problem is that no matter what one can do to proxy reconstructions, no matter how much you can adjust LSTs for UHI and other estimated corrections that somehow always leave things warmer than they arguably should be, no matter what egregious claims are initially made for SSTs based on Argo, the UAH will just keep on trucking, unfutzable, apples to apples to apples. The longer that record gets, the less one can bias an “interpretation” of the record.
In the long run that record will satisfy all properly skeptical scientists, and the “warmist” and “denier” labels will end up being revealed as the pointless political crap that they are. In the long run we might actually start to understand some of the things that contribute to that record, not as hypotheses in models that often fail but in models that actually seem to work, that capture the essential longer time scale phenomena. But that long run might well be centennial in scale — long enough to detect and at least try to predict the millennial variations, something utterly impossible with a 33 year baseline.
rgb
I think you got those two dates flipped. Or you need to insert “error” after “precision” or instead of it. Or SLT.
Steven Mosher says:
March 4, 2012 at 6:49 pm
——————————————
Thanks for your post Steven. I always enjoy them, sometimes (usually) agree.
You point out the variability of UHI. My gut feeling is that in general UHI increases temperature. I work at small airports across Canada (although mainly in B.C. and Northern Ontario). In the last 20 or so years many of these airports have been upgraded (paved runways, access ways, plus infrastructure). These airports are the main centers for temperature and precipitation measurements. Even the wind socks often don’t work (because the wind is blocked by the new nearby buildings). So I return to my original point: a global temperature anomaly (whatever that is) cannot be accurate to 0.05 degrees (the point of this post, I think), and errors in the temperature record cannot be treated as random, as KR attempts to assert.
KR says:
March 4, 2012 at 7:23 pm
Thank you KR. I have read this paper before. Please look at Figure 3 and let me know if you think the correlation coefficients computed for the global surface temperature anomalies are particularly good (especially at high latitudes…). Is a correlation coefficient of 0.5 considered “good”? 0.6? 0.7? Does it matter?
Thanks.
Nick Stokes says:
March 4, 2012 at 7:12 pm
Frank K. says: March 4, 2012 at 6:56 pm
“Can you please provide quantitative justification for the good correlation of the temperature anomalies?”
No. This post is about people who claim something about temperatures are full of something. You can lead with some evidence.
—
Thanks for the response, Nick. Very helpful, indeed.
As KR says, you are conflating global average temperatures with anomalies. Most of your arguments address the question of what is an average absolute temperature. And that is beset with many difficulties, as GISS is able to explain in far fewer words than you use.
Actually, the bulk of the post was addressing how precisely one can measure an absolute average temperature that is in some fixed relationship to a “true” average temperature. It isn’t just about the impossibility of defining an average, or the problem that different definitions can lead to different trends with different signs — the subject of the article that was linked originally and relinked just above — it is about being able to realistically extrapolate from a finite, relatively sparse set of thermometric measurements with unknown systematic biases to even one of these definitions and obtain a result as precise as what is being claimed, let alone as accurate.
I own a shotgun that has its front sight off by some fraction of a millimeter to the left. I’m a damn good shot (if I do say so myself:-). When I first bought it, I naturally didn’t notice the problem — who can see that sort of thing by eye? It wasn’t until I started shooting it at known targets that I could tell that the sight was systematically biased, biased so that at close enough range I might not even notice the problem but at any sort of longer range I’d miss the target entirely. Even then, noticing it was dependent on my skill shooting because for a lot of people, the natural variance of their aim would have masked the systematic error. It was also dependent on knowing the target — if my eyesight had been so poor that I couldn’t even see what I was aiming at I might never have detected the bias either.
GISS is in the unenviable position of trying to detect the actual target a gun was aimed at, given an unknown set of biases in the sights of the gun, based on the shot pattern of a load fired at an unknown distance from the target — when you can’t see that target. You can say all you like about the centroid of the shot pattern and how wide it is, but you cannot infer the bias from the shot pattern. Nor can you be certain that the bias is the same from year to year, from target to target. Nor can you be sure that your grand-dad’s hand-loaded blunderbuss produced the same shot distribution with the same biases a hundred years ago, even though you are using data from that gun the same way you are using the data from your current gun and somebody ripped away half of the older targets so you have no idea if any shot landed on the missing parts or not.
UAH/RSS results are a completely distinct way of shooting at a different target, but one that is supposed to be a fairly predictable distance in a predictable direction from the target GISS is trying to infer. Unfortunately, when comparing the results they reveal that GISS is systematically biased in a way that is getting worse over time, like my shotgun missing more distant targets by a larger distance, and worse, that the actual GISS target is on the opposite side of the UAH/RSS target from where the GISS shot pattern was landing.
And we won’t even touch the problem of using the half-missing targets shot with grand-dad’s handloads and gun as if they are comparable to the targets of the gun GISS is using today. If today’s guns aren’t even firing on the right side of the target and their sights are biased to miss by ever greater amounts as it is, how much more difficult is it to state positive conclusions from an era without UAH/RSS satellite instrumentation!
Note well that GISS has problems with all three aspects of the meaning of its numbers. The numbers have no absolute meaning or necessary relevance to an assumed “global average temperature”. At best one hopes for a monotonic, if unknown, relationship, although the numbers themselves are on the wrong side of the only completely independent and arguably much more reliable way of computing a closely related “global average temperature”. GISS makes egregious claims for the “error” associated with these numbers (where in science, error estimates should include error from all sources, not just an estimate of statistical error, a.k.a. precision, based on assumptions of statistical independence and a lack of bias). Finally, their numbers have what appears to be the wrong trend compared to the only meaningful check we can perform, so one cannot trust even the anomaly whose correct error is being underestimated.
Three strikes and you’re out. And yes, I would continue to say that even with the UAH/RSS data we don’t know the true global temperature for the last thirty-odd years particularly accurately.
Why does that matter? Because underlying the entire argument concerning global warming and cooling, some fraction of which might be anthropogenic, is the zeroth law of thermodynamics, is it not? As a few people have pointed out, temperature is a lousy measure for heat, but perhaps it is the best we can do. Still, in order to make it even a lousy measure for heat energy as it flows through a complex open system, we have to be using the same measure of temperature throughout the range where one asserts that the temperature is known.
UAH/RSS data is precious, because nothing we can imagine doing in the future will change the measure itself. Perhaps there is instrumentation error, sure, but looking at the microwave emission from atmospheric oxygen is as close to a direct measure of the temperature of that oxygen as one can hope to get, and we can sample from the entire globe frequently and in a more or less unbiased manner to get it. We can imagine increasing the precision with more instruments and controlling even more tightly for systematic biases with careful independent measures from soundings, but otherwise in 100 years people will be able to look at UAH/RSS data and — within the modest uncertainties of method and statistical resolution — be able to do a direct and valid comparison of the exact same thing (perhaps with greater precision) and make well-justified inferences about the comparative temperatures of the lower troposphere and, by extension, the surface.
We can do nothing of the kind with the pre-satellite thermal record. Really, we can’t. Not ever. There are biases systematic and random scattered everywhere in the data, and there is no hope of being able to go back in time and detect them and correct for them. If anything, contemporary data is showing us how difficult the task really is as GISS fails!
If GISS cannot even show the same trend as UAH/RSS, on the right side of the data, during the 30 year instrumental era where one can check it against a truly independent measurement that utterly lacks the many, many unknown biases in contemporary thermometry, why exactly should we believe its claims for the temperature in 1880?
Indeed, if we lived in a sane Universe, why exactly aren’t we using UAH/RSS to correct GISS? I mean, here we are, in possession of information that we can use to measure the systematic biases in the GISS algorithm and perhaps correct them, right? According to UAH/RSS and contemporary modeling, lower troposphere temperatures should be warmer than surface temperatures by a few tenths of a degree. Warmer implies an upper bound. Surely it isn’t unreasonable to use the fact that the GISS temperature is warmer than that upper bound and has the wrong slope of trend to recalibrate, for example, the UHI correction and so on, to at least get its algorithm to agree across the last 30 years with UAH/RSS. Surely it isn’t unreasonable to use the observed discrepancy as an object lesson about egregious claims for accuracy for the future, and for extending GISS estimates into the past as well.
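If one took that recalibration suggestion literally, the mechanics would look something like this sketch: fit an offset and scale between the two series over their period of overlap and apply the fitted correction. This is a hypothetical exercise on synthetic series, not anything GISS or UAH actually does.

```python
import numpy as np

def fit_correction(reference, candidate):
    """Least-squares scale and offset mapping `candidate` onto `reference`
    over their common (overlap) period.  Purely illustrative."""
    scale, offset = np.polyfit(candidate, reference, 1)
    return scale, offset

# Made-up monthly anomaly series over a shared 30-year window.
rng = np.random.default_rng(2)
n = 360
reference = 0.012 * np.arange(n) / 12 + rng.normal(0, 0.08, n)    # e.g. a satellite index
candidate = 0.3 + 1.3 * reference + rng.normal(0, 0.05, n)        # a biased surface index

scale, offset = fit_correction(reference, candidate)
corrected = scale * candidate + offset
print("fitted scale/offset:", round(scale, 3), round(offset, 3))
print("residual RMS after correction:",
      round(np.sqrt(np.mean((corrected - reference) ** 2)), 3))
```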
rgb
Simple challenge for anyone who thinks average air temperatures can be measured to sub-one-degree-C precision. Buy 10 digital thermometers, check their “outdoor” probes, and determine their relative accuracy. For example, put all the outdoor probes in the same cup of tap water mixed with ice and record what they read.
Then take those 10 thermometers and place them in 10 different locations about your house. It is a climate-controlled environment and should be much more consistent than an outdoors environment. Let all the thermometers stabilize for a few hours and then read them. You will likely find variations on the order of 2-6 deg F (1-3 deg C) just within your climate-controlled house. Simply raising or lowering the location of a thermometer can give you several degrees of difference in temperature.
Right now, at the location of my furnace thermostat (hallway next to the bathroom), it is 72.0 deg F. Thirty feet away, in my living room at the same elevation along the north wall, it is 70.1 deg F. In my bedroom, 12 ft from the thermostat location, it is 71.5 deg F. In my kitchen it is 73 deg F (no cooking for at least 12 hours). In the bathroom, about 12 ft from the thermostat location, it is 71 deg F.
What is the “real temperature” in my home?
If temps vary that much in a climate controlled environment what are the temperature variations over miles of distance and as noted above different surface environments (lakes, trees, open pavement parking lot)? Which one of those temps is the “correct temperature”?
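For what it’s worth, here is Larry’s experiment tabulated using his readings from above; the point is simply that the answer to “what is the house temperature?” depends on which rooms you happen to sample, and any single “official” room carries its own bias.

```python
import statistics

# Larry's readings from above (deg F)
readings = {
    "thermostat hallway": 72.0,
    "living room":        70.1,
    "bedroom":            71.5,
    "kitchen":            73.0,
    "bathroom":           71.0,
}

values = list(readings.values())
print("spread:", round(max(values) - min(values), 1), "F")        # ~2.9 F in one house
print("mean of all five rooms:", round(statistics.mean(values), 2), "F")

# "Official station" analogy: pick any single room as the record for the house.
for room, temp in readings.items():
    bias = temp - statistics.mean(values)
    print(f"{room:20s} {temp:5.1f} F  (bias vs. 5-room mean: {bias:+.2f} F)")
```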
Larry
My issue with using the “anomaly” method to wave away the issues is that it assumes all our measurements have the same variation. They don’t. One set of anomaly measurements will be measuring “temperature” in quite a different way from the next one.
What you need to do to get a hockey stick is find one set of “anomaly” measurements with little natural variability. Then you add that to another set of measurements with high variability which happens to be trending up at the time you want. Hey presto.
So when Steven Mosher says: “Is that number the “average”. no, technically speaking the average is not measurable. Hansen even recognizes that.” I think he is likely wrong. Yes, Hansen knows that his data is not actually recording global temperature. But he will allow it to be stitched to another series in a way that assumes both of them are measuring the same thing, when they aren’t.
It’s not what the Team say that matters, it’s what they do with the data. On the face of it they accept that they aren’t measuring an actual global temperature. But they will stitch up a hockey stick by adding together data sets that measure entirely different things. But because they are all called “temperature proxies” people let it slide by.
In fact each of those proxies is measuring something entirely different, and they have no right to be fitted together.
Robert Brown says
“But that long run might well be centennial in scale — long enough to detect and at least try to predict the millennial variations, something utterly impossible with a 33 year baseline.”
Physicists have a good deal of trouble with the complexity of the messy real world. Robert might consult his colleagues’ book “Disrupted Networks: From Physics to Climate Science” (West and Scafetta) for a review of the problem and how to deal with it using power spectrum and wavelet analysis of the various time series. However, scientists dealing with climate can hardly say to their departments, grant givers, or the politicians running the IPCC, “That’s a really interesting question; I’ll give you an answer in 100 years.” The main point he makes is true, and that is that the climate “team” have grossly exaggerated the certainty of their so-called projections, which they allow and indeed encourage the world to consider as predictions with a high degree of confidence.
Robert likes the satellite record, but it is too short to be yet of much use. He also points out the uncertainties and problems of the land record. I again suggest that for purposes of discussion the best thing to do is simply to define, for convenience and admittedly arbitrarily, global temperature trends to be the trends and changes shown by the Hadley SST data, which goes back to 1850 and can be extended back by carefully analysing various proxies. This suggestion is supported by the following considerations.
1. Oceans cover about 70% of the surface.
2. Because of the thermal inertia of water – short term noise is smoothed out.
3. All the questions re UHI, changes in land use local topographic effects etc are simply sidestepped.
4. Perhaps most importantly – what we really need to measure is the enthalpy of the system. The land measurements do not capture this aspect because the relative humidity at the time of temperature measurement is ignored. In water, the temperature changes are a good measure of relative enthalpy changes (a brief sketch of the moist-air calculation follows this list).
5. It is very clear that the most direct means to short term and decadal length predictions is through the study of the interactions of the atmospheric systems, ocean currents, and temperature regimes – PDO, ENSO, SOI, AMO, AO, etc. – and the SST is a major measure of these systems. Certainly the SST data has its own problems, but these are much less than those of the land data.
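On point 4, a rough sketch of why ignoring humidity matters: the specific enthalpy of moist air depends on both temperature and moisture content, so two air masses with the same thermometer reading can carry very different amounts of heat. The formulas are standard textbook approximations; the example values are invented.

```python
import math

def moist_enthalpy_kj_per_kg(temp_c, rel_humidity, pressure_hpa=1013.25):
    """Approximate specific enthalpy of moist air (kJ per kg of dry air).

    Uses a standard saturation vapor pressure approximation (Bolton 1980)
    and h ~= cp_d * T + L_v * q.  Illustrative, not for engineering use.
    """
    cp_d = 1.004          # kJ / (kg K), dry air heat capacity
    l_v = 2501.0          # kJ / kg, latent heat of vaporization near 0 C
    e_sat = 6.112 * math.exp(17.67 * temp_c / (temp_c + 243.5))  # hPa
    e = rel_humidity * e_sat
    q = 0.622 * e / (pressure_hpa - 0.378 * e)   # specific humidity, kg/kg
    return cp_d * temp_c + l_v * q

# Same 30 C thermometer reading, very different heat content:
print(round(moist_enthalpy_kj_per_kg(30.0, 0.20), 1), "kJ/kg  (dry desert air)")
print(round(moist_enthalpy_kj_per_kg(30.0, 0.90), 1), "kJ/kg  (humid tropical air)")
```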
What does the SST data show? The 5-year moving SST temperature average shows that the warming trend peaked in 2003, and a simple regression analysis shows a nine-year global SST cooling trend since then. The data shows warming from 1900-1940, cooling from 1940 to about 1975, and warming from 1975-2003. CO2 levels rose monotonically during this entire period. There has been no net warming since 1997 – 15 years with CO2 up 7.9% and no net warming. Anthropogenic CO2 has some effect, but our knowledge of the natural drivers is still so poor that we cannot accurately estimate what the anthropogenic CO2 contribution is. Since 2003 CO2 has risen further and yet the global temperature trend is negative. This is obviously a short term on which to base predictions, but all statistical analyses of particular time series must be interpreted in conjunction with other ongoing events; in the context of declining solar magnetic field strength and activity (to the extent of a possible Maunder minimum) and the negative phase of the Pacific Decadal Oscillation, a global 20-30 year cooling spell is more likely than a warming trend.
This last simple empirically based statement is about as good as we can do right now. We might add that, as Lindzen has shown, the humidity and cloud feedbacks are necessarily negative (or we wouldn’t be here to discuss the matter), so we can say with some confidence that catastrophic warming is very unlikely in the foreseeable future and that this whole CO2 scare is completely unnecessary and indeed economically nuts.
For the next 30-40 years we might suggest that damaging cooling is a distinct possibility. Some thought might be given to how to deal with this by increasing agricultural productivity and stockpiling foodstocks against possible shorter growing seasons, late frosts, and more droughts in a generally less humid world, with the greater climate variability that a cooler world would bring.
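For concreteness, the “5-year moving average plus simple regression trend” exercise described a couple of paragraphs up looks like this in code. The series here is synthetic noise plus a made-up slow variation, purely to show the mechanics; the real calculation would load the Hadley SST monthly anomalies.

```python
import numpy as np

# Synthetic stand-in for a monthly SST anomaly series (deg C).  In the real
# exercise this array would hold the HadSST monthly anomalies from 1850 on.
rng = np.random.default_rng(1)
years = np.arange(1850, 2012, 1 / 12)             # decimal years, monthly steps
anoms = 0.004 * (years - 1850) + 0.1 * np.sin(2 * np.pi * (years - 1850) / 65) \
        + rng.normal(0.0, 0.1, size=years.size)

def moving_average(x, window=60):
    """Running mean over `window` monthly samples (60 = a 5-year smoother)."""
    return np.convolve(x, np.ones(window) / window, mode="same")

def trend_per_decade(t, x, start, end):
    """Ordinary least-squares slope of x vs t over [start, end), per decade."""
    mask = (t >= start) & (t < end)
    slope, _intercept = np.polyfit(t[mask], x[mask], 1)
    return 10.0 * slope

smoothed = moving_average(anoms)
print("smoothed value at 2003:", round(float(smoothed[np.searchsorted(years, 2003)]), 3))
print("trend 1975-2003:", round(trend_per_decade(years, anoms, 1975, 2003), 3), "C/decade")
print("trend 2003-2012:", round(trend_per_decade(years, anoms, 2003, 2012), 3), "C/decade")
```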
On a silly note “Malcolm Miller says: March 4, 2012 at 12:54 pm
Excellent outline of the ‘temperature’ problem. I always ask alarmists, ‘Where do you put the thermometer to measure the Earth’s temperature?’ They have no answer.”
I suggest a rectal temp would be the most accurate, so Cleveland it is! 🙂
On a more serious note, I’ve always sided with Dr. Dyson on the whole global average temp thing. It doesn’t exist and is pretty meaningless.
Frank K. says: March 4, 2012 at 7:52 pm
“Thanks for the response, Nick. Very helpful, indeed.”
Well, Frank, in a spirit of helpfulness…
Here is a plot of monthly readings in 2011. You can make it average over any subset of months that you choose. It is a global map, colored by the temperatures at individual stations. There is no spatial averaging. The colors grade linearly from one station to another.
The thing to see is that there are large correlated regions. Big regional patterns. Anomalies move together. ConUS is the main exception. This might just be due to the station quality issues that Anthony raises – I don’t know.
You can see the same thing here with trends.
Robert Brown – And yet you continue to conflate the accuracy of temperature anomalies (0.05C, looking at a data set of highly correlated stations) with absolute temperatures.
To be quite frank, I do not care what the absolute temperature accuracy is – but I care quite a bit about how the temperature is changing (which is the anomaly from the baseline). To which point you have made no significant arguments.
I would agree that UAH and RSS are extremely useful measures, primarily of stratospheric and tropospheric temperatures – although they have different sensitivities to variations such as volcanic activity and ENSO than surface temperatures (http://iopscience.iop.org/1748-9326/6/4/044022).
But they show much the same story as the surface measures in terms of trends: http://www.woodfortrees.org/plot/uah/mean:60/plot/rss/mean:60/plot/uah/trend/plot/rss/trend – 0.135C/decade for UAH/RSS, or for GISS/HadCRUT3: http://www.woodfortrees.org/plot/hadcrut3vgl/from:1970/mean:60/plot/gistemp/from:1979/mean:60/plot/hadcrut3vgl/from:1979/trend/plot/gistemp-dts/from:1979/trend – 0.198/0.146C/decade. Considering that the tropospheric satellite signal is somewhat mixed with the stratospheric (cooling) signal, that indicates fairly good agreement. HadCRUT3 fails to include polar data, extrapolating from average global anomaly rather than nearby polar anomaly like GISS, so (personal opinion) I trust GISS more in that respect.
But you know, that really doesn’t matter. Even if you don’t trust surface temperature records, we are (according to the satellite temps as well) seeing considerable warming, considerable changes from what we have seen previously in terms of temperature. And that means changes – changes to crop productivity, sea temperature, rainfall, Hadley cell precipitation, snowpack for water supplies, on and on.
And your unwarranted criticisms of the surface temperature records are (IMO) a mere side-show, prestidigitation to distract from the issue. And, to repeat, completely unsupportable.
My apologies – my previous comment had a bad link, poor construction on my part:
It should be http://www.woodfortrees.org/plot/hadcrut3vgl/from:1970/mean:60/plot/gistemp-dts/from:1979/mean:60/plot/hadcrut3vgl/from:1979/trend/plot/gistemp-dts/from:1979/trend for GISS/HadCRUT3 trends, or perhaps http://www.woodfortrees.org/plot/hadcrut3vgl/from:1970/mean:60/plot/gistemp/from:1979/mean:60/plot/hadcrut3vgl/from:1979/trend/plot/gistemp/from:1979/trend – I had inadvertently mixed GISS global mean and GISS extrapolated global mean.
The latter link is probably more relevant to this discussion; no extrapolation. Trends are 0.146/0.156C/decade for HadCRUT3/GISS, respectively. Which is certainly not trivial in terms of change.
Robert Brown says: March 4, 2012 at 7:53 pm
“GISS is in the unenviable position of trying to detect the actual target a gun was aimed at, given an unknown set of biases in the sights of the gun, based on the shot pattern of a load fired at an unknown distance from the target — when you can’t see that target.”
No, that’s the point of anomalies. You don’t need to quantify the systematic biases. You’re measuring the change.
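Nick’s point, reduced to a tiny sketch with made-up numbers: a station with a fixed siting bias reports the wrong absolute temperature every month, but once each reading is expressed as a departure from that same station’s own baseline mean, the constant bias cancels and only the change remains. (The catch, per the post above, is whether the bias really is constant.)

```python
import numpy as np

rng = np.random.default_rng(3)
months = np.arange(240)                       # 20 years of monthly data
true_temps = 15.0 + 0.002 * months + rng.normal(0, 0.3, months.size)   # slow warming
station_bias = 2.5                            # constant siting bias, deg C
reported = true_temps + station_bias

baseline = reported[:120].mean()              # station's own 10-year baseline mean
anomalies = reported - baseline

true_anomalies = true_temps - true_temps[:120].mean()
print("max |anomaly error| despite the 2.5 C bias:",
      round(np.max(np.abs(anomalies - true_anomalies)), 6))   # ~0: the bias cancels
```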
“Nor can you be certain that the bias is the same from year to year, from target to target. “
Does the misalignment of your gun vary like that? Of course, one can never be certain, but the chance is much improved.
Nick Stokes says:
March 4, 2012 at 8:20 pm
“The thing to see is that there are large correlated regions. Big regional patterns. Anomalies move together.”
How are you inferring correlation from a map of anomaly “averages”? Just wondering… Have you computed correlation coefficients for recent data between random stations like Hansen did in his 1987 paper? That would be interesting.
“And that means changes – changes to crop productivity, sea temperature, rainfall, Hadley cell precipitation, snowpack for water supplies, on and on. ”
Oh brother…[sigh]
Dr. Brown, I am still with you. As I understand it, global temperature in an open thermodynamic system like the Earth does not measure the heat energy of the globe, and the stored heat energy, if it were known, may not change the global climate in sync with the temperature changes. So if climate is changing from stored heat energy, then connecting very uncertain global temperature changes to climate changes, when the heat energy is inferred from uncertainties in the temperature measurements as you have presented above, borders on a leap of faith.
Robert Brown says: March 4, 2012 at 4:25 pm. Damn, I’m going to just give up. Mousepad disease. Looks like I’ve completely lost two hourlong posts. Grrr.
I, too, have lost long posts–probably to the benefit of the world. As other commenters have pointed out, one way to avoid this problem is to enter your comment into a word processor and then copy the text into the “Leave a Reply” box. However, you want to be careful when doing this as I’ve seen funny translations of the fonts, paragraph spacings, etc.
WoodForTrees.org just got the January 2012 (2012.08) update for HADCRUT3: 0.218°C.
The HADCRUT3gl trend since 2001 has dropped to near -0.7°C (-1.26°F) per century.
See http://www.woodfortrees.org/plot/hadcrut3gl/from:1980/to:2012.08/plot/hadcrut3gl/from:1980/to:2001/trend/plot/hadcrut3gl/from:2001/to:2012.08/trend
Dr. Brown. What happened to Duke when they played North Carolina? My sister is an avid Duke fan and I had to suffer her disposition during the game. Please tell coach K to win all remaining games; it makes my life easier.
@KR,
“Dr Brown, are you perhaps not familiar with the law of large numbers
… devoid of useful content or numeracy.”
LOL, the law of large numbers only works for a repeatable experiment like rolling dice, and deals with probability. How do you propose on a single day (or year) to get an accurate picture of temperature using this law? With dice each roll is presumed identical. No such presumption is possible in taking the temperature at any location, or at any collection of locations.
Over what timescale do you expect your temperature measurements to converge to a precise measurement, and a measurement of what? By the time you get any large quantity of measurements, the entire system has moved on. Also, any measurements at a single location aren’t truly repeatable.
Why on earth would you expect a chaotic system like the weather to converge on anything?
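A sketch of that objection with synthetic numbers: the law of large numbers says the mean of many independent draws from one fixed distribution converges to that distribution’s mean; if the quantity drifts while the samples accumulate, or every sample shares a common bias, the running mean converges to something else entirely.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000

# Case 1: a genuinely repeatable experiment (iid draws from one fixed distribution).
iid = rng.normal(10.0, 2.0, n)

# Case 2: the underlying quantity drifts while the samples accumulate.
drifting_truth = 10.0 + 0.00005 * np.arange(n)
drifting = drifting_truth + rng.normal(0.0, 2.0, n)

# Case 3: every sample shares one fixed systematic bias.
biased = rng.normal(10.0, 2.0, n) + 1.5

cases = {
    "iid":      (iid, 10.0),
    "drifting": (drifting, drifting_truth[-1]),
    "biased":   (biased, 10.0),
}
for name, (samples, value_now) in cases.items():
    print(f"{name:9s} mean of {n} samples: {samples.mean():7.3f}   "
          f"'true' value now: {value_now:7.3f}")
```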
You are a silly, silly man. Luckily you are anonymous, so you don’t have to live with the reputation of your foolishness.
Frank K. – “Please look at Figure 3 and let me know if you think the correlation coefficients computed for the global surface temperature anomalies are particularly good (especially at high latitudes…). Is a correlation coefficient of 0.5 considered “good”? 0.6? 0.7? Does it matter?”
The correlations are quite strong, and I will note that (a) variances from the correlation are both positive and negative (hence evening out variances) and (b) most stations are much less than 1200km from each other. And hence the correlation is considerably higher than 0.5 for the vast majority of stations.
This correlation distance is mainly an issue at the poles, where there are fewer stations. GISS extrapolates polar values from near-polar stations (which seems reasonable, given the polar amplification of any temperature change), while HadCRUT3 extrapolates from the global average anomaly. HadCRUT4, which will be released shortly, looks to have more Siberian and near-polar stations – and shows greater warming as a result of including more relevant data.
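The Hansen-style check Frank asks about is easy to state in code: take the monthly anomaly series from two stations and compute their Pearson correlation coefficient, repeating over pairs at various separations. The series below are synthetic (a shared regional signal plus independent local noise), intended only to show the calculation, not to claim any particular value of r.

```python
import numpy as np

rng = np.random.default_rng(5)
n_months = 600

# Shared regional anomaly signal plus station-specific local noise.
regional = rng.normal(0.0, 1.0, n_months)

def synthetic_station(local_noise_sd):
    """One station's anomaly series: regional signal plus its own local noise."""
    return regional + rng.normal(0.0, local_noise_sd, n_months)

nearby_a, nearby_b = synthetic_station(0.4), synthetic_station(0.4)     # small local noise
distant_a, distant_b = synthetic_station(1.5), synthetic_station(1.5)   # local noise dominates

def pearson_r(x, y):
    return np.corrcoef(x, y)[0, 1]

print("r, 'nearby' pair :", round(pearson_r(nearby_a, nearby_b), 2))
print("r, 'distant' pair:", round(pearson_r(distant_a, distant_b), 2))
```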
Well, I think Nick Stokes said it best, about following a set of rules to get a consistent result.
Such is GISSTemp, or HADcruD.
They are good representations of GISSTemp and HADcruD. It is the temperature of the Earth that they have nothing to do with.
I would differ with Professor Brown, in that the only “global Temperature” it makes any sense to try and observe is the “surface Temperature”. That would be the ocean water surface Temperature for around 73% of the Total area, and land surface Temperature for the 27% or so that isn’t ocean.
Those are the surfaces which actually emit the primary surface LWIR radiation, or directly heat the atmosphere above via conduction or other thermal processes.
And the only way to accurately make such measurements is to comply with the Nyquist Sampling Theorem, which applies to ALL sampled data systems. Neither GISSTemp nor HADcruD does that, so neither correctly gathers global temperature data. But as Nick implies, they give very consistent values for GISSTemp or HADcruD, whatever the purpose of those observations is.
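A one-dimensional sketch of the aliasing that the Nyquist criterion guards against, with invented numbers: sample a short-wavelength spatial temperature variation too coarsely and the unresolved structure folds into the samples as a spurious broad pattern.

```python
import numpy as np

# A 10 km-wavelength temperature variation along a transect (amplitude 1.5 C)
# superposed on a uniform 15 C background.  Values are invented.
def temp_at(x_km):
    return 15.0 + 1.5 * np.sin(2 * np.pi * x_km / 10.0)

# Stations every 9 km: below the Nyquist rate (resolving a 10 km wave needs
# a station spacing of less than 5 km).
stations = np.arange(0.0, 90.0, 9.0)
for x, t in zip(stations, temp_at(stations)):
    print(f"station at {x:4.0f} km reads {t:6.2f} C")
# The 10 km ripple shows up in these samples as one slow ~90 km swing --
# a spurious 'regional' warm/cool pattern created purely by undersampling.
```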
KR says:
“…we are (according to the satellite temps as well) seeing considerable warming, considerable changes from what we have seen previously in terms of temperature. And that means changes – changes to crop productivity, sea temperature, rainfall, Hadley cell precipitation, snowpack for water supplies, on and on.”
Yadda, yadda. As a matter of fact, agricultural productivity is increasing in lock step with increasing CO2. Global warming is entirely beneficial, and causes more precipitation – a good thing, no?
Your “on and on” and “considerable” are emotional, unquantified terms that have no place in scientific discussions. The plain fact is that empirical observations support the [unfalsified] hypothesis that CO2 is both harmless and beneficial.
Nothing unusual is occurring. Nothing! Natural climate variability is the same as always. If and when global temperatures begin to accelerate, wake me. Until then, all you’re doing is pseudo-scientific hand waving.
Frank K. – Regarding changes:
Changes will cost $. Larger changes will cost more $$$. In my region we’ve seen (over the last 30 years) a drop in precipitation, a rise in temperature, a shift in growth zones (viable plant species), and average flowering dates moving ~9-10 days earlier.
You think that comes for free?
Curiousgeorge says:
March 4, 2012 at 3:18 pm
Brilliant!
In other news…
“full of [SNIP] up to their eyebrows.”
Oh lord, my mind just quails at the thought of which four letter word is actually being represented by [SNIP].
Dear Moderators, please continue to protect my eyes from the written version of words I think or hear on a daily basis.
You must think you’re saving my soul or something.
Pass.
Bennett Dawson
[REPLY: We didn’t do it for you. This is a family blog. Think of the children! -REP]
Sigh… – OK, ignore what I said about any “consequences”. Opinions will differ there.
Back to the post:
Dr. Brown is conflating absolute temperatures with anomalies (strawman argument), which include baseline offsets as part of the data. Anomalies have extremely strong correlations over considerable distances, making them quite measurable, and reducing uncertainties with more data to the extent that a 2-standard-deviation uncertainty of 0.05C is supported by the numbers. RSS and UAH data give much the same trends as the surface records, indicating mutual support for the observed trends.
Dr. Brown has talked quite a lot, but it appears (IMO) nothing but a distraction from the observed trends. His claims of uncertainty are not supported by the data.