The pause continues…
Dr. Roy Spencer writes:
The Version 5.6 global average lower tropospheric temperature (LT) anomaly for August, 2013 is +0.16 deg. C.
The global, hemispheric, and tropical LT anomalies from the 30-year (1981-2010) average for the last 20 months are:
YR MON GLOBAL NH SH TROPICS
2012 1 -0.145 -0.088 -0.203 -0.245
2012 2 -0.140 -0.016 -0.263 -0.326
2012 3 +0.033 +0.064 +0.002 -0.238
2012 4 +0.230 +0.346 +0.114 -0.251
2012 5 +0.178 +0.338 +0.018 -0.102
2012 6 +0.244 +0.378 +0.111 -0.016
2012 7 +0.149 +0.263 +0.035 +0.146
2012 8 +0.210 +0.195 +0.225 +0.069
2012 9 +0.369 +0.376 +0.361 +0.174
2012 10 +0.367 +0.326 +0.409 +0.155
2012 11 +0.305 +0.319 +0.292 +0.209
2012 12 +0.229 +0.153 +0.305 +0.199
2013 1 +0.496 +0.512 +0.481 +0.387
2013 2 +0.203 +0.372 +0.033 +0.195
2013 3 +0.200 +0.333 +0.067 +0.243
2013 4 +0.114 +0.128 +0.101 +0.165
2013 5 +0.083 +0.180 -0.015 +0.112
2013 6 +0.295 +0.335 +0.255 +0.220
2013 7 +0.173 +0.134 +0.212 +0.074
2013 8 +0.158 +0.107 +0.208 +0.009
Note: In the previous version (v5.5, still provided to NOAA due to contract with NCDC) the temps are slightly cooler, probably due to the uncorrected diurnal drift of NOAA-18. Recall that in v5.6, we include METOP-A and NOAA-19, and since June they are the only two satellites in the v5.6 dataset whereas v5.5 does not include METOP-A and NOAA-19.
From the UAH online press release by Dr. Phillip Gentry:
Global Temperature Report: August 2013
- Global climate trend since Nov. 16, 1978: +0.14 C per decade

August temperatures (preliminary):
- Global composite temp.: +0.16 C (about 0.29 degrees Fahrenheit) above 30-year average for August.
- Northern Hemisphere: +0.11 C (about 0.20 degrees Fahrenheit) above 30-year average for August.
- Southern Hemisphere: +0.21 C (about 0.39 degrees Fahrenheit) above 30-year average for August.
- Tropics: +0.01 C (about 0.02 degrees Fahrenheit) above 30-year average for August.
July temperatures (revised):
Global Composite: +0.17 C above 30-year average
Northern Hemisphere: +0.13 C above 30-year average
Southern Hemisphere: +0.21 C above 30-year average
Tropics: +0.07 C above 30-year average
(All temperature anomalies are based on a 30-year average (1981-2010) for the month reported.)
Notes on data released Sept. 10, 2013:
Compared to seasonal norms, in August the coolest area on the globe was southern Greenland, where temperatures in the troposphere were about 1.97 C (about 3.55 degrees F) cooler than normal, said Dr. John Christy, a professor of atmospheric science and director of the Earth System Science Center (ESSC) at The University of Alabama in Huntsville. The warmest area was south of New Zealand in the South Pacific, where tropospheric temperatures were 2.82 C (about 5.1 degrees F) warmer than seasonal norms.
Archived color maps of local temperature anomalies are available on-line at:
As part of an ongoing joint project between UAHuntsville, NOAA and NASA, Christy and Dr. Roy Spencer, an ESSC principal scientist, use data gathered by advanced microwave sounding units on NOAA and NASA satellites to get accurate temperature readings for almost all regions of the Earth. This includes remote desert, ocean and rain forest areas where reliable climate data are not otherwise available.
The satellite-based instruments measure the temperature of the atmosphere from the surface up to an altitude of about eight kilometers above sea level. Once the monthly temperature data is collected and processed, it is placed in a “public” computer file for immediate access by atmospheric scientists in the U.S. and abroad.
Neither Christy nor Spencer receives any research support or funding from oil, coal or industrial companies or organizations, or from any private or special interest groups. All of their climate research funding comes from federal and state grants or contracts.
— 30 —



Richard Barraclough says:
September 11, 2013 at 2:15 am
You can’t really include both months in the count, unless your analysis goes from the beginning of July to the end of August. I mean, supposing you were comparing July 2013 (0.173) with August 2013 (0.158), you’d say there’d been a drop of 0.015 over 1 month (not 2 months).
Taking your second sentence first, yes, if I were to compare July to August, then I would assume it was 0.173 on July 15 and 0.158 on August 15 and I would assume the drop was over 1 month.
As for counting both end months, I cannot demonstrate that on WFT with UAH since it uses version 5.5, but if we look at RSS, it is flat from November 1, 1996 to August 31, 2013. So both November and August are counted and this gives 202 months. You can check this for yourself at this site:
http://www.woodfortrees.org/plot/rss/from:1996.8/plot/rss/from:1996.8/trend
Then click on “Raw data”. Note that the first 2 months are 1996.83 and 1996.92, which are November and December. At the bottom, note that there are 8 months starting with 2013. It also says “#Number of samples: 202” and gives “slope = -5.24568e-05 per year” for those 202 months.
By the way, from October 1 the slope is 0.000142636 per year. So if we assume a smooth transition from October 1 to November 1, we could say the zero mark is at October 23, giving 202.27 months to August 31.
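The month count and the October zero-crossing estimate above can be sketched in a few lines. The two slopes are the values quoted from WoodForTrees; the helper function and the assumption of a smooth slope transition across the month are mine.

```python
# A sketch of the month count and zero-crossing interpolation described above.

def months_inclusive(start_year, start_month, end_year, end_month):
    """Count months from the start month through the end month, inclusive."""
    return (end_year - start_year) * 12 + (end_month - start_month) + 1

n = months_inclusive(1996, 11, 2013, 8)   # November 1996 .. August 2013

slope_oct = 0.000142636     # trend starting October 1, 1996 (per year)
slope_nov = -5.24568e-05    # trend starting November 1, 1996 (per year)

# Linear interpolation: the fraction of October after which the slope
# would cross zero, assuming it changes smoothly over the month.
frac = slope_oct / (slope_oct - slope_nov)
day = round(frac * 31)      # October has 31 days

print(n, day)               # 202 months; zero crossing near October 23
```

Running this reproduces both numbers in the comment: 202 inclusive months, and a zero crossing near October 23.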
I work with the LibreOffice spreadsheet program. It has a powerful function where you can convert between Celsius and Kelvin anomalies by simply changing the parameter designation at the top of a column of data by using the F2 key.
OK, OK, now you’re just being mean. Everybody knows the degree size for Celsius and Kelvin is the same. Or at least, I profoundly hope so.
I think he meant convert from anomalies (Celsius and/or Kelvin) into degrees absolute, and that’s how I framed my reply. If he seriously meant change the NAME of the units, well, gee…
rgb
Of course that was viewing the upper error of the 1880s temps with the lower error for the 21st century temps (i.e., maximising the spread), but it does accord with your assertion that when one properly takes into account the margins of error, one cannot say definitively whether it has warmed since the 1880s, although it is likely that it has.
in a maximum likelihood fit. This complicates any unbiased estimate of the probability of getting the temperature record we have if there were, in point of fact, no actual warming in the actual (unknown) global average temperature that all of the estimates are intended to estimate, given that: a) the error bars are, at best, some reasonably accurate measure of probable error in the central number, ideally a gaussian/normal estimator such as a standard deviation; b) the averages from 2012 and 2013 are not really independent samples, as the temperature of 2012 is EXPECTED to be comparatively close to that of 2013; and c) there are systematic biases and errors in the computations of the annual temperatures.
I am not sure whether the Met Office have taken down that chart since it is not particularly helpful for the message they are overly keen to promote.
Ordinarily this is the sort of thing the R-value of a trended fit is supposed to help with, although the issue is seriously complicated here because of the huge error bars in most of the past even in the thermometric era (and believe me, they are larger still whatever the assertions of the authors in the proxy era preceding that) and the fact that there is substantial autocorrelation so that the measurements are not even close to being independent. To be picky, even if you have 143 years of annualized numbers in the range from 1870 to 2013, you don’t have anything like 143 degrees of freedom in the computation of
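The degrees-of-freedom point can be made concrete with a standard rule of thumb that is not in the comment itself: for an AR(1)-like series with lag-1 autocorrelation r1, the effective number of independent samples shrinks to roughly N(1 - r1)/(1 + r1).

```python
# Illustrative only: effective sample size under AR(1) autocorrelation.

def lag1_autocorr(x):
    """Lag-1 autocorrelation of a sequence."""
    n = len(x)
    mean = sum(x) / n
    num = sum((x[i] - mean) * (x[i + 1] - mean) for i in range(n - 1))
    den = sum((v - mean) ** 2 for v in x)
    return num / den

def effective_n(x):
    """Approximate effective sample size: N * (1 - r1) / (1 + r1)."""
    r1 = lag1_autocorr(x)
    return len(x) * (1 - r1) / (1 + r1)

# A strongly trending 143-point series (a stand-in for 143 annualized
# temperatures) has r1 near 1, so far fewer than 143 effective samples:
ramp = list(range(143))
print(lag1_autocorr(ramp), effective_n(ramp))
```

For the ramp, r1 comes out near 0.98 and the effective sample size collapses to a handful of points, which is the sense in which 143 annualized numbers are "nothing like 143 degrees of freedom".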
For example, if we postulate — as is I think generally accepted to be the fact — that the thermometric record is increasingly corrupted by an urban heat island effect (UHI) covering more and more of the areas sampled as populations increase and cities grow and more roads are built and more trees are cut, then it is straightforward to produce 100% of the warming observed by simply underestimating the UHI effect in the station data and not reflecting the uncertainty in the UHI correction in the error estimate assigned to the later points. There are other systematic errors reflected in the older data — sea surface temperature used to be estimated by e.g. throwing a bucket overboard astern and measuring the temperature of the water brought back up, on an unbelievably inadequate and ever-shifting grid tied tightly to the usual shipping lanes. Some of these we can identify and try to fix, others we have no hope of fixing. Given that the ocean alone accounts for 70% of the Earth’s surface, the uncertainty in historical SST data is enormous and profound and most of it we don’t even know how to correct (although that won’t stop people from trying).
Additional systematic errors can come from something as simple as when, and how, people recorded daily temperatures, where their thermometer was physically located at the time it was recorded, who MADE their thermometer (in particular, how accurate and precise it was), whether or not the recorder was sober and industrious or an alcoholic who often forgot to check at all and just filled in numbers that sounded reasonable to get paid.
And finally, there are the really, really big holes, even on land. We have heavily oversampled temperature measurements near older cities, but Antarctica (a whole continent, mind you) was virtually unknown and unexplored, let alone systematically sampled for temperature, well into the 20th century. One could actually make a case that its temperatures aren't adequately sampled today. Ditto for much of the Sahara, much of Australia, large chunks of the Americas, large chunks of China, Siberia, and the rest of Asia. Between the oceans and these large holes with no systematic measurements at all, one has to interpolate, infill, and commit various other data sins that involve replacing actual data with guesses just so you can apply a consistent algorithm over time. Again, all one really needs to create illusory warming is to select perfectly “reasonable” infilling/interpolating/extrapolating algorithms that erase local trends and replace them with, effectively, the local trends of someplace else, probably someplace with an uncontrolled UHI correction that increases over time (because the oldest stations in your record are those near cities, or situated in the countryside of populated regions that became a lot more populated over a century, of course).
Taking all of this into account, yeah, the world probably has warmed, but has it warmed 0.5 C over 143 years? 1 C over 143 years? 1.5 C over 143 years? That is really difficult to say. Interestingly, there are places in the world where UHI is not much of an issue (they were wild in 1870 and are wild today) that show a lot less trended warming than the average. The places that show the most warming almost invariably have a serious UHI problem. The US record is rife with UHI corruption, as Anthony has pointed out in a formal paper, but his paper just scratches the surface. One could literally look at the entire record station by station, and it is almost impossible to tell HOW to correct a station with poor siting.
At Raleigh-Durham airport — airports are a primary source of “official” temperature readings for an area — the weather stations aren’t placed in such a way that they obtain optimally accurate temperature readings for the local CLIMATE, they are placed to give accurate readings of the temperatures over the runways, as those are what planes need to know as they land. They tend to be sitting in open fields, right next to runways, over asphalt, in places where a simple change in the direction of the breeze from over the grass to over the nearby runway can cause temperatures to spike during the day, while those same dry runways can retain heat for much longer than insulating, dew-dampened grass at night. Not to mention that they tend to be located in the middle of a tangle of high-density highway traffic (lots of roadway and concrete) and are constantly bathed in CO_2 and water vapor given off by jets taking off and landing and wafting overhead from the nearby road traffic.
In Durham, where I live (still almost in the city), temperatures are consistently 1-2 C cooler than at the airport, especially the high temperatures. The temperatures where I live are consistently as much as another degree warmer than a place REALLY out in the country another five miles away, far from any expressway or urban center larger than a cluster of houses on a small road. What is the UHI correction for RDU airport? It should arguably be as much as 3 C, probably at least 1.5 to 2 C, but if one subtracted that much from its readings everything changes: record temperatures disappear, and much of the supposed warming of the area disappears.
And then there is the entire “anomaly” issue. They don't actually use RDU's temperature as measured in degrees K to determine the global anomaly per se anyway. How can they? They haven't got any good measurements even for that location that reliably stretch back to 1870; the land was all forest or tobacco field back then, and Raleigh and Durham both were barely what we'd consider large towns now. Instead they turn all of the local readings into the difference between what is read and a presumed “normal average” temperature for the location, and then average this anomaly spatially, with infilling etc., with much statistical juju magic.
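A minimal sketch of the anomaly approach just described. Station names, readings, and baselines below are made up for illustration; real products add area weighting, infilling, and homogenization on top of this.

```python
# Hypothetical absolute August readings and 1981-2010 August climatologies, deg C.
readings = {"station_A": 26.4, "station_B": 18.9, "station_C": 22.1}
baseline = {"station_A": 26.1, "station_B": 18.5, "station_C": 22.3}

# Each reading becomes a departure from that station's own climatology...
anomalies = {s: readings[s] - baseline[s] for s in readings}

# ...and only then is averaged (unweighted here; real products area-weight).
mean_anomaly = sum(anomalies.values()) / len(anomalies)
print({s: round(a, 2) for s, a in anomalies.items()}, round(mean_anomaly, 3))
```

The point of the construction is that no station ever needs an accurate absolute temperature, only a stable departure from its own local baseline, which is exactly where the choice of baseline and infilling method lets statistical juju creep in.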
In the end, the one thing we can say with near certainty is that the corrections that are constantly made to the global temperature estimates over the last 143 years are biased. We can say that because, no matter what you think about methodology, the likelihood of making a systematic error that results in more warming over that stretch and making a systematic error that results in more cooling over that stretch ought to be about the same. On average, one would expect that altering the computational algorithm used to estimate the temperatures, as is done between at least the major releases of e.g. GISS and HADCRUT (and now BEST), would result in increased linear trends as often as decreased linear trends. However, they have all resulted in increased linear trends! They have cooled the past, or warmed the present, or both; they never seem to warm the past, cool the present, or both. Given that at this point there are many distinct instances of this occurring, one can actually formulate a probability of getting all warming changes given the null hypothesis of no human bias in the selection of the changes themselves, and reject the null hypothesis with a p-value well below 0.05, probably well below 0.01. In other words it is nearly certain that some fraction of the warming in the “official” temperature databases is not only an artifact, it is a deliberate artifact, one made by biased human choices in what changes to implement. I have no doubt that every change can be “justified” a posteriori somehow, but that does not remove the statistical evidence for bias.
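The argument in the last few sentences is a sign test, and the arithmetic is easy to write out. The comment gives no count of revisions, so the values of k below are purely illustrative.

```python
# Sign test: if each methodology revision were equally likely to nudge the
# trend either way, the chance that all k revisions nudge it the same,
# pre-chosen way (here, toward more warming) is (1/2)**k, one-sided.

def p_all_one_way(k):
    """One-sided p-value for k of k same-direction outcomes under p = 0.5."""
    return 0.5 ** k

for k in (5, 6, 7):
    print(k, p_all_one_way(k))
# 5 same-direction revisions already give p ~ 0.031 (< 0.05);
# 7 give p ~ 0.0078 (< 0.01).
```

So a fairly modest number of all-warming revisions is enough to reject the no-bias null at the thresholds quoted above, which is the shape of the claim being made.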
Once again, to fix things at this point is probably impossible. We cannot go back in time and measure the temperature in the middle of Antarctica in 1887, and any attempt to infer it from ice cores or other proxies has error bars so large that the estimate is all but useless (and easily corrupted with human bias in its analysis). To even fix the UHI problem, which is in principle fixable, would require an enormous investment: the development of a set of serious and inviolable criteria for weather station siting, regular inspection and corroboration/testing by qualified personnel, the development (almost impossible) of a way of correctly estimating the magnitude of the UHI, and the elimination of anomaly-based measurements and analysis altogether. The first moment of a distribution is almost certainly going to be more accurately known than the second moment (the actual mean is more accurate than the deviation from the mean), and even NASA admits that we do not know the actual mean to better than a range of some 2 C even today.
That is an enormous range. Honestly, it is big enough that it makes it quite possible that proxy-derived temperature estimates may be more accurate than the thermometric estimates (and generally may be a bit more difficult to fiddle with, although as dendroclimatologists everywhere persist in demonstrating, far from impossible).
It’s sad, really, and is the reason that I only really trust the satellite record of the last 33 years, with weak extension and increasing error bars back to maybe 1950. Before that, I think we’re dealing with anecdotal evidence, not the real thing. Suggestive, never conclusive. I’m not sure I trust ARGO even now, but in ten or twenty more years I’m guessing it will be beyond fiddling too. Even these measurements have substantial errors, but it is a lot more difficult to introduce human biases into their results.
We are thus at the infancy of real climate science. We have reasonable evidence that the climate changes over geologic and human historic time. We have anecdotal evidence of the changes, and can probably guess at how the temperatures have varied within a degree or two over decadal to century timescales and beyond. We have at least some things that are simply not susceptible to UHI bias — the LTT estimates, ARGO — that are at least reasonably independent of each other as well. It is now much more difficult to introduce a hidden bias into the future climate record. Given a few more decades of reliable data, especially data on both warming and cooling trends instead of monotonic warming, especially data that embraces the many factors that might influence the climate that make the climate more than a one-knob projective pony, we might be able to put together a credible semiempirical theory of climate change that actually works to predict at least a few decades out. I’m not optimistic about being able to do better than that in less time than a full century of observation and refinement.
This is arguably the hardest problem in the world to solve. Well, except for predicting the stock market. That’s harder. But really, really hard. It really isn’t that surprising that we haven’t solved it yet. It would actually be more than a bit surprising if we had.
rgb
kadaka (KD Knoebel) says:
September 10, 2013 at 11:06 pm
++++++++++++
Thank you for translating Mosher for me.
Steven Mosher on September 10, 2013 at 12:58 pm says:
True, it is very difficult for you to convert between anomalies in °Celsius to anomalies in Kelvin.
++++++++
Was Mosher being sarcastic? I cannot tell. Delta C is exactly the same as delta K… so anomalies would be identical.
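kadaka's point is easy to verify numerically: Kelvin is Celsius plus a constant offset, so temperature differences (and hence anomalies) are the same number in both scales. The 15.0 C absolute baseline below is hypothetical, used only to turn anomalies into absolute temperatures for the check; the two anomalies are from the table above.

```python
# Delta C equals delta K, because the conversion is a constant offset.

def c_to_k(t_c):
    """Convert a Celsius temperature to Kelvin."""
    return t_c + 273.15

jul, aug = 0.173, 0.158      # July and August 2013 anomalies, deg C
base = 15.0                  # hypothetical absolute baseline temperature, deg C

delta_c = (base + aug) - (base + jul)
delta_k = c_to_k(base + aug) - c_to_k(base + jul)
print(delta_c, delta_k)      # the same -0.015 drop in either scale
```

So "converting" anomalies between Celsius and Kelvin is the identity operation, which is presumably why the remark read as sarcasm.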
***
rgbatduke says:
September 11, 2013 at 2:09 pm
***
I imagine Mosher would be squirming/fidgeting if he reads that revealing reply. According to him, it’s all already taken care of.
Dear moderator, I am sorry for posting the same OT comment about Watts et al. (2012) on two separate threads, but I was interested in what happened to the paper and hoped to increase the chances that somebody in the know would read my comments and answer. I did not know whom to ask or how. I sent an email to Anthony earlier inquiring about the paper, but he never answered (I am not criticizing that, just saying).
[Reply: You are forgiven, my child. Carry on… ~mod.]
Mosh can not respond right now, please leave a message.
(freaking answering machine)
So, dear moderator, what happened to the paper? Do you have any information from Anthony?
[it was entirely reworked, which took almost a year, and is being submitted – mod]