Guest Post by Willis Eschenbach
Stefan Rahmstorf and Dim Coumou have published a paper (paywalled, of course) in one of the best-known vanity presses of science, PNAS (Proceedings of the National Alarmists of Science). I think this is another PNAS study that appears to be peer-reviewed, but is actually only “edited”, whatever that means. It has been discussed at some length on the blogs, not always favorably.
Their paper is called “Increase of extreme events in a warming world” (R&C2011). They have developed a mathematical relationship to show that if there is a warming trend in a temperature record, the most recent years will likely be the warmest years. … …
… yeah, yeah, I know … no surprise, right. Seemed like that to me, too, the latest release from the Department of the Blindingly Obvious.
In any case, their test case is the July data for Moscow. Curiously, they use the unadjusted Moscow data, not the adjusted data usually used. Figure 1 shows a graph of the unadjusted and adjusted July temperature in Moscow for the last 130 years, along with the adjustment.
Figure 1. Adjusted and Unadjusted GISS temperatures for July in Moscow. Green line shows the amount of the adjustment (right scale). Adjustment shows the effect of the two-legged GISS method for removing UHI.
Generally the GISS adjustment kinda makes sense, in that its effect is to adjust for a known heat island phenomenon in and around Moscow. The hook at the end is odd, but it's the GISS computer algorithm and they're sticking with it; in this instance, perhaps just by coincidence, the GISS adjustment is for once not unreasonable.
So … why did R&C2011 use the unadjusted GISS rather than the adjusted GISS data?
R&C discuss this question over at RealClimate. They put up a graph there that I agree with, showing a problem with the method GISS uses to adjust the temperature for UHI. The problem is that the UHI effect is larger in the winter, but the GISS adjustment is applied uniformly to every month. I was able to replicate their graph exactly from the GISS data for Moscow; here is their figure, and mine, which matches the R&C2011 results exactly.
Figure 2. Upper panel is Rahmstorf and Coumou’s Figure 2 from their discussion of the paper at RealClimate (RC). Lower panel shows my emulation, using GISS data downloaded from the web. I have given the figures in °/century, rather than per year as in the Rahmstorf data, for comparison with the Rahmstorf quote below. End of data is 2010.
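For what it’s worth, the per-month comparison in Figure 2 is straightforward to reproduce. Here’s a minimal sketch (the function name and the input layout are mine, not GISS’s): fit a least-squares trend to each calendar month, once for the adjusted and once for the unadjusted series, then difference the two.

```python
import numpy as np

def monthly_trends(years, temps_by_month):
    """Least-squares trend (degC/century) for each calendar month.

    temps_by_month: array of shape (n_years, 12), one column per month,
    NaN for missing values -- roughly the layout of a GISS station file."""
    trends = []
    for m in range(12):
        y = temps_by_month[:, m]
        ok = ~np.isnan(y)                    # skip missing months
        slope_per_year = np.polyfit(np.asarray(years)[ok], y[ok], 1)[0]
        trends.append(slope_per_year * 100.0)  # per year -> per century
    return np.array(trends)
```

Running this on the adjusted and the unadjusted columns and subtracting one result from the other gives the red “adjustment” line; the point of Figure 2 is that the difference comes out flat across all twelve months.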
Here’s the odd part. At RC, Rahmstorf says of the graph:
But the graph shows some further interesting things. Winter warming in the unadjusted data is as large as 4.1ºC over the past 130 years, summer warming about 1.7ºC – both much larger than global mean warming. Now look at the difference between adjusted and unadjusted data (shown by the red line): it is exactly the same for every month! That means: the urban heat island adjustment is not computed for each month separately but just applied in annual average, and it is a whopping 1.8ºC downward adjustment.
It mystified me. Where in the graph was the 1.8°C adjustment? The red line shows a 1.3°C adjustment. It took me a while to realize what they’d done. The graph shows trend per century, but R&C are talking about the change over the full 130 years. That’s why the 1.8°C is “whopping”: it’s not per century like the graphs. But that’s just the usual fast shuffle I’ve learned to expect from these guys, nothing substantial, just inflating their numbers for effect.
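The unit conversion is trivial, but worth making explicit (my arithmetic, not theirs):

```python
# The graphs are in degC per century; R&C quote the change over the full
# 130-year record. Converting their number back to the graph's units:
quoted_over_130_years = 1.8                       # degC, R&C's "whopping" adjustment
per_century = quoted_over_130_years * 100 / 130   # same adjustment, per century
print(round(per_century, 2))                      # ~1.38, close to the ~1.3 red line
```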
Also, he says that Moscow warming is “much larger than global mean warming,” as though that proved something. I cracked up when I read that. Dear R&C: about half of the individual station temperature trends worldwide are larger than the global mean warming trend … duh …
Then I turned to their paper. Here, you do have to watch the pea under the shell very carefully, these guys will fool you. In the paper, R&C don’t use the trend measures discussed at RC. They don’t use the per-century trend of the entire dataset they show in the graph in Figure 2 of the discussion at RC. Instead, they use another measure of the trend entirely. Here’s their text from the paper:
Next we apply the analysis to the mean July temperatures at Moscow weather station (Fig. 1E), for which the linear trend over the past 100 y is 1.8 °C and the interannual variability is 1.7 °C.
I really don’t like that. That’s picking an arbitrary length of trend, a hundred years. There’s a tendency to think that over a period as long as a century the trend doesn’t change much. But that’s not the case. Figure 3 shows the century-long trailing trend for the Moscow July temperature.
Figure 3. Trailing 100-year temperature trend, July temperatures, Moscow. Trend varies greatly even year to year. Trend 1911-2010 = 1.83°C/century. Trend 1910-2009 = 1.40°C/century.
This makes the choice of the particular trend they used (1.8°C/century, 1911-2010) quite arbitrary. Why 100 years? Why not 80 years, or 120 years? And even if we choose 100 years, why that particular hundred years? Indeed, the 100-year trend ending the previous year is only 1.4°C/century, not 1.8.
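A trailing-trend series like Figure 3 is easy to compute; here’s a minimal sketch (function name mine) that slides a 100-year window along the record and fits a line in each window:

```python
import numpy as np

def trailing_trend(years, temps, window=100):
    """Linear trend (degC/century) over the `window` years ending at each year."""
    years = np.asarray(years, dtype=float)
    temps = np.asarray(temps, dtype=float)
    out = {}
    for end in range(window - 1, len(years)):
        w = slice(end - window + 1, end + 1)
        out[int(years[end])] = np.polyfit(years[w], temps[w], 1)[0] * 100.0
    return out
```

On the Moscow July series this gives one trend per end year, which is exactly why the choice of end point matters: the value for 2010 and the value for 2009 are different numbers.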
I agree with R&C that the GISS adjustments distort the picture improperly for the monthly trends. This is actually the only novel part of the R&C paper. It is an interesting finding, one I had not considered. However, the proper way to resolve the problem with the temperature adjustment is not to throw out the adjustment and use unadjusted data, particularly with an arbitrary trend length. The way to resolve the issue is to figure out a way to adjust the data properly.
As a first cut, the obvious way to distribute it is proportionally, depending on the size of the warming. That should give an answer reasonably close to reality. Here is the same adjustment (1.3°/century) distributed proportionally across the months based on the size of each month’s warming trend.
Figure 4. Proportionally adjusted monthly trends for Moscow. Average adjustment to trend is the same as in Figure 2.
If you were going to use a trend for July, the trend shown in green in Figure 4 would be a more reasonable trend than the unadjusted value.
In any case, here’s the problem. They are using a July trend of 1.8°C/century, which is the 1911-2010 trend. The unadjusted July trend, calculated over the entire period of record as shown in their Figure 2, is 1.1°C/century. The proportionally adjusted July trend for the entire period of record is 0.4°C/century (green, Figure 4).
This illustrates the arbitrary nature of their entire process. Based on choices made with no ex ante criteria, they’ve picked one of many possible linear trend intervals and ending points. I find it … mmm … coincidental that their mathematical procedure works so well with that particular trend (1911-2010, 1.8°C/century). Would it not give a totally different answer if they used the previous year’s trend (1910-2009, 1.40°C/century)? Surely the answer would be different if they used the proportionally adjusted values shown in Figure 4? I find their arbitrary choice indefensible.
Finally, although they tried to stay away from the “anthropogenic warming made me do it” explanation, they couldn’t quite give it up entirely. To their credit, the abstract says nothing about humans. But they make three statements of attribution in the body, viz:
Our analysis of how the expected number of extremes is linked to climate trends does not say anything about the physical causes of the trend. However, the post-1980 warming in Moscow coincides with the bulk of the global-mean warming of the past 100 y, of which approximately 0.5 °C occurred over the past three decades (Fig. 1D), most of which the Intergovernmental Panel on Climate Change has attributed to anthropogenic greenhouse gas emissions [IPCC AR4].
Moscow warming “coincides” with warming which is attributed to humans.
The fact that observed warming in western Russia is over twice the global-mean warming is consistent with observations from other continental interior areas as well as with model predictions for western Russia under greenhouse gas scenarios [IPCC AR4]. Hence, we conclude that the warming trend that has multiplied the likelihood of a new heat record in Moscow is probably largely anthropogenic: a smaller part due to the Moscow urban heat island, a larger part due to greenhouse warming.
Here, the warming is fully partitioned. Part is from the UHI, and a “larger part” is due to greenhouse warming. Nothing is left over for natural variation.
Our statistical method does not consider the causes of climatic trends, but given the strong evidence that most of the warming of the past fifty years is anthropogenic [IPCC AR4], most of the recent extremes in monthly or annual temperature data would probably not have occurred without human influence on climate.
This last one is classic: “… most of the recent extremes … would probably not have occurred without human influence on climate”. I have to say I’m highly allergic to this kind of vague handwaving. It has no place in a scientific paper. “Most” of the extremes? How many, and which ones? “Probably would not have occurred” … what is the probability, 55%? 95%? And “human influence on climate”? What influence, where? That is suitable for a children’s book, not a science paper.
In addition, I find these citations which simply refer the reader to the entire IPCC magnum opus to be totally lacking in scientific rigor. It reminds me of a fire-and-brimstone preacher of my youth in a tent revival, holding up the Bible and thumping it with his fist and saying “The answer’s in here”! Well, perhaps the answer is in there … but where? Waving the whole book means nothing. Anyone who does that kind of IPCC thumping without citing chapter and verse is a scientific poseur. R&C don’t even bother to specify Working Group 1, 2, or 3. We’re supposed to figure out where, in the several thousands of pages of the UN IPCC AR4, support for their claim is to be found. That is not a scientific citation in any sense of the word, and no reviewer should countenance such ludicrous lack of specificity. Oh, right … this is not peer-reviewed … well, no editor should allow it either.
This seems like the most modern of weapons, a stealth paper. It doesn’t say anything about humans in the abstract. In fact, R&C state quite correctly that their work does not “consider the causes of climatic trends”.
But gosh, despite that, the IPCC says Moscow is “consistent with model predictions”, so even though they don’t consider causes, R&C will consider causes … it’s humans’ fault, case closed.
Hey, here’s an idea for R&C. If your “statistical method does not consider the causes of climatic trends”, then don’t consider the causes of climatic trends. That’s stealth alarmism, not science.
In any case, following the trail of breadcrumbs, here’s a different look at the unadjusted Moscow July data:
Figure 5. Moscow temperature trends, split into pre- and post-1948 trends.
I bring this up, with the trend split at 1948, because the Moscow weather station has its own Wikipedia page. Wiki says that the station was established in 1948. Here’s what the station looks like:
Figure 6. Views looking across the Moscow weather station in all eight compass directions.
Of interest is the ring of trees which almost completely surrounds the weather station. This will have had a warming effect as the trees grew up. I can find no other metadata; I’m sure the readers can supply more. But the trees look like they could have been planted after the Great Patriotic War. Who knows?
I bring this last issue up, not to come to any conclusion about Moscow or the validity of the adjustments, but to emphasize the fragmented and complex nature of most long-term temperature records. The fact that we can take a 100-year trend of the Moscow data doesn’t mean that there is any meaning in that trend. The effects of a ring of slow-growing trees around the site, and a city behind the trees, plus a station move, make any measurements of the long-term Moscow trend speculative at best.
Regards to everyone,
w.
John B says:
October 28, 2011 at 3:00 pm
My thanks for your answer, John.
To refresh folks, number three was:
I had supported number one:
The paper actually says (emphasis mine):
So my claim was correct. They had shown that in warming times, there are likely to be more records set in the recent past.
OK, so that one is settled. Let’s move to the question of whether their model is adequate. For their Monte Carlo analysis they used
My question is, do you think that temperature datasets can be well represented by that model, a trend plus white (uncorrelated) noise?
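For concreteness, that model — a linear trend plus Gaussian white noise — can be sketched like this, counting records Monte Carlo style (my code and function name, not theirs):

```python
import numpy as np

def expected_records(n_years, trend_per_year, sigma, last_k=10,
                     n_sims=2000, seed=0):
    """Monte Carlo estimate of the mean number of new record highs set in the
    final `last_k` years of a series built as linear trend + Gaussian white
    noise -- a sketch of the R&C model, not their actual code."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_sims):
        x = trend_per_year * np.arange(n_years) + rng.normal(0.0, sigma, n_years)
        prev_max = np.maximum.accumulate(x)[:-1]        # best year so far
        is_record = np.concatenate(([True], x[1:] > prev_max))
        total += is_record[-last_k:].sum()
    return total / n_sims
```

With Moscow-ish numbers (0.018°C/yr trend, 1.7°C noise, 130 years) the trended series sets noticeably more recent records than a stationary one, which is the R&C result in miniature. The question is whether the white-noise part is adequate.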
Me, I’d never make that assumption. Generally, temperature datasets have a strange structure. They are at least passably represented by something like an ARMA(1,1) model, with one lag each. The AR in “ARMA” means “auto-regressive”, a measure of how much today’s temperature influences tomorrow’s temperature. “MA” is “moving average”, a measure of how much tomorrow’s temperature is affected by the recent average temperature. Usually, these have coefficients on the order of [0.85, -0.31]. The high AR value means that tomorrow’s temperature, as common sense suggests, depends in part on today’s temperature. Counterintuitively, however, tomorrow’s temperature depends inversely on the moving average. It’s interesting, and I’m not sure what the negative sign does to the final output or what it means.
The main problem with that ARMA representation is that nature likes wild cards, it has more outliers than a typical ARMA structure.
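To illustrate the difference, here’s a minimal ARMA(1,1) simulation using the rough coefficients quoted above, [0.85, -0.31], with a lag-1 autocorrelation check (a sketch only; real temperature records also have the fat tails just mentioned, which this doesn’t capture):

```python
import numpy as np

def simulate_arma11(n, ar=0.85, ma=-0.31, seed=42):
    """ARMA(1,1): x[t] = ar*x[t-1] + e[t] + ma*e[t-1], with e ~ N(0,1)."""
    rng = np.random.default_rng(seed)
    e = rng.normal(size=n)
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = ar * x[t - 1] + e[t] + ma * e[t - 1]
    return x

def lag1_autocorr(x):
    """Lag-1 autocorrelation coefficient of a series."""
    x = x - x.mean()
    return float((x[:-1] * x[1:]).sum() / (x * x).sum())
```

With those coefficients the lag-1 autocorrelation comes out around 0.7, while white noise (ar = ma = 0) gives essentially zero — which is why a white-noise Monte Carlo understates the clustering in real records.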
But in any case, when I read that their Monte Carlo analysis used a trend plus white (uncorrelated Gaussian) noise model, I just laughed and put it down.
It’s particularly inappropriate because the autocorrelation in the Moscow record differs greatly between the two parts (pre- and post-1948, see Fig. 5). The first part has almost no autocorrelation (lag-1 coefficient 0.03); in the second part it is much larger (0.3). This is further support for the idea that we are looking at a spliced record. It also shows that while the first part of the Moscow record might be adequately represented by a white noise model, that is not true of the second part. So the record cannot be represented by any single model.
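That pre/post comparison can be sketched like this (my helper, not from the paper; I detrend each half first so the trend itself doesn’t masquerade as autocorrelation):

```python
import numpy as np

def lag1(x):
    """Lag-1 autocorrelation coefficient."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    return float((x[:-1] * x[1:]).sum() / (x * x).sum())

def split_autocorr(years, temps, split_year=1948):
    """Lag-1 autocorrelation of the detrended record before and after split_year."""
    years = np.asarray(years)
    temps = np.asarray(temps, dtype=float)
    halves = (temps[years < split_year], temps[years >= split_year])
    def detrended(y):
        t = np.arange(len(y))
        return y - np.polyval(np.polyfit(t, y, 1), t)
    return tuple(lag1(detrended(h)) for h in halves)
```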
Warm regards,
w.
GaryP says:
October 28, 2011 at 6:43 pm
Thanks, GaryP, and your experience is quite correct. Cool air under a forest rolls downhill.
In this case, it seems more like a field with a band of trees around the perimeter. It also looks fairly level. I haven’t been able to pin it down on Google Earth yet though. If that is the case, what you get is a wind barrier growing up around the site. The trees appear to be far enough away so that shading won’t increase much. But the wind will be decreased.
It’s very important because evaporation varies roughly linearly with wind speed. So if the trees cut the wind in half, they cut the evaporative cooling in half, and local temperatures rise accordingly.
w.
caroza says:
October 28, 2011 at 12:08 pm
The distribution should be skewing right if CAGW is true. How does Monte Carlo help with that? I would argue that CAGW actually requires the distribution to move intact to a new mean, but since some would like to have it both ways (the recent manatee-slaying cold snaps are consistent with warming), it would have to skew right, correct? So why Monte Carlo?
To make up for my total newb comment earlier.
This looks like the monitoring site to me: http://g.co/maps/ka24f
Site location is on the North side of Moscow. Prevailing winds are S,SW. Large tree stand to the south of the site. Looks like a good candidate for UHI but that is for the pro’s to decide.
Caroza, the “trick” mentioned should have been in a new paragraph as it obviously didn’t apply to this post. To repeat, anyone wishing to exaggerate claims of AGW (which is everyone with a grant at stake) uses graphs with disproportionate scales. If they used a graph in kelvins starting at 0, then the effect would hardly be noticeable. Before anyone else jumps down my throat, this comment is tongue in cheek.
I am always suspicious of numbers of deaths ascribed to various agendas supported by their proponents. For instance we are now being told that 100,000 deaths pa in the UK are as a result of alcohol, the same number related to tobacco. Coincidence, I think not. Likewise 56,000 in a city where the maximum temperature in the summer is 26 Celsius.
Willis, I think Stevo has answered you. The paper is not about averages, eg “warmest years”, but about probability of extreme events and records. So the answer to your list of options is “none of the above”.
You wrote:
“Caroza, you are right that I didn’t get into the specifics of the paper.”
That much is clear.
I see the goalpost has now moved to the Gaussian white noise. From the realclimate discussion:
1) “we take the trend line and add random ‘noise’, i.e. random numbers with suitable statistical properties (Gaussian white noise with the same variance as the noise in the data).” i.e. the white noise is generated to have the same mathematical properties as the observed data, and
2) “so we used a non-linear trend line (see Fig. 1 above) together with Monte Carlo simulations. What we found, as shown in Fig. 4 of our paper, is that up to the 1980s, the expected number of records does not deviate much from that of a stationary climate, except for the 1930s.” i.e. the simulated data is a good enough fit for the actual data to have predictive value.
“The paper actually says (emphasis mine):
So my claim was correct. They had shown that in warming times, there’s likely to be more records set in the recent past.”
Your claim was incorrect, and you are trying to show that it was correct by quoting something that shows it was incorrect. You bolded the wrong bit of that sentence. You claimed originally:
Your quote above, again, but with the relevant bit emphasised:
So your claim was incorrect. You have misunderstood the paper, and you don’t seem to be able to understand that you misunderstood it. It said nothing about warm years at all. It discussed extreme events. Do you understand the difference?
I think this just shows how impossible it is to “adjust” the real data to take out specific effects. How do we know our adjustment is correct, when we don’t have a comparison available of what the measurements would have been without the effect?
I had said:
stevo says:
October 29, 2011 at 4:16 am
Gosh, you’re right, stevo, the paper “said nothing about warm years at all”.
But neither did my quote. I talked about “warmest years”. These are also called “record years” or “extreme events”.
Do you understand the difference? My claim, that the “warmest years” (AKA extreme events) would tend to be found in the recent years, is exactly what the paper said.
Call back in when you have understood the paper.
w.
caroza says:
October 29, 2011 at 1:46 am
Caroza, if you think that “averages” are also known as “warmest years”, you haven’t understood a word I said.
“Warmest years” ARE extreme events, and that’s what the paper is about. That’s what “warmest” means, my slow-witted friend, warmer than the rest, AKA record years, extreme events. Warmest years are not “averages” as you seem to believe.
Come back when you understand that. Until then …
Bye,
w.
Willis
I said ‘averages, for example “warmest years”‘, not ‘averages, aka “warmest years”‘. But are you seriously going to try to tell me that annual mean temperature isn’t an average?
An extreme event is a very high (or low) point – i.e. a single data point. (Extreme is defined in terms of the number of standard deviations from the mean.) A record is an extreme event (high or low) which beats all previous events in the dataset studied.
If the data points are monthly average temperatures as in the Moscow data under discussion, then no, the average temperature for a year is not an event, it is an average of the values of events. The only time a warmest year would also be an event would be the case where the data points examined were annual mean temperatures.
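In code, the distinction might look like this (a sketch, taking “extreme” as beyond some number of standard deviations from the mean, and “record” as beating the running maximum, per the definitions above):

```python
import numpy as np

def extremes_and_records(x, n_sigma=3.0):
    """Flag extremes (beyond n_sigma standard deviations of the series mean)
    and records (values exceeding every previous value in the series)."""
    x = np.asarray(x, dtype=float)
    z = (x - x.mean()) / x.std()
    extreme = np.abs(z) > n_sigma
    record = np.concatenate(([True], x[1:] > np.maximum.accumulate(x)[:-1]))
    return extreme, record
```

The first point is trivially a record; a warmest *year* only enters this machinery if the data points themselves are annual means.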
Yes, they looked at global mean temperature as data points as well. But (to get you back on topic): you started your post with the remark Stevo and I both picked up on, went off on a tangent about the author’s observation of contamination of the data from correction for UHI, finished off with some pictures of trees and ended with this summing up: “The effects of a ring of slow-growing trees around the site, and a city behind the trees, plus a station move, make any measurements of the long-term Moscow trend speculative at best.”
I still fail to see what any of this has to do with a paper which is about the distribution of extreme and record events (so yes, perhaps I am slow-witted, although I assure you I am not your friend). Either way, I’m unlikely to agree with you so yes, I think it’s time to draw this to a close.
No, Willis, you did not understand the paper correctly, and you did not report its contents correctly. Perhaps this is because, as you have proudly said yourself, you have no scientific credentials whatsoever.
You claimed
“They have developed a mathematical relationship to show that if there is a warming trend in a temperature record, the most recent years will likely be the warmest years”
Caroza and I have tried to explain to you that this is not what they did, but you lack the humility to accept that you are wrong. They did not need to develop a mathematical relationship for this, because it’s intuitively obvious. What they did was quantify the dependence of the probability of extreme events on the underlying trend. It’s very different. Come back when you have understood that.
stevo says:
October 30, 2011 at 12:59 pm
They claimed to quantify the dependence. Unfortunately, unlike the actual temperature data, they used white noise instead of red noise for their Monte Carlo analysis.
In addition, with every record in the world to choose from, they used a spliced, heteroskedastic record which is known to be affected by UHI. This invalidates any analysis they might have done, as their Monte Carlo analysis certainly didn’t include a spliced record with no autocorrelation in the first part and significant autocorrelation in the second part.
As a result, the only supportable, verifiable outcome of their study is their finding that in a warming time, the most recent data will have an excess of records.
Which is what I said.
Come back when you have understood that, as an acquaintance of mine remarked …
w.
Incorrect again. You misrepresent the paper, and now you attempt to misrepresent what you said about it.
Willis, how exactly did you obtain your proportionally adjusted monthly trends for Moscow, as displayed in Figure 4 (green bars)?
stevo says:
October 30, 2011 at 8:11 pm
Sorry, but that’s content-free. Quote what you disagree with.
w.
Rob Dekker says:
November 3, 2011 at 11:46 pm
Rob, I took the average adjustment (~1.3°/century). Rather than apply it evenly across the board, I allocated it based on the size of the trend.
For a given month, this works out to 1.3 × 12 × (that month’s trend / sum of all months’ trends).
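In other words (a sketch of the same arithmetic; function name mine):

```python
def allocate_adjustment(avg_adjustment, monthly_trends):
    """Spread an average per-month adjustment across the months in proportion
    to each month's warming trend; the mean of the result equals the input."""
    total = avg_adjustment * len(monthly_trends)   # e.g. 1.3 * 12
    trend_sum = sum(monthly_trends)
    return [total * t / trend_sum for t in monthly_trends]
```

So a month with twice the average trend gets twice the average adjustment, and the adjustments sum to the same total as the flat GISS scheme.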
w.
Willis says:
And how exactly did you “allocate” the global average UHI adjustment to each month? The method that you used is important, since it tells us if your UHI adjustment has any basis in reality. So it would be nice if you would explain exactly what you did there for Figure 4.
Also, regarding Figure 1, you plot a green line which supposedly is the “GISS adjusted minus unadjusted” graph, labeled “Adjustment shows the effect of the two-legged GISS method for removing UHI” in the figure caption. Do you have a reference to the publication that explains the “two-legged GISS method” and why it shows such a clean and surreal decadal step function? And while you are at it, can you tell us why the “two-legged GISS method” seems to show a reduced UHI effect after 2000? Did the city of Moscow reduce energy use after 2000? Or is this “two-legged GISS method” more like a “two-arm-waving Eschenbach method”?
Rob Dekker says:
November 4, 2011 at 11:48 pm
I gave an example of the math. If you can’t figure it from there, not sure what else I can say to explain it. Here’s the math again.
If you truly have a question about the math, ask it.
Are you naturally a jerk, Rob, or do you work on it special? What’s with the aggro, did I trip over your ego or something? The GISS method is described somewhere in one of their pubs, and you know what? I’m not looking it up for you. You want to make nasty remarks, and also get me to answer your questions? Sorry, you only get one of those, not both.
It’s a funny method, which assigns a pivot point and calculates a trend on either side of it. Do I know why it shows a reduced effect after 2000? Nope, that’s the mystery of the method. It’s based on nearby stations, and theoretically it adjusts urban stations to match the trend of the nearby rural stations.
If you truly care about the method, I’m sure you can find it. I don’t care if you do, you are far too spiteful for my taste. Anyhow, when you find it, report back so we can know that you were serious about your question and not just being unpleasant for the sake of it. That’s how I found out about the GISS two-legged method, Rob, I went looking for it. If you’re actually interested, I’m sure you can do the same.
w.