Does This Analysis Make My Tropics Look Big?

Guest Post by Willis Eschenbach

There is a new paper in Nature magazine that claims that the tropics are expanding. This would be worrisome because it could push the dry zones further north and south, moving the Saharan aridity into Southern Europe. The paper is called “Recent Northern Hemisphere tropical expansion primarily driven by black carbon and tropospheric ozone”, by Robert Allen et al. (paywalled here, supplementary information here; hereinafter A2012). Their abstract says:

Observational analyses have shown the width of the tropical belt increasing in recent decades as the world has warmed. This expansion is important because it is associated with shifts in large-scale atmospheric circulation and major climate zones. Although recent studies have attributed tropical expansion in the Southern Hemisphere to ozone depletion the drivers of Northern Hemisphere expansion are not well known and the expansion has not so far been reproduced by climate models. Here we use a climate model with detailed aerosol physics to show that increases in heterogeneous warming agents—including black carbon aerosols and tropospheric ozone—are noticeably better than greenhouse gases at driving expansion, and can account for the observed summertime maximum in tropical expansion.

Setting aside the question of their use of a “climate model with detailed aerosol physics”, they use several metrics to measure the width of the tropics—the location of the jet stream (JET), the mean meridional circulation (MMC), the minimum precipitation (PMIN), the cloud cover minimum (CMIN), and the precipitation-evaporation (P-E) balance. Figure 1 shows their observations and model results for how much the tropics have expanded, in degrees of latitude.

FIGURE 1. ORIGINAL CAPTION FROM A2012: Figure 2 | Observed and modelled 1979–1999 Northern Hemisphere tropical expansion based on five metrics. a, Annual mean poleward displacement of each metric, as well as the combined ALL metric. … CMIP3 models are grouped into nine that included time-varying black carbon and ozone (red); three that included time-varying ozone only (green); and six that included neither time-varying black carbon nor ozone (blue). Boxes show the mean response within each group (centre line) and its 2σ uncertainty. Observations are in black. In the case of one observational data set, trend uncertainty (whiskers) is estimated as the 95% confidence level according to a standard t-test.

I note in passing that the error bars of the observations are very wide. In fact, they barely establish the change as being different from zero, and in a couple of cases the results are not statistically significant at all.

Now, several people have asked me recently how I can analyze a paper so quickly. There are some indications that set off alarms, or that tell me where to look. In this case, the wide error bars set off the alarms. I also didn’t like that instead of giving the claimed expansion per decade, they reported the total expansion over the 28 years of the study … that’s a second red flag, as it visually exaggerates their results. Finally, the following paragraph in A2012 told me where to look:

We quantify tropical width using a variety of metrics5,11: (1) the latitude of the tropospheric zonal wind maxima (JET); (2) the latitude where the Mean Meridional Circulation (MMC) at 500 hPa becomes zero on the poleward side of the subtropical maximum; (3) the latitude where precipitation minus evaporation (P-E) becomes zero on the poleward side of the subtropical minimum; (4) the latitude of the subtropical precipitation minimum (PMIN); and (5) the latitude of the subtropical cloud cover minimum over oceans (CMIN). To obtain an overall measure of tropical expansion, we also average the trends of all five metrics into a combined metric called ‘ALL’. Expansion figures quoted in the text will be based on ALL unless otherwise specified.

What told me where to look? Well, the sloppy citation. Note that they have not given a citation for each of the five metrics. Instead, they have put no fewer than seven citations at the head of the list of the five groups of observations and model results. That, to me, is a huge red flag. It means that there is no way to tell the source of each of the five individual observational results in A2012. So I went to look at the citations. They are as follows:

5. Zhou, Y. P., Xu, K.-M., Sud, Y. C. & Betts, A. K. Recent trends of the tropical hydrological cycle inferred from Global Precipitation Climatology Project and International Satellite Cloud Climatology Project data. J. Geophys. Res. 116, D09101 (2011).

6. Bender, F., Ramanathan, V. & Tselioudis, G. Changes in extratropical storm track cloudiness 1983–2008: observational support for a poleward shift. Clim. Dyn. http://dx.doi.org/10.1007/s00382-011-1065-6 (2011).

7. Son, S.-W., Tandon, L. M., Polvani, L. M. & Waugh, D. W. Ozone hole and Southern Hemisphere climate change. Geophys. Res. Lett. 36, L15705 (2009).

8. Polvani, L. M., Waugh, D. W., Correa, G. J. P. & Son, S.-W. Stratospheric ozone depletion: the main driver of twentieth-century atmospheric circulation changes in the Southern Hemisphere. J. Clim. 24, 795–812 (2011).

9. Son,S.-W. et al. Impact of stratospheric ozone on Southern Hemisphere circulation change: a multimodel assessment. J. Geophys. Res. 115, D00M07 (2010).

10. Kang, S. M., Polvani, L. M., Fyfe, J. C.& Sigmond, M. Impact of polar ozone depletion on subtropical precipitation. Science 332, 951–954 (2011).

11. Johanson, C. M. & Fu, Q. Hadley cell widening: model simulations versus observations. J. Clim. 22, 2713–2725 (2009).

For no particular reason other than that it was available and first in the list, I decided to look at the Zhou paper, “Recent trends of the tropical hydrological cycle inferred from Global Precipitation Climatology Project and International Satellite Cloud Climatology Project data”. Also, that was a citation that refers to the minimum precipitation (PMIN) for both hemispheres, as used in A2012. Figure 2 shows results from the Zhou paper:

Figure 2. ORIGINAL CAPTION FROM ZHOU: Figure 4. Time‐latitude cross sections of zonal mean seasonal precipitation and the corresponding linear trend with latitude. Solid orange lines mark the 2.4 mm d−1 precipitation threshold which is used as the boundaries of subtropical dry band. The boundary at the high and low latitude of the dry band is used as a proxy of the boundary of Hadley cell and ITCZ, respectively. Solid black lines indicate latitude with minimum precipitation. Dashed red lines mark the Hadley cell boundary determined by the 250 Wm−2 threshold using HIRS OLR data.

Now, the black lines in these four frames show the minimum precipitation, so that must be where they got the PMIN data. So I went to look at what the Zhou paper says about the trend in the minimum precipitation (PMIN). That’s shown in their Figure 5:

Figure 3. ORIGINAL CAPTION FROM ZHOU: Figure 5. Linear trends of the latitude of minimum precipitation, ITCZ, and Hadley cell boundaries inferred from GPCP for each season and the year marked on the horizontal axis for (a) the Northern Hemisphere and (b) the Southern Hemisphere. … Leftmost, middle, and rightmost bars in each group are for minimum precipitation, Hadley cell, and ITCZ boundary, respectively. For quantities significant at the 90% level, bars are shaded green, blue, and orange, respectively.

Now, let me stop here and discuss these results. I’m interested in the “Year” category for minimum precipitation (green), since that’s what they used in the A2012 paper. Note first that the minimum precipitation results that they are using are not even significant at the 90% level, which is a very weak standard to begin with. But it’s worse than that. This paper shows one and only one result significant at the 90% level out of a total of six “Year” results.

This brings up a very important and routinely overlooked problem with this kind of analysis. While one of these six “Year” results appears to be (weakly) significant at the 90% level, they’ve looked at six different categories to find it. What is often ignored is that the real question is not whether that one result is significant at the 90% level. The real question is: what are the odds of finding one 90% significant result purely by chance when you are looking at six different datasets?

The answer to this is calculated by taking the significance level to the sixth power: 0.9^6 = 0.53. That is the chance that none of the six results comes up significant purely by chance, so the odds of finding at least one result significant at the 90% level across six datasets are about fifty/fifty (1 − 0.53 ≈ 0.47).
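For those who want to check the arithmetic, here's a quick sketch in Python (illustrative only, not from A2012):

```python
# Chance of a spurious "significant" result when six datasets
# are each tested at the 90% significance level.
alpha = 0.10                      # per-test false-positive rate
n = 6                             # number of datasets examined
p_none = (1 - alpha) ** n         # probability that none appear significant
p_at_least_one = 1 - p_none       # probability that at least one does

print(f"P(none significant)         = {p_none:.3f}")
print(f"P(at least one significant) = {p_at_least_one:.3f}")
```

With alpha at 0.10 and six looks at the data, p_none comes out near 0.53, so finding one "significant" result somewhere in the six is essentially a coin flip.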

And that, in turn, means that their results are as meaningless as flipping a coin to determine whether the tropics are expanding on an annual basis. None of their results are significant.

It also means that the data from the Zhou paper which are being used in the A2012 paper are useless.

Finally, I couldn’t reproduce either the average value, or the error bars on that average, in the A2012 “ALL” data. Here are the “ALL” values from my Figure 1 (the A2012 Figure 2):

Item    Value   Error
JET     0.45    1.09
P-E     0.75    0.29
MMC     0.24    0.08
PMIN    0.17    0.51
CMIN    0.33    0.06
ALL     0.33    0.12

When I average the five values, I get 0.39, compared to their 0.33 … and the problem is even greater with the error bars. The error of an average of independent values is the square root of the sum of the squares of the individual errors, divided by the number of data points N. This works out to an error of 0.25 … but they get 0.12.
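Here's that check in Python, using the table values above (a sketch of standard error propagation, assuming independent errors):

```python
import math

# A2012 "ALL" metric check: simple mean of the five trends, and the
# propagated error of that mean: sqrt(sum of squared errors) / N.
values = {"JET": 0.45, "P-E": 0.75, "MMC": 0.24, "PMIN": 0.17, "CMIN": 0.33}
errors = {"JET": 1.09, "P-E": 0.29, "MMC": 0.08, "PMIN": 0.51, "CMIN": 0.06}

n = len(values)
mean = sum(values.values()) / n
err = math.sqrt(sum(e ** 2 for e in errors.values())) / n

print(f"mean  = {mean:.2f}")   # 0.39, versus the paper's 0.33
print(f"error = {err:.2f}")    # 0.25, versus the paper's 0.12
```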

Does this mean that the tropics are not expanding? Well, no. It tells us nothing at all about whether the tropics are expanding. But what it does mean is that their results are not at all solid. They are based at least in part on meaningless data, and they haven’t even done the arithmetic correctly. And for me, that’s enough to discard the paper entirely.

w.

PS: I suppose it is possible that they simply ignored the results from the Zhou paper and used the results from another of their citations for the minimum precipitation PMIN … but that just exemplifies the problems with their sloppy citations. In addition, it brings up the specter of data shopping, where you look at several papers and just use the one that finds significant results. And that in turn brings up the problem I discussed above, where you find one significant result in looking at several datasets.


84 thoughts on “Does This Analysis Make My Tropics Look Big?”

  1. Superb ….. just shows how little one has to dig to expose the smell of bullshit. Honestly – what has science come to ?

  2. Why would Southern Europe have to become more arid? Isn’t there quite a sizable sea in there somewhere, quite capable of dampening the impact of shifting climate zones?

    Or are we supposed to believe the Mediterranean would move away, dry up or simply go to sleep and react not a peep to the new climatic conditions?

  3. Excellent work Willis – clear and to the point. This review is exactly what the peer reviewers should have done and had this paper terminated before publication.

    I wonder if the peer reviewers understand that their job isn’t simply to peer at the paper – they’ve actually got to do some review.

  4. Willis, you say: “This would be worrisome because it could push the dry zones further north and south, moving the Saharan aridity into Southern Europe.”

    Even if this is true, that would also imply that tropical rains would come to an expanded region of Africa. It seems to me that the total arid land area would decrease, at least with respect to Europe and Africa as a combined region.

    So one could argue that on balance, it would a net plus for both Europe and Africa as far as food production is concerned. Unless of course the hope is for Europe to have a better climate and more food security than Africa.

    I’m just saying that the premise for alarm here with this prospect of expanding tropics is based on what seems to me to be an unfortunate bias. Wouldn’t the objective view be that it doesn’t matter what minor countertrend happens to a given region or group of people, only what happens to the world overall?

    That said, I don’t want to see Europe have to go through any difficulty either, but surely they are better equipped than most to adjust to new problems? (Importing food, adopting new agricultural practices, desal including the cost of power to run the desal plants …. ) So if the overall global impact is more precipitation, more arable land, more food production, and more and cleaner water for the people, and a reduction in the cost of all these things (again, looking globally and factoring in any increased difficulty in a certain region), can that really be seen as anything but positive?

    RTF

  5. I question the black carbon aerosols heating the planet. They cannot create heat; they can only absorb solar heat, warm up themselves, and radiate heat to the surroundings at a lower level because of the energy lost in warming them. This will lower heating at the surface, not increase it. These warming effects seem to violate both the 1st and 2nd laws. Also, the vegetation response to warming lags the heating by several years; it will still be evident when temperatures fall, and only then start to die off. 8000 years ago there was no Amazon rain forest, it was grassland, and the Sahara was forested. The planet was cooler then, the last major ice age having finished 2,000–3,000 years previously. It has also been warmer than today during the MWP, RWP and Holocene Climate Optimum, but as far as I can tell from research of those times the tropics remained roughly where they are now.
    There seems to be no recognition that climates change, with corresponding environmental changes, due to changing natural climate drivers.

  6. Nice job on one cite, but the analysis really requires an understanding of them all. Kinda sloppy review…

  7. The poor statistical review of papers which rely heavily on convoluted statistics seems to be a major issue with “climate science” papers. In my own, medical, field papers with this level of statistical content would be rejected unless we had a recognised statistician as a co-author. Even then, a specialist statistical review would be required.

  8. The geometric mean of those data, rather than the arithmetic mean (average) is perhaps what they calculated? (=GEOMEAN(x1,x2,x3,x4,x5))
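    A quick check of that guess (a sketch, using the values from the table in the post):

```python
import math

# Geometric mean of the five A2012 metric trends (table values from the post)
values = [0.45, 0.75, 0.24, 0.17, 0.33]
geomean = math.prod(values) ** (1 / len(values))
print(f"geometric mean = {geomean:.2f}")  # ~0.34
```

    The geometric mean works out to about 0.34, close to the paper's 0.33, though it wouldn't explain the 0.12 error bar.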

  9. This brings up the obvious differences between peer and pal reviews. With relative ease, as he has done so often before, Willis does an excellent hatchet job on the data analysis and ‘models’ in this paper.

    So does pal review in ‘climate science’ mean: i) nobody reads anyone else’s paper for fear of finding something wrong in it, ii) nobody reads anyone else’s papers because they are too lazy, or iii) the pal, or peer, has insufficient knowledge and/or mental ability to understand the contents of the paper?

    The bottom line: It’s ‘climate science’, where the rules are different from all other fields of science. Anything goes, as long as it supports ‘The Cause’.

  10. The data in the Zhou paper is obviously not normally distributed, and yet is treated as if it is. It is then combined with other data-sets and again treated as Gaussian..

  11. bsk:

    At May 22, 2012 at 4:41 am you write;

    Nice job on one cite, but the analysis really requires an understanding of them all. Kinda sloppy review…

    No!
    When you are considering buying a car and the first wheel you examine is damaged, then you do not need to see the other three wheels to know the car needs repair before you buy it. You do not require an examination of the other wheels to know that.

    Similarly, only a fool would ‘buy’ the message of this paper because – as Willis has shown – the first piece of evidence used to form that message is ‘not fit for purpose’. You do not require an examination of the other pieces of evidence to know that.

    Your comment is a kinda sloppy review of Willis’ work. Nice try, though.

    Richard

  12. Thanks Willis! So everything moves N’ward, and the green pastures of Britain become the parched scrub of Spain?

    In 2010 the UK National Trust suggested that gardeners replace traditional British lawns with cacti, and orange and lemon trees, in order to prepare for global warming.

    Due to freezing, snowy winters, anyone taking this horticultural advice would now have a very low-maintenance garden! :-)

  13. PS. It’s what I term beer review, i.e. if you let my paper through without actually reading it I’ll buy you a crate of beer.

  14. What is the difference between GSA fubar party expenditures in Vegas and climate science research like this one?

    None. Note to the current President (no disrespect intended). I want my money back.

  15. Nice work Willis.

    The bit that I liked was where you deconstruct the “significant” result to show that it is actually an expected outlier, given the number of observations. Matthew Briggs has done a good job of explaining the ways in which you can “generate” statistically significant results (based on arbitrary 0.1 or 0.05 levels of probability), but it does not often get brought into other critiques.

    If I could make one plea to reviewers of papers where such stats are (ab)used, I would ask them to require authors to state how many parameters were evaluated before picking a particular one as significant. The ‘gold standard’ of 0.05 really just means one in 20 and if you are looking at 20 variables, it is completely unremarkable if you find a 0.05 level in one of them. This level of mis-use of statistics is rife in all branches of science.

  16. Sorry if this has been said before, but surely ‘the tropics’ are a well-defined astronomical feature: They are the latitudes at which the sun is directly overhead at the solstice.

    Am I missing something here?

  17. … oh, and “Saharan aridity” has nothing to do with the ‘tropics’. Such aridity occurs well outside the tropics too. It is a product of the ‘weather’ in certain geographical regions. Otherwise you would not get the lush verdant flora that is typical of the tropics.

  18. [typo, should be “precipitation-evaporation balance”]
    Willis, I’m not sure why you’re objecting to using the Zhou et al. Pmin figure. In the table, it is given as 0.17 +/- 0.51. While not significant in and of itself, there’s nothing statistically invalid about combining measurements which individually are not significant.
    .
    Also, for the “All” figure — if one has metrics with different errors, a simple average isn’t the best approach. A weighted average, weight being inversely proportional to the square of the error, would seem appropriate. [Although only two metrics dominate the calculation; the others become essentially irrelevant.] But that method doesn’t produce the paper’s 0.33 +/- 0.12 result either.
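    A sketch of that inverse-variance weighting, using the table values from the post:

```python
import math

# Inverse-variance weighted mean of the five A2012 metric trends:
# weight each value by 1/sigma^2; the error of the weighted mean
# is sqrt(1 / sum of weights).
values = [0.45, 0.75, 0.24, 0.17, 0.33]   # JET, P-E, MMC, PMIN, CMIN
sigmas = [1.09, 0.29, 0.08, 0.51, 0.06]

weights = [1 / s ** 2 for s in sigmas]
wmean = sum(w * v for w, v in zip(weights, values)) / sum(weights)
werr = math.sqrt(1 / sum(weights))

print(f"weighted mean  = {wmean:.2f}")  # ~0.31
print(f"weighted error = {werr:.2f}")   # ~0.05
```

    That gives roughly 0.31 ± 0.05, still not the paper's 0.33 ± 0.12.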

  19. One of the previous articles (see below) showed a 200 year natural pattern driving the winds north and south, which would explain any small observed change in the tropics. No carbon black required.

    As I gather the gist of this article is that climate models are more sensitive to carbon black than they are to GHG in predicting a change in the tropics. Since climate models don’t account for observed natural cycles and climate science has no explanation for why they occur, this paper really is talking about the limitations and sensitivities in climate models, while conveniently missing the point that models are not climate.

    Premonitions of the Fall (in temperature)
    Posted on May 20, 2012 by Anthony Watts
    Guest post by David Archibald

  20. ImranCan says:

    May 22, 2012 at 4:10 am
    Superb ….. just shows how little one has to dig to expose the smell of bullshit. Honestly – what has science come to ?
    OR

    Superb ….. just shows how little one has to dig to expose the bullshit below the smell. Honestly – what has science come to ?

  21. I think Shevva nails it – its about the grants and getting the next one – think of a paper you can concoct and apply for a grant. Some time back now I was lecturing at a local university on a part time basis and my department head ( a professor and still a good pal) eventually approached me about publishing a paper. I baulked at first because I am not an academic but a working engineer. As it happened I was the first one to start using the university’s recently acquired CFD code, had built up some basic expertise, had used it under a commercial licence and was including its practical use in my engineering design project class. So we decided I would do a paper in that area. When I said I could do something on using CFD in design and elaborated he said “great, there are at least two LPU’s in that”. LPU I asked? What is an LPU. Least Publishable Unit was the reply. It was an epiphany as to how academic funding is influenced and explains an awful lot about the practice of climate science. I eventually wrote the paper, got sent to a conference on the other side of the country, a good time was had by all and the department notched up a few grand in additional funding brownie points plus expenses.

  22. Was “the minimum pressure (PMIN)” meant to say minimum precipitation?

    Wouldn’t a reanalysis product be useful in diagnosing tropical expansion? Then again, interpolated datasets have their own limitations.

    [Thanks, fixed. -w.]

  23. Although recent studies have attributed tropical expansion in the Southern Hemisphere to ozone depletion the drivers of Northern Hemisphere expansion are not well known and the expansion has not so far been reproduced by climate models.

    If other cases are to be followed, since the models do not show expansion, it must not be happening.

    Here we use a climate model with detailed aerosol physics to show that increases in heterogeneous warming agents—including black carbon aerosols and tropospheric ozone—are noticeably better than greenhouse gases at driving expansion, and can account for the observed summertime maximum in tropical expansion.

    From the looks of their Fig 1, the model results miss the observation by both model & observed error margins? How is this model better?

  24. DocMartyn says:
    May 22, 2012 at 5:06 am
    The data in the Zhou paper is obviously not normally distributed, and yet is treated as if it is. It is then combined with other data-sets and again treated as Gaussian..
    ========
    This is a very good point. Time and time again we see climate science performing statistical analysis under the assumption the data is normally distributed, without any evidence this is a correct assumption.

    Time series data is rarely normally distributed. Yet climate science proceeds under the assumption that it is – without having shown this to be true.

    There is a simple eyeball test to check if climate data has a “normal” distribution. When data has a normal distribution, as you expand the scale the data will “smooth”. So, for example, while daily temperatures might fluctuate quite a bit, as you expand out to decades and centuries, this fluctuation should “average out”.

    And, when we look at the famous “hockey stick” that is what we see. Centuries of past history where the temperature doesn’t fluctuate much at all. This is modern climate science.

    However, when we look at other measures of climate before “the Team” and the IPCC controlled what was published, we see quite a different picture. When we expand the scale beyond 10 thousand years we see huge NATURAL variations in temperature. Even Gore showed this in his movie.

    This increasing natural variation that appears as we increase the scale tells us the climate is not normally distributed. Climate is not a coin toss. The odds of temperature going up and down do not remain constant year to year, century to century. You cannot use mathematical techniques that assume the odds remain constant.

    Thus, statistical analysis that relies on the assumption of normal distribution is bogus. The conclusions in much of modern climate science are a result of the faulty application of mathematics to “prove” a foregone conclusion. That foregone “conclusion” is that climate remain constant, except for human activity. Using mathematical methods that assumes this to be true is what makes it appear true.

  25. Willis
    Your trenchant, witty, searching analyses of made-up-science do a service to all sceptics and dissenters and show everyone what proper science should look like.

    I particularly like your ‘smelly’ categories. Things that set your BS detector nose twitching! Journal editors and peer reviewers should draw up an audit list based on your work. It would filter out the most egregious rubbish. Thanks again – bloody brilliant!

  26. Have to agree with Jerome here – Willis, as a former resident of the tropics, how did you let them get away with that? Are they saying that monsoons become prevalent in southern Europe?

  27. The paper is clearly rubbish, as Willis says, but at least the focus is shifting towards the very phenomena that I have been banging on about for over 4 years now.

    I don’t think there is any serious disagreement with the observation that in the late 20th century warming period the jets became more zonal and the equatorial air masses did expand poleward a little and that the scenario now with the less active sun is very different.

    Similarly there is lots of evidence that the MWP had zonal jets and expanded tropics whereas the LIA had meridional jets and contracted tropics.

    To whom should I apply for grant monies ?

  28. HaroldW says:
    May 22, 2012 at 6:31 am

    Willis, I’m not sure why you’re objecting to using the Zhou et al. Pmin figure. In the table, it is given as 0.17 +/- 0.51. While not significant in and of itself, there’s nothing statistically invalid about combining measurements which individually are not significant.

    Thanks, Harold. I object because the error is not correctly represented. As I showed above, even a result that appears to be significant is not significant if you have searched through six datasets to find it. Similarly, the results which are not significant are even less significant if you search through six datasets to find them.

    If you are doing that kind of selective picking of individual results gleaned from casting a wide net over a large number of datasets, you can find any kind of result that you wish and use it … but that doesn’t magically make it significant.

    w.

  29. From their caption:

    “…(Observed and modelled 1979–1999)…In the case of one observational data set, trend uncertainty (whiskers) is estimated as the 95% confidence level according to a standard t-test…”

    And you’re telling us they never, ever mentioned which one it was?

    Also, is a period of 20 years sufficient to make these claims?

    Especially since wiki states “…The region (Sahara desert) has been this way since about 1600 BCE, after shifts in the Earth’s axis increased temperatures and decreased precipitation. Then, due to a climate change, the savannah changed into the sandy desert as we know it now…” – if those natural events were that strong, then the tropics would have gone through a large transformation too.

    This means that natural events centuries ago had a greater effect than the current CO2 levels.

  30. “Wouldn’t the objective view be that it doesn’t matter what minor countertrend happens to a given region or group of people, only what happens to the world overall?”

    Why should anyone try to be objective? Each of us, if he is normal, puts his own people first, and mine are Europeans and those who are derived from Europeans.

    So I want Europeans, in that wider sense, to come out ahead of anyone else, including Africans (though I wish the Africans well).

  31. Jer0me, johanna,

    What is being discussed here is possible movement of the tropical Hadley cell circulation. Roughly speaking, the intense sunshine at the equator causes a massive uplift of air there. At high altitudes, this air moves away from the equator, both to the north and south. At about 25-30 degrees of latitude, this air falls. Over land at least, there is virtually no moisture in this air, and the result is deserts, such as the Sahara.

    There are at least potential reasons to believe that in a warmer climate, this region of dry falling air would move farther away from the equator. For example, in the cool 1970s, the Sahel, the boundary region between the Sahara and the lush tropics in Africa, was drought stricken, causing much misery. In the warmer decades since, it has been much wetter. Anyway, this is the possible effect that is in play here.

  32. curryja says:
    May 22, 2012 at 7:39 am

    Willis, check out this paper

    http://webster.eas.gatech.edu/Papers/Hoyos_Webster2011.pdf

    Thanks as always, Judith. I got very nervous when I looked at their Figure 1 (a).

    The problem is, where are the error bars? So I looked at the dataset description, where they say that the 95% confidence interval for their data is about 1.6°C … and the total warming shown over the period is only about 0.8°C … not a healthy combination.

    I’ll take a more detailed look at it later today, but that’s not a good start.

    The second problem is that the analysis revolves around the area of the Pacific Warm Pool, which is an area little traversed by ships. As a result, the errors in that calculation are going to be even larger … and are also not mentioned.

    All the best,

    w.

  33. Richard T. Fowler says:
    May 22, 2012 at 4:20 am

    Willis, you say: “This would be worrisome because it could push the dry zones further north and south, moving the Saharan aridity into Southern Europe.”

    Even if this is true, that would also imply that tropical rains would come to an expanded region of Africa. It seems to me that the total arid land area would decrease, at least with respect to Europe and Africa as a combined region.

    So one could argue that on balance, it would a net plus for both Europe and Africa as far as food production is concerned…
    ___________________________________
    You beat me to it. It also might increase the growing seasons in Canada, Russia and China, a very big plus.

    Paul Vaughan presented this graph yesterday in another thread. It shows the hot temperatures are not linked to the equator but to dry land between 15 and 30 degrees latitude: http://i55.tinypic.com/dr75s7.png

  34. Rob Potter says:
    May 22, 2012 at 5:57 am
    …..If I could make one plea to reviewers of papers where such stats are (ab)used, I would ask them to require authors to state how many parameters were evaluated before picking a particular one as significant. The ‘gold standard’ of 0.05 really just means one in 20 and if you are looking at 20 variables, it is completely unremarkable if you find a 0.05 level in one of them. This level of mis-use of statistics is rife in all branches of science
    _________________________________
    I am not sure if it is true now, but statistics was not even a required course when I got my BS in Chemistry. Some of the industry courses I took were only “how to” courses on what buttons to push to get “statistics” out of the “Statistical Package” being sold to the company. Only by taking college-level stat courses as part of my continuing ed did I actually get any of the math, and even a few courses just barely scrape the surface.

    Given Phil Jones’ remark about Excel, I am afraid this is typical of science, where “the one (statistical) course man is king”.

  35. ferd berple says:
    May 22, 2012 at 7:18 am

    DocMartyn says:
    May 22, 2012 at 5:06 am
    The data in the Zhou paper is obviously not normally distributed, and yet is treated as if it is. It is then combined with other data-sets and again treated as Gaussian..
    ========
    This is a very good point. Time and time again we see climate science performing statistical analysis under the assumption the data is normally distributed, without any evidence this is a correct assumption.

    Time series data is rarely normally distributed. Yet climate science proceeds under the assumption that it is – without having shown this to be true.
    ————————————————————————————-
    I have just done 15 rounds with someone in another forum about so-called ‘cancer clusters’ about exactly the same issue. Unsurprisingly, a combination of electromagnetic radiation and ‘chemicals’ were being blamed in that case. It is hard enough to be arguing with members of the general public about this; it must be infuriating to have to do it with people who claim to have scientific training.

  36. Stephen Richards says:
    May 22, 2012 at 6:46 am
    ImranCan says:

    May 22, 2012 at 4:10 am
    Superb ….. just shows how little one has to dig to expose the smell of bullshit. Honestly – what has science come to ?
    OR

    Superb ….. just shows how little one has to dig to expose the bullshit below the smell. Honestly – what has science come to ?

    OR . . .

    Superb … just shows how perceptive, persistent, and capable one has to be to expose climate science chicken dung.

    There—fixed it for y’all.

  37. Stephen Wilde says: @ May 22, 2012 at 7:34 am

    …To whom should I apply for grant monies ?
    ___________________________
    To the US government of course. I am sure there is lots of Obama Money left since Bernanke is printing more money.

    Just do not forget the get out of peer-review free card “This study shows that increases in heterogeneous warming agents such as carbon dioxide drive….”

  38. It’s easy to confuse the expansion of equatorial humid zones (and subsequent greening of the “Horse Latitudes” deserts) with overall tropical expansion. Of course, the most recent such “greening” humid-zone expansion was during the late 20th Century. It was good while it lasted, but with every passing day it becomes a fading memory. Now onward into a cold and dry age. An age of death.

  39. Willis: another awesome entry at WUWT. Thanks very much. I appreciate not only the pithiness of your analysis (and the unique flavor of it) but the very useful description of “red flags.” The disconnect between their citations and the underlying scholarship (Zhou) is really disconcerting. Do they not realize how sloppy they are? Or are they just assuming that most of us won’t dig below the first belt of citations?

    More generally it seems to me that scholarship is based on trust; that the coin of trust is debased by inept, or careless, or fraudulent citation; that citations “stack” on each other; that we are now so caught up in a network of stacked citations that, once we cease to trust the network, we will find ourselves not just one or two steps back, but way back. Is there not room for an independent checker-of-trustworthiness? Clearly the old (peer-reviewed publication) system is not working very well. The market for trustworthiness would, or should, pay handsomely for a better system. Look at the sums we are prepared to gamble on the putative findings of these papers…

  40. “There is a new paper in Nature magazine that claims that the tropics are expanding. This would be worrisome because it could push the dry zones further north and south.”

    As far as I understand, the dry zones are dry because little rain falls there, not because of the temperatures. So it doesn’t automatically follow that if the tropics expand, the dry zones will be pushed away. The precipitation pattern is the one to watch: if you can formulate a model for precipitation that is both simple and close to the real world, you could earn several prizes.

  41. Look at the years … 1979 through 1999, basically a warm-phase PDO oscillation as Dr. Easterbrook has established.

  42. DocMartyn says:
    May 22, 2012 at 5:06 am
    The data in the Zhou paper is obviously not normally distributed, and yet is treated as if it is. It is then combined with other data-sets and again treated as Gaussian..
    ========
    This is a very good point. Time and time again we see climate science performing statistical analysis under the assumption the data is normally distributed, without any evidence this is a correct assumption.

    Time series data is rarely normally distributed. Yet climate science proceeds under the assumption that it is – without having shown this to be true.
    ————————————————————————————-

    I too will strongly agree with this point. I see this far too often, especially in climate research, where the data are assumed to be normally distributed. In our work with thunderstorm and tornado data, the data sets are rarely normally distributed, so we use non-parametric tests for significance. If t-tests, or other tests applicable to normal distributions, are used then it is incumbent on the authors to show the data are, in fact, normally distributed. When reviewing a paper such as this I would certainly request revisions to explain the statistical approach used.
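    A minimal example of the non-parametric route: a two-sample permutation test in plain Python (made-up, deliberately skewed numbers – not data from any paper) that needs no normality assumption at all:

```python
import random

def perm_test(a, b, n_perm=10_000, seed=0):
    """Two-sided permutation test on the difference of group means.
    Unlike a t-test, it assumes nothing about the underlying distribution."""
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    n = len(a)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # relabel the observations at random
        diff = abs(sum(pooled[:n]) / n - sum(pooled[n:]) / (len(pooled) - n))
        if diff >= observed:
            hits += 1
    return hits / n_perm  # fraction of relabelings at least as extreme

# Strongly skewed "data" with one big outlier: a t-test's normality
# assumption is badly violated here, but the permutation test remains valid.
a = [0.1, 0.2, 0.1, 0.3, 8.0]
b = [0.1, 0.1, 0.2, 0.2, 0.1]
print(perm_test(a, b))
```

    (Sanity check: feeding it two identical samples returns p = 1.0, as it should.)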

  43. ImranCan says (May 22, 2012 at 4:10 am): “Superb ….. just shows how little one has to dig to expose the smell of bullshit. Honestly – what has science come to ?”

    Like a lot of other things, “science” has come to the single-minded pursuit of taxpayers’ money.

  44. Willis:

    Many moons ago when I was just a weedhopper I acquired a copy of Darrell Huff’s “How to Lie with Statistics”.

    For years it was a prized highlight of my personal library until I made the mistake of lending it out and never saw it again. The concepts in it are still entirely valid but it’s almost as old as I am and the writing is charmingly dated at this point. It strikes me that the world is sorely in need of an updated “How to Lie with Statistics for the 21st Century”. Since you seem to share some of Mr. Huff’s gift for conveying mathematical concepts in understandable language you could be the right guy to produce such a tome. Perhaps you could rope McIntyre and Briggs in as coauthors and get Josh to do the artwork.

    Although I am not a fan of top-down mandates, we could assure the financial success of the project by making it a mandatory text for every school child in the country, with mastery of the material a requisite for high school graduation, or preferably for getting out of the 8th grade. That might seem a bit draconian, but if it could be implemented I strongly suspect it would be one of the simplest and most effective things we could do to reverse what seems to be an inexorable trend toward totalitarianism that the world is presently experiencing.

  45. Gary Hladik:

    At May 22, 2012 at 12:09 pm you say;

    Like a lot of other things, “science” has come to the single-minded pursuit of taxpayers’ money.

    No. ‘Climate science’ and much other science has become that “single-minded pursuit”, but most of science has not.

    Many of us predicted that ‘climate science’ was likely to damage the reputation of all science. Sadly, as your post shows, our prediction has come true.

    Richard

  46. R Barker says: “…Your analysis brought to mind an editorial in the 10 May 2012 Nature by Daniel Sarewitz about the bias making its way into scientific research if for no other reason than expecting to get the desired result. http://www.nature.com/news/beware-the-creeping-cracks-of-bias-1.10600

    That editorial deserves a thread of its own. It states: “…It would therefore be naive to believe that systematic error is a problem for biomedicine alone. It is likely to be prevalent in any field that seeks to predict the behaviour of complex systems — economics, ecology, environmental science, epidemiology and so on….The first step is to face up to the problem — before the cracks undermine the very foundations of science.”

    The fact that the author [Daniel Sarewitz] lacks the temerity to include climate science by name doesn’t take away from his message or its obvious applicability to the output of Jones, Mann, Travesty Trenberth, self-anointed-messiah Hansen, et al. Will Nature itself “face up to the problem”? I doubt it very much. Dr. Sarewitz is no lightweight, but neither are others [e.g., Judith Curry] who’ve carried the same message in recent times.

  47. Like I said.

    More energy in the system results in a faster hydrological cycle which accelerates energy to space so as to offset any deceleration of energy to space caused by GHGs.

    Energy content for the system as a whole remains exactly the same but it is distributed differently and the climate ‘price’ is a shift in the permanent climate zones.

    Then one must consider how far human CO2 emissions would shift the climate zones as compared to shifts caused by solar and oceanic variability.

    Going by the MWP and LIA the sun and oceans shift the zones by 1000 miles or so.

    I’d guess our emissions would shift them by less than a mile.

  48. And there I was foolishly thinking the tropics were defined by the tilt of the Earth’s axis to the plane of the ecliptic as it orbits the Sun.

    Another triumph for the settled science!! Who’da thunk it???

  49. I must say, the issuance of a press release to notify the world of a group of people doing public (bovine) defecation is getting a little tiresome. I don’t care how “carefully” the reviewers have scrutinized it.

    On the other hand, I continue to learn from Willis’ forays into the wilds of “Climate Science.” Mostly how to improve my bovine defecation detection apparatus.

  50. Thanks Willis!

    Whenever I read one of your perspicacious forensic dissections of a statistical analysis (and the comments that follow), I almost always have a small “Oh Yeah!” flash back moment to some half forgotten tidbit from my college statistics classes!

    As for your satirical paraphrase “Does This Analysis Make My Tropics Look Big?”, I’d opine “No! But I do admire your Temperate Latitude with the critics!”
    MtK

  51. richardscourtney has clearly never reviewed a paper or done science work, other than that…

  52. John Marshall says:
    May 22, 2012 at 4:29 am

    Warm times are wetter. Cool times are drier. (I forgot to add that)

    You should have continued to forget. It’s not true here in the US Pacific Northwest, where fall and winter are the wettest seasons, spring and summer the driest. I imagine your general statement is falsified elsewhere as well.

  53. wsbriggs says:
    May 22, 2012 at 4:47 pm

    I must say, the issuance of a press release to notify the world of a group of people doing public (bovine) defecation is getting a little tiresome. I don’t care how “carefully” the reviewers have scrutinized it.

    On the other hand, I continue to learn from Willis’ forays into the wilds of “Climate Science.” Mostly how to improve my bovine defecation detection apparatus.

    Why that’s easy! All it takes is the word “model” as part of the main conclusion.

  54. ChrisH says:
    May 22, 2012 at 4:46 am

    The poor statistical review of papers which rely heavily on convoluted statistics seems to be a major issue with “climate science” papers. In my own, medical, field papers with this level of statistical content would be rejected unless we had a recognised statistician as a co-author. Even then, a specialist statistical review would be required.

    About that.

    http://www.nature.com/news/beware-the-creeping-cracks-of-bias-1.10600

    Unfortunately, much of what passes as medical research today is not worth the paper it’s written on, due to botched experimental design and misuse of statistics.

  55. Thanks to several of you for discussing non-normal distributions. I am now a slightly better scientist.

    And thanks to others for pointing out that the whole concern may be misplaced anyway, as warmer is better even if true.

    The basis of Life is reduction of carbon dioxide. This comes from fossil fuels so only this source of energy can feed the world over the next century. Well, that and real warming, which causes release of carbon dioxide from the oceans.

  56. Rosco says:
    May 22, 2012 at 2:51 pm

    And there I was foolishly thinking the tropics were defined by the tilt of the Earth’s axis to the plane of the ecliptic as it orbits the Sun.

    Another triumph for the settled science!! Who’da thunk it???

    Thanks, Rosco. There are actually two tropics, the physical tropics and the meteorological tropics.

    The physical tropics, as you point out, run from 23.45°S to 23.45°N. They are defined by the angle of the axis of the earth with respect to the plane of its orbit.

    The meteorological tropics, on the other hand, are delineated by the Hadley cells. As the Zhou analysis points out, you can define the edges and the center in various ways. However you define it, in general it’s the width of the cells. Here’s a drawing of the Hadley circulation I did a while ago:

    Note that the Hadley cells actually run from about 30°S to 30°N. They are bounded on the outside by the great desert belts that circle the world at those two latitudes.

    All the best,

    w.

  57. Jeff Alberts says:
    May 22, 2012 at 5:25 pm
    John Marshall says:
    May 22, 2012 at 4:29 am

    Warm times are wetter. Cool times are drier. (i forgot to add that)

    You should have continued to forget. . . . .

    I’ll make a WAG that John M. was thinking of glacial advances and interglacials. The notion is that a massive increase in ice also decreases the surface area of the ocean, produces cold winds, decreases precipitation, and provides wind-blown silt (loess; see Palouse).

  58. Willis Eschenbach says: “……When I average the five values, I get 0.39, compared to their 0.33 … and the problem is even greater with the error bars. The error of an average is the square root of the sum of the squares of the errors, divided by the number of data points N. This calculates out to an error of 0.25 … but they get 0.12.”

    When the authors say “…and the combined metric ALL shows ….” you are assuming that they took the average of the mean values of each metric and the error bars are calculated from the mean values assuming N = 5. That is most probably not what they did – since there are error bars already associated with each mean value and the observed data population may be different for MMC, PMIN, CMIN, etc. – in which case the direct averaging of the means does not make much sense.

    I think what they did is different (although I have not read the sup. materials or the Zhou paper): knowing the number of data points, the mean and the standard error for each group, they calculated (sum x) and (sum x^2) for each group, then divided the total of the (sum x) values by the total number of data points to get the mean, and using the total of the (sum x^2) values, the combined error bar was also calculated. In other words, if the five means are m1, m2, m3, m4, and m5 with n1, n2, n3, n4 and n5 data points in each group, then your calculation of the net mean is (m1+m2+m3+m4+m5)/5, while the real combined (weighted) mean is (m1*n1+m2*n2+….+m5*n5)/(n1+n2+n3+n4+n5) – they are different unless n1, n2, n3, etc. are all the same. The same problem applies to the net error bars. As before, Willis, I really enjoy reading your criticisms.

  59. bsk:

    At May 22, 2012 at 4:41 am you said (in full);

    Nice job on one cite, but the analysis really requires an understanding of them all. Kinda sloppy review…

    And I explained the logical error of that at May 22, 2012 at 5:28 am saying;
    [snip]

    When you are considering buying a car and the first wheel you examine is damaged, then you do not need to see the other three wheels to know the car needs repair before you buy it. You do not require an examination of the other wheels to know that.

    Similarly, only a fool would ‘buy’ the message of this paper because – as Willis has shown – the first piece of evidence used to form that message is ‘not fit for purpose’. You do not require an examination of the other pieces of evidence to know that.

    [snip]

    The only rational dispute of my point would have been a demonstration that the perceived flaw in the paper is trivial and insignificant so my analogy is not correct. But the analogy is correct because the paper’s flaw is so serious that it does damage the validity of the paper’s arguments.

    So, at May 22, 2012 at 5:25 pm you have replied to me by saying (in full);

    richardscourtney has clearly never reviewed a paper or done science work, other than that…

    This is another display of your lack of logical ability and it adds a display of your ignorance.

    Therefore, I suggest you leave trolling of technical threads to people with some competence as trolls.

    Richard

  60. Does the movement south of the Tropic of Cancer and north of the Tropic of Capricorn have any influence on the above? Due to the changing obliquity of the ecliptic, the tropics and polar circles are moving by about 14 metres a year. The area between the tropics is decreasing by some 1,100 square kilometres a year.

  61. I remember 20 odd years ago when expanding tropics and poleward migration of climate zones were the predicted signatures of global warming. When these things didn’t happen, the focus switched to other metrics that could be argued did show global warming; surface temps, Arctic ice, glacier melt, etc.

    That papers are again being published arguing that the tropics are expanding, even though the data is as weak as ever, is probably due to the other metrics not behaving in the predicted way, and a generally desperate search for something that shows global warming still continues.

  62. Willis, I should have added that once you figure out how the authors have used weighted averages for calculating the average and the error bar for the “ALL” from the mean and standard errors for each metric, I hope you can explain to your ardent followers why (1) Zhou’s input is largely irrelevant in affecting the conclusions in this paper, and (2) why the glaringly large error bars for JET, P-E, and PMIN are also immaterial – in other words why your entire criticisms that started this discussion are invalid.

  63. Rob G. says:
    May 22, 2012 at 10:04 pm

    Willis Eschenbach says:

    “……When I average the five values, I get 0.39, compared to their 0.33 … and the problem is even greater with the error bars. The error of an average is the square root of the sum of the squares of the errors, divided by the number of data points N. This calculates out to an error of 0.25 … but they get 0.12.”

    When the authors say “…and the combined metric ALL shows ….” you are assuming that they took the average of the mean values of each metric and the error bars are calculated from the mean values assuming N = 5. That is most probably not what they did – since there are error bars already associated with each mean value and the observed data population may be different for MMC, PMIN, CMIN, etc. – in which case the direct averaging of the means does not make much sense.

    Thanks, Rob, but you are so intent on finding some error in what I’ve done that you neglect to read what I’ve quoted. The authors say, as I quoted above (emphasis mine):

    To obtain an overall measure of tropical expansion, we also average the trends of all five metrics into a combined metric called ‘ALL’.

    Now, that is the sum total of their explanation. When they say that they “average the trends of all five metrics”, I take it to mean that they average the trends of all five metrics. You may claim all you want that that doesn’t make much sense, or that they did something else; and perhaps it doesn’t and they did … but it’s what they say that they did.

    You also say:

    Rob G. says:
    May 23, 2012 at 3:37 am

    Willis, I should have added that once you figure out how the authors have used weighted averages for calculating the average and the error bar for the “ALL” from the mean and standard errors for each metric, I hope you can explain to your ardent followers why (1) Zhou’s input is largely irrelevant in affecting the conclusions in this paper, and (2) why the glaringly large error bars for JET, P-E, and PMIN are also immaterial – in other words why your entire criticisms that started this discussion are invalid.

    “Ardent followers”? This is science, please, leave the side commentary aside, it damages your case.

    If you think that Zhou’s input is largely irrelevant, then please explain why the authors concluded that they should include it. They must think it is not irrelevant.

    If you think that the “glaringly large error bars … are also immaterial”, you’ll have to explain that as well. Please note that in fact the Zhou error bars should be even larger, to account for the fact that they have mined the data to obtain the results they use.

    w.

  64. Willis Eschenbach says: “…Thanks, Rob, but you are so intent on finding some error in what I’ve done that you neglect to read what I’ve quoted. The authors say, as I quoted above (emphasis mine)”.

    I agree with you that the authors did not make that part clear enough – meaning whether they are using the simple arithmetic mean or the weighted average – which is unfortunate. I agree with that part. But it took me only a couple of minutes to figure it out: there are only two options for the average in this case, so it is one or the other.

    Willis says: “..You may claim all you want that that doesn’t make much sense, or that they did something else; and perhaps it doesn’t and they did … but it’s what they say that they did…”

    When there are fewer data points in one group and more in another, taking the simple average of the group means will bias the final result toward the set with fewer data points. That is why we use weighted averages: they account for the number of data points in each group, and are thus more representative of the whole experiment. That is what I was saying.

    Willis says: “..“Ardent followers”? This is science, please, leave the side commentary aside, it damages your case.”

    I do not have any problem with a neutral scientific discussion – but many of the comments above are puzzling because they support your views without even looking at your calculations and logic. It was a parenthetical comment – nothing major.

    Willis said: “..If you think that Zhou’s input is largely irrelevant, then please explain why the authors concluded that they should include it. They must think it is not irrelevant. If you think that the “glaringly large error bars … are also immaterial”, you’ll have to explain that as well…”

    They have larger error bars but fewer data points in some groups, and smaller error bars but more data points in other groups (that is the reason the net error bar is smaller) – thus more weight goes to the groups with more data points and smaller standard errors. That is why I think the above-mentioned groups have only a small impact on the final result.

  65. Rob G.:

    I am following the discussion between you and Willis with interest.

    You conclude your post to him at May 23, 2012 at 6:35 am saying;

    Willis said: “..If you think that Zhou’s input is largely irrelevant, then please explain why the authors concluded that they should include it. They must think it is not irrelevant. If you think that the “glaringly large error bars … are also immaterial”, you’ll have to explain that as well…”

    They have larger error bars but fewer data points in some groups and smaller error bars but larger data points in other group (that is the reason why the net error bar is smaller) – thus more weight is going to groups with larger data points with smaller standard errors. That is why I think the above mentioned groups have only a smaller impact in the final result.

    OK. I understand that, but you do not provide any quantification of that “smaller impact”. Hence, I would be grateful if you were to state what significance you are claiming for that “smaller impact”.

    I would appreciate your answer to my request because I look forward to reading a response from Willis to your arguments, and a rational debate of your point (that I quote) has to be about that significance.

    Richard

  66. Rob G. says:
    May 23, 2012 at 6:35 am

    Willis said:

    “..If you think that Zhou’s input is largely irrelevant, then please explain why the authors concluded that they should include it. They must think it is not irrelevant. If you think that the “glaringly large error bars … are also immaterial”, you’ll have to explain that as well…”

    They have larger error bars but fewer data points in some groups and smaller error bars but larger data points in other group (that is the reason why the net error bar is smaller) – thus more weight is going to groups with larger data points with smaller standard errors. That is why I think the above mentioned groups have only a smaller impact in the final result.

    Thanks again, Rob. I fear that the Zhou paper doesn’t give the number of data points that they have used to arrive at their error estimate. As a result, I see no way of doing the calculation that you propose. If you have data regarding your claim, please bring it forwards.

    In addition, as I have pointed out above, the Zhou paper finds one result which is valid at the 90% level in six datasets. This means that it is only valid at the 50% level.
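    A quick check of that arithmetic, assuming the six datasets are independent:

```python
# If each of six independent datasets has a 10% chance of clearing a 90%
# significance bar by luck alone, the chance that at least one of them
# does so is close to a coin flip:
p_at_least_one = 1 - 0.9 ** 6
print(round(p_at_least_one, 2))  # 0.47
```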

    Now, you can use that result in further calculations. But if you wish to do so, you need to adjust the results so that they reflect the true p-value at the 50% level. The only way to do that is to widen the error bars.

    The same is true for the other result, which is not significant at the 90% level – in fact it is not statistically different from zero. Its error bars also need to be increased to account for the fact that it was found only by examining six datasets.

    Finally, I am generally suspicious of results which depend on an average of five different results, two of which are statistically not different from zero. How many studies did they have to go through to find those five different results? How many studies which found no change in the tropical area did they examine?

    And finally, an unfortunate consequence of our current system of doing science is that negative results are rarely published. If my research shows that, for example, the tropics are indeed expanding, I can likely get that published. But if my equally valid research shows that there is no change in the tropical area, the odds of that getting published are minuscule. Despite the great importance of negative results in science, journals are not interested in publishing such findings.

    As a result, what they have done is search through a bunch of papers from which negative results have already been excluded, chosen those few positive results which support their case (including two results which are statistically no different from zero), averaged them, and declared victory.

    I find that singularly unconvincing.

    I also note in passing that it is common in climate science that when model results disagree with observations, the model results are declared to be superior … except as in this case, when the models do not give the desired result.

    My best to you,

    w.

  67. Rob G. says:
    May 23, 2012 at 6:35 am

    Willis Eschenbach says:

    “…Thanks, Rob, but you are so intent on finding some error in what I’ve done that you neglect to read what I’ve quoted. The authors say, as I quoted above (emphasis mine)”.

    I agree with you that the authors did not make that part clear enough – meaning whether they are using the simple arithmetic mean or the weighted average, which is unfortunate. I agree that part. But it took me only couple of minutes to figure that out – there are only two options for the average in this case, so it is one or the other.

    Actually, two people upthread (here and here) proposed entirely different explanations and methods for calculating the average … so there are obviously more than “two options for the average”, and there is no reason to assume that it is “one or the other”.

    In addition, there is one other option, because failure is always an option—this is the option that they have simply made an arithmetical mistake. So your idea that it is “one or the other” doesn’t fit the facts, because we already have five different possibilities for the average.

    w.

  68. richardscourtney says: “OK. I understand that, but you do not provide any quantification of that “smaller impact”. Hence, I would be grateful if you were to state what significance you are claiming for that “smaller impact”.”

    Richard, I would very much like to do this as well, but as I was telling Willis earlier, time is a big problem for me right now – I was going to depend on Willis to do the quantification (he is certainly prolific at quantifying such things), but he has already said he cannot find the number of data points in Zhou (and I have not read Zhou yet). So I will go through all the papers this weekend, and I will post what I can find on quantification. Please check back after the weekend. I am also very curious what exactly they did here, and your question is highly relevant.

  69. Willis,

    Now I am also very curious what they did with the averaging, so I will go through the papers and see whether I can come up with some useful data. I see that you have checked on the number of data points in Zhou, but I will read those references as well.

    Willis said: “Now, you can use that result in further calculations. But if you wish to do so, you need to adjust the results so that they reflect the true p-value at the 50% level. The only way to do that is to widen the error bars.” “Finally, I am generally suspicious of results which depend on an average of five different results, two of which are statistically not different from zero. How many studies did they have to go through to find those five different results? ”

    Even with wider error bars, its significance will be small if they are doing a weighted average. On the second part, the results are significant if the three remaining groups have enough data points and a uniform trend (without large error bars) – that seems to be the case here.

    Willis said: “And finally, an unfortunate consequence of our current system of doing science is that negative results are rarely published. If my research shows that for example the tropics are indeed expanding, I can likely get that published. But if my equally valid research shows that there is no change in the tropical area, the odds of that getting published are miniscule. Despite the great importance of negative results in science, journals are not interested in publishing such findings.”

    I do not know about this; I would expect negative results can be published if you have quantitative indicators to suggest that. Are you aware of any such data showing that the tropics are more or less in equilibrium, or of any negative results? That would be interesting.

    Willis says: “I also note in passing that it is common in climate science that when model results disagree with observations, the model results are declared to be superior … except as in this case, when the models do not give the desired result.”

    Observations are always more reliable than models; they have to be used as benchmarks to verify models, and models are useful only if they can capture the current trends/mechanisms and thus can be used for other boundary conditions or to predict future events.

    On averages, the geometric average is probably not very useful (that is useful when the range is different, I believe); the specific weighted average HaroldW has proposed is certainly an option – but not likely.

    Over the weekend I will see what data I can find to give you more input, so I will see you soon.

  70. ps. Willis, as I mentioned in my posts in the other thread, I have been very busy recently; otherwise I would really like to go through the details, but I will work on it over the weekend. (I have about eight manuscripts to review for journals in the next several days, and as you very well know, if I do not do a good job, someone like you will criticize the reviewer, although those papers are not in climate science.) So, for example, I was saving time by not writing out the details of your criticisms of the paper in the other thread (http://wattsupwiththat.com/2012/05/03/icy-arctic-variations-in-variability/ ) – I can deal with the logical points much faster – but I certainly was not trying to be a vampire (your comment: “Thus far, you have said nothing about my work. You have not raised a single objection to my claims in the head post. You have not criticized my math, my logic, or my data. Instead, you want to talk about consensus, logical fallacies, the theory of science, hypothetical questions, theoretical dilemmas, anything but the actual subject under consideration which you treat like a vampire treats garlic … and frankly, Scarlett, I don’t give a damn.”).

    I hope we will have very productive discussions in the future, when I can add something useful here. I also hope some of the commenters here, on both sides, will be friendlier to each other – as most of them are. I have nothing against skeptics, although I get a bit unhappy when scientists are portrayed as dishonest or stupid – most of them care about their reputation and do not belong in that group, although there are many exceptions.

    All the best.
    Rob

  71. Clicked link to paper…
    1.)IF:
    Competing financial interests
    The authors declare no competing financial interests.

    2.)THEN:
    Tropical expansion primarily driven by black carbon and tropospheric ozone
    The authors declare Recent Northern Hemisphere tropical expansion primarily driven by black carbon and tropospheric ozone.

    3.)PROFIT

  72. I downloaded the U-Wind (i.e. east-west) data from the 20th Century Reanalysis Project v2. I then interpolated the latitudes that separate the easterly Trade Winds from the Westerlies. I consider this the border of the “meteorological tropics” in the Horse Latitudes. In the Northern Hemisphere for the years 1979-1999 I found a poleward rate of 0.18 +/- 0.31 degrees per decade. This looks compatible with the MMC figure above, which is also based on circulation.

    For the period 1911-2010 the rate is poleward at 0.007 +/- 0.025 degrees per decade.

    For the period 1951-2010 the rate is *equatorward* at 0.020 +/- 0.049 degrees per decade.

    It all looks insignificant.

    Some plots and source code can be found here:

    https://sites.google.com/site/climateadj/tropical-expansion
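    The zero-crossing method described in this comment can be sketched in Python. This is a hypothetical reconstruction, not the commenter's actual code (which is at the linked page): synthetic zonal-mean winds with a prescribed 0.02°/yr poleward drift stand in for the reanalysis data, and the boundary latitude is found by linear interpolation between the grid points that bracket the sign change.

```python
import numpy as np

def crossing_latitude(lats, u):
    """Latitude where the zonal-mean zonal wind changes sign from
    easterly (u < 0) to westerly (u >= 0), found by linear
    interpolation between the two bracketing grid points."""
    for i in range(len(lats) - 1):
        if u[i] < 0 <= u[i + 1]:
            frac = -u[i] / (u[i + 1] - u[i])  # fraction of the way to the zero
            return lats[i] + frac * (lats[i + 1] - lats[i])
    return np.nan

# Synthetic stand-in for reanalysis zonal-mean u-wind (m/s) on a
# 2-degree grid in the NH subtropics: easterlies equatorward of the
# boundary, westerlies poleward, with the boundary drifting 0.02 deg/yr.
lats = np.arange(10.0, 50.0, 2.0)
years = np.arange(1979, 2000)
boundary = []
for t in range(len(years)):
    edge = 30.0 + 0.02 * t        # prescribed drifting Trade Wind boundary
    u = 0.5 * (lats - edge)       # negative (easterly) south of edge
    boundary.append(crossing_latitude(lats, u))

# Linear trend of the boundary latitude, converted to degrees per decade
slope_per_year = np.polyfit(years, boundary, 1)[0]
trend_per_decade = 10.0 * slope_per_year
print(f"trend: {trend_per_decade:+.2f} deg/decade")
```

    With real reanalysis winds the interpolated boundary is noisy from year to year, which is why the fitted trend carries the wide uncertainties quoted above.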

Comments are closed.