Nature Magazine’s Folie à Deux, Part Deux

Guest Post by Willis Eschenbach

Well, in my last post I thought that I had seen nature at its worst … Nature Magazine, that is. But now I’ve had a chance to look at the other paywalled Nature paper in the same issue, entitled Anthropogenic greenhouse gas contribution to flood risk in England and Wales in autumn 2000, by Pardeep Pall, Tolu Aina, Dáithí A. Stone, Peter A. Stott, Toru Nozawa, Arno G. J. Hilberts, Dag Lohmann and Myles R. Allen (hereinafter Pall2011). The supplementary information is available here, and contains many of the concepts of the paper. In the autumn of 2000, there was extreme rainfall in southwest England and Wales that led to widespread flooding. Pall2011 explores the question of the expected frequency of this type of event. They conclude (emphasis mine):

… in nine out of ten cases our model results indicate that twentieth century anthropogenic greenhouse gas emissions increased the risk of floods occurring in England and Wales in autumn 2000 by more than 20%, and in two out of three cases by more than 90%.

Figure 1. England in the image of Venice, Autumn 2000. Or maybe Wales. Picture reproduced for pictorial reasons only, if it is Wales, please, UKPersons, don’t bust me, I took enough flak for the New Orleans photo in Part 1. Photo Source

To start my analysis, I had to consider the “Qualitative Law of Scientific Authorship”, which states that as a general rule:

Q ≈ 1 / N^2

where Q is the quality of the scientific study, and N is the number of listed authors. More to the point, however, let’s begin instead with this. How much historical UK river flow data did they analyze to come to their conclusions about UK flood risk?

Unfortunately, the answer is, they didn’t analyze any historical river flow data at all.

You may think I’m kidding, or that this is some kind of trick question. Neither one. Here’s what they did.

They used a single seasonal resolution atmospheric climate computer model (HadAM3-N144) to generate some 2,268 single-years of synthetic autumn 2000 weather data. The observed April 2000 climate variables (temperature, pressure, etc) were used as the initial values input to the HadAM3-N144 model. The model was kicked off using those values as a starting point, and run over and over a couple thousand times. The authors of Pall2011 call this 2,268 modeled single years of computer-generated weather “data” the “A2000 climate”. I will refer to it as the A2000 synthetic climate, to avoid confusion with the real thing.
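
For readers who want a feel for what an initial-condition ensemble is, here is a toy sketch in Python. To be clear, this is my own invention, not Pall2011’s code: one fixed “April 2000” starting state, a tiny random kick per run, and 2,268 runs of a trivial stand-in “model” whose numbers are entirely made up.

```python
import random

def toy_season(initial_state, seed):
    """Toy stand-in for one HadAM3-style run: evolve a single 'weather'
    variable from a common April starting value, with a small random
    initial-condition perturbation, and return a synthetic autumn-rainfall
    total. Purely illustrative -- the procedure, not the physics."""
    rng = random.Random(seed)
    state = initial_state + rng.gauss(0.0, 0.1)   # tiny initial-condition kick
    for _ in range(180):                          # ~April-to-autumn, daily steps
        state = 0.9 * state + rng.gauss(0.0, 1.0) # toy persistence + noise
    return max(0.0, 50.0 + 10.0 * state)          # map state to a rainfall-like number

april_2000_state = 1.5                            # one fixed "observed" starting point
a2000_synthetic = [toy_season(april_2000_state, seed) for seed in range(2268)]
```

Same starting point every time, different perturbation every time — that, in caricature, is how one observed April yields 2,268 different synthetic autumns.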

The A2000 synthetic climate is a universe of a couple thousand single-year outcomes of one computer model (with a fixed set of internal parameter settings), so presumably the model space given those parameters is well explored … which means nothing about whether the actual variation in the real world is well explored by the model space. But I digress.

The 2,268 one-year climate model simulations of the A2000 autumn weather dataset were then fed into a second much simpler model, called a “precipitation runoff model” (P-R). The P-R model estimates the individual river runoff in SW England and Wales, given the gridcell scale precipitation.

In turn, this P-R model was calibrated using the output of a third climate model, the ERA-40 computer model reanalysis of the historical data. The ERA-40, like other models, outputs variables on a global grid. The authors have used multiple linear regression to calibrate the P-R model so it provides the best match between the river flow gauge data for the 11 UK rainfall catchments studied, and the ERA-40 computer reanalysis gridded data. How good is the match with reality? Dunno, they didn’t say …
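
For the curious, here is roughly what “calibrate by multiple linear regression” means, sketched in Python with invented numbers — the predictor columns stand in for ERA-40 gridcell precipitation, the target for gauged river flow. Again, this is an illustration of the technique, not their code.

```python
def fit_linear(X, y):
    """Ordinary least squares via the normal equations (X'X) b = (X'y),
    solved with plain Gaussian elimination. This is the generic multiple
    linear regression machinery; the data below are hypothetical."""
    n, p = len(X), len(X[0])
    A = [[sum(X[k][i] * X[k][j] for k in range(n)) for j in range(p)] for i in range(p)]
    b = [sum(X[k][i] * y[k] for k in range(n)) for i in range(p)]
    for col in range(p):                       # forward elimination with pivoting
        piv = max(range(col, p), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, p):
            f = A[r][col] / A[col][col]
            for c in range(col, p):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * p                           # back substitution
    for i in reversed(range(p)):
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, p))) / A[i][i]
    return coef

# Hypothetical example: river flow ~ intercept + two gridcell precip series
X = [[1.0, 2.0, 1.0], [1.0, 3.0, 2.0], [1.0, 5.0, 2.0], [1.0, 7.0, 3.0]]
y = [6.0, 9.0, 13.0, 18.0]
coef = fit_linear(X, y)   # recovers intercept 1, slopes 2 and 1
```

The point of showing this is that the fitted coefficients are only as good as the match between the predictors and reality — which is exactly the question Pall2011 leave unanswered.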

So down at the bottom there is some data. But they don’t analyze that data in any way at all. Instead, they just use it to set the parameters of the P-R model.

Summary to date:

•  Actual April 2000 data and actual patterns of surface temperatures, air pressure, and other variables are used repeatedly as the starting point for 2,268 one-year modeled weather runs. The result is called the A2000 synthetic climate. These 2,268 single years of synthetic weather are used as input to a second Precipitation-Runoff model. The P-R model is tuned to the closest match with the gridcell precipitation output of the ERA-40 climate reanalysis model. Using the A2000 weather data, the P-R model generates 2,268 years of synthetic river flow and flood data.

So that’s the first half of the game.

For the second half, they used the output of four general circulation models (GCMs). They used those four GCMs to generate what a synthetic world would have looked like if there were no 20th century anthropogenic forcing. Or in the words of Pall2011, each of the four models generated “a hypothetical scenario representing the “surface warming patterns” as they might have been had twentieth-century anthropogenic greenhouse gas emissions not occurred (A2000N).” Here is their description of the changes between A2000 and A2000N:

The A2000N scenario attempts to represent hypothetical autumn 2000 conditions in the [HadAM3-N144] model by altering the A2000 scenario as follows: greenhouse gas concentrations are reduced to year 1900 levels; SSTs are altered by subtracting estimated twentieth-century warming attributable to greenhouse gas emissions, accounting for uncertainty; and sea ice is altered correspondingly using a simple empirical SST–sea ice relationship determined from observed SST and sea ice.

Interesting choice of things to alter, worthy of some thought … fixed year 1900 greenhouse gases, cooler ocean, more sea ice, but no change in land temperatures … seems like that would end up with a warm UK embedded in a cooler ocean. And that seems like it would definitely affect the rainfall. But let us not be distracted by logical inconsistencies …

Then they used the original climate model (HadAM3-N144), initialized with those changes in starting conditions from the four GCMs, combined with the same initial perturbations used in A2000, to generate another couple thousand one-year simulations. In other words, same model, same kickoff date (I just realized the synthetic weather data starts on April Fools’ Day), different global starting conditions from the output of the four GCMs. The result is called the A2000N synthetic climate, although of course they omit the “synthetic”. I guess the N is for “no warming”.

These couple of thousand years of model output weather, the A2000N synthetic climate, then followed the path of the A2000 synthetic climate. They were fed into the second model, the P-R model that had been tuned using the ERA-40 reanalysis model. They emerged as a second set of river flow and flood predictions.

Summary to date:

•  Two datasets of computer generated 100% genuine simulated UK river flow and flood data have been created. Neither dataset is related to actual observational data, either by blood, marriage, or demonstrated propinquity, although to be fair one of the models had its dials set using a comparison of observational data with a third model’s results. One of these two datasets is described by the authors as “hypothetical” and the other as “realistic”.

Finally, of course, they compare the two datasets to conclude that humans are the cause:

The precise magnitude of the anthropogenic contribution remains uncertain, but in nine out of ten cases our model results indicate that twentieth century anthropogenic greenhouse gas emissions increased the risk of floods occurring in England and Wales in autumn 2000 by more than 20%, and in two out of three cases by more than 90%.

Summary to date:

•  The authors have conclusively shown that in a computer model of SW England and Wales, synthetic climate A is statistically more prone to synthetic floods than is synthetic climate B.
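
To be clear about what that kind of claim involves, here is a toy Python version of the risk-ratio-plus-bootstrap arithmetic. The flood probabilities (0.12 and 0.08) are invented stand-ins for the two synthetic climates; the point is the shape of the calculation, not the numbers.

```python
import random

rng = random.Random(42)

# Hypothetical stand-ins for the two synthetic-climate ensembles:
# per-run "flood" indicators (1 = modeled flood exceedance, 0 = not).
a2000  = [1 if rng.random() < 0.12 else 0 for _ in range(2268)]  # with GHG forcing
a2000n = [1 if rng.random() < 0.08 else 0 for _ in range(2268)]  # 1900-level GHG

def risk_ratio(a, b, rng):
    """One bootstrap draw of P(flood | A2000) / P(flood | A2000N)."""
    sa = [rng.choice(a) for _ in range(len(a))]
    sb = [rng.choice(b) for _ in range(len(b))]
    return (sum(sa) / len(sa)) / max(sum(sb) / len(sb), 1e-9)

ratios = sorted(risk_ratio(a2000, a2000n, rng) for _ in range(500))
frac_over_20pct = sum(r > 1.2 for r in ratios) / len(ratios)  # the "x out of ten cases" number
```

Note what the bootstrap spread measures: sampling variation within the model ensembles. It says nothing about whether either ensemble resembles the real climate.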

I’m not sure what I can say besides that, because they don’t say much besides that.

Yes, they show that their results are pretty consistent with this over here, and they generally agree with that over there, and by and large they’re not outside the bounds of these conditions, and that the authors estimated uncertainty by Monte Carlo bootstrapping and are satisfied with the results … but considering the uncertainties that they have not included, well, you can draw your own conclusions about whether the authors have established their case in a scientific sense. Let me just throw up a few of the questions raised by this analysis.

QUESTIONS FOR WHICH I HAVE ABSOLUTELY NO ANSWER

1.  How were the four GCMs chosen? How much uncertainty does this bring in? What would four other GCMs show?

2.  What are the total uncertainties when the averaged output of one computer model is used as the input to a second computer model, then the output of the second computer model is used as the input to a third simpler computer model, which has been calibrated against a separate climate reanalysis computer model?

3.  With over 2000 one-year realizations, we know that they are exploring the HadAM3-N144 model space for a given setting of the model parameters. But are the various models fully exploring the actual reality space? And if they are, does the distribution of their results match the distribution of real climate variations? That is an unstated assumption which must be verified for their “nine out of ten” results to be valid. Maybe nine out of ten model runs are unrealistic junk, maybe they’re unalloyed gold … although my money is on the former, the truth is there’s no way to tell at this point.

4.  Given the warnings in the source of the data (see below) that “seldom is it safe to allow the [river gauge] data series to speak for themselves”, what quality control was exercised on the river gauge data to ensure accuracy in the setting of the P-R modeled parameters? In general, flows have increased as more land is rendered impermeable (roads, parking lots, buildings) and as land has been cleared of native vegetation. This increases runoff for a given rainfall pattern, and thus introduces a trend of increasing flow in the results. I cannot tell if this is adjusted for in the analysis, despite the fact that the river gauge records are used to calibrate the P-R model.

5.  Since the P-R model is calibrated using the ERA-40 reanalysis results, how well does it replicate the actual river flows year by year, and how much uncertainty is there in the calculated result?

6.  Given an April 1 starting date for each of the years for which we have records, how well does the procedure outlined in this paper (start the HadAM3-N144 on April Fools Day to predict autumn rainfall) predict the measured 80 years or so of rainfall for which we have actual records?

7.  Given an April 1 starting date for each of the years for which we have records, how well does the procedure outlined in this paper (start the HadAM3-N144 on April Fools Day to predict river flows and floods) predict the measured river flows for the years and rivers for which we have actual records?

8.  In a casino game, four different computer model results are compared to reality. Since they predict different outcomes, if one is right, then three are wrong. All four may be wrong to a greater or lesser degree. Payoff on the bet is proportional to correlation of model to reality. What is the mathematical expectation of return on a $1 bet on one of the models in that casino … and what is the uncertainty of that return? Given that there are four models, will betting on the average of the models improve my odds? And how is that question different from the difficulties and the unknowns involved in estimating only this one part of the total uncertainty of this study, using only the information we’ve been given in the study?

9.  There are a total of six climate models involved, each of which has different gridcell sizes and coordinates. There are a variety of methods used to average from one gridcell scheme to another scheme with different gridcell sizes. What method was used, and what is the uncertainty introduced by that step?

10.  The study describes the use of one particular model to create the two sets of 2,000+ single years of synthetic weather … how different would the sets be if a different climate model were used?

11.  Given that the GCMs forecast different rainfall patterns than those of the ERA-40 reanalysis model, and given that the P-R model is calibrated to the ERA-40 model results, how much uncertainty is introduced by using those same ERA-40 calibration settings with the GCM results?

12.  Did they really start the A2000N simulations by cooling the ocean and not the land as they seem to say?

As you can see, there are lots of important questions left unanswered at this point.

Reading over this, there’s one thing that I’d like to clarify. I am not scornful of this study because it is wrong. I am scornful of this study because it is so very far from being science that there is no hope of determining if this study is wrong or not. They haven’t given us anywhere near the amount of information that is required to make even the most rough judgement as to the validity of their analysis.

BACK TO BORING OLD DATA …

As you know, I like facts. Robert Heinlein’s comment is apt:

What are the facts? Again and again and again-what are the facts? Shun wishful thinking, ignore divine revelation, forget what “the stars foretell,” avoid opinion, care not what the neighbors think, never mind the unguessable “verdict of history”–what are the facts, and to how many decimal places? You pilot always into an unknown future; facts are your single clue. Get the facts!

Because he wrote that in 1973, the only thing Heinlein left out was “beware computer model results.” Accordingly, I went to the river flow gauge data site referenced in Pall2011, which is here. I got as far as the part where it says (emphasis mine):

Appraisal of Long Hydrometric Series

… Data precision and consistency can be a major problem with many early hydrometric records. Over the twentieth century instrumentation and data acquisition facilities improved but these improvements can themselves introduce inhomogeneities into the time series – which may be compounded by changes (sometimes undocumented) in the location of the monitoring station or methods of data processing employed. In addition, man’s influence on river flow regimes and aquifer recharge patterns has become increasingly pervasive, over the last 50 years especially. The resulting changes to natural river flow regimes and groundwater level behaviour may be further affected by the less perceptible impacts of land use change; although these have been quantified in a number of important experimental catchments generally they defy easy quantification.

So like most long-term records of natural phenomena, this one also has its traps for the unwary. Indeed, the authors close out the section by saying:

It will be appreciated therefore that the recognition and interpretation of trends relies heavily on the availability of reference and spatial information to help distinguish the effects of climate variability from the impact of a range of other factors; seldom is it safe to allow the data series to speak for themselves.

Clearly, the authors of Pall2011 have taken that advice to heart, as they’ve hardly let the data say a single word … but on a more serious note, since this is the data they used regarding “climate variability” to calibrate the P-R model, did the Pall2011 folks follow the advice of the data curator? I see no evidence of that either way.

In any case, I could see that the river flow gauge data wouldn’t be much help to me. I was intrigued, however, by the implicit claim in the paper that extreme precipitation events were on the rise in the UK. I mean, they are saying that the changing climate will bring more floods, and the only way that can happen is if the UK has more extreme rains.

Fortunately, we do have another dataset of interest here. Unfortunately it is from the Hadley Centre again, this time the Hadley UK Precipitation dataset of Alexander and Jones, and yes, it is Phil Jones (HadUKP). Fortunately, the reference paper doesn’t show any egregious issues. Unfortunately but somewhat unavoidably, it uses a complex averaging system. Fortunately, the average results are not much different from a straight average on the scale of interest here. Unfortunately, there’s no audit trail so while averages may only be slightly changed, there’s no way to know exactly what was done to a particular extreme in a particular place and time.

In any case, it’s the best we have. It lists total daily rainfall by section of the UK, and one of these sections is South West England and Wales, which avoids the problems in averaging the sections into larger areas. Figure 2 shows the autumn maximum one-day rainfall for SW England and Wales, which was the area and time-frame Pall2011 studied regarding the autumn 2000 floods:

Figure 2. Maximum autumn 1-day rainfall, SW England and Wales, Sept-Oct-Nov. The small trend is obviously not statistically different from zero.

The extreme rainfall shown in this record is typical of records of extremes. In natural records, the extremes rarely have a normal (Gaussian or bell-shaped) distribution. Instead, these records typically contain a few extremely large values, even when we’re just looking at the extremes. The kind of extreme rainfall that led to the flooding of 2000 can be seen in Figure 2. I see this graph as a cautionary tale, in that if the record had started a year later, the one-day rainfall in 2000 would be by far the largest in the record.
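
For anyone who wants to check that kind of trend claim on their own data, here is a minimal Python sketch of the test, run on invented heavy-tailed “annual maxima” rather than the actual HadUKP numbers. The permutation test asks how often a shuffled (by construction trend-free) series produces a slope as large as the observed one.

```python
import random

def ols_slope(y):
    """Least-squares trend (units per year) of a series vs. 0..n-1."""
    n = len(y)
    xbar = (n - 1) / 2
    ybar = sum(y) / n
    sxx = sum((i - xbar) ** 2 for i in range(n))
    return sum((i - xbar) * (y[i] - ybar) for i in range(n)) / sxx

rng = random.Random(0)
# Hypothetical stand-in for ~70 years of autumn max 1-day rainfall (mm):
# heavy-tailed, no built-in trend, occasional very large values.
maxima = [20 + rng.expovariate(1 / 10) for _ in range(70)]

slope = ols_slope(maxima)

# Permutation test: fraction of shuffles with a slope at least this large.
count = 0
for _ in range(1000):
    shuffled = maxima[:]
    rng.shuffle(shuffled)
    if abs(ols_slope(shuffled)) >= abs(slope):
        count += 1
p_value = count / 1000
```

A large p-value here means the apparent trend is indistinguishable from chance — which is what the HadUKP autumn maxima in Figure 2 look like to me.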

In any case, for the 70 years of this record there is no indication of increasing flood risk from climate factors. Pall2011 has clearly shown that in two out of three cases, the chance of a synthetic autumn flood in a synthetic SW England and Wales went up by more than 90% in synthetic climate A, over the synthetic flood risk in synthetic climate B.

But according to the observational data, there’s no sign of any increase in autumn rainfall extremes in SW England and Wales, so it seems very unlikely they were talking about our SW England and Wales … gives new meaning to the string theory claim of multiple parallel universes, I guess.

IMPLICATIONS OF THE PUBLICATION OF THIS STUDY

It is very disturbing that Nature Magazine would publish this study. There is one and only one way in which this study might have stood the slightest chance of scientific respectability. This would have been if the authors had published the exact datasets and code used to produce all of their results. A written description of the procedures is pathetically inadequate for any analysis of the validity of their results.

At an absolute minimum, to have any hope of validity the study requires the electronic publication of the A2000 and A2000N climates in some accessible form, along with the results of simple tests of the models involved (e.g. computer predictions of autumn river flows, along with the actual river flows). In addition, the study needs an explanation of the ex-ante criteria used to select the four GCMs and the lead model, and the answers to the questions I pose above, to be anywhere near convincing as a scientific study. And even then, when people finally get a chance to look at the currently unavailable A2000 and A2000N synthetic climates, we may find that they bear no resemblance to any reality, hypothetical or otherwise …

As a result, I put the onus on Nature Magazine on this one. Given the ephemeral nature of the study, the reviewers should have asked the hard questions. Nature Editors, on the other hand, should have required that the authors post sufficient data and code so that other scientists can see if what they have done is correct, or if it would be correct if some errors were fixed, or if it is far from correct, or just what is going on.

Because at present, the best we can say of the study is a) we don’t have a clue if it’s true, and b) it is not falsifiable … and while that looks good in the “Journal of Irreproducible Results”, for a magazine like Nature that is ostensibly about peer-reviewed science, that’s not a good thing.

w.

PS – Please don’t construe this as a rant against computer models. I’ve been programming computers since 1963, longer than many readers have been around. I’m fluent in R, C, VBA, and Pascal, and I can read and write (slowly) in a half-dozen other computer languages. I use, have occasionally written, and understand the strengths, weaknesses, and limitations of a variety of computer models of real-world systems. I am well aware that “all models are wrong, and some models are useful”; that’s why I use them and study them and occasionally write them.

My point is that until you test, really test your model by comparing the output to reality in the most exacting tests you can imagine, you have nothing more than a complicated toy of unknown veracity. And even after extensive testing, models can still be wrong about the real world. That’s why Boeing still has test flights of new planes, despite using the best computer models that billion$ can buy, and despite the fact that modeling airflow around a plane is orders of magnitude simpler than modeling the global climate …

I and others have shown elsewhere (see my thread here, the comment here, and the graphic here) that the annual global mean temperature output of NASA’s pride and joy climate model, the GISS-E GCM, can be replicated to 98% accuracy by the simple one-line single-variable equation T(n) = [lambda * Forcings(n-1)/tau + T(n-1) ] exp(-1/tau) with T(n) being temperature at time n, and lambda and tau being constants of climate sensitivity and lag time …
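
For the curious, that one-line emulator is trivial to implement. Here is a Python sketch; the lambda and tau values and the forcing series below are illustrative placeholders, not the fitted GISS-E constants.

```python
import math

def emulate(forcings, lam, tau, t0=0.0):
    """Lagged one-line emulator of a GCM's global mean temperature:
    T(n) = [lam * F(n-1) / tau + T(n-1)] * exp(-1/tau),
    with lam a climate-sensitivity constant and tau a lag time in years."""
    temps = [t0]
    decay = math.exp(-1.0 / tau)
    for f in forcings[:-1]:
        temps.append((lam * f / tau + temps[-1]) * decay)
    return temps

# Illustrative: a linear forcing ramp produces a lagged, smoothed response.
forcings = [0.02 * n for n in range(100)]   # W/m^2 per year, hypothetical
temps = emulate(forcings, lam=0.3, tau=3.0)
```

Two constants and one recursion: if that reproduces the annual output of a multi-million-line GCM to 98%, the GCM is not adding much information at the global annual scale.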

Which, given the complexity of the climate, makes it very likely that the GISSE model is both wrong and not all that useful. And applying four GCMs of that kind to the problem of UK floods certainly doesn’t improve the accuracy of your results …

The problem is not computer models. The problem is Nature Magazine trying to pass off the end results of a long computer model daisy-chain of specifically selected, untested, unverified, un-investigated computer models as valid, falsifiable, peer-reviewed science. Call me crazy, but when your results represent the output of four computer models, which are fed into a fifth computer model, whose output goes to a sixth computer model, which is calibrated against a seventh computer model, and then your results are compared to a series of different results from the fifth computer model but run with different parameters, in order to demonstrate that flood risks have changed from increasing GHGs … well, when you do that, you need to do more than wave your hands to convince me that your flood risk results are not only a valid representation of reality, but are in fact a sufficiently accurate representation of reality to guide our future actions.


153 thoughts on “Nature Magazine’s Folie à Deux, Part Deux”

  1. The first time I recall computer modelling being presented as “proof” was a very long time ago. I no longer recall even what the topic was, but the question I asked was along the lines of “so what evidence do you have that your computer model reflects the real world?”

    I shall always remember two things: the answer, and the reaction in the room.

    Answer: oh, we ran the model several thousand times and we got the same answer every time.

    Reaction in the room: me and perhaps three or four other people laughing hysterically. The other 150 or so at the lecture … puzzled looks.

    30 years later I’m watching the climate debate and thinking … so those 150 morons got their degrees, I see …

  2. 12. Did they really start the A2000N simulations by cooling the ocean and not the land as they seem to say?
    —————–
    Do you never worry that your are out of your depth?

    The HadAM3 is an atmosphere model – that is it does not calculate what goes on in the ocean. Instead it has ocean surface temperatures prescribed as a boundary condition. The land temperatures are calculated by the model.

  3. The only thing you forgot to mention is that these computer modelers live in their own synthetic virtual world.

    Kind Regards

    Michael

  4. Bingo! Though it is particularly pervasive in climate science, it can be said of nearly all sciences that computers and modeling have replaced data collection and analysis. Shame.

  5. Well Willis, the thought occurred to me that the old saying, “science only changes when the old guard dies”, is a double-edged sword. When folk like you and Dr. Spencer and Christy, et al., pass from this mortal coil, then I believe all science will be done by computer modeling.

    Sad really.

  6. One of my favorite “prediction” papers is Mailhot et al. (2007) from the Journal of Hydrology (http://dx.doi.org/10.1016/j.jhydrol.2007.09.019).

    After 12 pages of simulating (sorry, estimating “the expected changes in”) future extreme rainfall intensities for southern Quebec, it adds that “Results obtained in this study remain model dependent since the use of the output of different global climate models (GCM) might bring very different results.”

  7. Willis – Please rest assured that Fig 1 is not a photoshop fantasy. It is a photo of the River Ouse in the City of York (North Yorkshire, UK) taken from Lendal Bridge close to the city centre. In the 21 years that I lived in the York area, before leaving for the Land of Nuclear Power Generation in 2005, this scenario was repeated on at least three occasions. The white Elizabethan-style building (under water on the left-hand side of the photo) is a well known pub which now has great difficulty getting insurance cover!

    All this water runs off the North Yorkshire Moors and has done for many hundreds of years.

    I look forward to your posts. Keep up the good work
    rgf

  8. Murray Grainger says:
    February 24, 2011 at 2:57 pm

    Also, typo ” Here is their description of the changes between A2000 and A200N:” missing a “0″

    Thanks, fixed.

    w.

  9. My guess is they were looking to make some red noise with their daisy chain. Because as we know, red noise generates a hockey stick.

  10. Sorry Willis, I just couldn’t finish this tale. Models constructed of model data? What a steaming pile that is!

    I suppose they had to push something (anything?) out the door to prop up the dying cause, but this one is really sad.

  11. You are right, Willis, to call this a travesty of science. The purpose of a scientific paper is to present an experiment that others can reproduce, offering a hypothesis to be verified. Without the code and details nobody can reproduce their results. But even if you did, it would be like running a program again with the same input. If you failed to reproduce the results of the experiment in that case, it would just mean that your hardware is broken!

    The purpose of this paper is not science, it is propaganda – pure and simple. I use models all the time to estimate probable future results based on known conditions. At no moment do I assume that the models know more than the modellers that created them. You cannot discover truth by running a model. You only “discover” the initial assumptions that generated the model. Cascading models as if they were observations and cooking the data only makes it worse. If a rounding error makes the result 1% wrong, you cannot make it right by running the same error 2,648 times.

    I guess this is obvious to programmers but completely unbelievable to neophytes. That must be why the ivory tower programmers with their abstruse models and statistical sleight of hand seem to dominate the Climate “Science” argument. Like we always used to say in school – if you can’t dazzle them with your brilliance, baffle them with your BS!

  12. The summary was printed in our local paper yesterday. AGW is a political campaign and the warmists have the upper hand when they can get press releases printed. Doesn’t matter whether the papers are right, wrong or indifferent. The prize goes to the team that can sway public opinion.

  13. Hi Willis,

    Interesting formula you have there for quality of scientific papers SciQual = 1/AuthNum^2.

    I can confirm a similar phenomenon exists for patents in large corporations too. It’s political. I’m a named inventor on four granted patents. In two of them I’m the sole inventor and those two I thought were innovative and valuable so I kept them to myself for personal aggrandizement. The other two – not so much. On those other two I named a couple of colleagues along with me (up to three people could share a patent with each getting the full financial incentive) in order to either repay a favor or have a future favor owed to me. I suspect it was more or less that way everywhere at all the big corporation patent mills and don’t see any reason why it wouldn’t apply to published papers from the halls of academia too.

  14. Let’s see.

    They throw a cubic dice 2000+ times and conclude the average is 3.5.

    They then throw a tetrahedral dice 2000+ times and conclude the average is 2.5

    Conclusion: Pythagoras causes global warming.

  15. So, Dick Telford, are you justifying this crap, or what? To me, it places Nature below the level of the National Enquirer. I’m starting to wonder if there aren’t any inquiring minds left out there in the world of government funded science.

  16. In re the PS: When I worked at MIT LL, there was a fellow scientist who used to resubmit his program with attached data twice, just to make sure the results were not a fluke. Just doing a model run again doesn’t reinforce the results. Arrgh!

  17. Richard Telford says:
    February 24, 2011 at 2:59 pm

    “Do you never worry that your are out of your depth?”

    You say that like climatologists writing computer programs aren’t out of their depth. Which of course they are, as any programming professional who saw the spaghetti code created by the East Anglia miscreants will tell you.

    So the way I see it is “When in Rome, do as the Romans do” and “People who live in glass houses shouldn’t throw stones”. Pretty much everyone in this CAGW brouhaha is out of their depth in one way or another.

    When in Rome, do as the Romans do is what I say.

  18. Is this what climate science has become? Fantasy results from fantasy worlds. I suppose that once you can justify cold as being a symptom of warming then anything is possible.

    If I’m not mistaken the first building on the left of your photo is The Kings Arms in York. Now all the authors needed to do had they wanted some real data was to go into The Kings Arms and order a pint. Fixed next to the bar is a floor to ceiling brass strip marked with flood levels going right back to the Civil War. The English one. IIRC 1640 was the year to beat.

    Of course these days any flooding of York is all about global warming, the other 350 years of flooding having been down to witchcraft or something.

  19. It rained all night the day I left,
    The weather it was dry;
    The sun so hot I froze to death;
    Susanna, don’t you cry.

  20. It would be more than interesting to see the full source code used in the paper; oh that is assuming they’ve lived up to their scientific obligation and made the source code available for review by peers and readers of their paper.

    “Because of the critical importance of methods, scientific papers must include a description of the procedures used to produce the data, sufficient to permit reviewers and readers of a scientific paper to evaluate not only the validity of the data but also the reliability of the methods used to derive those data. If this information is not available, other researchers may be less likely to accept the data and the conclusions drawn from them. They also may be unable to reproduce accurately the conditions under which the data were derived.” – US National Academy of Sciences (NAS), http://www.btc.iitb.ac.in/library/On_being_a_scientist.pdf

  21. Richard Telford says:
    February 24, 2011 at 2:59 pm

    12. Did they really start the A2000N simulations by cooling the ocean and not the land as they seem to say?

    —————–
    Do you never worry that your [sic] are out of your depth?

    The HadAM3 is an atmosphere model – that is it does not calculate what goes on in the ocean. Instead it has ocean surface temperatures prescribed as a boundary condition. The land temperatures are calculated by the model.

    Perhaps unlike you, Richard, I ask questions when I don’t know the answer. It’s an ugly habit, I’m aware of that, one that’s frowned on at RealClimate, but asking questions is the only way I know of to learn. Does asking questions mean someone is “out of their depth”? Generally not, on my planet. I get worried when people stop asking questions …

    I understand that the SSTs are prescribed and the land surface temperatures are calculated in the HadAGM3. What I’m referring to are the initial conditions input to the HadAM3 model for the kickoff of the A2000N simulations. Or as I said, the conditions that “start the A2000N simulations”. As I understand it, the only changes in the starting conditions are the SSTs, not the land temperature, but that’s not clear, which is why I asked …

    Now, given your certitude above, I’m sure that you can show us where the Pall2011 folks talk about setting the starting conditions for the HadAM3 runs, in particular the method they used to set the starting land temperatures, soil wetness, and other conditions. I couldn’t find it, but I’m aware that I might have missed it, which is why I asked. Once you provide that, we can move on to the other 11 questions …

    Again, this is why having access to the code and the data used is so vital. If I had that, there’d be no question of what the input climate variables were. Instead, we waste time with this.

    Next, speaking of depths that one is in or over, do you never worry that the count of your unrelenting personal attacks on me is inversely proportional to the depth of your actual belief in your scientific claims? Every moment you spend speculating on whether I’m out of my depth is time not spent explaining your view of the science … coincidence? You be the judge.

    Finally, whether you or I are out of our depth is immaterial. I say I don’t know how they initialized the land temperatures for the A2000N runs. It seems you are saying you know how they set them, but it’s not clear. If you know, you’ll let us know. Or someone else will. Or not.

    But what does “depth”, whatever that means, have to do with that process that we’re engaged in? Someone totally “out of their depth”, a rank beginner, may point me to the correct answer to any of my 12 questions above. Depth is meaningless.

    w.

  22. And they call this science?

    Extraordinary, what a worthless study, and no doubt paid for by the tax payer.

    When I read the summary of this report in the newspapers, I do not recall seeing it reported that this was simply the results of a computer model run (or worse still a model run based upon another model run). Quite frankly, any such report should make it clear that it is simply based upon computer models and the findings are therefore likely to be complete and utter bo**ocks.

  23. Thank you Willis. And soon coming to another journal somewhere near you, is a different author quoting this stuff as gospel.

  24. daniel says:
    February 24, 2011 at 4:06 pm
    Science fiction is definitively close to consensus climate science, and vice versa

    Science fiction is fiction based upon speculations about science within the laws of nature.

    Science fantasy is fiction based upon speculations about supernatural fantasies.

    Supernatural fantasies are often represented by simulacra.

    One form of simulacra are numerical models such as the climate models used by consensus Climate Science.

    Climate Science models which incorporate supernatural simulacra to represent climate are by definition pseudo-scientific fantasies akin to science fantasy fiction.

    Supernatural Climate Science models cannot by definition be natural science fiction.

  25. Anthony Watts says:
    February 24, 2011 at 3:24 pm

    Willis just for fun, what would the trend plot look like if you removed the two peak outliers?

    WUWT is nothing if not a full service blog. The trend has increased by a whacking great eight-tenths of a millimetre … per century … over the trend of the complete dataset. Still far, far from significant.

    w.

  26. It appears the authors have begun the scientific method. They formed a hypothesis and used a computer model to generate predictions. The next step is to test the predictions.

    Where they go astray is they seem to believe they can test the predictions with a computer model. This isn’t how it works in science or engineering. The model outputs are tested against reality. What they’re doing is just about the same as designing an aircraft on an engineering workstation and then plugging it into Microsoft Flight Simulator to test the design. If it flies as expected in MS Flight Simulator, they skip building an actual prototype, skip the hassle of using a test pilot to verify flight characteristics, and go straight into production, loading up the new planes with paying passengers for the first actual flight.

    That’s how absurd this climate prediction science really is… only worse because the aircraft is the entire globe and they’re loading it up with 7 billion paying passengers on its maiden flight. I’m one of the paying passengers and I not only don’t want to be a guinea pig in this grand scheme – I want the cost of my ticket refunded!

    I am not even reading the comments beforehand, excuse the rudeness, but what a load of bollocks. The floods in the north of England were due to the last Labour government cutting the budget for river and canal clearing. It was bugger all to do with CO2, and everything to do with incompetence.

  28. Ron Furner says:
    February 24, 2011 at 3:38 pm

    Willis – Please rest assured that Fig 1 is not a photoshop fantasy. It is a photo of the River Ouse in the City of York (North Yorkshire UK) taken from Lendal Bridge close to the city centre. In the 21 years that I lived in the York area, before leaving for the Land of Nuclear Power Generation in 2005, this scenario was repeated on at least three occasions. The white Elizabethan-style building (under water on the left-hand side of the photo) is a well-known pub which now has great difficulty getting insurance cover!

    All this water runs off the North Yorkshire Moors and has done for many hundreds of years.

    I look forward to your posts. Keep up the good work
    rgf

    I didn’t think it looked like it was photoshopped. It looked to me like a huge pile of heartbreak and loss. No surprise, as you point out, nature does that. I just wish that folks who are concerned about possible CO2 effects on the weather were as concerned about current weather effects on the poor … that’s the real problem. The rich, by and large, are not hurt by the weather. It is the poor who suffer, and have for centuries. Claiming to be concerned about the “climate refugees” that may be the result of 2050 weather, while looking away from the issue of people dying today of the current weather, is a loser in my book.

  29. Brilliant work, once again, Willis. Thanks so much for your tireless efforts to reveal the so-called “peer review” process for what it is, namely, a pal review process.

    What you describe in the modeler’s work reminds me of the sort of thing I have done when acclimating new technicians to a computer model.

  30. Another marvellous dissection! Though it is difficult to actually digest mush.

    I recall the first papers I read about models. It was actually referring to their use in bear biology but made the basic point. It was called ‘Models and Reality’ and the authors tried to emphasize how different the two could be, as a warning to the bear biologists who were eagerly starting to use them for various things. Unfortunately, they didn’t listen and now they are the basis of much of that research and are an integral part of the pseudoscience called Conservation Biology. Thus we have predictions of polar bear extinction, etc.

  31. “I guess this is obvious to programmers but completely unbelievable to neophytes. ”

    Agreed, I’ve been coding since ’73. Computers are useful tools, but anyone who thinks models are anything other than models has missed the plot.

    Stock market forecasting has fewer variables than climate forecasting. Like the weather, you can quite often predict where the market will be in two or three days, but you will also make mistakes, just like the weather forecast.

    Now try to predict the value of the DOW 50 years from now in constant dollars. Yes, the DOW will likely go up, but will it go up faster than inflation?

    The simple fact is that if climate models could really predict something meaningful about the future, they would use the models to predict something with $$ value other than scare stories to try and drum up more funding.

  32. Well I hope these long modeling sessions feature screen output with little round critters with big and voracious mouths that go “glom, glom” as they fall from the sky, and the modeler can control a modeled laser beam to blast them before they hit the ground and cause a 1000 year flood. Otherwise, it must be terribly boring to be a “scientist” these days.

    sarc?/

    I was in the business of economic modeling and forecasting for a long time. The results mean little. Only the assumptions count. These folk are, simply put, foolish.

  33. Claiming to be concerned about the “climate refugees” that may be the result of 2050 weather, while looking away from the issue of people dying today of the current weather, is a loser in my book.
    ==============================
    my sentiments too

    The money that has been selfishly drained for this “science” when we have real problems that not one single person talks about any more……….

  34. Hi Willis,
    Thank you for this very illuminating post.
    My first reaction was to look at the calendar.
    NO – it’s not April 1st just yet.
    (I had thought initially that you were just having a joke at our expense).
    (It’s behind a pay wall, you tell us, so we can’t just check up on you).

    My next reaction was that perhaps it was Nature that had intended this as a joke and had just released it too early by mistake.

    My next reaction (quick thinker that I am),
    Is to suggest that you and I should write up a request for funding.
    The topic should be – well, something really unbelievable about the climate,
    rather similar in fact to the various imaginative theories about the universe
    that astronomers dream up while waiting to observe on cloudy nights.

    We should ask for mullions and mullions of dollars of funding – the more expensive, the more impressive it would be.
    (My wife could lease us some time on her clapped-out laptop at say $1 million per hour,
    so our expenses would be quite legitimate and large.)
    (Don’t worry, she could slip you say, half on the side, for technical type advice on the more difficult issues such as how to switch the darn thing on and off).

    But more seriously, we know that Science is about to rebadge itself as “New Science Fiction for Kiddies”.
    But I’m stumped – what should we now call Nature?
    I Know – Nature Tricked – mmmm perhaps not.

  35. Willis,

    I’ll bet if you put the year 2000 “flyer” back into the data you can make the increase per century a bit higher (say by 0.1 mm or so) ………..

    It’s more than a bit scary to think that the Nature paper will end up being cited elsewhere. I almost wish we were back in the punch card days, as no one in their right mind would have spent the time to generate such useless information. I hate to think how much CO2 was generated to crunch all the bits of ? (I was going to say data, but it sounds like there was very little measured information used in the study).

    PS Thank you for spending the time to review the article and for providing insightful comments on the study design.

  36. 3×2 says:
    February 24, 2011 at 4:37 pm

    … If I’m not mistaken the first building on the left of your photo is The Kings Arms in York. Now all the authors needed to do had they wanted some real data was to go into The Kings Arms and order a pint. Fixed next to the bar is a floor to ceiling brass strip marked with flood levels going right back to the Civil War. The English one. IIRC 1640 was the year to beat.

    Of course these days any flooding of York is all about global warming, the other 350 years of flooding having been down to witchcraft or something.

    There’s a good discussion of the use of this kind of flood data here, from the same source as the river gauge data used in the study.

    There are some problems with the King’s Arms claims. First, the flood board doesn’t say what you say it says. And it looks like a modern addition. There may be another, older record there.

    The other problem is that the King’s Arms started out as an inn in a rural country trading center. Now, as befits its age, it is right next to the river in the middle of what has become a good-sized city. There’s the Ouse Bridge (built in 1566, demolished in 1810, rebuilt in 1821) within 50 feet of it, and two other bridges within 1,500 feet. As late as 1800 there were only about 20,000 people in York (and nothing was paved), as opposed to 190,000 now with huge paved areas. What has all of this development done to the nature of the floods since the pub was built in the 1500s?

    w.

  37. Nice review Willis.

    Climate model results compared to climate model results.

    And what could possibly go wrong with that?

    —————

    The most shocking thing about this is that all the pro-AGW researchers said this result was very robust.

    I mean really. Why not look at some actual data on rainfall and river flow patterns before one concludes that global warming has anything to do with it?

    If you look in depth at any pro-AGW paper, you will find this exact same pattern. No consideration of reality – just some CO2-based climate model results. How’s that for looking at actual patterns.

    This science has gone so far off the rails that I don’t know how it can be corrected. Some defunding perhaps.

  38. “It is very disturbing that Nature Magazine would publish this study. There is one and only one way in which this study might have stood the slightest chance of scientific respectability. This would have been if the authors had published the exact datasets and code used to produce all of their results.”

    Willis, Anthony, others–have you noticed that as of Feb 11, 2011, Science magazine will now require that authors provide the computer code and make the data available in a Web-based archive? See Science v331, p. 649.

    Seems like a response to the skeptics, to me.

    Wouldn’t it be a good idea to show that the model is capable of predicting a good year and a bad year? Or a decade with a lot of rain, and a decade without much rain. There is a lot of ‘the past’ to reproduce. If they could reproduce it, everyone would be suitably impressed and people might believe the predictions of the future.

    This is what happens in every other area of science. If your models cannot explain what you already know to be the case, then why bother?

    In climate science the approach seems to be – the model cannot be wrong, so:

    a. no need to test the model
    b. the past must be wrong in some cases – e.g. no MWP
    c. disingenuousness is justified

  40. I love the writing of Willis, and I completely read his articles.
    Whereas the post-normal science guy never talks science, data, or facts, and I am unable to finish more than a few sentences of his articles.

  41. This sounds just like what happened when the small, local bank computerized around 40 years ago. The end of the month statement showed an error. I took it to them to straighten the mess out. All I heard was “But, computers don’t make mistakes.” I had paper copies of all deposits to prove my point. I had to talk with the vice-president in charge of the local branch to get it all sorted out. Total, blind faith in ANYTHING the computer prints out has no place in any activity.

  42. Still waiting for Richard Telford’s response… yep…
    … still waiting… still.

    And now waiting some more.

    Methinks you have given this foolishness way more time, energy and space than it deserves. I know I have accused others, never myself of course, of masturbating with their data. This takes that to heights I have never dreamed of before. If we were talking about sex, not climate, it would be the highest form of pornography. I am at a loss for something clever to say about this mess. I’m even at a loss to say things less than clever.

  44. Incredible work, Willis. Thank you, once again.

    For years now, I’ve been trying to admonish my Warmista friends and foes not to mistake “correlation” for “causation”. I may have had a stroke of genius (or maybe just a stroke?) just now, when I stumbled on the idea of a possible analogy to precisely prove my point, and I believe I may have found it (at a US government site, no less):

    Cheese consumption causes Global Warming!

    Eerily familiar shape, no?

  45. Seems apropos for this study:
    The difference between theory and practice is greater in practice than in theory.

  46. I’d be more impressed if the UK government started to buy back properties based on the predictions. Clearly no one really believes this “science”.

  47. davidmhoffer says:
    February 24, 2011 at 2:51 pm
    David, that’s exactly what financial modellers were saying about investment products in 2007 – “we’ve run the model several thousand times and the results are very robust” – and the investments all went down the plughole.
    PS guys, please excuse the ignorance, but I can’t tell how, even if everything the authors said was correct, that would prove that human-produced CO2 was to blame. Is this just another one of those models that is programmed to assume “positive feedbacks” etc – ie to assume the correlation between AG and W, rather than demonstrate it?

  48. Lance Wallace says:
    February 24, 2011 at 6:37 pm

    “It is very disturbing that Nature Magazine would publish this study. There is one and only one way in which this study might have stood the slightest chance of scientific respectability. This would have been if the authors had published the exact datasets and code used to produce all of their results.”

    Willis, Anthony, others–have you noticed that as of Feb 11, 2011, Science magazine will now require that authors provide the computer code and make the data available in a Web-based archive? See Science v331, p. 649.

    Seems like a response to the skeptics, to me.

    I hadn’t seen that, Lance. It says in part (formatting mine):

    Science’s policy for some time has been that “all data necessary to understand, assess, and extend the conclusions of the manuscript must be available to any reader of Science” (see http://www.sciencemag.org/site/feature/contribinfo/). Besides prohibiting references to data in unpublished papers (including those described as “in press”), we have encouraged authors to comply in one of two ways: either by depositing data in public databases that are reliably supported and likely to be maintained or, when such a database is not available, by including their data in the SOM. However, online supplements have too often become unwieldy, and journals are not equipped to curate huge data sets. For very large databases without a plausible home, we have therefore required authors to enter into an archiving agreement, in which the author commits to archive the data on an institutional Web site, with a copy of the data held at Science. But such agreements are only a stopgap solution; more support for permanent, community-maintained archives is badly needed.

    To address the growing complexity of data and analyses, Science is extending our data access requirement listed above to include computer codes involved in the creation or analysis of data.

    To provide credit and reveal data sources more clearly, we will ask authors to produce a single list that combines references from the main paper and the SOM (this complete list will be available in the online version of the paper).

    And to improve the SOM, we will provide a template to constrain its content to methods and data descriptions, as an aid to reviewers and readers.

    We will also ask authors to provide a specific statement regarding the availability and curation of data as part of their acknowledgements, requesting that reviewers consider this a responsibility of the authors.

    We recognize that exceptions may be needed to these general requirements; for example, to preserve the privacy of individuals, or in some cases when data or materials are obtained from third parties, and/or for security reasons. But we expect these exceptions to be rare.

    Many thanks for the heads-up. I consider that a huge win for both good science and for those like Steve McIntyre and Judith Curry and others who have been pushing for good science practices for some time now.

    w.

  49. Willis,

    Thanks for your work. Agree with everything except for this:

    “Again, this is why having access to the code and the data used is so vital.”

    I don’t think I would want to look at the code unless they demonstrated that the model actually reproduced some observations. Until then I think it would be a waste of time. You seem almost to be saying that you might be able to verify the model by reading the code, which I’m sure you don’t mean.

  50. mike g said

    “To me, it places Nature below the level of the National Enquirer. I’m starting to wonder if there aren’t any inquiring minds left out there in the world of government funded science.”

    You’re exactly right. More and more legitimate scientists, who actually don’t mind having their work analyzed and critiqued, are turning to smaller journals, knowing that the larger journals are basically gutter fiction. Look at Scientific American, and their personal attacks on anyone who disagrees with them. ( Especially Bjorn Lomborg.)

    A friend of mine who has been a geologist for 20 years (and is actually a government scientist, no less) has submitted many different studies and papers in his time, and he informed me that he can’t think of anyone in his field who would trust Nature to do a thorough analysis.

    Sounds like if Gavin of Faux Climate is tired of E&E, there’s another rag out there which will gladly publish his “work” without bothering to read it.

    The observed increase in atmospheric CO2 concentration of 100 ppm has produced an increase in downward ‘clear sky’ long wave IR (LWIR) radiation of 1.7 W.m-2 over the last 200 years. Conservation of energy indicates that this increase in flux can only produce a maximum increase in water evaporation of 0.065 mm per day per square meter. That’s right, 65 microns, or less than the width of a human hair.
    The penetration depth of LWIR radiation into the ocean is also less than 100 micron, so it is impossible for a 100 ppm increase in atmospheric CO2 concentration to have any effect on ocean temperatures or rainfall extremes.

    Nature has chosen to ignore the basic laws of physics and publish climate astrology.
    The journal should be read ‘for entertainment only’.

  53. Willis, thanks for reading this paper and summing it up for the rest of us. For me, reading modeling papers has all the joy of chewing on aluminum foil. I avoid modeling lectures and seminars, too; I either get annoyed or sleepy (or both) at those.
    Your chart of maximum daily rainfalls is great, and makes this whole post rewarding.

    Hmm, it would make sense that if the models can produce such certainty, they should be able to look at current conditions and predict the winters all around the world well ahead of time. Apparently they need to talk to the MET and CRU before they (the MET) make their predictions.

    It is curious to see 1930 or ’31 with such a large one-day rainfall. What else was happening around the world at that time…
    1930 May 13th Farmer killed by hail in Lubbock, Texas, USA; this is the only known US fatality due to hail.
    1930 June 13th 22 people killed by hailstones in Siatista, Greece.

    1930 Sept 3rd Hurricane kills 2,000, injures 4,000 (Dominican Republic).
    1930s Sweden The warmest decade was the 1930s, after which a strong cooling trend occurred until the 1970s INTERNATIONAL JOURNAL OF CLIMATOLOGY http://onlinelibrary.wiley.com/doi/10.1002/joc.946/abstract
    1930s Russian heat wave: the decade was 0.2 degrees below the 2000 to 2010 heat wave.
    1930 set 3 all-time HIGHEST state temperatures (Delaware 110F Jul. 21, Kentucky 114 Jul. 28, Tennessee 113 Aug. 9) and one all-time LOWEST state record (Oklahoma -27 Jan. 18). About 400% more than a statistical average.
    1931 set two highest State temp ever, FL, 109 Jun. 29, and HI, 109 Jun. 29
    1931 Europe LOWEST temp ever in all of Europe −58.1 °C (−72.6°F)
    1931 The 20th century’s worst water-related disaster was the Central China flooding of 1931, inundating 70,000 square miles and killing 3.5-4 million people.
    1931 July Western Russia heat wave: 6 degrees F monthly anomaly above normal, 2nd warmest in a 130-year record. The decade of 1930 to 1940 was within 0.2 degrees of 2000 to 2010 for western Russia in July.
    1931 Sept 10th The worst hurricane in Belize Central America history kills 1,500 people.
    This result does not include most of Europe, any of South America, or Africa, or really, a detailed search of most of the world except the USA.

  55. Roy Clark says:
    February 24, 2011 at 8:55 pm

    The observed increase in atmospheric CO2 concentration of 100 ppm has produced an increase in downward ‘clear sky’ long wave IR (LWIR) radiation of 1.7 W.m-2 over the last 200 years. Conservation of energy indicates that this increase in flux can only produce a maximum increase in water evaporation of 0.065 mm per day per square meter. That’s right, 65 microns, or less than the width of a human hair.

    I looked at that, and my bad number detector went off. I didn’t have a clue what the real answer was, it just seemed that your answer of 65 microns was way too small. So I ran the numbers. Here’s what I get, check my figures.

    Latent heat of vaporization at 15°C ≈ 2,465 kJ needed to evaporate one kilo

    Since one joule is one watt-second,

    1.7 W/m2 = 1.7 J/m2-sec times 3.16E7 sec/year = 53,720 kJ/yr per square metre from a forcing of 1.7 W/m2

    53,720 kJ/m2-yr divided by 2,465 kJ/kilo = 21.8 kilos of water evaporated per square metre per year.

    Now, 1 mm of water over 1 square metre is one kg. So that means the evaporation would be on the order of 22 mm per square metre per year.

    Of course, that is the maximum, if all the 1.7 w/m2 goes into evaporation.

    Another way to estimate the number is like this. The total downwelling radiation (solar + IR) at the surface is about half a kilowatt. This evaporates an average on the order of one metre of water per year over the global surface. This means each W/m2 is evaporating on the order of 2 mm of water. That would give us 3.4 mm evaporated for 1.7 W increase … but the number is likely larger because both wind and Clausius-Clapeyron evaporation rates go up quickly at tropical SST temperatures.

    If my numbers are wrong, please let me know.
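    For anyone who wants to run the numbers themselves, the back-of-envelope arithmetic above fits in a few lines of Python (a sketch only; the round figures for latent heat, seconds per year, and the 1 mm = 1 kg/m2 conversion are the same ones used above):

    ```python
    # Sanity check of the evaporation arithmetic above.
    # Assumptions: latent heat of vaporization ~2,465 kJ/kg at 15 C,
    # ~3.16e7 seconds per year, and 1 mm of water over 1 m^2 = 1 kg.

    FORCING = 1.7                  # W/m^2, i.e. J/sec per square metre
    SECONDS_PER_YEAR = 3.16e7
    LATENT_HEAT = 2465.0           # kJ per kg of water evaporated

    energy_kj = FORCING * SECONDS_PER_YEAR / 1000.0   # kJ/m^2 per year
    kg_per_m2 = energy_kj / LATENT_HEAT               # kg/m^2/yr

    # Since 1 kg over 1 m^2 is 1 mm of depth, kg/m^2/yr is also mm/yr.
    print(f"{energy_kj:,.0f} kJ/m2/yr")   # ~53,720
    print(f"{kg_per_m2:.1f} mm/yr")       # ~21.8
    ```

    That is, the 1.7 W/m2 forcing could evaporate on the order of 22 mm of water per square metre per year at most, which is the figure used in the reply above.
    
    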

    The penetration depth of LWIR radiation into the ocean is also less than 100 micron, so it is impossible for a 100 ppm increase in atmospheric CO2 concentration to have any effect on ocean temperatures or rainfall extremes.

    Regardless of the penetration depth, the IR radiation is in fact absorbed by the ocean. Because of the constant motion of the surface due to wind and wave, some portion of that energy is entrained into the mixed layer. So while more absorbed IR energy is likely to be re-radiated or evaporated quickly compared to solar energy which is absorbed at depth, it is only able to be re-radiated or evaporated because it has been absorbed by the ocean, and by the “basic laws of physics”, in general that warms the ocean.

    In addition, since absorbed IR energy is more likely to be quickly evaporated away than solar energy, how would that not affect the rainfall?

    Nature has chosen to ignore the basic laws of physics and publish climate astrology.
    The journal should be read ‘for entertainment only’.

    If only it were entertaining, I’d agree with you … reading it just gives me a headache.

    w.

  56. davidc says:
    February 24, 2011 at 8:20 pm

    Willis,

    Thanks for your work. Agree with everything except for this:

    “Again, this is why having access to the code and the data used is so vital.”

    I don’t think I would want to look at the code unless they demonstrated that the model actually reproduced some observations. Until then I think it would be a waste of time. You seem almost to be saying that you might be able to verify the model by reading the code, which I’m sure you don’t mean.

    Thanks, DavidC. Whether you or I would want to look at the code is immaterial, I have enough trouble reading my own code. The code should be published so that when people have questions that only the code can answer, it is available for their inspection to answer the question.

    Note that I’m not referring to the model code, that is done by someone outside the current study. I’m talking about the code and data for what they have done to stitch together their Frankenstein creation.

    w.

  57. The Daily Telegraph reported this ‘study’ back on the 16th.

    http://www.telegraph.co.uk/earth/earthnews/8328705/Floods-caused-by-climate-change.html

    I can only repeat now what I said then.

    If climate change has made floods such as we saw in 2000 approximately twice as likely, then we should be able to discern direct evidence of this. A careful study of the frequency and severity of flooding in the UK in recent years, compared to similar periods in the past, should show a significant change.
    This is what research means. Research means learning from nature. When significant results are found in this way then theories can be formed and tested by further research.
    In climate science this scientific process has increasingly been turned on its head. People tweak computer models, as apparently in this case, to try to support their chosen theory, and call that research. Computer models are not research. They are at best analysis of data and predictions from theory – at worst mere speculation.
    If the authors of this paper can present evidence that flooding is now more frequent or more severe, then they may offer theories to account for this fact which can be tested against future observations. If they can’t, then their computer modelling is just idle speculation. It is in no sense scientific research.

  58. I wonder where they got the sea-ice conditions in 1900 to start the models? The data that far back is very scrappy. There is reasonably good coverage in summer in the North Atlantic sector but practically nothing anywhere else.

    The referenced Nature article is beyond bizarre: intellectual bankruptcy at its worst.

    “Robust,” eh? We all know what that word means.

    “To start my analysis, I had to consider the “Qualitative Law of Scientific Authorship”, which states that as a general rule: Q ≈ 1 / N^2.”

    I think you forgot to multiply by the square root of minus one to adjust for Climatological studies. Thus Q = i/N², since we’re dealing with imaginary science by pixillated scientists.

  60. Presumably, the following comment from Lance Wallace means that Nature is now going to cease publishing any more climate scare stories, as the purveyors of this ‘science’ are never prepared to disclose their code and data.

    “Willis, Anthony, others–have you noticed that as of Feb 11, 2011, Science magazine will now require that authors provide the computer code and make the data available in a Web-based archive? See Science v331, p. 649.”

  61. The picture is of the River Ouse in the centre of York (in Yorkshire, north of England).

    As a child I used to visit there often in the ’60s (excellent museums, including the Rail Museum – essential visiting for small boys).

    We used to marvel at the marks painted on the side of the buildings showing the heights of various floods over the years. Heavy rain on the east side of the Pennines (hill ridge down the spine of England) leads to flooding in York. Has done for hundreds of years.

  62. Willis – Thanks for the admirable demonstration of your research methods. I was only one bridge out but ‘trying to help’. Having lived and worked in many countries throughout the globe and having been involved in some instances of natural disasters, I can understand your considered reaction to the photo.

    Bonne journée (“have a good day”)

  63. Well if Peter Stott is involved, they’ve probably had access to the Hadley Centre super computer – we already know how accurate that can be.
    More Met Office propaganda.

  64. Noticed Stott, he of the Met Office, the model man. I think he is the guardian of the model. I wonder why they buried his name in the list of idiots.

  65. Mark Nutley says:
    February 24, 2011 at 5:09 pm
    I am not even reading the comments beforehand, excuse the rudeness. But what a load of bollocks. The floods in the north of England were due to the last Labour government cutting the budget for river and canal clearing; it was bugger all to do with CO2 and everything to do with incompetence.
    //////////////////////////////////////////////////////////
    Mark is right.

    It is land use, and in particular mismanagement, not ‘climate change/disruption’, which has exacerbated flooding. We now build on flood plains and then appear dumbfounded when, every 15 or so years, there is a significant flood in that area. We tarmac over land which would in the past have acted as a natural drain/soak but now causes large run-offs. Our drains and sewers have not been updated to cope with the extra houses etc.
    Water is a valuable commodity. Increased precipitation would be a good thing, but it has to be properly managed, and when properly managed there are no significant problems.

  66. The real reason flooding in the UK has increased is that the Environment Agency has decided not to dredge rivers. Rivers naturally silt up and flood onto the flood plain, named for this very reason. Dredging would reduce or even remove this risk. Stopping dredging saves money and probably helps persuade people that climate change is due to human input. It also means that homes built on flood plains can no longer get any insurance covering flood risk, which was obtainable a few years ago.

  67. Willis Eschenbach at February 24, 2011 at 11:13 pm commenting upon a comment made by Roy Clark at February 24, 2011 at 8:55 pm

    Clark: “The penetration depth of LWIR radiation into the ocean is also less than 100 micron, so it is impossible for a 100 ppm increase in atmospheric CO2 concentration to have any effect on ocean temperatures or rainfall extremes.”

    Willis: “Regardless of the penetration depth, the IR radiation is in fact absorbed by the ocean. Because of the constant motion of the surface due to wind and wave, some portion of that energy is entrained into the mixed layer…”
    //////////////////////////////////////////////////////////
    My understanding is that 90% of the LWIR penetrates no more than 10 microns. The remaining 10% may penetrate up to a further 10 or so microns, but for all practical purposes the LWIR is fully absorbed within about 15 microns.

    Willis, I consider that your rebuttal comment to be pure speculation. Where is the empirical data proving your assertion? What experiments have been carried out substantiating this vital point?

    This is a vital point (and in my view one of the main failings of the AGW conjecture/hypothesis), since if this LWIR cannot heat the oceans, there can be no AGW (given that the oceans represent about 70% of the surface area of the Earth and given the substantial difference between the latent heat energy of air and water; ignoring the heat capacity below the mantle/crust, the oceans store probably about 99.9% of the heat energy of the Earth).

    It is extremely difficult to see any mechanism whereby this energy can become entrained in the ocean. It is probable that this energy simply evaporates the top 10 to 15 microns of the ocean and if anything has a cooling effect.

    Willis, I await your data and its sources.
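The depth figures quoted in this exchange can be sanity-checked with a one-line Beer–Lambert sketch. The absorption coefficient below is a hypothetical value, chosen purely so that 90% of the LWIR is absorbed within 10 microns (the figure quoted above); it is not a measured property of seawater:

```python
import math

# Beer-Lambert: fraction of LWIR absorbed above depth z is 1 - exp(-alpha * z).
# alpha is a HYPOTHETICAL value chosen so that 90% is absorbed within 10 microns,
# matching the figure quoted in the comment above.
alpha = math.log(10) / 10e-6  # m^-1, since 1 - exp(-alpha * 10 um) = 0.9

def absorbed_fraction(depth_m: float) -> float:
    """Cumulative fraction of incident LWIR absorbed above depth_m."""
    return 1.0 - math.exp(-alpha * depth_m)

for microns in (1, 5, 10, 15, 20):
    print(f"{microns:3d} um: {absorbed_fraction(microns * 1e-6):.1%} absorbed")
```

Under that single assumption, about 21% is absorbed within 1 micron, 90% within 10 microns and about 97% within 15 microns, which is at least internally consistent with the figures the commenter quotes.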

  68. richard verney says:
    February 25, 2011 at 1:49 am (Edit)
    Willis Eschenbach at February 24, 2011 at 11:13 pm commenting upon a comment made by Roy Clark at February 24, 2011 at 8:55 pm

    Clark: “The penetration depth of LWIR radiation into the ocean is also less than 100 micron, so it is impossible for a 100 ppm increase in atmospheric CO2 concentration to have any effect on ocean temperatures or rainfall extremes.”

    Willis: “Regardless of the penetration depth, the IR radiation is in fact absorbed by the ocean. Because of the constant motion of the surface due to wind and wave, some portion of that energy is entrained into the mixed layer…”
    //////////////////////////////////////////////////////////
    My understanding is that 90% of the LWIR penetrates no more than 10 microns. The remaining 10% may penetrate up to a further 10 or so microns, but for all practical purposes the LWIR is fully absorbed within about 15 microns.

    Willis, I consider that your rebuttal comment to be pure speculation. Where is the empirical data proving your assertion? What experiments have been carried out substantiating this vital point?

    Well, there is RealClimate’s write-up on Minnett’s theory and the experiment carried out using the AERI pyrgeometer.

    lol.

    But the real point is that the amount of energy from back radiation mixed down when the wind ruffles the ocean surface is negligible compared to the extra cooling effect caused by that same wind breaking the surface up and permitting additional convection and radiation of heat from the ocean to the air.

    The greenhouse effect doesn’t work by the direct warming of the ocean by back radiation. It works (to whatever extent it does) by thickening the atmosphere and causing the ocean to cool at a slightly slower rate relative to the insolation which actually does warm it.

  69. As a Yorkshire resident who once lived in a flat in the building on the bottom left of this picture, and as a regular frequenter of the pub on King’s Staith, I can assure you all that flooding occurs there almost every year and has done so for a very long time.
    The pub itself is fitted out for a quick evacuation when the waters rise. The beer cellar is upstairs. All the electric points are also up above. The floors and benches are stone flagged. The soft furnishings quickly moveable.
    I suppose the lesson to be learned is that, rather than wasting money trying to stop the floods, it is perhaps wiser to accept that they happen and adjust accordingly.

  70. Thank you – a masterclass in pure logic.

    When I saw this study reported on the BBC I nearly threw a brick through the telly. The problem is that the “bloke down the pub” may think that if all of these clever people used thousands of computers to show this it must be true. I asked one bloke “if all of these conditions applied in 2000, why did we not have severe floods in 1999 and 2001 as well?” I am still waiting for an answer. This was the worst example of “cherry picking in hindsight” I have ever seen!

    I think that computer modelling is a valuable tool in science and engineering, but only with proper safeguards. In the 1980s we were told that computational fluid dynamics (CFD) would make wind tunnels and flight test a thing of the past – but they were wrong (for one thing, CFD cannot model turbulence, or chaos). However, when CFD codes are validated with wind tunnel and flight test data they are extremely valuable. The key word, IMHO, is VALIDATED!

  71. We’ve seen a lot of these studies coming through …
    Steve M talks constantly of watching the pea under the thimble.

    We are watching the construction of AR5 in real time……. it’s like the predictable script of a soap opera.

    This stuff isn’t science, it’s a script being written before our eyes. Nature magazine is just a part of the production crew.

    Wouldn’t it be great to use those supercomputers for something useful and constructive………

    What a waste

  72. Willis, another in your series of beautifully-crafted scientific deconstructions.
    This paper, of course, was promoted in the UK Guardian by George Monbiot as further irrefutable proof of CAGW and fiercely defended by his usual team of aggressive believers. The Daily Telegraph also featured a similar piece by Louise Gray, but that was pulled within twenty-four hours without explanation.
    When I read the abstract of this paper, I found it impossible to believe that multiple model runs constitute any kind of ‘evidence’. As I read it, the countryman in me came to the fore and I pulled out my mental checklist of the causes of flooding and NONE of those causes featured in the paper.
    These are:
    1 Care and upkeep of all waterways, down to and including roadside drains
    2 Additional buildings and roadways for new subdivisions, etc, on floodplains
    3 Unusually heavy rains over a short period of time
    Growing up and spending most of my life in a mountainous country with high rainfall in most areas (average rainfall is over one inch per day in the Fiordland area), one tends to look at how well kept waterways are and how much of the area of historic floodplains is conserved for its natural and essential purpose. I have observed over almost a decade in the UK that central and regional government see ‘the countryside’ as a picturesque irrelevance which is largely ignored; regular and sensible maintenance of it is avoided, a practice which tends to store up perils.
    The UK will always be prone to severe flooding while the locals persist in building in ancient watercourses, covering floodplains with concrete, bricks and tarmac, and ignoring the need to keep every watercourse, no matter how insignificant, free of impediment to flow.

  73. richard verney says:
    February 25, 2011 at 1:49 am

    Willis Eschenbach at February 24, 2011 at 11:13 pm commenting upon a comment made by Roy Clark at February 24, 2011 at 8:55 pm

    Clark: “The penetration depth of LWIR radiation into the ocean is also less than 100 micron, so it is impossible for a 100 ppm increase in atmospheric CO2 concentration to have any effect on ocean temperatures or rainfall extremes.”

    Willis:

    “Regardless of the penetration depth, the IR radiation is in fact absorbed by the ocean. Because of the constant motion of the surface due to wind and wave, some portion of that energy is entrained into the mixed layer…”

    //////////////////////////////////////////////////////////
    My understanding is that 90% of the LWIR penetrates no more than 10 microns. The remaining 10% may penetrate up to a further 10 or so microns, but for all practical purposes the LWIR is fully absorbed within about 15 microns.

    Willis, I consider that your rebuttal comment to be pure speculation. Where is the empirical data proving your assertion? What experiments have been carried out substantiating this vital point?

    Gads, sir, take a deep breath. If you don’t know if I have citations for them, how can you possibly say that my claims are “pure speculation”? Our word for today is “pre-judgement”. Can you say “pre-judgement”? I knew you could …

    You could start by reading Part 1 of scienceofdoom’s excellent discussion of this very topic. Then Part 2. Then Part 3. At the end of that, you should be able to at least talk lucidly about the question … whether you agree with scienceofdoom or not.

    Then consider the following question. The oceans receive on average about 170 W/m2 from the sun. Given their temperature, we know from Stefan–Boltzmann that they are radiating about 390 W/m2 of IR upwards. We also estimate that they are losing about 70 W/m2 via evaporation, and another thirty or so to convection.

    So if the oceans are not receiving any IR as you claim … why are they not frozen solid? Warmed by 170 W/m2 from the sun, cooling at the rate of about 500 W/m2 … what’s wrong with this picture?
    w.
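Willis’s back-of-envelope budget can be tallied directly. The figures below are the round numbers from his comment plus an assumed ~15 °C (288 K) mean surface temperature; none of them are measurements from the paper under discussion:

```python
# Rough ocean-surface energy budget using the round numbers from the comment above.
SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

solar_in = 170.0               # W/m^2 absorbed from the sun (comment's figure)
T_surface = 288.0              # K, assumed ~15 C mean surface temperature
ir_out = SIGMA * T_surface**4  # ~390 W/m^2 emitted upward, per Stefan-Boltzmann
evaporation = 70.0             # W/m^2 latent heat loss (comment's figure)
convection = 30.0              # W/m^2 sensible heat loss (comment's figure)

net_without_dlr = solar_in - (ir_out + evaporation + convection)
print(f"IR emitted: {ir_out:.0f} W/m^2")
print(f"Net balance with no downwelling IR: {net_without_dlr:.0f} W/m^2")
# A deficit of roughly -320 W/m^2: without some downward longwave flux the
# budget cannot close, which is exactly the point Willis is pressing.
```

This is only arithmetic on the quoted round numbers, but it shows why the “frozen ocean” question is a fair one: the stated losses exceed the stated solar input by a factor of nearly three.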

  74. This paper isn’t even handwaving the science anymore, it is wildly flapping its arms to distract casual observers from the facts.

    Well written, Mr. Eschenbach!

  75. John Marshall says:
    February 25, 2011 at 1:33 am

    The real reason flooding in the UK has increased is that the Environment Agency has decided not to dredge rivers. … Stopping dredging is for money saving and probably to help persuade people that climate change is due to human input.

    Another likely reason is that environmentalists recoiled from imposing man’s hand on nature. “Don’t touch it!” is their basic attitude. Dredging probably strikes them as an instance of Mastering Nature, a real no-no (to them).

  76. starzmom says:
    February 24, 2011 at 5:02 pm
    Thank you Willis. And soon coming to another journal somewhere near you, is a different author quoting this stuff as gospel.

    It has already been quoted with approval by Australia’s new Commissioner for Climate, Prof Tim Flannery, a palaeontologist. ABC TV, Q&A, 21 Feb 2011.

    My surmise is that he had not read the paper, but knew the spin. C’mon Tim, ‘fess up. Had you read the paper?

  77. Ron Furner says:
    February 24, 2011 at 3:38 pm
    Willis – Please rest assured that Fig 1 is not a photoshop fantasy. It is a photo of the River Ouse in the City of York (North Yorkshire, UK) taken from Lendal Bridge, close to the city centre. In the 21 years that I lived in the York area, before leaving for the Land of Nuclear Power Generation in 2005, this scenario was repeated on at least three occasions. The white Elizabethan-style building (under water on the left-hand side of the photo) is a well-known pub which now has great difficulty getting insurance cover!

    All this water runs off the North Yorkshire Moors and has done for many hundreds of years.

    I look forward to your posts. Keep up the good work

    A nice pint of Sam Smith’s can be had at that pub; oh, the memories come flooding back of my yoof! One can see the flood levels gouged into the brickwork outside, and dated, and even inside too, all well above floor level (I seem to recall, but memory is dodgy on that score!). The actual river level is usually well below outside road level. Please note, these events don’t happen once adequate flood alleviation defences are constructed. Thames Water spent £Ms (of taxpayers’ money) in the 1970s/80s on flood defences for the Thames catchment area as a result of the severe flooding in the late 1940s when London got hit. (It’s amazing how things get done after the capital city gets hit by anything, and the surrounding areas where those in control live and work!) Even then the Thames Barrier was just a dream.

    I presume this study & “puter model” comes with the usual caveats?????????? Déjà vu, 1925 Pocket Oxford Dictionary … Synthetic: “artificial, imitation, not existing in nature”. Sophisticated: “spoil the purity or simplicity of, or adulterate” (from sophist: a paid teacher of philosophy in ancient Greece willing to avail himself of fallacies to help his case). Simulate: “feign, pretend, wear the guise of, TAMPER with, act the part of, counterfeit, shadowy likeness of, mere pretence”. I don’t choose the words these guys use to describe their artworks, they do!!!

  78. What’s also missing is a control. They should have run the same “analysis” on an area in
    the UK, say somewhere in Yorkshire, where no flooding occurred, and seen how that would fit their models.

    Shouldn’t a simulation where you feed the output from one model into another model, and then yet another, properly be called a “cascade”, or a “waterfall model”? Perhaps that would explain it.

  79. Actually, I pity those poor researchers who spent their lives in computer rooms, never getting out to feel an actual raindrop land on their nose …

    Thanks for this excellent and – sadly, for science – hilarious dissection of yet another Nature effort.
    Now one wonders again who pal-reviewed this one.

    Living in one of the affected areas, as the graphs linked by JurajV above show: sometimes it rains a lot, other times it doesn’t … managing flood defenses properly is the way to address this problem. Blaming AGW/CO2 most certainly isn’t.
    Our parks provide huge run-off areas for flash floods, which can happen extremely quickly due to the geography and geology of the catchment area. They have withstood several tests now, and for us dog walkers it is huge fun to see the ducks swimming on the inundated football pitches being chased by dogs who can’t believe their luck.

    As for this:
    the “Qualitative Law of Scientific Authorship”, which states that as a general rule:

    Q ≈ 1 / N^2

    where Q is the quality of the scientific study, and N^2 is the square of the number of listed authors.

    Yep. Have observed this for decades … and oddly, it always seems to be papers in Nature that provide the proof, even if the subject is not climatology …

    Thanks, Willis!

  80. Another excellent piece by Willis. I remember reading the report in the Daily Telegraph and, as always, it was completely uncritical. It’s been obvious for a long time that much of the bad science we are seeing is based on one or another of the climate models, and that the output of these models is treated almost as if it were empirical data. But this study does seem to represent a new low.

    As often seems to be the case, the one ray of sunshine comes from the actual data. That graph demonstrates simply and elegantly that the study is junk. I really think that Nature should be re-classified as a science fiction magazine. It seems they will print anything as long as it contributes to the global warming hysteria.

    I’m sure that computer models, including climate models, have their uses. But they cannot forecast future climate, just as they can’t correctly forecast temperatures for the coming winter. And their output is not empirical data. You can only get empirical data by measuring what’s happening in the real world.

    No, the problem is that climate models are being abused on an almost industrial scale.
    Chris

  81. Good to see York in the image here. Practically over the river from where I live.

    Happens every year does the flooding. Not always that bad, but most of the time it’s high enough for the Kings Arms (the pub to the left of the picture) to close its river side door with a flood barrier. I have lived in York since I was 19 and seen this a lot and they never sort it out properly. One year I was working just 30 meters from that place and from the river being at normal height when I started work (8:30 ish) it was up to the height on the picture by 12:30. It can come up quite fast. It you go into the pub, there is a gauge top how height the river has been in the past, and it has been much higher before.

    It also froze over in December, much to the joy of those foolish enough to ride bikes and write their names on the ice.

  82. Previous commenters have identified the picture as York. It also looks a bit like Tewkesbury, which is regularly flooded by the Avon and Severn confluence. In fact the only part of town that doesn’t get flooded is the Abbey.

    Did they know something in the 12th Century that we don’t ?

  83. It you go into the pub, there is a gauge top how height the river has been in the past, and it has been much higher before.

    corrected too…

    If you go into the pub, there is a gauge of how high the river has been in the past, and it has been much higher before.

    (chatting and typing at the same time – fail)

  84. Thanks for the analysis. How far has Nature fallen, that it publishes something that belongs in a third-tier publication?

    It was always the case that Nature needed *good data* from a *good experiment* providing a *novel result*. No longer, it seems. For the present work only offers the last of these three (the novel result). It is obvious that the same work, had it not found a link between CO2 and the floods, would never have passed muster.

    However, I don’t think the graph shown (of 1-day extreme rain events) was the right one to use. The floods in question took days to fall out of the sky.

    That said, the catchments involved are so heavily modified from a ‘pristine’ state that not many conclusions about flood rates can be warranted. To the commenter above who made a political complaint about river clearance: actually, no. Natural dams and blockages slow the passage of the water downstream – they’re good at evening out extreme events. There has even been consideration of introducing beavers for their flood-control skills.

  85. Willis

    Your PS about computer models is bang on. I am a great believer in this.

    As for this paper…models through models…well, I’m waiting for the Cup Of Tea and The Improbability Drive to appear.

  86. The IPCC AR5 report may be being written as we sit here, but equally, the rebuttal is being written on WUWT. Unlike with previous “works of art”, there are well documented, well researched pieces to counter the nonsense being promulgated. I particularly enjoyed Willis using the “bet in the Casino” analogy.

    Having lived through (professionally) the uber-hyped robotics-fad and AI-fad periods of “Computer Science” (if it needs science in the name, it’s not one), I see the same PR pieces, the same headline-hunting behaviour as those searching for funding back then. At least some of them were trying to start companies to produce things; the current crew seems content to suck at the teat of Government.

  87. Interestingly, there is an article in the Indy that consists of a discussion carried out by email between the science correspondent and Freeman Dyson. At one point, in reply to the correspondent’s appeal to consensus, Dyson mentions computer models as being one of the greatest problems with the current consensus. In his view, decades of working with models has made researchers confuse the output from their models with reality.

    This particular fiasco fits Dyson’s argument to a Tee. You can almost get into the mind of the modellers and imagine them imagining that what they are doing is in some sense describing the real world. In fact, Dyson uses the word ‘delusional.’ Surely, if these people were not deluded by their dogma, they would never have produced such research, and tried to pass it off as science. If the editors of Nature weren’t also delusional, they would have thrown it into the garbage.

  88. Willis, in response to your post at February 25, 2011 at 2:30 am

    I enjoyed your article and the deconstruction of the Nature paper.

    My point is that your rebuttal to Clark’s comment was way too strong. Both sides are no doubt guilty of making statements which are far too strong and which should properly contain caveats as to uncertainties. You assert that it is a FACT that IR radiation is absorbed by the oceans and that this results in the entrainment of energy. I would accept that if IR radiation is absorbed by the oceans then energy would be entrained. However, I stand by my comment that the absorption of IR radiation by the oceans is a point yet to be proved and hence it is presently speculation.

    Sometime back I read the post on scienceofdoom to which you refer. I recall that it was an interesting post and that it accepted the point made by me that some 90% of all IR radiation is absorbed within the first 10 microns. My recollection of the article was that it went off track by failing to appreciate the significance of the aforementioned point (especially taking into account that approx 20% of IR is absorbed within just 1 micron and 50% within 5 microns), and instead analysed the position on the assumption that the IR somehow found its way into the well-mixed ocean layer (if I recall correctly the author assumes that the IR found its way into the first 5 to 20 mm of the ocean). However, there is an overwhelming likelihood that there is no effective interface between the first few microns and the bulk ocean. With windswept spray, spume etc it is difficult to see how there could be an effective interface, the more so given that the energy this layer receives from IR goes to increase the rate of evaporation and convection. If there isn’t effective penetration, then there can be no mixing into the bulk ocean.

    My recollection of the scienceofdoom post was that no empirical observational data was set out in support of the proposition that IR radiation is absorbed, and the author ran some model in support of his proposition. I personally place no reliance on model runs, which do no more than analyse and reflect the assumptions made by, and the shortcomings in the state of knowledge and understanding of, the programmer.

    If I recall correctly, the most emphatic point in favour of the proposition was that without IR the oceans would quickly freeze over, and the author, as is typical in the AGW debate, sought to reverse the burden of proof and suggest that anyone who disputes what he says should prove him wrong, rather than the author proving the correctness of his theory/hypothesis. I note that you adopt a similar stance and that you provide no references/citations to empirical observational data.

    One of the problems with the AGW hypothesis is that the proponents of the hypothesis always seek to discuss averages (average conditions, temperatures, radiation etc) when in practice the average condition is rarely encountered in the real world, and this use of averages does not give full recognition to what is going on in the real world. Parts of some oceans are permanently frozen, some seas freeze over from time to time, some never freeze. They all receive different amounts of solar energy, and some of the solar energy received in one place is transported to other areas by way of currents etc.

    One needs to see an energy budget (diurnal) for, say, each and every 100 sq miles of the Earth to even begin to build up a picture of what might be going on. As far as the oceans are concerned, this would have to include the energy from all geothermal/hydrothermal sources. As regards hydrothermal sources, the amount of this energy may be very small, or, since we haven’t mapped the oceans, it may be larger than we think.

    As regards geothermal energy, one has to consider the effect of the depths of the ocean and the fact that they are closer to the mantle. If the sea bed were not covered by water, the ground would no doubt be hot to walk on. As you are no doubt aware, there are various studies that show the temperature profiles of boreholes to increase by 1 deg C between every 10 and 30 m in depth. If a similar relationship holds true, given that the deepest oceans are about 11,000 metres deep and the average depth is about 4,000 metres, this is like the oceans sitting on a hotplate with poor conduction, but it could amount to quite a bit of energy.

    I consider that it is generally accepted that if we have erred with our assessment of cloud albedo by 1 or 2% then that could explain the warming noted in the various temperature sets (and that assumes that those sets are correct). Given that we have little data on cloud cover, this seems a candidate that certainly can’t be ruled out.

    If you actually have some real data showing that IR is absorbed by the oceans, I certainly would be interested in reading it.
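The borehole gradient quoted in this comment can be turned into a conductive heat flux with Fourier’s law. The thermal conductivity used below is an assumed representative value for crustal rock, not a measurement, so this is only an order-of-magnitude sketch:

```python
# Fourier's law for conductive heat flux: q = k * dT/dz.
# k ~ 2.5 W/(m K) is an ASSUMED representative value for crustal rock.
k = 2.5

# Gradients from the comment above: 1 deg C per 10 m to 1 deg C per 30 m.
for metres_per_degree in (10, 30):
    gradient = 1.0 / metres_per_degree  # K/m
    q = k * gradient                    # W/m^2 conducted upward
    print(f"1 C per {metres_per_degree} m -> {q:.3f} W/m^2")

# Even the steep end of the range (~0.25 W/m^2) is several hundred times
# smaller than the ~170 W/m^2 of absorbed sunlight quoted in this thread.
```

Under these assumptions the geothermal “hotplate” supplies well under 1 W/m², which is consistent with Willis’s later remark that geothermal heat is orders of magnitude too small to matter.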

  89. I had one comment on the study in question using CO2 data from 1900 and then assuming the delta between then and 2000 is anthropogenic. It may be we often start discussions by stipulating to that assumption, but I do not remember seeing a study that determines what ALL of the natural sources of CO2 and other natural GHGs are (like methane, for example). It is logically possible there are natural mechanisms responsible for the increase in atmospheric GHG concentrations that have not been studied or accounted for, because everyone has made the assumption that it is Man that is responsible.

    I have seen some very pretty cartoons of the “carbon cycle” that include the contributions of Man, but they are presented at face value and there are no statements of the potential errors or uncertainties in the data represented. I am sure they are significant; the world is a vast place.

    To be able to verify the contribution of the natural world to the GHG cycle may be one of those unverifiable conditions; I get the impression it is a chaotic process, as is weather. But I believe we need to properly characterize the uncertainty of the GHG rise in the same way we need to characterize the uncertainty in temperature measurements.

  90. Diagram:
    Model 1 -> Model 2 -> Model 3 -> … -> Model 6

    Did I get that correct? Do these people not understand that in iterative models, offset errors don’t cancel? They propagate. One model feeding a second is already a questionable issue, but a group of six models feeding each other? If they conclusively showed the sky was blue, I’d question it.

    Come on. They teach this stuff in sophomore-level engineering (when we first discuss iterative calculations, and our models of stupidly simple systems routinely went to infinity). Even though they taught us how to fix the runaway model problems, they instilled in us the knowledge about offsets and error propagation.

    How can undergraduate engineers know this, but PhD holders get published in Nature producing this drivel?
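The compounding this commenter describes is easy to sketch. The example below chains six hypothetical model stages, each carrying an assumed +5% multiplicative bias (both numbers are illustrative, not taken from the paper), and shows that the offsets multiply rather than cancel:

```python
# Hypothetical illustration: six chained model stages, each with an assumed
# +5% multiplicative bias. Biases along a chain multiply; they do not cancel.
bias_per_stage = 1.05  # illustrative per-stage bias
stages = 6             # six models feeding each other, as in the comment above

value = 1.0  # true value entering the chain
for _ in range(stages):
    value *= bias_per_stage  # each model passes its biased output downstream

error = value - 1.0
print(f"After {stages} stages: {error:.1%} accumulated error")
# 1.05**6 ~ 1.34, i.e. a modest 5% per-stage bias becomes a ~34% overall error.
```

The same arithmetic in reverse is why validating each stage against observations matters: a bias small enough to pass unnoticed in one model can dominate the result by the end of the chain.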

  91. You can pretty much do what you like in Cloud Cuckoo Land.
    & they did by the look of it.
    It’s not science, it’s a form of science fiction.
    It beggars belief that Nature published this.

  92. richard verney says:
    February 25, 2011 at 10:00 am

    … You assert that it is a FACT that IR radiation is absorbed by the oceans and that this results in the entrainment of energy. I would accept that if IR radiation is absorbed by the oceans then energy would be entrained. However, I stand by my comment that the absorption of IR radiation by the oceans is a point yet to be proved and hence it is presently speculation.

    Richard, you have not answered my question. We know that about 170 W/m2 enters the ocean from the sun. We know that it is losing about 390 W/m2 per Stefan–Boltzmann.

    I am awaiting your answer about why, given those known, measurable quantities, the ocean is not currently a block of ice. Yes, those numbers might be wrong by 10% … still an ice block.

    Unless, of course, there is some other form of energy warming the ocean … like say downwelling IR … but since you say that doesn’t exist, what do you say is warming the ocean?

    w.

    PS – it’s not geothermal heat. Even figuring in big numbers for subsurface rift volcanoes, the numbers are still too small by a couple orders of magnitude. So what is it?

  93. When you say: “The kind of extreme rainfalls leading to the flooding of 2000 are seen in Figure 3, ” you must mean Figure 3.

    Nothing odd about Nature publishing this kind of thing. A magazine that calls AGW skeptics “deniers” has obviously lost all objectivity.

    In any case, a fine post.

  94. Given the continual increase in CO2 concentrations since 2000, our UK brethren must surely count themselves lucky that they haven’t had a similar precipitation/flood event since. /sarc off

    Ignoring the daisy chain computer linkage for the moment, would a similar “analysis” by the “study” authors have yielded similar dire predictions if they had modeled any other recent year using the same method?

  95. Willis,

    In case you haven’t heard yet- in a post today- http://www.wattsupwiththat.com/2011/02/25/currys-2000-comment-question-can-anyone-defend-%e2%80%9chide-the-decline%e2%80%9d/

    “Al Gored says:
    February 25, 2011 at 12:10 pm
    OT but does anyone know which two new studies these Dems are hanging their hopes on?

    “Two key House Democrats called on Republicans Thursday to hold a hearing on the latest climate science amid efforts by the GOP to block the Environmental Protection Agency’s climate authority.

    In a letter to the top Republicans on the House Energy and Commerce Committee, Reps. Henry Waxman (D-Calif.) and Bobby Rush (D-Ill.) pointed to two new studies that link climate change to extreme weather.”

    www.thehill.com/blogs/e2-wire/677-e2-wire/145937-house-dems-call-for-climate-science-hearings-amid-gop-efforts-to-block-epa-climate-rules

    Methinks that they are confusing this process with a UK whitewash.”

    The Oxford study, that you reviewed in this post, is one of the reasons given for requesting a hearing (to ensure the EPA gets funded). Thought you might want to know about this- sorry for wasting your time if you already knew this info.

  96. “Ron Furner says:
    Please rest assured that Fig 1 is not a photoshop fantasy. It is a photo of the River Ouse in the City of York (North Yorkshire UK) taken from Lendle Bridge
    No, Ouse Bridge (not Lendal, NOT Lendle)

    close to the city centre. In the 21 years that I lived in the York area, before leaving for the Land of Nuclear Power Generation in 2005, this scenario was repeated on at least three occasions. The white Elizabethan-style building (under water on the left-hand side of the photo) is a well-known pub which now has great difficulty getting insurance cover!

    All this water runs off the North Yorkshire Moors

    No, Yorkshire Dales

    and has done for many hundreds of years.

    No, thousands.

    I look forward to your posts. Keep up the good work
    rgf

  97. Willis in response to your post at February 25 2011 at 11:31 am

    I see that you are a subscriber to the Trenberth policy on burden of proof. It is your theory that the sea does not freeze because LWIR in some way heats it up, and it is therefore up to you to prove your theory, not for me to disprove it. I would suggest that there is an obvious reason why Trenberth has been unable to find his missing energy in the oceans, namely that CO2 does not heat the oceans.

    We both know that whether an ocean freezes is much more complex than the energy budget you describe. In passing, it strikes me as somewhat strange that although, on your figures, the direct input energy received by the ocean from the sun is only about 170 W/m2, this amount of energy supposedly produces about 330 W/m2 of back radiation to balance the budget. And there I thought that Trenberth et al. were proposing that the Earth receives about 1,366 W/m2 (less about 6% reflected by the atmosphere, less 20% reflected by clouds, less 4 to 6% reflected off the water itself), which during the day is equivalent to about 683 W/m2 (less the reflected proportion). One should consider the input energy from the sun during the day, but take into account that the ocean is radiating/evaporating/convecting heat 24 hours a day and that back radiation is supposedly a 24-hour energy source.

    Please detail the energy budgets for the following:

    1. Aral Sea at 45º30 N, 36º35E
    2. Aegean Sea at 44º55 N, 13º07E
    3. Caspian Sea at 40º58 N, 50º54E
    4. Mediterranean Sea at 43º0 N, 3º51E
    5. Baltic Sea at 61º0 N, 19º40E
    6. Atlantic Sea at 61º0 N, 6º40W
    7. 75 miles North of Suez and 75 miles South of Suez. If you have ever sailed through Suez, you will know that there is a substantial temperature drop between the Red Sea and the Med (in the region of 4 to 5 degs C) although the energy budget will be broadly similar for both these locations.

    Please detail the precise energy budget at which an ocean begins to freeze. Please explain the different temperature profiles of these oceans/seas in accordance with the energy budgets they receive.

    As I noted in my previous post, you will not see what is going on in the real world if you only ever consider the notional average condition.

  98. Which way did they say the winds were blowing in the model to produce the flooding?
    I thought we all knew that a rainfall volume does not, per se, create a flood.
    The direction and speed of travel of the rainfall volume either will, or will not, create a flood in any given river system.
    To sum up, a flood can be created by a rainfall volume smaller than a larger volume that creates none.
    Why don’t these guys ever try the old Einstein trick of using simple logic and simple words? It worked for him; still relevant, very clever stuff, 105 years ago.

  99. richard verney says:
    February 25, 2011 at 6:13 pm

    Willis in response to your post at February 25 2011 at 11:31 am

    I see that you are a subscriber to the Trenberth policy on burden of proof. It is your theory that the sea does not freeze because LWIR in some way heats it up and it is therefore up to you to prove your theory, not for me to disprove it.

    Not a bit. I’m simply pointing out that the sea does not freeze, so something must be heating it up. I say it’s LWIR.

    What do you say it is?

    w.

  100. What I’m referring to are the initial conditions input to the HadAM3 model for the kickoff of the A2000N simulations. Or as I said, the conditions that “start the A2000N simulations”. As I understand it, the only changes in the starting conditions are the SSTs, not the land temperature, but that’s not clear, which is why I asked …

    ———–
    A small ground surface temperature offset at the start of the model run would only cause a significant error if ground temperature is very persistent. Just how persistent do you imagine a ground temperature offset could be when ground temperature can vary by several degrees in an afternoon?

  101. Richard Telford says:
    February 26, 2011 at 8:29 am

    What I’m referring to are the initial conditions input to the HadAM3 model for the kickoff of the A2000N simulations. Or as I said, the conditions that “start the A2000N simulations”. As I understand it, the only changes in the starting conditions are the SSTs, not the land temperature, but that’s not clear, which is why I asked …

    ———–
    A small ground surface temperature offset at the start of the model run would only cause a significant error if ground temperature is very persistent. Just how persistent do you imagine a ground temperature offset could be when ground temperature can vary by several degrees in an afternoon?

    Depends on the model, the depth of the penetration of the heat, the nature of the ground. Significant amounts of heat are stored in the ground during the summer and released during the winter, for example. This can be seen in borehole records.

    So in spite of the fact that (as you say) surface temperatures vary several degrees in an afternoon, the slower-changing ground temperature at greater depths is very relevant at annual scales. That’s why heat pumps are able to heat houses in the winter using sub-surface pipes. There’s heat in the ground, even though the surface is frozen.

    And this is important because the HadAM3 runs were very short, starting on April 1st 2000 to forecast conditions in the autumn of that same year.

    On another matter I take all of that to mean that, despite your excoriating me as being “out of my depth” for not knowing how they set the initial conditions … you don’t know either.

    Not wanting to acknowledge that, and not deigning to retract your aggressive tone, you now want to claim that how the initial land conditions were set makes no difference … not true, and not responsive.

    w.

  102. Another excellent review Willis, thank you. As a (former) computer modeler, I cannot believe the crap some people will believe. “You can fool some of the people all of the time…” rings so true.

    A reanalysis is not just a model. It is constrained to fit all observational data, and would not be a reanalysis unless it did. Reanalyses, including ERA-40, are just gridded syntheses of observations. It is important to use real atmospheric data in such studies, and it was used via the reanalysis and river gauge data. This article leaves a slight implication that atmospheric and surface data weren’t used.

    I would also agree with Richard Telford that, especially in a maritime climate, SST, which directly impacts surface air temperature, has a much larger effect than deep soil temperature, which has minimal impact at the surface by comparison with the maritime air.

  105. Dear W.
    Thank you for taking the effort to debunk this nonsense. The world is apparently still full of people, some of them scientists, who believe a computer can create something other than what you put into it, and that what comes out of a computer replaces observation/science.

    In the centre of the city of Worcester (UK, South West), the river Severn has its flood high stands noted over the centuries on a wall near the cathedral. Three of the highest stands occurred in the five-year period 1946-1950. If we look at the presented graph of maximum daily rainfall, these years do not stand out, yet they were highly significant in flood damage.

  106. (sarc on) At least they call the runs simulations and not experiments….

    https://www.rms.com/Publications/Nature_DLohmann_0211.pdf

    (sarc off)
    As you point out Willis, the major culpability here lies with Nature. Now for those who don’t know, both Nature and Scientific American are owned by Macmillan. Take a look at what Macmillan is all about….

    http://international.macmillan.com/AboutUs.aspx?id=590

    I know, maybe just boilerplate CSR junk …. but, really, look at that picture in the top left hand corner of their “About Us” web page.

    I wonder if anyone focused on the flooding in York has ever bothered to find out what improved drainage in the catchment area over the last 50 years has done to speed up the rate at which heavy rainfall flows into York rather than soaking into the fields?

    Eschenbach claims, on the basis of a graph he supplied of annual maximum one-day rainfall events, that there has been no substantial increasing trend in this statistic. However, this noisy statistic does not seem to be a definitive measure of flooding according to experts in the field.

    http://www.staff.ncl.ac.uk/h.j.fowler/extremerain.htm

    ….Climate model integrations predict increases in both the frequency and intensity of heavy rainfall in the high latitudes under enhanced greenhouse conditions. These projections are consistent with recent increases in rainfall intensity seen in the UK and worldwide.

    We use two methods to examine changes to the frequency and intensity of extreme rainfall events: a regional frequency analysis (RFA) based on L-moments (Hosking and Wallis, 1997) and a peak-over-threshold (POT) analysis. The RFA uses regional pooling (Hosking and Wallis, 1997) of rainfall maxima, standardised by median (RMED), to fit Generalised Extreme Value distribution curves and allow the estimation of long return period rainfall events for 1-, 2-, 5- and 10-day durations. These include estimates of uncertainty, measured using a bootstrap method. For the POT analysis, we consider the POT event, defined to occur if the total daily rainfall exceeds two standard deviations above the long term (1961-2000) mean wet day at a specific location….

    It is clear from our research that there have been significant changes to both the timing and occurrence of multi-day intense rainfall events over the past decade. We estimate that the magnitude of multi-day extreme rainfall has increased two-fold over parts of the UK since the 1960s. Annual recurrence probabilities are quadrupled in some regions, with intensities previously experienced every 25 years now occurring at 6 year intervals. This is comparable to climate model projections for the end of the 21st century

    In addition, although the abstract doesn’t contain references to data, the supplementary information linked on the page containing the abstract, which Eschenbach said he consulted, does contain analysis of actual data and compares it with the simulations.

    http://www.nature.com/nature/journal/v470/n7334/extref/nature09762-s1.pdf

    Actual data is part of figures 1 and 2 and tables 1 and 2. The data in figure 1 looks at the distribution, in mm, of the largest 10% of daily rainfall, rather than a single number, the absolute maximum of the year’s rainfall.

    It does not appear that Eschenbach’s criticism is based on an accurate picture of what was done.
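    To make the quoted POT definition concrete, here is a toy sketch (my own made-up daily totals, not the Newcastle group’s data or code):

```python
import statistics

# Made-up daily rainfall in mm; a "wet day" is any day with rain.
daily_rain = [0, 0, 3, 12, 0, 5, 7, 0, 41, 2, 9, 0, 6, 55, 4, 8, 0, 3]
wet_days = [r for r in daily_rain if r > 0]
mean = statistics.mean(wet_days)
sd = statistics.pstdev(wet_days)
threshold = mean + 2 * sd  # two standard deviations above the wet-day mean
pot_events = [r for r in daily_rain if r > threshold]
print(round(threshold, 1), pot_events)  # only the most extreme day counts
```

    With these invented numbers only the 55 mm day clears the threshold; the 41 mm day, heavy as it is, does not.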

  109. David Mayhew says:
    February 27, 2011 at 9:40 am

    Dear W.
    Thank you for taking the effort to debunk this nonsense. The world is apparently still full of people, some of them scientists, who believe a computer can create something other than what you put into it, and that what comes out of a computer replaces observation/science.

    Scientists normally put forces into an equation, run it over time, and get a trajectory. They often do this with a computer, in order to trace the trajectory of a spacecraft or planet. This is a valid calculation, and the output is different from what is put into the computer. It is used to predict what will happen to the spacecraft. There is no way to use observations if you are trying to predict the future of the spacecraft. The same is true of the future of the climate.

    In the centre of the city of Worcester (UK, South West), the river Severn has its flood high stands noted over the centuries on a wall near the cathedral. Three of the highest stands occurred in the five-year period 1946-1950. If we look at the presented graph of maximum daily rainfall, these years do not stand out, yet they were highly significant in flood damage.

    This is a good observation. What it shows is that the graph Willis Eschenbach used as proof that the paper he criticized didn’t correlate with reality was inappropriate. I mentioned this problem in one of my posts on this subject, which was critical of Eschenbach’s piece. The observation you have made shows that Eschenbach needs different data to show that the paper he criticized doesn’t in fact represent reality.

  110. eadler says:
    February 27, 2011 at 6:49 pm

    Eschenbach claims, on the basis of a graph he supplied of annual maximum one-day rainfall events, that there has been no substantial increasing trend in this statistic. However, this noisy statistic does not seem to be a definitive measure of flooding according to experts in the field.

    http://www.staff.ncl.ac.uk/h.j.fowler/extremerain.htm

    Thanks for the link, Eadler. They say:

    This research suggests that causal mechanisms such as the frequency, duration and timing of extreme rainfall events are changing. These seasonal changes may be caused by atmospheric circulation anomalies in the Scandinavia pattern or the North Atlantic Oscillation and help to explain recent severe flood events in the European region. However, further research is needed in this area to firmly establish links between flood generating mechanisms and large-scale circulation patterns if we are to fully understand the risk implications of the estimated changes in extreme rainfall occurrence for flooding.

    In other words, their results show the climate is actually changing, as it has for a really long time. Shocking news, film at 11. However, the folks who wrote your analysis seem to have forgotten the uncertainty … niggling of me to mention it, I know, but rainfall records generally have high Hurst coefficients that drive the uncertainties through the roof. They mention uncertainty exactly once, where they say they’ve estimated it by the bootstrap method, which always makes me nervous. But then, they don’t tell us what the uncertainties are. Given the short nature of British rainfall records (most don’t start until after 1960), this is a serious hole in the study. When someone comes around with what they claim is science predicting doom, gloom, and flood increases without serious uncertainty boundaries on their result, they go in my circular file.
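    Here’s a toy demonstration of why that matters (synthetic numbers of my own, nothing to do with their data): treat a strongly persistent series as if its values were independent, the way a naive bootstrap does, and you badly understate the uncertainty of its mean.

```python
import random
import statistics

random.seed(42)

def persistent_series(n, phi=0.9):
    """AR(1) series with strong persistence, a crude stand-in for
    the long-range memory that a high Hurst coefficient implies."""
    x, out = 0.0, []
    for _ in range(n):
        x = phi * x + random.gauss(0.0, 1.0)
        out.append(x)
    return out

n = 200
# True spread of the sample mean, measured over many fresh realisations:
true_sd = statistics.pstdev(
    [statistics.mean(persistent_series(n)) for _ in range(400)])

# Naive bootstrap spread from one series, resampling days as independent:
one_series = persistent_series(n)
boot_means = [statistics.mean(random.choices(one_series, k=n))
              for _ in range(400)]
naive_sd = statistics.pstdev(boot_means)

print(round(true_sd, 2), round(naive_sd, 2))  # naive is far too small
```

    The naive bootstrap interval comes out several times too narrow. That’s the kind of uncertainty hole I’m talking about.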

    In addition, although the abstract doesn’t contain references to data, the supplementary information linked on the page containing the abstract, which Eschenbach said he consulted, does contain analysis of actual data and compares it with the simulations.

    http://www.nature.com/nature/journal/v470/n7334/extref/nature09762-s1.pdf

    Actual data is part of figures 1 and 2 and tables 1 and 2. The data in figure 1 looks at the distribution, in mm, of the largest 10% of daily rainfall, rather than a single number, the absolute maximum of the year’s rainfall.

    It does not appear that Eschenbach’s criticism is based on an accurate picture of what was done.

    The analysis you refer to in the SI looked at the difference in the 1990-2000 averages and the models. The caption says:

    Supplementary Figure 1. Thermodynamic change in daily precipitation extremes for England and Wales autumns. a-d, Each panel identically shows the distribution of observed daily precipitation extremes from autumns 1990-2000 (blue circles, with interpolated curve). Non-blue coloured curves are different for each panel and show the distribution of thermodynamically reduced daily precipitation extremes for this same period, which is deduced by scaling down the 90th percentile and above (delineated by the dotted vertical line) of the observed distribution according to the Clausius-Clapeyron relation and pattern-estimates of attributable twentieth-century surface warming from HadCM3 (a; brown), GFDLR30 (b; purple), NCARPCM1 (c; pink), and MIROC3.2 (d; orange). Ten such non-blue curves per panel result from the ten amplitude-scalings per pattern-estimate (see Methods). Bars represent 5-95% confidence intervals estimated using a Monte Carlo bootstrap sampling procedure similar to that in Fig. 3.

    So they’ve compared actual data with the “distribution of thermodynamically reduced daily precipitation extremes for this same period, which is deduced by scaling down the 90th percentile and above (delineated by the dotted vertical line) of the observed distribution according to the Clausius-Clapeyron relation and pattern-estimates of attributable twentieth-century surface warming”.
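    As near as I can decode the phrase, “thermodynamically reduced” seems to mean taking the wettest observed days and scaling them down by the Clausius-Clapeyron rate (roughly 6-7% of moisture per kelvin) times the attributable warming. If that reading is right, the arithmetic is nothing more than this (my guess at it, with a made-up warming figure):

```python
# Hypothetical Clausius-Clapeyron scaling of an extreme daily total.
CC_RATE = 0.065  # fractional change in extreme precipitation per kelvin
attributable_warming = 0.5  # kelvin; an assumed, made-up figure

def thermodynamically_reduced(extreme_mm):
    """Scale an observed extreme back to a 'no-warming' counterpart."""
    return extreme_mm / (1.0 + CC_RATE * attributable_warming)

print(round(thermodynamically_reduced(60.0), 1))  # 60 mm becomes ~58.1 mm
```

    Whether that is actually what they did, and what comparing data against it would even mean, I cannot say.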

    Perhaps for everyone’s benefit you could explain a) what a “thermodynamically reduced daily precipitation extreme” is when it’s at home, b) why they are comparing data with “thermodynamically reduced daily precipitation extremes”, c) what the comparison actually means, and d) the uncertainty of the results. Before writing the piece I struggled with that, and it was so opaque I finally gave up. In any case, it’s not what I meant when I said:

    5. Since the P-R model is calibrated using the ERA-40 reanalysis results, how well does it replicate the actual river flows year by year, and how much uncertainty is there in the calculated result?

    6. Given an April 1 starting date for each of the years for which we have records, how well does the procedure outlined in this paper (start the HadAM3-N144 on April Fools Day to predict autumn rainfall) predict the measured 80 years or so of rainfall for which we have actual records?

    7. Given an April 1 starting date for each of the years for which we have records, how well does the procedure outlined in this paper (start the HadAM3-N144 on April Fools Day to predict river flows and floods) predict the measured river flows for the years and rivers for which we have actual records?

    If we had that, we could see how well their models actually do. Their comparison with data is useless in that regard.

    In any case, eadler, they are using four models to feed a fifth model which feeds a sixth model tuned to a seventh model. I’m still awaiting your uncertainty estimates for that.

    w.

    Wow. Kudos to Eschenbach, and to anyone for that matter who has the patience to continue wading through such a ridiculous, ludicrous maze once it becomes obvious that the entire enchilada avoids any contact with earthly data.

    Over at the article about Booker’s piece, “He whose name causes comments to drop into the spam filter” said Willis didn’t even read this paper, and said “He himself said he only consulted the abstract and the supplementary data page, which he obviously didn’t look at carefully, because he claimed that no reference to data was present in the paper.”

    http://wattsupwiththat.com/2011/02/27/willis-hits-the-news-stands-in-london/#comment-609762

    I noticed Willis doesn’t have a single comment over there, can’t tell if he’s reading those comments at all. Here, where Willis has been commenting and obviously reading the other comments, offhand I don’t see said person making the same claim. Take that as you will. I’ve already formed my opinion about the matter, which is best not said where small children might hear it.

  113. Willis Eschenbach says:

    In other words, their results show the climate is actually changing, as it has for a really long time. Shocking news, film at 11. However, the folks who wrote your analysis seem to have forgotten the uncertainty … niggling of me to mention it, I know, but rainfall records generally have high Hurst coefficients that drive the uncertainties through the roof. They mention uncertainty exactly once, where they say they’ve estimated it by the bootstrap method, which always makes me nervous. But then, they don’t tell us what the uncertainties are. Given the short nature of British rainfall records (most don’t start until after 1960), this is a serious hole in the study. When someone comes around with what they claim is science predicting doom, gloom, and flood increases without serious uncertainty boundaries on their result, they go in my circular file.

    So now you agree that the graph which you showed in your critique, the annual maximum rainfall in a single day, doesn’t reflect the increase in flooding that has been found by experts. You are abandoning your claim that an increase in flooding did not happen in England and Wales, and that the paper is therefore garbage.

    Now the argument is one of attribution: Was it GHG’s or ocean current oscillations? The computer simulations are a piece of evidence that show it was likely to be GHG’s. Now your argument is reduced to quibbling about uncertainty. At least that is progress.

  114. eadler says:
    March 1, 2011 at 6:19 am

    Willis Eschenbach says:

    In other words, their results show the climate is actually changing, as it has for a really long time. Shocking news, film at 11. However, the folks who wrote your analysis seem to have forgotten the uncertainty … niggling of me to mention it, I know, but rainfall records generally have high Hurst coefficients that drive the uncertainties through the roof. They mention uncertainty exactly once, where they say they’ve estimated it by the bootstrap method, which always makes me nervous. But then, they don’t tell us what the uncertainties are. Given the short nature of British rainfall records (most don’t start until after 1960), this is a serious hole in the study. When someone comes around with what they claim is science predicting doom, gloom, and flood increases without serious uncertainty boundaries on their result, they go in my circular file.

    So now you agree that the graph which you showed in your critique, the annual maximum rainfall in a single day, doesn’t reflect the increase in flooding that has been found by experts. You are abandoning your claim that an increase in flooding did not happen in England and Wales, and that the paper is therefore garbage.

    Huh? Read it again. The uncertainty in the study you quoted is not given, nor is how they have adjusted for high Hurst coefficient. So although an “increase in flooding … has been found by experts” (actually an increase in rainfall, different subject entirely), they weren’t expert enough to include the uncertainties so we could see if the change is statistically significant. English rain may be increasing in some areas or over some regions, but that study doesn’t allow us to conclude anything at all about the significance of that finding.

    Which is what I said before. Read it again.

    w.

  115. kadaka (KD Knoebel) says:
    February 28, 2011 at 5:22 pm

    Over at the article about Booker’s piece, “He whose name causes comments to drop into the spam filter” said Willis didn’t even read this paper, and said “He himself said he only consulted the abstract and the supplementary data page, which he obviously didn’t look at carefully, because he claimed that no reference to data was present in the paper.”

    Eadler is mistaken when he says that, as a simple search shows. Find me anywhere in my article that I even mention the word “abstract”.

    w.

    I once did an experiment on my University’s mainframe computer, a DEC PDP-10 (which ages me!!). I plotted the difference between the programming language’s built-in sine function and a manually programmed version using the generalised continued fraction method (to many levels), and found the difference quite startling. Just this ‘introduced error’ in a simple computation makes me very wary of anything coming out of a single complex simulation programme such as a climate model (relatively speaking, knowing how simplistic the models are vs. the real climate), let alone several chained together.

    Have the climate modellers ever considered what computational errors are introduced due to the finite precision of their computers?
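    The same experiment is easy to rerun in spirit today. A sketch (using a truncated Taylor series rather than my old continued-fraction code): a fixed-order approximation that is superb near zero falls apart for larger arguments.

```python
import math

def taylor_sin(x, terms=8):
    """sin(x) from the first `terms` terms of its Taylor series."""
    total, term = 0.0, x
    for n in range(terms):
        total += term
        # next term: multiply by -x^2 / ((2n+2)(2n+3))
        term *= -x * x / ((2 * n + 2) * (2 * n + 3))
    return total

for x in (0.5, 3.0, 10.0):
    print(x, abs(taylor_sin(x) - math.sin(x)))  # error explodes at x = 10
```

    Near zero the approximation agrees to machine precision; by x = 10 it is off by hundreds. Now imagine thousands of such approximations chained through a model run.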

    gee, all of you “experts” blabbing about modelling & supercomputers didn’t even catch at least one basic fact in the paper – that the “supercomputer” used was the distributed/volunteer computing of the climateprediction.net experiment. I guess that shows your critiques of the science are equally laughable. Plus who the hell (other than Willis & other idiots here) calls it “Nature Magazine?”

    [REPLY] None of us cared what kind of computer it was run on … and only one person other than yourself used the word “supercomputer”, in a joking fashion. Nice try, though. Oh, I called it “Nature Magazine, that is” to distinguish it from the “nature” in the first part of the sentence. A bit of a play on words. – w.

  118. Carl C,

    I think saying “Nature Magazine” was Willis’ way of giving them a little well deserved jab. So it’s a journal, so what? It’s not much more scientific these days than New Scientist. The alarmist crowd likes to try and promote CAGW to a “theory”, too, when it’s only a debunked conjecture, so turnabout is fair play. Heck, even AGW isn’t a “theory.”

    And I may be an idiot, but this idiot is very much enjoying the slow motion implosion of the runaway global warming scam.☺☺☺

    it’s obvious you guys, including Willis, didn’t even go through the paper; you just took the (free) supplemental information and ran away & made absurd conclusions & “analyses” from it.

    [REPLY] I said “But now I’ve had a chance to look at the other paywalled Nature paper in the same issue …”. To those who can read, this indicates that I did go through the paper. A comprehension course might do you some good. – w.

  120. Carl C,

    We don’t all claim to be experts, but we know when we’re being spun a tale.

    Take note of what’s coming out of the latest interview of one of the ‘experts’, the one that Dr Mann asked to delete emails/materials, and did, thereby confirming the breach of FoI law, and effectively confirming that they know their ‘science’ is baloney. Otherwise, why delete the evidence?

Comments are closed.