Nature Magazine's Folie à Deux, Part Deux

Guest Post by Willis Eschenbach

Well, in my last post I thought that I had seen nature at its worst … Nature Magazine, that is. But now I’ve had a chance to look at the other paywalled Nature paper in the same issue, entitled Anthropogenic greenhouse gas contribution to flood risk in England and Wales in autumn 2000, by Pardeep Pall, Tolu Aina, Dáithí A. Stone, Peter A. Stott, Toru Nozawa, Arno G. J. Hilberts, Dag Lohmann and Myles R. Allen (hereinafter Pall2011). The supplementary information is available here, and contains much of the substance of the paper. In the autumn of 2000, there was extreme rainfall in southwest England and Wales that led to widespread flooding. Pall2011 explores the question of the expected frequency of this type of event. They conclude (emphasis mine):

… in nine out of ten cases our model results indicate that twentieth century anthropogenic greenhouse gas emissions increased the risk of floods occurring in England and Wales in autumn 2000 by more than 20%, and in two out of three cases by more than 90%.

Figure 1. England in the image of Venice, Autumn 2000. Or maybe Wales. Picture reproduced for pictorial reasons only, if it is Wales, please, UKPersons, don’t bust me, I took enough flak for the New Orleans photo in Part 1. Photo Source

To start my analysis, I had to consider the “Qualitative Law of Scientific Authorship”, which states that as a general rule:

Q ≈ 1 / N^2

where Q is the quality of the scientific study, and N is the number of listed authors. More to the point, however, let’s begin instead with this. How much historical UK river flow data did they analyze to come to their conclusions about UK flood risk?

Unfortunately, the answer is, they didn’t analyze any historical river flow data at all.

You may think I’m kidding, or that this is some kind of trick question. Neither one. Here’s what they did.

They used a single seasonal-resolution atmospheric climate computer model (HadAM3-N144) to generate some 2,268 single years of synthetic autumn 2000 weather data. The observed April 2000 climate variables (temperature, pressure, etc.) were used as the initial values input to the HadAM3-N144 model. The model was kicked off using those values as a starting point, and run over and over a couple thousand times. The authors of Pall2011 call these 2,268 modeled single years of computer-generated weather “data” the “A2000 climate”. I will refer to it as the A2000 synthetic climate, to avoid confusion with the real thing.
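
Just to make clear what an initial-condition ensemble of this kind actually is, here is a toy sketch. Everything in it is invented (the starting state, the perturbation size, the "model"), and it is emphatically not the Pall2011 code; it only shows the mechanics of perturbing one observed starting state and running a sensitive model forward many times to get a spread of possible seasons.

```python
# Toy sketch of an initial-condition ensemble (invented numbers, not the
# Pall2011 code): one observed starting state is perturbed by tiny amounts
# and a deliberately chaotic stand-in for HadAM3-N144 is run forward many
# times, giving a spread of possible "autumn" outcomes.
import numpy as np

rng = np.random.default_rng(0)

def toy_weather_model(x0, n_days=180):
    """Crude chaotic map standing in for one seasonal model run."""
    x, rain_total = x0, 0.0
    for _ in range(n_days):
        x = 3.9 * x * (1.0 - x)          # logistic map: very sensitive to x0
        rain_total += max(0.0, x - 0.5)  # call the upper half of the range "rain"
    return rain_total

observed_april_state = 0.613             # invented "observed April 2000" state
ensemble = [toy_weather_model(observed_april_state + rng.normal(0.0, 1e-6))
            for _ in range(2268)]        # 2,268 perturbed one-season runs

print(f"ensemble mean 'autumn rain': {np.mean(ensemble):.1f}")
print(f"ensemble spread (std dev):   {np.std(ensemble):.1f}")
```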

The A2000 synthetic climate is a universe of a couple thousand single-year outcomes of one computer model (with a fixed set of internal parameter settings), so presumably the model space given those parameters is well explored … which means nothing about whether the actual variation in the real world is well explored by the model space. But I digress.

The 2,268 one-year climate model simulations of the A2000 autumn weather dataset were then fed into a second, much simpler model, called a “precipitation-runoff model” (P-R). The P-R model estimates the individual river runoff in SW England and Wales, given the gridcell-scale precipitation.

In turn, this P-R model was calibrated using the output of a third climate model, the ERA-40 computer model reanalysis of the historical data. The ERA-40, like other models, outputs variables on a global grid. The authors have used multiple linear regression to calibrate the P-R model so it provides the best match between the river flow gauge data for the 11 UK rainfall catchments studied, and the ERA-40 computer reanalysis gridded data. How good is the match with reality? Dunno, they didn’t say …
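
For readers unfamiliar with that kind of calibration, here is a minimal sketch of what “calibrate a precipitation-runoff relation by multiple linear regression against gauged flows” amounts to. The numbers are invented and this is not the paper’s actual P-R model; the point is that the R² at the end is exactly the “how good is the match with reality?” number that goes unreported.

```python
# Minimal sketch (invented numbers, not the paper's P-R model) of calibrating
# a precipitation-runoff relation by multiple linear regression: gridded
# reanalysis precipitation is regressed onto an observed gauge flow record.
import numpy as np

rng = np.random.default_rng(1)

n_days, n_gridcells = 365, 4
reanalysis_precip = rng.gamma(shape=2.0, scale=3.0, size=(n_days, n_gridcells))

# Pretend gauge record: a "true" response plus noise, standing in for one of
# the 11 catchment gauge series actually used.
true_coefs = np.array([0.8, 0.3, 0.1, 0.5])
gauge_flow = reanalysis_precip @ true_coefs + rng.normal(0.0, 1.0, n_days)

# Fit the regression coefficients (the "calibration") by least squares.
X = np.column_stack([np.ones(n_days), reanalysis_precip])
coefs, *_ = np.linalg.lstsq(X, gauge_flow, rcond=None)

predicted_flow = X @ coefs
ss_res = np.sum((gauge_flow - predicted_flow) ** 2)
ss_tot = np.sum((gauge_flow - gauge_flow.mean()) ** 2)
print("fitted coefficients:", np.round(coefs, 2))
print(f"calibration R^2 against the gauge data: {1 - ss_res / ss_tot:.3f}")
```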

So down at the bottom there is some data. But they don’t analyze that data in any way at all. Instead, they just use it to set the parameters of the P-R model.

Summary to date:

•  Actual April 2000 data and actual patterns of surface temperatures, air pressure, and other variables are used repeatedly as the starting point for 2,268 one-year modeled weather runs. The result is called the A2000 synthetic climate. These 2,268 single years of synthetic weather are used as input to a second Precipitation-Runoff model. The P-R model is tuned to the closest match with the gridcell precipitation output of the ERA-40 climate reanalysis model. Using the A2000 weather data, the P-R model generates 2,268 years of synthetic river flow and flood data.

So that’s the first half of the game.

For the second half, they used the output of four general circulation models (GCMs). They used those four GCMs to generate what a synthetic world would have looked like if there were no 20th century anthropogenic forcing. Or in the words of Pall2011, each of the four models generated “a hypothetical scenario representing the ‘surface warming patterns’ as they might have been had twentieth-century anthropogenic greenhouse gas emissions not occurred (A2000N).” Here is their description of the changes between A2000 and A2000N:

The A2000N scenario attempts to represent hypothetical autumn 2000 conditions in the [HadAM3-N144] model by altering the A2000 scenario as follows: greenhouse gas concentrations are reduced to year 1900 levels; SSTs are altered by subtracting estimated twentieth-century warming attributable to greenhouse gas emissions, accounting for uncertainty; and sea ice is altered correspondingly using a simple empirical SST–sea ice relationship determined from observed SST and sea ice.
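
In mechanical terms, that alteration is presumably something like the following schematic. The fields, the warming pattern, and the -1.8 °C freezing threshold are all my own placeholders, not the paper’s method; their actual empirical SST–sea ice relationship is not given here.

```python
# Schematic only (invented fields, not the paper's method): build A2000N-style
# boundary conditions by subtracting an assumed attributable-warming pattern
# from the SSTs and crudely extending sea ice where the cooled ocean falls
# below an assumed freezing threshold.
import numpy as np

rng = np.random.default_rng(2)

sst_a2000 = rng.uniform(-1.5, 28.0, size=(72, 96))           # toy SST field, deg C
attributable_warming = rng.uniform(0.2, 1.0, size=(72, 96))  # one GCM's estimate

sst_a2000n = sst_a2000 - attributable_warming                 # cool the ocean

FREEZE_T = -1.8                                               # assumed seawater freezing point, deg C
sea_ice_a2000n = sst_a2000n < FREEZE_T                        # crude new ice mask

newly_iced = int(sea_ice_a2000n.sum() - (sst_a2000 < FREEZE_T).sum())
print("cells newly flagged as sea ice:", newly_iced)
# Note: nothing here touches the land temperatures -- which is exactly the
# point raised below about what is (and is not) altered for A2000N.
```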

Interesting choice of things to alter, worthy of some thought … fixed year 1900 greenhouse gases, cooler ocean, more sea ice, but no change in land temperatures … seems like that would end up with a warm UK embedded in a cooler ocean. And that seems like it would definitely affect the rainfall. But let us not be distracted by logical inconsistencies …

Then they used the original climate model (HadAM3-N144), initialized with those changes in starting conditions from the four GCMs, combined with the same initial perturbations used in A2000, to generate another couple thousand one-year simulations. In other words, same model, same kickoff date (I just realized the synthetic weather data starts on April Fools Day), different global starting conditions from the output of the four GCMs. The result is called the A2000N synthetic climate, although of course they omit the “synthetic”. I guess the N is for “no warming”.

These couple of thousand years of model output weather, the A2000N synthetic climate, then followed the path of the A2000 synthetic climate. They were fed into the second model, the P-R model that had been tuned using the ERA-40 reanalysis model. They emerged as a second set of river flow and flood predictions.

Summary to date:

•  Two datasets of computer generated 100% genuine simulated UK river flow and flood data have been created. Neither dataset is related to actual observational data, either by blood, marriage, or demonstrated propinquity, although to be fair one of the models had its dials set using a comparison of observational data with a third model’s results. One of these two datasets is described by the authors as “hypothetical” and the other as “realistic”.

Finally, of course, they compare the two datasets to conclude that humans are the cause:

The precise magnitude of the anthropogenic contribution remains uncertain, but in nine out of ten cases our model results indicate that twentieth century anthropogenic greenhouse gas emissions increased the risk of floods occurring in England and Wales in autumn 2000 by more than 20%, and in two out of three cases by more than 90%.

Summary to date:

•  The authors have conclusively shown that in a computer model of SW England and Wales, synthetic climate A is statistically more prone to synthetic floods than is synthetic climate B.
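
For concreteness, the headline “increased the risk by more than 20% / 90%” statements come from comparing the flood frequency in the two synthetic ensembles as a risk ratio, bootstrapped over the uncertainties. A minimal sketch of that arithmetic, with flood counts I have simply made up (this is not the paper’s code), looks like this:

```python
# Minimal sketch (invented flood counts, not the paper's code) of the
# risk-ratio calculation behind "risk increased by more than 20%" claims:
# compare flood frequency in the A2000 ensemble with the A2000N ensemble,
# and bootstrap the ratio to get a distribution of possible answers.
import numpy as np

rng = np.random.default_rng(3)

N = 2268
floods_a2000  = rng.random(N) < 0.10   # pretend 10% of A2000 years flood
floods_a2000n = rng.random(N) < 0.07   # pretend 7% of A2000N years flood

ratios = []
for _ in range(10_000):                # Monte Carlo bootstrap of the risk ratio
    a  = rng.choice(floods_a2000,  N, replace=True).mean()
    an = rng.choice(floods_a2000n, N, replace=True).mean()
    ratios.append(a / an)
ratios = np.array(ratios)

print(f"fraction of bootstrap cases with risk up >20%: {np.mean(ratios > 1.2):.2f}")
print(f"fraction of bootstrap cases with risk up >90%: {np.mean(ratios > 1.9):.2f}")
```

Note that everything in that calculation lives entirely inside the model world; nothing in it tells you whether either synthetic climate resembles the real one.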

I’m not sure what I can say besides that, because they don’t say much besides that.

Yes, they show that their results are pretty consistent with this over here, and they generally agree with that over there, and by and large they’re not outside the bounds of these conditions, and that the authors estimated uncertainty by Monte Carlo bootstrapping and are satisfied with the results … but considering the uncertainties that they have not included, well, you can draw your own conclusions about whether the authors have established their case in a scientific sense. Let me just throw up a few of the questions raised by this analysis.

QUESTIONS FOR WHICH I HAVE ABSOLUTELY NO ANSWER

1.  How were the four GCMs chosen? How much uncertainty does this bring in? What would four other GCMs show?

2.  What are the total uncertainties when the averaged output of one computer model is used as the input to a second computer model, then the output of the second computer model is used as the input to a third simpler computer model, which has been calibrated against a separate climate reanalysis computer model?

3.  With over 2000 one-year realizations, we know that they are exploring the HadAM3-N144 model space for a given setting of the model parameters. But are the various models fully exploring the actual reality space? And if they are, does the distribution of their results match the distribution of real climate variations (one simple way to pose that check is sketched just after this list)? That is an unstated assumption which must be verified for their “nine out of ten” results to be valid. Maybe nine out of ten model runs are unrealistic junk, maybe they’re unalloyed gold … although my money is on the former, the truth is there’s no way to tell at this point.

4.  Given the warnings in the source of the data (see below) that “seldom is it safe to allow the [river gauge] data series to speak for themselves”, what quality control was exercised on the river gauge data to ensure accuracy in the setting of the P-R modeled parameters? In general, flows have increased as more land is rendered impermeable (roads, parking lots, buildings) and as land has been cleared of native vegetation. This increases runoff for a given rainfall pattern, and thus introduces a trend of increasing flow in the results. I cannot tell if this is adjusted for in the analysis, despite the fact that the river gauge records are used to calibrate the P-R model.

5.  Since the P-R model is calibrated using the ERA-40 reanalysis results, how well does it replicate the actual river flows year by year, and how much uncertainty is there in the calculated result?

6.  Given an April 1 starting date for each of the years for which we have records, how well does the procedure outlined in this paper (start the HadAM3-N144 on April Fools Day to predict autumn rainfall) predict the measured 80 years or so of rainfall for which we have actual records?

7.  Given an April 1 starting date for each of the years for which we have records, how well does the procedure outlined in this paper (start the HadAM3-N144 on April Fools Day to predict river flows and floods) predict the measured river flows for the years and rivers for which we have actual records?

8.  In a casino game, four different computer model results are compared to reality. Since they predict different outcomes, if one is right, then three are wrong. All four may be wrong to a greater or lesser degree. Payoff on the bet is proportional to correlation of model to reality. What is the mathematical expectation of return on a $1 bet on one of the models in that casino … and what is the uncertainty of that return? Given that there are four models, will betting on the average of the models improve my odds? And how is that question different from the difficulties and the unknowns involved in estimating only this one part of the total uncertainty of this study, using only the information we’ve been given in the study?

9.  There are a total of six climate models involved, each of which has different gridcell sizes and coordinates. There are a variety of methods used to average from one gridcell scheme to another scheme with different gridcell sizes. What method was used, and what is the uncertainty introduced by that step?

10.  The study describes the use of one particular model to create the two sets of 2,000+ single years of synthetic weather … how different would the sets be if a different climate model were used?

11.  Given that the GCMs forecast different rainfall patterns than those of the ERA-40 reanalysis model, and given that the P-R model is calibrated to the ERA-40 model results, how much uncertainty is introduced by using those same ERA-40 calibration settings with the GCM results?

12.  Did they really start the A2000N simulations by cooling the ocean and not the land as they seem to say?
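
Regarding question 3, the kind of check I have in mind is simple enough to sketch. The numbers below are invented, and a two-sample Kolmogorov–Smirnov test is only one of several ways the comparison could be posed:

```python
# Sketch of the check asked for in question 3: does the ensemble's distribution
# of autumn rainfall even match the observed distribution? Invented numbers;
# a two-sample Kolmogorov-Smirnov test is one simple way to pose the question.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(4)

model_autumn_rain = rng.gamma(shape=4.0, scale=60.0, size=2268)    # synthetic ensemble
observed_autumn_rain = rng.gamma(shape=4.5, scale=55.0, size=70)   # ~70 years of records

stat, p_value = ks_2samp(model_autumn_rain, observed_autumn_rain)
print(f"KS statistic = {stat:.3f}, p = {p_value:.3f}")
# A small p-value would say the ensemble's distribution is detectably unlike
# the observations -- in which case "nine out of ten model cases" tells us
# about the model, not about England and Wales.
```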

As you can see, there are lots of important questions left unanswered at this point.

Reading over this, there’s one thing that I’d like to clarify. I am not scornful of this study because it is wrong. I am scornful of this study because it is so very far from being science that there is no hope of determining if this study is wrong or not. They haven’t given us anywhere near the amount of information that is required to make even the most rough judgement as to the validity of their analysis.

BACK TO BORING OLD DATA …

As you know, I like facts. Robert Heinlein’s comment is apt:

What are the facts? Again and again and again-what are the facts? Shun wishful thinking, ignore divine revelation, forget what “the stars foretell,” avoid opinion, care not what the neighbors think, never mind the unguessable “verdict of history”–what are the facts, and to how many decimal places? You pilot always into an unknown future; facts are your single clue. Get the facts!

Because he wrote that in 1973, the only thing Heinlein left out was “beware computer model results.” Accordingly, I went to the river flow gauge data site referenced in Pall2011, which is here. I got as far as the part where it says (emphasis mine):

Appraisal of Long Hydrometric Series

… Data precision and consistency can be a major problem with many early hydrometric records. Over the twentieth century instrumentation and data acquisition facilities improved but these improvements can themselves introduce inhomogeneities into the time series – which may be compounded by changes (sometimes undocumented) in the location of the monitoring station or methods of data processing employed. In addition, man’s influence on river flow regimes and aquifer recharge patterns has become increasingly pervasive, over the last 50 years especially. The resulting changes to natural river flow regimes and groundwater level behaviour may be further affected by the less perceptible impacts of land use change; although these have been quantified in a number of important experimental catchments generally they defy easy quantification.

So like most long-term records of natural phenomena, this one also has its traps for the unwary. Indeed, the authors close out the section by saying:

It will be appreciated therefore that the recognition and interpretation of trends relies heavily on the availability of reference and spatial information to help distinguish the effects of climate variability from the impact of a range of other factors; seldom is it safe to allow the data series to speak for themselves.

Clearly, the authors of Pall2011 have taken that advice to heart, as they’ve hardly let the data say a single word … but on a more serious note, since this is the data they used regarding “climate variability” to calibrate the P-R model, did the Pall2011 folks follow the advice of the data curator? I see no evidence of that either way.

In any case, I could see that the river flow gauge data wouldn’t be much help to me. I was intrigued, however, by the implicit claim in the paper that extreme precipitation events were on the rise in the UK. I mean, they are saying that the changing climate will bring more floods, and the only way that can happen is if the UK has more extreme rains.

Fortunately, we do have another dataset of interest here. Unfortunately it is from the Hadley Centre again, this time the Hadley UK Precipitation dataset of Alexander and Jones, and yes, it is Phil Jones (HadUKP). Fortunately, the reference paper doesn’t show any egregious issues. Unfortunately but somewhat unavoidably, it uses a complex averaging system. Fortunately, the average results are not much different from a straight average on the scale of interest here. Unfortunately, there’s no audit trail so while averages may only be slightly changed, there’s no way to know exactly what was done to a particular extreme in a particular place and time.

In any case, it’s the best we have. It lists total daily rainfall by section of the UK, and one of these sections is South West England and Wales, which avoids the problems in averaging the sections into larger areas. Figure 2 shows the autumn maximum one-day rainfall for SW England and Wales, which was the area and time-frame Pall2011 studied regarding the autumn 2000 floods:

Figure 2. Maximum autumn 1-day rainfall, SW England and Wales, Sept-Oct-Nov. The small trend is obviously not statistically different from zero.
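
For anyone who wants to reproduce Figure 2, the calculation itself is straightforward. Here is a sketch using a synthetic stand-in for the HadUKP SW England and Wales daily series; I have not reproduced the real file format, so treat the data-loading step as an assumption and substitute the actual HadUKP data.

```python
# Sketch of the Figure 2 calculation: take a daily precipitation series for
# SW England & Wales (here a synthetic stand-in for the HadUKP series, since
# the real file format is not reproduced), pull out each year's maximum
# one-day autumn (Sep-Oct-Nov) fall, and fit a linear trend to those maxima.
import numpy as np
import pandas as pd

rng = np.random.default_rng(5)

dates = pd.date_range("1931-01-01", "2000-12-31", freq="D")
precip_mm = pd.Series(rng.gamma(shape=0.7, scale=4.0, size=len(dates)), index=dates)

autumn = precip_mm[precip_mm.index.month.isin([9, 10, 11])]
annual_max = autumn.groupby(autumn.index.year).max()       # max 1-day autumn rainfall

years = annual_max.index.values.astype(float)
slope, intercept = np.polyfit(years, annual_max.values, 1)
print(f"trend: {slope * 100:.2f} mm per century")
# With the real data the trend in Figure 2 is indistinguishable from zero;
# testing the significance of that slope is the whole point.
```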

The extreme rainfall shown in this record is typical of records of extremes. In natural records, the extremes rarely have a normal (Gaussian or bell-shaped) distribution. Instead, typically these records contain a few extremely large values, even when we’re just looking at the extremes. The kind of extreme rainfalls leading to the flooding of 2000 are seen in Figure 3. I see this graph as a cautionary tale, in that if the record had started a year later, the one-day rainfall in 2000 would be by far the largest in the record.

In any case, for the 70 years of this record there is no indication of increasing flood risk from climate factors. Pall2011 has clearly shown that in two out of three cases, the chance of a synthetic autumn flood in a synthetic SW England and Wales went up by more than 90% in synthetic climate A, compared to the synthetic flood risk in synthetic climate B.

But according to the observational data, there’s no sign of any increase in autumn rainfall extremes in SW England and Wales, so it seems very unlikely they were talking about our SW England and Wales … gives new meaning to the string theory claim of multiple parallel universes, I guess.

IMPLICATIONS OF THE PUBLICATION OF THIS STUDY

It is very disturbing that Nature Magazine would publish this study. There is one and only one way in which this study might have stood the slightest chance of scientific respectability. This would have been if the authors had published the exact datasets and code used to produce all of their results. A written description of the procedures is pathetically inadequate for any analysis of the validity of their results.

At an absolute minimum, to have any hope of validity the study requires the electronic publication of the A2000 and A2000N climates in some accessible form, along with the results of simple tests of the models involved (e.g. computer predictions of autumn river flows, along with the actual river flows). In addition, the study needs an explanation of the ex-ante criteria used to select the four GCMs and the lead model, and the answers to the questions I pose above, to be anywhere near convincing as a scientific study. And even then, when people finally get a chance to look at the currently unavailable A2000 and A2000N synthetic climates, we may find that they bear no resemblance to any reality, hypothetical or otherwise …

As a result, I put the onus on Nature Magazine on this one. Given the ephemeral nature of the study, the reviewers should have asked the hard questions. Nature Editors, on the other hand, should have required that the authors post sufficient data and code so that other scientists can see if what they have done is correct, or if it would be correct if some errors were fixed, or if it is far from correct, or just what is going on.

Because at present, the best we can say of the study is a) we don’t have a clue if it’s true, and b) it is not falsifiable … and while that looks good in the “Journal of Irreproducible Results”, for a magazine like Nature that is ostensibly about peer-reviewed science, that’s not a good thing.

w.

PS – Please don’t construe this as a rant against computer models. I’ve been programming computers since 1963, longer than many readers have been around. I’m fluent in R, C, VBA, and Pascal, and I can read and write (slowly) in a half-dozen other computer languages. I use, have occasionally written, and understand the strengths, weaknesses, and limitations of a variety of computer models of real-world systems. I am well aware that “all models are wrong, and some models are useful”; that’s why I use them and study them and occasionally write them.

My point is that until you test, really test your model by comparing the output to reality in the most exacting tests you can imagine, you have nothing more than a complicated toy of unknown veracity. And even after extensive testing, models can still be wrong about the real world. That’s why Boeing still has test flights of new planes, despite using the best computer models that billion$ can buy, and despite the fact that modeling airflow around a plane is orders of magnitude simpler than modeling the global climate …

I and others have shown elsewhere (see my thread here, the comment here, and the graphic here) that the annual global mean temperature output of NASA’s pride and joy climate model, the GISS-E GCM, can be replicated to 98% accuracy by the simple one-line single-variable equation T(n) = [lambda * Forcings(n-1)/tau + T(n-1) ] exp(-1/tau) with T(n) being temperature at time n, and lambda and tau being constants of climate sensitivity and lag time …
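
For the curious, that one-line emulator is trivial to run. Here it is exactly as written above; the forcing series and the lambda and tau values are placeholders of my own, not the fitted GISS-E values.

```python
# The one-line emulator quoted above, exactly as written:
#   T(n) = [ lambda * F(n-1) / tau + T(n-1) ] * exp(-1/tau)
# Forcings and the lambda/tau constants here are invented placeholders; the
# point is only that the recursion is a single line, not a GCM.
import numpy as np

lam, tau = 0.3, 2.8                      # assumed sensitivity (K per W/m^2) and lag (years)
forcings = np.linspace(0.0, 3.0, 120)    # invented 120-year forcing ramp, W/m^2

T = np.zeros_like(forcings)
for n in range(1, len(forcings)):
    T[n] = (lam * forcings[n - 1] / tau + T[n - 1]) * np.exp(-1.0 / tau)

print(f"emulated warming over the invented ramp: {T[-1]:.2f} K")
```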

That such a simple equation can do so, given the complexity of the climate, makes it very likely that the GISS-E model is both wrong and not all that useful. And applying four GCMs of that kind to the problem of UK floods certainly doesn’t improve the accuracy of your results …

The problem is not computer models. The problem is Nature Magazine trying to pass off the end results of a long computer model daisy-chain of specifically selected, untested, unverified, un-investigated computer models as valid, falsifiable, peer-reviewed science. Call me crazy, but when your results represent the output of four computer models, which are fed into a fifth computer model, whose output goes to a sixth computer model, which is calibrated against a seventh computer model, and then your results are compared to a series of different results from the fifth computer model but run with different parameters, in order to demonstrate that flood risks have changed from increasing GHGs … well, when you do that, you need to do more than wave your hands to convince me that your flood risk results are not only a valid representation of reality, but are in fact a sufficiently accurate representation of reality to guide our future actions.


The first time I recall computer modelling being presented as “proof” was a very long time ago. I no longer recall even what the topic was, but the question I asked was along the lines of “so what evidence do you have that your computer model reflects the real world?”
I shall always remember two things: the answer, and the reaction in the room.
Answer: oh, we ran the model several thousand times and we got the same answer every time.
Reaction in the room: me and perhaps three or four other people laughing hysterically. The other 150 or so at the lecture … puzzled looks.
30 years later I’m watching the climate debate and thinking … so those 150 morons got their degrees, I see …

DirkH

Maybe they’re just trying to beat the record for the longest chain of computer models.

A whole new meaning to “tera-flops” (terror-flops?)
Also, typo ” Here is their description of the changes between A2000 and A200N:” missing a “0”

Richard Telford

12. Did they really start the A2000N simulations by cooling the ocean and not the land as they seem to say?
—————–
Do you never worry that your are out of your depth?
The HadAM3 is an atmosphere model – that is it does not calculate what goes on in the ocean. Instead it has ocean surface temperatures prescribed as a boundary condition. The land temperatures are calculated by the model.

Michael in Sydney

The only thing you forgot to mention is that these computer modelers live in their own synthetic virtual world.
Kind Regards
Michael

rxc

Bravo!


Bingo! Though it is particularly pervasive in climate science, it can be said of nearly all sciences that computers and modeling have replaced data collection and analysis. Shame.

Chris S

Scientifically, it’s the Mother of all screw-ups.

biddyb

Looks like York to me.

DeNihilist

Well Willis, the thought occurred to me that the old saying, “science only changes when the old guard dies”, is a double-edged sword. When folk like you and Dr. Spencer and Christy, et al. pass from this mortal coil, then I believe all science will be done by computer modeling.
Sad really.

David Jay

RE: Question #8
It depends on whether or not the Casino is in Monte Carlo!

jaypan

OMG. Is it really that bad?
Where is science going?
And why is something like that called “science”?

Dena

Rube Goldberg lives!

Willis just for fun, what would the trend plot look like if you removed the two peak outliers?

View from the Solent

As was said a few decades ago by Pauli, it’s not even wrong.

Sean Peake

In ten out of ten cases, my model results indicate that they’re complete frauds.

Nathan Schmidt

One of my favorite “prediction” papers is Mailhot et al. (2007) from the Journal of Hydrology (http://dx.doi.org/10.1016/j.jhydrol.2007.09.019).
After 12 pages of simulating (sorry, estimating “the expected changes in”) future extreme rainfall intensities for southern Quebec, it adds that “Results obtained in this study remain model dependent since the use of the output of different global climate models (GCM) might bring very different results.”

Ron Furner

Willis – Please rest assured that Fig 1 is not a photoshop fantasy. It is a photo of the River Ouse in the City of York (North Yorkshire UK) taken from Lendal Bridge close to the city centre. In the 21 years that I lived in the York area, before leaving for the Land of Nuclear Power Generation in 2005, this scenario has been repeated on at least three occasions. The White Elizabethan-style building (under water on the left-hand side of the photo) is a well known pub which now has great difficulty getting insurance cover!
All this water runs off the North Yorkshire Moors and has done for many hundreds of years.
I look forward to your posts. Keep up the good work
rgf

Willis Eschenbach

Murray Grainger says:
February 24, 2011 at 2:57 pm

Also, typo ” Here is their description of the changes between A2000 and A200N:” missing a “0″

Thanks, fixed.
w.

Cold Englishman

Did they get paid for this worthless effort?

My guess is they were looking to make some red noise with their daisy chain. Because as we know, red noise generates a hockey stick.

Doug in Seattle

Sorry Willis, I just couldn’t finish this tale. Models constructed of model data? What a steaming pile that is!
I suppose they had to push something (anything?) out the door to prop up the dying cause, but this one is really sad.

Bruce

You are right, Willis, to call this a travesty of science. The purpose of a scientific paper is to present an experiment that others can reproduce, offering a hypothesis to be verified. Without the code and details nobody can reproduce their results. But even if you did, it would be like running a program again with the same input. If you failed to reproduce the results of the experiment in that case, it would just mean that your hardware is broken!
The purpose of this paper is not science, it is propaganda – pure and simple. I use models all the time to estimate probable future results based on known conditions. At no moment do I assume that the models know more than the modellers that created them. You cannot discover truth by running a model. You only “discover” the initial assumptions that generated the model. Cascading models as if they were observations and cooking the data only makes it worse. If a rounding error makes the result 1% wrong, you cannot make it right by running the same error 2,648 times.
I guess this is obvious to programmers but completely unbelievable to neophytes. That must be why the ivory tower programmers with their abstruse models and statistical sleight of hand seem to dominate the Climate “Science” argument. Like we always used to say in school – if you can’t dazzle them with your brilliance, baffle them with your BS!

Randy Links

The summary was printed in our local paper yesterday. AGW is a political campaign and the warmists have the upper hand when they can get press releases printed. Doesn’t matter whether the papers are right, wrong or indifferent. The prize goes to the team that can sway public opinion.

Dave Springer

Hi Willis,
Interesting formula you have there for quality of scientific papers SciQual = 1/AuthNum^2.
I can confirm a similar phenomenon exists for patents in large corporations too. It’s political. I’m a named inventor on four granted patents. In two of them I’m the sole inventor and those two I thought were innovative and valuable so I kept them to myself for personal aggrandizement. The other two – not so much. On those other two I named a couple of colleagues along with me (up to three people could share a patent with each getting the full financial incentive) in order to either repay a favor or have a future favor owed to me. I suspect it was more or less that way everywhere at all the big corporation patent mills and don’t see any reason why it wouldn’t apply to published papers from the halls of academia too.

Robert of Ottawa

Let’s see.
They throw a cubic dice 2000+ times and conclude the average is 3.5.
They then throw a tetrahedral dice 2000+ times and conclude the average is 2.5
Conclusion: Pythagoras causes global warming.

Tom in Florida

OK, I am going to date myself but Joe South sang it then and it is very appropriate now.

daniel

Science fiction is definitively close to consensus climate science, and vice versa

mike g

So, Dick Telford, are you justifying this crap, or what? To me, it places Nature below the level of the National Enquirer. I’m starting to wonder if there aren’t any inquiring minds left out there in the world of government funded science.

Robert of Ottawa

OK OK Pythagoras with a 90% certainty; 10% says it may be Euclid.

In re the PS: When I worked at MIT LL, there was a fellow scientist who used to resubmit his program with attached data twice, just to make sure the results were not a fluke. Just by doing a model run again doesn’t reinforce the results. Arrgh!

Dave Springer

Richard Telford says:
February 24, 2011 at 2:59 pm
“Do you never worry that your are out of your depth?”
You say that like climatologists writing computer programs aren’t out of their depth. Which of course they are, as any programming professional who saw the spaghetti code created by the East Anglia miscreants will tell you.
So the way I see it is “When in Rome, do as the Romans do” and “People who live in glass houses shouldn’t throw stones”. Pretty much everyone in this CAGW brouhaha is out of their depth in one way or another.
When in Rome, do as the Romans do is what I say.

3x2

Is this what climate science has become? Fantasy results from fantasy worlds. I suppose that once you can justify cold as being a symptom of warming then anything is possible.
If I’m not mistaken the first building on the left of your photo is The Kings Arms in York. Now all the authors needed to do had they wanted some real data was to go into The Kings Arms and order a pint. Fixed next to the bar is a floor to ceiling brass strip marked with flood levels going right back to the Civil War. The English one. IIRC 1640 was the year to beat.
Of course these days any flooding of York is all about global warming, the other 350 years of flooding having been down to witchcraft or something.

Evan Jones

It rained all night the day I left,
The weather it was dry;
The sun so hot I froze to death;
Susanna, don’t you cry.

3x2

It would be more than interesting to see the full source code used in the paper; oh that is assuming they’ve lived up to their scientific obligation and made the source code available for review by peers and readers of their paper.
“Because of the critical importance of methods, scientific papers must include a description of the procedures used to produce the data, sufficient to permit reviewers and readers of a scientific paper to evaluate not only the validity of the data but also the reliability of the methods used to derive those data. If this information is not available, other researchers may be less likely to accept the data and the conclusions drawn from them. They also may be unable to reproduce accurately the conditions under which the data were derived.” – US National Academy of Sciences (NAS),
http://www.btc.iitb.ac.in/library/On_being_a_scientist.pdf

Willis Eschenbach

Richard Telford says:
February 24, 2011 at 2:59 pm

12. Did they really start the A2000N simulations by cooling the ocean and not the land as they seem to say?

—————–
Do you never worry that your [sic] are out of your depth?
The HadAM3 is an atmosphere model – that is it does not calculate what goes on in the ocean. Instead it has ocean surface temperatures prescribed as a boundary condition. The land temperatures are calculated by the model.

Perhaps unlike you, Richard, I ask questions when I don’t know the answer. It’s an ugly habit, I’m aware of that, one that’s frowned on at RealClimate, but asking questions is the only way I know of to learn. Does asking questions mean someone is “out of their depth”? Generally not, on my planet. I get worried when people stop asking questions …
I understand that the SSTs are prescribed and the land surface temperatures are calculated in the HadAM3. What I’m referring to are the initial conditions input to the HadAM3 model for the kickoff of the A2000N simulations. Or as I said, the conditions that “start the A2000N simulations”. As I understand it, the only changes in the starting conditions are the SSTs, not the land temperature, but that’s not clear, which is why I asked …
Now, given your certitude above, I’m sure that you can show us where the Pall2011 folks talk about setting the starting conditions for the HadAM3 runs, in particular the method they used to set the starting land temperatures, soil wetness, and other conditions. I couldn’t find it, but I’m aware that I might have missed it, which is why I asked. Once you provide that, we can move on to the other 11 questions …
Again, this is why having access to the code and the data used is so vital. If I had that, there’d be no question of what the input climate variables were. Instead, we waste time with this.
Next, speaking of depths that one is in or over, do you never worry that the count of your unrelenting personal attacks on me is inversely proportional to the depth of your actual belief in your scientific claims? Every moment you spend speculating on whether I’m out of my depth is time not spent explaining your view of the science … coincidence? You be the judge.
Finally, whether you or I are out of our depth is immaterial. I say I don’t know how they initialized the land temperatures for the A2000N runs. It seems you are saying you know how they set them, but it’s not clear. If you know, you’ll let us know. Or someone else will. Or not.
But what does “depth”, whatever that means, have to do with that process that we’re engaged in? Someone totally “out of their depth”, a rank beginner, may point me to the correct answer to any of my 12 questions above. Depth is meaningless.
w.

richard verney

And they call this science?
Extraordinary, what a worthless study, and no doubt paid for by the tax payer.
When I read the summary of this report in the newspapers, I do not recall seeing it reported that this was simply the results of a computer model run (or worse still a model run based upon another model run). Quite frankly, any such report should make it clear that it is simply based upon computer models and the findings are therefore likely to be complete and utter bo**ocks.

starzmom

Thank you Willis. And soon coming to another journal somewhere near you, is a different author quoting this stuff as gospel.

D. Patterson

daniel says:
February 24, 2011 at 4:06 pm
Science fiction is definitively close to consensus climate science, and vice versa

Science fiction is fiction based upon speculations about science within the laws of nature.
Science fantasy is fiction based upon speculations about supernatural fantasies.
Supernatural fantasies are often represented by simulacra.
One form of simulacra are numerical models such as the climate models used by consensus Climate Science.
Climate Science models which incorporate supernatural simulacra to represent climate are by definition pseudo-scientific fantasies.
Climate Science models which incorporate supernatural simulacra to represent climate are by definition pseudo-scientific fantasies akin to science fantasy fiction.
Supernatural Climate Science models cannot by definition be natural science fiction.

Willis Eschenbach

Anthony Watts says:
February 24, 2011 at 3:24 pm

Willis just for fun, what would the trend plot look like if you removed the two peak outliers?


WUWT is nothing if not a full service blog. The trend has increased by a whacking great eight-tenths of a millimetre … per century … over the trend of the complete dataset. Still far, far from significant.
w.

Dave Springer

It appears the authors have begun the scientific method. They formed a hypothesis and used a computer model to generate predictions. The next step is to test the predictions.
Where they go astray is they seem to believe they can test the predictions with a computer model. This isn’t how it works in science or engineering. The model outputs are tested against reality. What they’re doing is just about the same as designing an aircraft on an engineering workstation then plugging it into Microsoft Flight simulator to test the design. If it flies as expected in MS Flight Simulator they skip building an actual prototype, skip over the hassle of using a test pilot to verify flight characteristics, and go straight into production and loading up the new planes with paying passengers for the first actual flight.
That’s how absurd this climate prediction science really is… only worse because the aircraft is the entire globe and they’re loading it up with 7 billion paying passengers on its maiden flight. I’m one of the paying passengers and I not only don’t want to be a guinea pig in this grand scheme – I want the cost of my ticket refunded!

Mark Nutley

I am not even reading the comments beforehand, excuse the rudeness, but what a load of bollocks. The floods in the north of England were due to the last Labour government cutting the budget for river and canal clearing; it was bugger all to do with CO2, and everything to do with incompetence.

Willis Eschenbach

Ron Furner says:
February 24, 2011 at 3:38 pm

Willis – Please rest assured that Fig 1 is not a photoshop fantasy. It is a photo of the River Ouse in the City of York (North Yorkshire UK) taken from Lendal Bridge close to the city centre. In the 21 years that I lived in the York area, before leaving for the Land of Nuclear Power Generation in 2005, this scenario has been repeated on at least three occasions. The White Elizabethan-style building (under water on the left-hand side of the photo) is a well known pub which now has great difficulty getting insurance cover!
All this water runs off the North Yorkshire Moors and has done for many hundreds of years.
I look forward to your posts. Keep up the good work
rgf

I didn’t think it looked like it was photoshopped. It looked to me like a huge pile of heartbreak and loss. No surprise, as you point out, nature does that. I just wish that folks who are concerned about possible CO2 effects on the weather were as concerned about current weather effects on the poor … that’s the real problem. The rich, by and large, are not hurt by the weather. It is the poor who suffer, and have for centuries. Claiming to be concerned about the “climate refugees” that may be the result of 2050 weather, while looking away from the issue of people dying today of the current weather, is a loser in my book.

Theo Goodwin

Brilliant work, once again, Willis. Thanks so much for your tireless efforts to reveal the so-called “peer review” process for what it is, namely, a pal review process.
What you describe in the modeler’s work reminds me of the sort of thing I have done when acclimating new technicians to a computer model.

Al Gored

Another marvellous dissection! Though it is difficult to actually digest mush.
I recall the first papers I read about models. It was actually referring to their use in bear biology but made the basic point. It was called ‘Models and Reality’ and the authors tried to emphasize how different the two could be, as a warning to the bear biologists who were eagerly starting to use them for various things. Unfortunately, they didn’t listen and now they are the basis of much of that research and are an integral part of the pseudoscience called Conservation Biology. Thus we have predictions of polar bear extinction, etc.

ferdberple

“I guess this is obvious to programmers but completely unbelievable to neophytes. ”
Agreed, I’ve been coding since ’73. Computers are useful tools, but anyone that thinks models are anything other than models has missed the plot.
Stock market forecasting has fewer variables than climate forecasting. Like the weather, you can quite often predict where the market will be in 2 or 3 days, but you will also make mistakes, just like the weather forecast.
Now try and predict the value of the DOW 50 years from now in constant dollars. Yes, the DOW will likely go up, but will it go up faster than inflation?
The simple fact is that if climate models could really predict something meaningful about the future, they would use the models to predict something with $$ value other than scare stories to try and drum up more funding.

JimF

Well I hope these long modeling sessions feature screen output with little round critters with big and voracious mouths that go “glom, glom” as they fall from the sky, and the modeler can control a modeled laser beam to blast them before they hit the ground and cause a 1000 year flood. Otherwise, it must be terribly boring to be a “scientist” these days.
sarc?/
I was in the business of economic modeling and forecasting for a long time. The results mean little. Only the assumptions count. These folk are, simply put, foolish.

Al Gored

Oops. “digest” should have been “dissect”

Coldfinger

The Autumn 2000 floods weren’t the most disastrous floods in British History, even within living memory.
http://www.exmoor-nationalpark.gov.uk/index/looking_after/climate/the_lynmouth_floods_of_1952_exmoor.htm
http://en.wikipedia.org/wiki/North_Sea_flood_of_1953
Investigation after the Lynton & Lynmouth flood showed that “past floods had occurred at greater magnitudes”.