Nature Magazine's Folie à Deux, Part Deux

Guest Post by Willis Eschenbach

Well, in my last post I thought that I had seen nature at its worst … Nature Magazine, that is. But now I’ve had a chance to look at the other paywalled Nature paper in the same issue, entitled Anthropogenic greenhouse gas contribution to flood risk in England and Wales in autumn 2000, by Pardeep Pall, Tolu Aina, Dáithí A. Stone, Peter A. Stott, Toru Nozawa, Arno G. J. Hilberts, Dag Lohmann and Myles R. Allen (hereinafter Pall2011). The supplementary information is available here, and contains many of the key concepts of the paper. In the autumn of 2000, there was extreme rainfall in southwest England and Wales that led to widespread flooding. Pall2011 explores the question of the expected frequency of this type of event. They conclude (emphasis mine):

… in nine out of ten cases our model results indicate that twentieth century anthropogenic greenhouse gas emissions increased the risk of floods occurring in England and Wales in autumn 2000 by more than 20%, and in two out of three cases by more than 90%.

Figure 1. England in the image of Venice, Autumn 2000. Or maybe Wales. Picture reproduced for pictorial reasons only, if it is Wales, please, UKPersons, don’t bust me, I took enough flak for the New Orleans photo in Part 1. Photo Source

To start my analysis, I had to consider the “Qualitative Law of Scientific Authorship”, which states that as a general rule:

Q ≈ 1 / N^2

where Q is the quality of the scientific study, and N is the number of listed authors. More to the point, however, let’s begin instead with this. How much historical UK river flow data did they analyze to come to their conclusions about UK flood risk?

Unfortunately, the answer is, they didn’t analyze any historical river flow data at all.

You may think I’m kidding, or that this is some kind of trick question. Neither one. Here’s what they did.

They used a single seasonal resolution atmospheric climate computer model (HadAM3-N144) to generate some 2,268 single-years of synthetic autumn 2000 weather data. The observed April 2000 climate variables (temperature, pressure, etc) were used as the initial values input to the HadAM3-N144 model. The model was kicked off using those values as a starting point, and run over and over a couple thousand times. The authors of Pall2011 call this 2,268 modeled single years of computer-generated weather “data” the “A2000 climate”. I will refer to it as the A2000 synthetic climate, to avoid confusion with the real thing.
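To make that ensemble step concrete, the procedure — run one model a couple of thousand times from slightly perturbed initial conditions — can be sketched as below. This is a toy stand-in only: the “model” here is a chaotic logistic map, not HadAM3-N144, and every number is invented for illustration.

```python
import numpy as np

def toy_model(x0, n_steps=180):
    """Stand-in for one climate-model run: iterate a chaotic logistic map
    from initial condition x0 and return the final 'seasonal' state."""
    x = x0
    for _ in range(n_steps):
        x = 3.9 * x * (1.0 - x)  # chaotic regime: tiny perturbations diverge
    return x

rng = np.random.default_rng(42)
base_state = 0.3   # plays the role of the observed April 2000 state
n_members = 2268   # same ensemble size as Pall2011's A2000 set

# Perturb the initial condition slightly for each ensemble member, then
# run the model once per member to build the synthetic 'climate'.
perturbations = rng.normal(0.0, 1e-6, n_members)
ensemble = np.array([toy_model(base_state + p) for p in perturbations])

print(ensemble.min(), ensemble.max())
```

The point of the sketch is that identical-looking starting points fan out across the whole range the model can produce — which says everything about the model’s spread and nothing, by itself, about the real world’s.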

The A2000 synthetic climate is a universe of a couple thousand single-year outcomes of one computer model (with a fixed set of internal parameter settings), so presumably the model space given those parameters is well explored … which means nothing about whether the actual variation in the real world is well explored by the model space. But I digress.

The 2,268 one-year climate model simulations of the A2000 autumn weather dataset were then fed into a second much simpler model, called a “precipitation runoff model” (P-R). The P-R model estimates the individual river runoff in SW England and Wales, given the gridcell scale precipitation.

In turn, this P-R model was calibrated using the output of a third climate model, the ERA-40 computer model reanalysis of the historical data. The ERA-40, like other models, outputs variables on a global grid. The authors have used multiple linear regression to calibrate the P-R model so it provides the best match between the river flow gauge data for the 11 UK rainfall catchments studied, and the ERA-40 computer reanalysis gridded data. How good is the match with reality? Dunno, they didn’t say …
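For what that calibration step amounts to, here is a minimal sketch of multiple linear regression from gridcell-scale precipitation to gauge flow. All numbers are fabricated; the real calibration used ERA-40 output and eleven UK catchments, neither of which is reproduced here. Note the sketch also computes the match-with-reality statistic (the correlation r) that the paper doesn’t report.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: 40 years of autumn-mean precipitation at 3 grid
# cells (ERA-40-style predictors) and observed river flow at one gauge.
n_years, n_cells = 40, 3
grid_precip = rng.gamma(shape=2.0, scale=50.0, size=(n_years, n_cells))
true_coefs = np.array([0.8, 0.3, 0.1])            # unknown in reality
gauge_flow = grid_precip @ true_coefs + rng.normal(0, 5.0, n_years)

# Calibrate the precipitation-runoff relation by multiple linear
# regression: find coefficients mapping gridded precip to gauge flow.
X = np.column_stack([np.ones(n_years), grid_precip])  # intercept + predictors
coefs, *_ = np.linalg.lstsq(X, gauge_flow, rcond=None)

predicted = X @ coefs
r = np.corrcoef(predicted, gauge_flow)[0, 1]
print(f"fitted coefficients: {coefs.round(2)}, correlation with gauge: {r:.2f}")
```

In this toy the fit is nearly perfect because the toy world really is linear plus small noise; how well the assumption holds for real catchments is exactly the unreported question.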

So down at the bottom there is some data. But they don’t analyze that data in any way at all. Instead, they just use it to set the parameters of the P-R model.

Summary to date:

•  Actual April 2000 data and actual patterns of surface temperatures, air pressure, and other variables are used repeatedly as the starting point for 2,268 one-year modeled weather runs. The result is called the A2000 synthetic climate. This 2,268 single years of synthetic weather is used as input to a second Precipitation-Runoff model. The P-R model is tuned to the closest match with the gridcell precipitation output of the ERA-40 climate reanalysis model. Using the A2000 weather data, the P-R model generates 2,268 years of synthetic river flow and flood data.

So that’s the first half of the game.

For the second half, they used the output of four global circulation climate models (GCMs). They used those four GCMs to generate what a synthetic world would have looked like if there were no 20th century anthropogenic forcing. Or in the words of Pall2011, each of the four models generated “a hypothetical scenario representing the “surface warming patterns” as they might have been had twentieth-century anthropogenic greenhouse gas emissions not occurred (A2000N).” Here is their description of the changes between A2000 and A2000N:

The A2000N scenario attempts to represent hypothetical autumn 2000 conditions in the [HadAM3-N144] model by altering the A2000 scenario as follows: greenhouse gas concentrations are reduced to year 1900 levels; SSTs are altered by subtracting estimated twentieth-century warming attributable to greenhouse gas emissions, accounting for uncertainty; and sea ice is altered correspondingly using a simple empirical SST–sea ice relationship determined from observed SST and sea ice.

Interesting choice of things to alter, worthy of some thought … fixed year 1900 greenhouse gases, cooler ocean, more sea ice, but no change in land temperatures … seems like that would end up with a warm UK embedded in a cooler ocean. And that seems like it would definitely affect the rainfall. But let us not be distracted by logical inconsistencies …

Then they used the original climate model (HadAM3-N144), initialized with those changes in starting conditions from the four GCM models, combined with the same initial perturbations used in A2000 to generate another couple thousand one-year simulations. In other words, same model, same kickoff date (I just realized the synthetic weather data starts on April Fools Day), different global starting conditions from output of the four GCMs. The result is called the A2000N synthetic climate, although of course they omit the “synthetic”. I guess the N is for “no warming”.

These couple of thousand years of model output weather, the A2000N synthetic climate, then followed the path of the A2000 synthetic climate. They were fed into the second model, the P-R model that had been tuned using the ERA-40 reanalysis model. They emerged as a second set of river flow and flood predictions.

Summary to date:

•  Two datasets of computer generated 100% genuine simulated UK river flow and flood data have been created. Neither dataset is related to actual observational data, either by blood, marriage, or demonstrated propinquity, although to be fair one of the models had its dials set using a comparison of observational data with a third model’s results. One of these two datasets is described by the authors as “hypothetical” and the other as “realistic”.

Finally, of course, they compare the two datasets to conclude that humans are the cause:

The precise magnitude of the anthropogenic contribution remains uncertain, but in nine out of ten cases our model results indicate that twentieth century anthropogenic greenhouse gas emissions increased the risk of floods occurring in England and Wales in autumn 2000 by more than 20%, and in two out of three cases by more than 90%.

Summary to date

•  The authors have conclusively shown that in a computer model of SW England and Wales, synthetic climate A is statistically more prone to synthetic floods than is synthetic climate B.

I’m not sure what I can say besides that, because they don’t say much beside that.

Yes, they show that their results are pretty consistent with this over here, and they generally agree with that over there, and by and large they’re not outside the bounds of these conditions, and the authors estimated uncertainty by Monte Carlo bootstrapping and are satisfied with the results … but considering the uncertainties that they have not included, well, you can draw your own conclusions about whether the authors have established their case in a scientific sense. Let me just throw up a few of the questions raised by this analysis.
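For readers unfamiliar with the form of the claim, here is a toy version of a Monte Carlo bootstrap on a flood risk ratio, with invented flood rates (emphatically not Pall2011’s numbers). The thing to notice is that the resulting distribution reflects only sampling noise within the two synthetic ensembles; none of the unmodeled uncertainties listed below ever enters it.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical synthetic ensembles: True = flood in that model year.
# Rates are invented for illustration only.
a2000  = rng.random(2268) < 0.10   # 'with GHG' ensemble, ~10% flood years
a2000n = rng.random(2268) < 0.06   # 'no GHG' ensemble, ~6% flood years

# Bootstrap the risk ratio P(flood | A2000) / P(flood | A2000N).
n_boot = 2000
ratios = np.empty(n_boot)
for i in range(n_boot):
    p1 = rng.choice(a2000, 2268, replace=True).mean()
    p0 = rng.choice(a2000n, 2268, replace=True).mean()
    ratios[i] = p1 / p0

# 'Nine out of ten cases' style statement: fraction of bootstrap samples
# in which the risk increase exceeds 20%.
frac_over_20pct = (ratios > 1.2).mean()
print(f"risk increase >20% in {frac_over_20pct:.0%} of bootstrap samples")
```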

QUESTIONS FOR WHICH I HAVE ABSOLUTELY NO ANSWER

1.  How were the four GCMs chosen? How much uncertainty does this bring in? What would four other GCMs show?

2.  What are the total uncertainties when the averaged output of one computer model is used as the input to a second computer model, then the output of the second computer model is used as the input to a third simpler computer model, which has been calibrated against a separate climate reanalysis computer model?

3.  With over 2000 one-year realizations, we know that they are exploring the HadAM3-N144 model space for a given setting of the model parameters. But are the various models fully exploring the actual reality space? And if they are, does the distribution of their results match the distribution of real climate variations? That is an unstated assumption which must be verified for their “nine out of ten” results to be valid. Maybe nine out of ten model runs are unrealistic junk, maybe they’re unalloyed gold … although my money is on the former, the truth is there’s no way to tell at this point.

4.  Given the warnings in the source of the data (see below) that “seldom is it safe to allow the [river gauge] data series to speak for themselves”, what quality control was exercised on the river gauge data to ensure accuracy in the setting of the P-R modeled parameters? In general, flows have increased as more land is rendered impermeable (roads, parking lots, buildings) and as land has been cleared of native vegetation. This increases runoff for a given rainfall pattern, and thus introduces a trend of increasing flow in the results. I cannot tell if this is adjusted for in the analysis, despite the fact that the river gauge records are used to calibrate the P-R model.

5.  Since the P-R model is calibrated using the ERA-40 reanalysis results, how well does it replicate the actual river flows year by year, and how much uncertainty is there in the calculated result?

6.  Given an April 1 starting date for each of the years for which we have records, how well does the procedure outlined in this paper (start the HadAM3-N144 on April Fools Day to predict autumn rainfall) predict the measured 80 years or so of rainfall for which we have actual records?

7.  Given an April 1 starting date for each of the years for which we have records, how well does the procedure outlined in this paper (start the HadAM3-N144 on April Fools Day to predict river flows and floods) predict the measured river flows for the years and rivers for which we have actual records?

8.  In a casino game, four different computer model results are compared to reality. Since they predict different outcomes, if one is right, then three are wrong. All four may be wrong to a greater or lesser degree. Payoff on the bet is proportional to correlation of model to reality. What is the mathematical expectation of return on a $1 bet on one of the models in that casino … and what is the uncertainty of that return? Given that there are four models, will betting on the average of the models improve my odds? And how is that question different from the difficulties and the unknowns involved in estimating only this one part of the total uncertainty of this study, using only the information we’ve been given in the study?

9.  There are a total of six climate models involved, each of which has different gridcell sizes and coordinates. There are a variety of methods used to average from one gridcell scheme to another scheme with different gridcell sizes. What method was used, and what is the uncertainty introduced by that step?

10.  The study describes the use of one particular model to create the two sets of 2,000+ single years of synthetic weather … how different would the sets be if a different climate model were used?

11.  Given that the GCMs forecast different rainfall patterns than those of the ERA-40 reanalysis model, and given that the P-R model is calibrated to the ERA-40 model results, how much uncertainty is introduced by using those same ERA-40 calibration settings with the GCM results?

12.  Did they really start the A2000N simulations by cooling the ocean and not the land as they seem to say?

As you can see, there are lots of important questions left unanswered at this point.
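To put a number on the land-use point in question 4, here is the textbook rational method (Q = C·i·A: peak runoff = runoff coefficient × rainfall intensity × area). The storm and coefficients below are purely illustrative, not values for any actual UK catchment.

```python
# Rational-method sketch: same storm, same catchment, different land cover.
# Q = C * i * A (peak runoff), with C the dimensionless runoff coefficient.
def peak_runoff_m3s(c_runoff, intensity_mm_hr, area_km2):
    """Peak runoff in m^3/s from the rational method, converted to SI."""
    intensity_m_s = intensity_mm_hr / 1000.0 / 3600.0
    area_m2 = area_km2 * 1.0e6
    return c_runoff * intensity_m_s * area_m2

storm_mm_hr, area = 20.0, 100.0                    # one storm, one catchment
q_1950 = peak_runoff_m3s(0.30, storm_mm_hr, area)  # mostly fields/woodland
q_2000 = peak_runoff_m3s(0.55, storm_mm_hr, area)  # roads, roofs, car parks

print(f"1950-style cover: {q_1950:.0f} m^3/s; 2000-style: {q_2000:.0f} m^3/s")
```

Same rainfall, roughly 80% more peak flow purely from land-cover change — a trend baked into the very gauge records used to calibrate the P-R model, with no climate signal involved.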

Reading over this, there’s one thing that I’d like to clarify. I am not scornful of this study because it is wrong. I am scornful of this study because it is so very far from being science that there is no hope of determining if this study is wrong or not. They haven’t given us anywhere near the amount of information that is required to make even the most rough judgement as to the validity of their analysis.
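Before turning to the data, the casino analogy in question 8 can be poked at numerically. Under the most charitable toy assumption — each “model” equals reality plus independent noise — averaging the four models does pay better than betting on any one of them. But real GCMs share biases, so their errors are not independent, and that is precisely where the averaging bet stops being favorable. A sketch under that (generous) assumption:

```python
import numpy as np

rng = np.random.default_rng(7)
reality = rng.normal(0, 1, 500)

# Toy assumption: each of four 'models' = reality + INDEPENDENT error.
# Real GCMs share biases, so their errors are correlated; the shared-error
# case is exactly where averaging stops helping.
models = [reality + rng.normal(0, 1, 500) for _ in range(4)]

corr_single = np.mean([np.corrcoef(m, reality)[0, 1] for m in models])
corr_average = np.corrcoef(np.mean(models, axis=0), reality)[0, 1]

print(f"mean single-model correlation: {corr_single:.2f}")
print(f"model-average correlation:     {corr_average:.2f}")
```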

BACK TO BORING OLD DATA …

As you know, I like facts. Robert Heinlein’s comment is apt:

What are the facts? Again and again and again-what are the facts? Shun wishful thinking, ignore divine revelation, forget what “the stars foretell,” avoid opinion, care not what the neighbors think, never mind the unguessable “verdict of history”–what are the facts, and to how many decimal places? You pilot always into an unknown future; facts are your single clue. Get the facts!

Because he wrote that in 1973, the only thing Heinlein left out was “beware computer model results.” Accordingly, I went to the river flow gauge data site referenced in Pall2011, which is here. I got as far as the part where it says (emphasis mine):

Appraisal of Long Hydrometric Series

… Data precision and consistency can be a major problem with many early hydrometric records. Over the twentieth century instrumentation and data acquisition facilities improved but these improvements can themselves introduce inhomogeneities into the time series – which may be compounded by changes (sometimes undocumented) in the location of the monitoring station or methods of data processing employed. In addition, man’s influence on river flow regimes and aquifer recharge patterns has become increasingly pervasive, over the last 50 years especially. The resulting changes to natural river flow regimes and groundwater level behaviour may be further affected by the less perceptible impacts of land use change; although these have been quantified in a number of important experimental catchments generally they defy easy quantification.

So like most long-term records of natural phenomena, this one also has its traps for the unwary. Indeed, the authors close out the section by saying:

It will be appreciated therefore that the recognition and interpretation of trends relies heavily on the availability of reference and spatial information to help distinguish the effects of climate variability from the impact of a range of other factors; seldom is it safe to allow the data series to speak for themselves.

Clearly, the authors of Pall2011 have taken that advice to heart, as they’ve hardly let the data say a single word … but on a more serious note, since this is the data they used regarding “climate variability” to calibrate the P-R model, did the Pall2011 folks follow the advice of the data curator? I see no evidence of that either way.

In any case, I could see that the river flow gauge data wouldn’t be much help to me. I was intrigued, however, by the implicit claim in the paper that extreme precipitation events were on the rise in the UK. I mean, they are saying that the changing climate will bring more floods, and the only way that can happen is if the UK has more extreme rains.

Fortunately, we do have another dataset of interest here. Unfortunately it is from the Hadley Centre again, this time the Hadley UK Precipitation dataset of Alexander and Jones, and yes, it is Phil Jones (HadUKP). Fortunately, the reference paper doesn’t show any egregious issues. Unfortunately but somewhat unavoidably, it uses a complex averaging system. Fortunately, the average results are not much different from a straight average on the scale of interest here. Unfortunately, there’s no audit trail so while averages may only be slightly changed, there’s no way to know exactly what was done to a particular extreme in a particular place and time.

In any case, it’s the best we have. It lists total daily rainfall by section of the UK, and one of these sections is South West England and Wales, which avoids the problems in averaging the sections into larger areas. Figure 2 shows the autumn maximum one-day rainfall for SW England and Wales, which was the area and time-frame Pall2011 studied regarding the autumn 2000 floods:

Figure 2. Maximum autumn 1-day rainfall, SW England and Wales, Sept-Oct-Nov. The small trend is obviously not statistically different from zero.

The extreme rainfall shown in this record is typical of records of extremes. In natural records, extremes rarely follow a normal (Gaussian, or bell-shaped) distribution. Instead, these records typically contain a few extremely large values, even when we’re looking only at the extremes. The kind of extreme rainfall that led to the flooding of 2000 can be seen in Figure 3. I see this graph as a cautionary tale, in that if the record had started a year later, the one-day rainfall in 2000 would be by far the largest in the record.
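That skewed behaviour is not a defect of the rainfall record; it is what the statistics of block maxima predict. Annual maxima of even perfectly well-behaved daily rainfall follow a right-skewed extreme-value (Gumbel-type) distribution with a long upper tail, not a bell curve. A quick synthetic check (exponential daily rain is an assumption for illustration, not a fit to HadUKP):

```python
import numpy as np

rng = np.random.default_rng(3)

# Many synthetic 'years' of 91 autumn days each, with well-behaved
# (exponential) daily rainfall; the scale is an arbitrary illustration.
daily = rng.exponential(scale=5.0, size=(1000, 91))
annual_max = daily.max(axis=1)  # autumn maximum 1-day rainfall per 'year'

# Block maxima come out right-skewed (Gumbel-like): the mean exceeds the
# median, and a handful of years dwarf all the others.
mean, median = annual_max.mean(), np.median(annual_max)
print(f"mean {mean:.1f}, median {median:.1f}, largest {annual_max.max():.1f}")
```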

In any case, for the 70 years of this record there is no indication of increasing flood risk from climate factors. Pall2011 has clearly shown that in two out of three cases the chance of a synthetic autumn flood in a synthetic SW England and Wales went up by more than 90% in synthetic climate A, compared with the synthetic flood risk in synthetic climate B.

But according to the observational data, there’s no sign of any increase in autumn rainfall extremes in SW England and Wales, so it seems very unlikely they were talking about our SW England and Wales … gives new meaning to the string theory claim of multiple parallel universes, I guess.

IMPLICATIONS OF THE PUBLICATION OF THIS STUDY

It is very disturbing that Nature Magazine would publish this study. There is one and only one way in which this study might have stood the slightest chance of scientific respectability. This would have been if the authors had published the exact datasets and code used to produce all of their results. A written description of the procedures is pathetically inadequate for any analysis of the validity of their results.

At an absolute minimum, to have any hope of validity the study requires the electronic publication of the A2000 and A2000N climates in some accessible form, along with the results of simple tests of the models involved (e.g. computer predictions of autumn river flows, along with the actual river flows). In addition, the study needs an explanation of the ex-ante criteria used to select the four GCMs and the lead model, and the answers to the questions I pose above, to be anywhere near convincing as a scientific study. And even then, when people finally get a chance to look at the currently unavailable A2000 and A2000N synthetic climates, we may find that they bear no resemblance to any reality, hypothetical or otherwise …

As a result, I put the onus on Nature Magazine on this one. Given the ephemeral nature of the study, the reviewers should have asked the hard questions. The Nature editors, on the other hand, should have required that the authors post sufficient data and code so that other scientists can see if what they have done is correct, or if it would be correct if some errors were fixed, or if it is far from correct, or just what is going on.

Because at present, the best we can say of the study is a) we don’t have a clue if it’s true, and b) it is not falsifiable … and while that looks good in the “Journal of Irreproducible Results”, for a magazine like Nature that is ostensibly about peer-reviewed science, that’s not a good thing.

w.

PS – Please don’t construe this as a rant against computer models. I’ve been programming computers since 1963, longer than many readers have been around. I’m fluent in R, C, VBA, and Pascal, and I can read and write (slowly) in a half-dozen other computer languages. I use, have occasionally written, and understand the strengths, weaknesses, and limitations of a variety of computer models of real-world systems. I am well aware that “all models are wrong, and some models are useful”; that’s why I use them and study them and occasionally write them.

My point is that until you test, really test your model by comparing the output to reality in the most exacting tests you can imagine, you have nothing more than a complicated toy of unknown veracity. And even after extensive testing, models can still be wrong about the real world. That’s why Boeing still has test flights of new planes, despite using the best computer models that billion$ can buy, and despite the fact that modeling airflow around a plane is orders of magnitude simpler than modeling the global climate …

I and others have shown elsewhere (see my thread here, the comment here, and the graphic here) that the annual global mean temperature output of NASA’s pride and joy climate model, the GISS-E GCM, can be replicated to 98% accuracy by the simple one-line single-variable equation T(n) = [lambda * Forcings(n-1)/tau + T(n-1) ] exp(-1/tau) with T(n) being temperature at time n, and lambda and tau being constants of climate sensitivity and lag time …

Which, given the complexity of the climate, makes it very likely that the GISSE model is both wrong and not all that useful. And applying four of that kind of GCMs to the problem of UK floods certainly doesn’t improve the accuracy of your results …
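For concreteness, that one-line lagged emulator takes only a few lines of code. The constants below are illustrative placeholders, not the λ and τ actually fitted to the GISS-E output:

```python
import math

def emulate(forcings, lam, tau, t0=0.0):
    """One-box lagged emulator:
    T(n) = [lam * Forcings(n-1) / tau + T(n-1)] * exp(-1/tau).
    lam (climate sensitivity) and tau (lag time, years) would be fitted
    to the GCM being emulated; values used below are purely illustrative."""
    temps = [t0]
    for f in forcings[:-1]:
        temps.append((lam * f / tau + temps[-1]) * math.exp(-1.0 / tau))
    return temps

# Toy usage: a step jump in forcing produces a lagged, exponential
# relaxation toward a new equilibrium temperature.
step_forcing = [0.0] * 5 + [3.7] * 45   # W/m^2; 3.7 ~ a CO2-doubling step
t = emulate(step_forcing, lam=0.8, tau=3.0)
print(f"after 5 forced years: {t[10]:.2f}; near equilibrium: {t[-1]:.2f}")
```

That a driven first-order lag like this tracks a full GCM’s global mean temperature so closely is the whole point: the emulator has two constants, the GCM has millions of lines of code.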

The problem is not computer models. The problem is Nature Magazine trying to pass off the end results of a long computer model daisy-chain of specifically selected, untested, unverified, un-investigated computer models as valid, falsifiable, peer-reviewed science. Call me crazy, but when your results represent the output of four computer models, which are fed into a fifth computer model, whose output goes to a sixth computer model, which is calibrated against a seventh computer model, and then your results are compared to a series of different results from the fifth computer model but run with different parameters, in order to demonstrate that flood risks have changed from increasing GHGs … well, when you do that, you need to do more than wave your hands to convince me that your flood risk results are not only a valid representation of reality, but are in fact a sufficiently accurate representation of reality to guide our future actions.

153 Comments
Alan the Brit
February 25, 2011 3:08 am

Ron Furner says:
February 24, 2011 at 3:38 pm
Willis – Please rest assured that Fig 1 is not a photoshop fantasy. It is a photo of the River Ouse in the City of York (North Yorkshire UK) taken from Lendle Bridge close to the city centre. In the 21 years that I lived in the York area, before leaving for the Land of Nuclear Power Generation in 2005, this scenario has been repeated at least on three occasions. The white Elizabethan-style building (under water on the left-hand side of the photo) is a well known pub which now has great difficulty getting insurance cover!
All this water runs off the North Yorkshire Moors and has done for many hundreds of years.
I look forward to your posts. Keep up the good work
A nice pint of Sam Smith’s can be had at that pub, oh the memories come flooding back of my yoof! One can see the flood levels gouged into the brickwork outside, & dated, & even inside too, all well above floor level? (I seem to recall but memory is dodgy on that score!) The actual river level is well below outside road level usually. Please note, these events don’t happen once adequate flood alleviation defences are constructed. Thames Water spent £Ms (of taxpayers’ money) in the 1970s/80s on flood defences for the Thames catchment area as a result of the severe flooding in the late 1940s when London got hit. (It’s amazing how things get done after the capital city gets hit by anything, & the surrounding areas where those in control live & work!) Even then the Thames Barrier was just a dream.
I presume this study & “puter model” comes with the usual caveats?????????? Deja Vu, 1925 Pocket Oxford Dictionary….. Synthetic: “artificial, imitation, not existing in nature”. Sophisticated: “spoil the purity or simplicity of, or adulterate” (from sophist: a paid teacher of philosophy in ancient Greece willing to avail himself of fallacies to help his case). Simulate: “feign, pretend, wear the guise of, TAMPER with, act the part of, counterfeit, shadowy likeness of, mere pretence”. I don’t choose the words these guys use to describe their artworks, they do!!!

Ed Zuiderwijk
February 25, 2011 3:10 am

What’s also missing is a control. They should have run the same “analysis” on an area in the UK, say somewhere in Yorkshire, where no flooding occurred, and see how that would fit in their models.
Shouldn’t a simulation where you feed the output from one model into another model, and then into yet another, properly be called a “cascade”, or a “waterfall model”? Perhaps that would explain it.

Keith Davies
February 25, 2011 4:01 am

GIGO

Viv Evans
February 25, 2011 4:02 am

Actually, I pity those poor researchers who spent their lives in computer rooms, never getting out to feel an actual raindrop land on their nose …
Thanks for this excellent and – sadly, for science – hilarious dissection of yet another Nature effort.
Now one wonders again who pal-reviewed this one.
Living in one of the affected areas, as the graphs linked by JurajV above show: sometimes it rains a lot, other times it doesn’t … managing flood defenses properly is the way to address this problem. Blaming AGW/CO2 most certainly isn’t.
Our parks provide huge run-off areas for flash floods, which can happen extremely quickly due to the geography and geology of the catchment area. It has withstood several tests now, and for us dog walkers it is huge fun to see the ducks swimming on the inundated football pitches being chased by dogs who can’t believe their luck.
As for this:
the “Qualitative Law of Scientific Authorship”, which states that as a general rule:
Q ≈ 1 / N^2
where Q is the quality of the scientific study, and N^2 is the square of the number of listed authors.

Yep. Have observed this for decades … and oddly, it always seem to be papers in Nature who give proof, even if the subject is not climatology …
Thanks, Willis!

Chris Wright
February 25, 2011 4:25 am

Another excellent piece by Willis. I remember reading the report in the Daily Telegraph and, as always, it was completely uncritical. It’s been obvious for a long time that much of the bad science we are seeing is based on one or another of the climate models, and that the output of these models is treated almost as if it were empirical data. But this study does seem to represent a new low.
As often seems to be the case, the one ray of sunshine comes from the actual data. That graph demonstrates simply and elegantly that the study is junk. I really think that Nature should be re-classified as a science fiction magazine. It seems they will print anything as long as it contributes to the global warming hysteria.
I’m sure that computer models, including climate models, have their uses. But they cannot forecast future climate, just as they can’t correctly forecast temperatures for the coming winter. And their output is not empirical data. You can only get empirical data by measuring what’s happening in the real world.
No, the problem is that climate models are being abused on an almost industrial scale.
Chris

Ross H
February 25, 2011 5:04 am

Good to see York in the image here. Practically over the river from where I live.
Happens every year does the flooding. Not always that bad, but most of the time it’s high enough for the Kings Arms (the pub to the left of the picture) to close its river side door with a flood barrier. I have lived in York since I was 19 and seen this a lot and they never sort it out properly. One year I was working just 30 meters from that place and from the river being at normal height when I started work (8:30 ish) it was up to the height on the picture by 12:30. It can come up quite fast. It you go into the pub, there is a gauge top how height the river has been in the past, and it has been much higher before.
It also froze over in December, much to the joy of those foolish enough to ride bikes and write their names on the ice.

John Barrett
February 25, 2011 5:05 am

Previous commenters have identified the picture as York. It also looks a bit like Tewkesbury, which is regularly flooded by the Avon and Severn confluence. In fact the only part of town that doesn’t get flooded is the Abbey.
Did they know something in the 12th Century that we don’t ?

Ross H
February 25, 2011 5:07 am

It you go into the pub, there is a gauge top how height the river has been in the past, and it has been much higher before.
corrected too…
If you go into the pub, there is a gauge of how high the river has been in the past, and it has been much higher before.
(chatting and typing at the same time – fail)

Jit
February 25, 2011 5:45 am

Thanks for the analysis. How far has Nature fallen when it publishes something that belongs in a third-tier publication?
It was always the case that Nature needed *good data* from a *good experiment* providing a *novel result*. No longer, it seems. For the present work only offers the last of these three (the novel result). It is obvious that the same work, had it not found a link between CO2 and the floods, would never have passed muster.
However, I don’t think the graph shown (of 1-day extreme rain events) was the right one to use. The floods in question took days to fall out of the sky.
That said, the catchments involved are so heavily modified from a ‘pristine’ state that not many conclusions about flood rates can be warranted. To the commenter above who made a political complaint about river clearance: actually, no. Natural dams and blockages slow the passage of the water downstream – they’re good at evening out extreme events. There has even been consideration of introducing beavers for their flood-control skills.

Micky H Corbett
February 25, 2011 5:59 am

Willis
Your PS about computer models is bang on. I am a great believer in this.
As for this paper…models through models…well, I’m waiting for the Cup Of Tea and The Improbability Drive to appear.

wsbriggs
February 25, 2011 6:29 am

The IPCC5 report may be being written as we sit here, but equally so, the rebuttal is being written in WUWT. Unlike previous “works of art”, there are well documented, well researched pieces to counter the nonsense being promulgated. I particularly enjoyed Willis using the “bet in the Casino” analogy.
Having lived through (professionally) the uber-hyped Robotics-fad followed by the AI-fad periods of “Computer Science” (if it needs science in the name, it’s not one), I see the same PR pieces, the same headline-hunting behavior as those searching for funding back then. At least some of them were trying to start companies to produce things; the current crew seems content to suck at the teat of Government.

Vince Causey
February 25, 2011 8:06 am

Interestingly, there is an article in the Indy that consists of a discussion carried out by email between the science correspondent and Freeman Dyson. At one point, in reply to the correspondent’s appeal to consensus, Dyson mentions computer models as being one of the greatest problems with the current consensus. In his view, decades of working with models has made researchers confuse the output from their models with reality.
This particular fiasco fits Dyson’s argument to a Tee. You can almost get into the mind of the modellers and imagine them imagining that what they are doing is in some sense describing the real world. In fact, Dyson uses the word ‘delusional.’ Surely, if these people were not deluded by their dogma, they would never have produced such research, and tried to pass it off as science. If the editors of Nature weren’t also delusional, they would have thrown it into the garbage.

richard verney
February 25, 2011 10:00 am

Willis in response to your post at February 25 2011 at 2:30 am
I enjoyed your article and the deconstruction of the Nature paper.
My point is that your rebuttal to Clark’s comment was far too strong. Both sides are no doubt guilty of making statements which are too strong and which should properly contain caveats as to uncertainties. You assert that it is a FACT that IR radiation is absorbed by the oceans and that this results in the entrainment of energy. I would accept that if IR radiation is absorbed by the oceans then energy would be entrained. However, I stand by my comment that the absorption of IR radiation by the oceans is a point yet to be proved and hence it is presently speculation.
Sometime back I read the post on scienceofdoom to which you refer. I recall that it was an interesting post and that it accepted the point made by me that some 90% of all IR radiation is absorbed within the first 10 microns. My recollection of the article was that it went off track by failing to appreciate the significance of the aforementioned point (especially taking into account that approx 20% of IR is absorbed within just 1 micron and 50% within 5 microns), and instead analysed the position on the assumption that the IR somehow found its way into the well-mixed ocean layer (if I recall correctly the author assumes that the IR found its way into the first 5 to 20 mm of the ocean). However, there is an overwhelming likelihood that there is no effective interface between the first few microns and the bulk ocean. With windswept spray, spume etc., it is difficult to see how there could be an effective interface, the more so given that the energy this layer receives from IR goes to increase the rate of evaporation and convection. If there isn’t effective penetration, then there can be no mixing into the bulk ocean.
My recollection of the scienceofdoom post was that no empirical observational data was set out in support of the proposition that IR radiation is absorbed, and the author ran some model in support of his proposition. I personally place no reliance on model runs which do no more than analyse and reflect upon the assumptions made by, and the shortcomings in the state of knowledge and understanding possessed by, the programmer.
If I recall correctly, the most emphatic point in favour of the proposition was that without IR the oceans would quickly freeze over, and the author, as is typical in the AGW debate, sought to reverse the burden of proof and suggest that anyone who disputes what he says should prove him wrong, rather than proving the correctness of his theory/hypothesis. I note that you adopt a similar stance and that you provide no references/citations to empirical observational data.
One of the problems with the AGW hypothesis is that its proponents always seek to discuss averages (average conditions, temperatures, radiation etc) when in practice the average condition is rarely encountered in the real world, and this use of averages does not give full recognition to what is going on. Parts of some oceans are permanently frozen, some seas freeze over from time to time, some never freeze. They all receive different amounts of solar energy, and some of the solar energy received in one place is transported to other areas by way of currents etc. One needs to see a diurnal energy budget for, say, each and every 100 sq miles of the Earth to even begin to build up a picture of what might be going on.
As far as the oceans are concerned, this would have to include the energy from all geothermal/hydrothermal sources. As regards hydrothermal sources, the amount of this energy may be very small, or, since we haven’t mapped the oceans, it may be larger than we think. As regards geothermal energy, one has to consider the depths of the ocean and the fact that they are closer to the mantle. If the sea bed were not covered by water, the ground would no doubt be hot to walk on. As you are no doubt aware, there are various studies that show the temperature profiles of boreholes to increase by 1 deg C for every 10 to 30 m in depth. If a similar relationship holds true, given that the deepest oceans are about 11,000 metres deep and the average depth is about 4,000 metres, this is like the oceans sitting on a hotplate with poor conduction, but it could amount to quite a bit of energy.
I consider that it is generally accepted that if we have erred with our assessment of cloud albedo by 1 or 2% then that could explain the warming noted in the various temperature sets (and that assumes that those sets are correct). Given that we have little data on cloud cover, this seems a candidate that certainly can’t be ruled out.
If you actually have some real data showing that IR is absorbed by the oceans, I certainly would be interested in reading it.
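The depth figures quoted above (roughly 20% of IR absorbed within 1 micron, 50% within 5 microns, 90% within 10 microns) can be compared against a simple Beer-Lambert absorption profile. The sketch below is purely illustrative: it assumes a single effective absorption coefficient, chosen here to match the 50%-at-5-microns figure. Notably, one exponential cannot reproduce all three quoted percentages at once, which reflects the fact that real longwave absorption in water is strongly wavelength-dependent.

```python
import math

def fraction_absorbed(depth_m, k_per_m):
    """Cumulative fraction of downwelling IR absorbed above a given depth,
    per the Beer-Lambert law with a single effective absorption coefficient."""
    return 1.0 - math.exp(-k_per_m * depth_m)

# Effective coefficient chosen (an assumption, not a spectrally resolved
# value) so that half the IR is absorbed in the top 5 microns.
k = math.log(2) / 5e-6  # roughly 1.4e5 per metre

for depth_um in (1, 5, 10, 50):
    f = fraction_absorbed(depth_um * 1e-6, k)
    print(f"top {depth_um:3d} um: {100 * f:5.1f}% absorbed")
```

Under this single-coefficient assumption the top micron absorbs about 13% (not the 20% quoted) and the top 10 microns about 75% (not 90%), which is a reminder of how much the broadband spectrum matters to these skin-layer arguments.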

oeman50
February 25, 2011 10:10 am

I had one comment on the study in question: it uses CO2 data from 1900 and then assumes the delta between then and 2000 is anthropogenic. It may be we often start discussions by stipulating to that assumption, but I do not remember seeing a study that determines what ALL of the natural sources of CO2 and other natural GHGs are (like methane, for example). It is logically possible there are natural mechanisms responsible for the increase in atmospheric GHG concentrations that have not been studied or accounted for because everyone has made the assumption that it is Man that is responsible.
I have seen some very pretty cartoons of the “carbon cycle” that include the contributions of Man, but they are presented at face value and there are no statements of the potential errors or uncertainties in the data represented. I am sure they are significant; the world is a vast place.
To be able to verify the contribution of the natural world to the GHG cycle may be one of those unverifiable conditions; I get the impression it is a chaotic process, as is weather. But I believe we need to properly characterize the uncertainty of the GHG rise in the same way we need to characterize the uncertainty in temperature measurements.

Ben of Houston
February 25, 2011 10:15 am

Diagram:
Model 1->Model 2 -> Model 3 Model 6 —/\
Did I get that correct? Do these people not understand that in iterative models, offset errors don’t cancel? They propagate. One model feeding a second is already a questionable issue, but a group of six models feeding each other? If they conclusively showed the sky was blue, I’d question it.
Come on. They teach this stuff in sophomore-level engineering (when we first discuss iterative calculations, and our models of stupidly simple systems routinely went to infinity). Even though they taught us how to fix the runaway-model problems, they instilled in us the knowledge about offsets and error propagation.
How can undergraduate engineers know this, but PhD holders get published in Nature producing this drivel?
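The point about offsets compounding through a model chain can be shown with a toy simulation (the bias values and chain length below are hypothetical, chosen purely to illustrate the mechanism): when each stage passes its output to the next and carries a small systematic bias, the biases add up linearly, whereas independent random errors partly cancel on average.

```python
import random

def run_chain(x, biases, noise_sd=0.0, rng=None):
    """Pass a value through a chain of 'models', each stage adding its
    own systematic bias plus optional random noise."""
    rng = rng or random.Random(0)
    for b in biases:
        x = x + b + rng.gauss(0.0, noise_sd)
    return x

true_value = 100.0
biases = [0.5] * 6  # six chained models, each biased +0.5 (hypothetical)

# Systematic offsets accumulate linearly with chain length:
print(run_chain(true_value, biases) - true_value)  # -> 3.0

# Independent random errors, by contrast, partly cancel on average:
runs = [run_chain(true_value, [0.0] * 6, noise_sd=0.5, rng=random.Random(i))
        for i in range(1000)]
mean_err = sum(r - true_value for r in runs) / len(runs)
print(round(mean_err, 3))  # close to zero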

Ben of Houston
February 25, 2011 11:15 am

Please excuse my attempts to diagram the model usage. The HTML interpretation ate it.

Darkinbad the Brightdayler
February 25, 2011 11:27 am

You can pretty much do what you like in Cloud Cuckoo Land.
& they did by the look of it.
It’s not Science, it’s a form of Science Fiction.
It beggars belief that Nature published this.

Tim Clark
February 25, 2011 11:57 am

You need to adjust the 1931 value down a bit in Fig. 2.
J. Hansen can show you how it’s done.

February 25, 2011 12:32 pm

When you say: “The kind of extreme rainfalls leading to the flooding of 2000 are seen in Figure 3, ” you must mean Figure 3.
Nothing odd about Nature publishing this kind of thing. A magazine that calls AGW skeptics “deniers,” has obviously lost all objectivity.
In any case, a fine post.

Doctor Gee
February 25, 2011 1:20 pm

Given the continual increase in CO2 concentrations since 2000, our UK brethren must surely count themselves lucky that they haven’t had a similar precipitation/flood event since. /sarc off
Ignoring the daisy chain computer linkage for the moment, would a similar “analysis” by the “study” authors have yielded similar dire predictions if they had modeled any other recent year using the same method?

February 25, 2011 3:27 pm

Willis,
In case you haven’t heard yet- in a post today- http://www.wattsupwiththat.com/2011/02/25/currys-2000-comment-question-can-anyone-defend-%e2%80%9chide-the-decline%e2%80%9d/
“Al Gored says:
February 25, 2011 at 12:10 pm
OT but does anyone know which two new studies these Dems are hanging their hopes on?
“Two key House Democrats called on Republicans Thursday to hold a hearing on the latest climate science amid efforts by the GOP to block the Environmental Protection Agency’s climate authority.
In a letter to the top Republicans on the House Energy and Commerce Committee, Reps. Henry Waxman (D-Calif.) and Bobby Rush (D-Ill.) pointed to two new studies that link climate change to extreme weather.”
www. thehill.com/blogs/e2-wire/677-e2-wire/145937-house-dems-call-for-climate-science-hearings-amid-gop-efforts-to-block-epa-climate-rules
Methinks that they are confusing this process with a UK whitewash.”
The Oxford study, that you reviewed in this post, is one of the reasons given for requesting a hearing (to ensure the EPA gets funded). Thought you might want to know about this- sorry for wasting your time if you already knew this info.

Mark Coates
February 25, 2011 3:44 pm

“Ron Furner says:
Please rest assured that Fig 1 is not a photoshop fantasy. It is a photo of the River Ouse in the City of York (North Yorkshire UK) taken from Lendle Bridge
No, Ouse Bridge ( not Lendal, NOT Lendle)
close to the city centre. In the 21 years that I lived in the York area, before leaving for the Land of Nuclear Power Generation in 2005, this scenario has been repeated on at least three occasions. The white Elizabethan-style building (under water on the left-hand side of the photo) is a well-known pub which now has great difficulty getting insurance cover!
All this water runs off the North Yorkshire Moors
No, Yorkshire Dales
and has done for many hundreds of years.
No, thousands
I look forward to your posts. Keep up the good work
rgf

richard verney
February 25, 2011 6:13 pm

Willis in response to your post at February 25 2011 at 11:31 am
I see that you are a subscriber to the Trenberth policy on burden of proof. It is your theory that the sea does not freeze because LWIR in some way heats it up, and it is therefore up to you to prove your theory, not for me to disprove it. I would suggest that there is an obvious reason why Trenberth has been unable to find his missing energy in the oceans, namely, CO2 does not heat the oceans.
We both know that whether an ocean freezes is much more complex than the energy budget you describe. In passing, it strikes me as somewhat strange that although, on your figures, the direct input energy received by the ocean is only about 170 W/sqm from the sun, this amount of energy supposedly produces about 330 W/sqm of back radiation to balance the budget. And there I thought that Trenberth et al. were proposing that the Earth receives about 1,366 W/sqm (less about 6% reflected by the atmosphere, less 20% reflected by clouds, and 4 to 6% reflected off the water itself), which during the day is equivalent to about 683 W/sqm (less the reflected proportion). One should consider the input energy from the sun during the day but take into account that the ocean is radiating/evaporating/convecting heat 24 hours a day and that back radiation is supposedly a 24-hour energy source.
Please detail the energy budgets for the following:
1. Aral Sea at 45º30 N, 36º35E
2. Aegean Sea at 44º55 N, 13º07E
3. Caspian Sea at 40º58 N, 50º54E
4. Mediterranean Sea at 43º0 N, 3º51E
5. Baltic Sea at 61º0 N, 19º40E
6. Atlantic Sea at 61º0 N, 6º40W
7. 75 miles North of Suez and 75 miles South of Suez. If you have ever sailed through Suez, you will know that there is a substantial temperature drop between the Red Sea and the Med (in the region of 4 to 5 degs C) although the energy budget will be broadly similar for both these locations.
Please detail the precise energy budget at which an ocean begins to freeze. Please explain the different temperature profiles of these oceans/seas in accordance with the energy budget they receive.
As I noted in my previous post, you will not see what is going on in the real world if you only ever consider the notional average condition.
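For what it is worth, the usual reconciliation of the ~1,366 W/sqm and ~170 W/sqm figures debated above is purely geometric: the Earth intercepts sunlight over a disc (πr²) but averages it over a sphere (4πr²), so the round-the-clock average at the top of the atmosphere is a quarter of the solar constant, before albedo and atmospheric absorption are subtracted. The sketch below uses round illustrative fractions for albedo and atmospheric absorption (they are assumptions for arithmetic, not Trenberth's exact figures), and takes no position on the back-radiation dispute itself.

```python
SOLAR_CONSTANT = 1366.0  # W/m^2, measured facing the Sun at top of atmosphere

# Sunlight is intercepted over a disc (pi*r^2) but spread over a sphere
# (4*pi*r^2), hence the factor of 4 in the 24-hour average.
toa_average = SOLAR_CONSTANT / 4.0  # about 341.5 W/m^2

albedo = 0.30  # illustrative round value for planetary albedo
absorbed_total = toa_average * (1.0 - albedo)  # about 239 W/m^2

atmos_fraction = 0.28  # illustrative fraction absorbed by the atmosphere
surface_average = absorbed_total * (1.0 - atmos_fraction)  # about 172 W/m^2

print(f"{toa_average:.1f} {absorbed_total:.1f} {surface_average:.1f}")
```

The ~683 W/sqm figure is the same arithmetic done for the sunlit hemisphere only (divide by 2 instead of 4), which is why it is double the 24-hour average rather than a contradiction of it.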

richard verney
February 25, 2011 6:16 pm

Correction to my last post. No. 2 is the Adriatic, not the Aegean.