Why Reanalysis Data Isn't …

Guest Post by Willis Eschenbach

I was reading through the recent Trenberth paper on ocean heat content that’s been discussed at various locations around the web. It’s called “Distinctive climate signals in reanalysis of global ocean heat content”, paywalled, of course. [UPDATE: my thanks to Nick Stokes for locating the paper here.] Among the “distinctive climate signals” that they claim to find are signals from the massive eruptions of Mt. Pinatubo in mid-1991 and El Chichon in mid-1982. They show these claimed signals in my Figure 1 below, which is also Figure 1 in their paper.

ORIGINAL CAPTION: Figure 1. OHC integrated from 0 to 300 m (grey), 700 m (blue), and total depth (violet) from ORAS4, as represented by its 5 ensemble members. The time series show monthly anomalies smoothed with a 12 month running mean, with respect to the 1958–1965 base period. Hatching extends over the range of the ensemble members and hence the spread gives a measure of the uncertainty as represented by ORAS4 (which does not cover all sources of uncertainty). The vertical colored bars indicate a two year interval following the volcanic eruptions with a 6 month lead (owing to the 12 month running mean), and the 1997–98 El Niño event again with 6 months on either side. On lower right, the linear slope for a set of global heating rates (W m-2) is given.

I looked at that and I said “Whaaa???”. I’d never seen any volcanic signals like that in the ocean heat content data. What was I missing?

Well, what I was missing is that Trenberth et al. are using what is laughably called “reanalysis data”. But as the title says, reanalysis “data” isn’t data in any sense of the word. It is the output of a computer climate model masquerading as data.

Now, the basic idea of a “reanalysis” is not a bad one. If you have data with “holes” in it, if you are missing information about certain times and/or places, you can use some kind of “best guess” algorithm to fill in the holes. In mining, this procedure is quite common. You have spotty data about what is happening underground. So you use a kriging procedure employing all the available information, and it gives you the best guess about what is happening in the “holes” where you have no data. (Please note, however, that if you claim the results of your kriging model are real observations, if you say that the outputs of the kriging process are “data”, you can be thrown in jail for misrepresentation … but I digress, that’s the real world and this is climate “science” at its finest.)

The problems arise as you start to use more and more complex procedures to fill in the holes in the data. Kriging is straight math, and it gives you error bars on the estimates. But a global climate model is a horrendously complex creature, and gives no estimate of error of any kind.
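
To make that concrete, here is a minimal kriging sketch in R (using the gstat and sp packages, on made-up numbers, not anything from this post). The thing to notice is the last line: kriging hands back a variance for every filled-in hole, which is exactly the honest error estimate that a climate-model reanalysis never provides.

library(sp)
library(gstat)

# 50 scattered "observations" of some field -- ore grade, temperature, whatever
set.seed(42)
obs <- data.frame(x = runif(50), y = runif(50))
obs$z <- sin(4 * obs$x) + cos(4 * obs$y) + rnorm(50, sd = 0.1)
coordinates(obs) <- ~ x + y

# a regular grid of "holes" where we have no data
grid <- expand.grid(x = seq(0, 1, 0.05), y = seq(0, 1, 0.05))
coordinates(grid) <- ~ x + y
gridded(grid) <- TRUE

# fit a variogram model to the observations, then krige onto the grid
vg <- fit.variogram(variogram(z ~ 1, obs), vgm(1, "Sph", 0.5, 0.1))
kr <- krige(z ~ 1, obs, grid, model = vg)

# kr$var1.pred is the best guess in each hole; kr$var1.var is the kriging
# variance, i.e. an error bar on every single estimate
summary(kr$var1.var)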

Now, as Steven Mosher is fond of pointing out, it’s all models. Even something as simple as

Force = Mass times Acceleration 

is a model. So in that regard, Steven is right.

The problem is that there are models and there are models. Some models, like kriging, are both well-understood and well-behaved. We have analyzed and tested the model called “kriging”, to the point where we understand its strengths and weaknesses, and we can use it with complete confidence.

Then there is another class of models with very different characteristics. These are called “iterative” models. They differ from models like kriging or F = M A because at each time step, the previous output of the model is used as the new input for the model. Climate models are iterative models. A climate model, for example, starts with the present weather and predicts where the weather will go at the next time step (typically a half hour).

Then that result, the prediction for a half hour from now, is taken as input to the climate model, and the next half-hour’s results are calculated. Do that about 17,500 times (48 half-hour steps a day for 365 days), and you’ve simulated a year of weather … lather, rinse, and repeat enough times, and voila! You now have predicted the weather, half-hour by half-hour, all the way to the year 2100.
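
To make the iteration concrete, here is a toy sketch in R (a made-up one-line “model”, not any real GCM) showing how feeding each step’s output back in as the next step’s input lets even a minuscule per-step error compound over a simulated year:

# toy iterative "model": each step's output is the next step's input
steps <- 48 * 365           # half-hour steps in one simulated year (~17,500)
state <- 15                 # starting "temperature" in degrees C
for (i in 1:steps) {
  # the "physics" should return the state unchanged, but a systematic
  # error of 0.001% per step creeps in and is fed back as the next input
  state <- state * 1.00001
}
state                       # ~17.9 C: the tiny error has compounded to roughly 19%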

There are two very, very large problems with iterative models. The first is that errors tend to accumulate. If you calculate one half hour even slightly incorrectly, the next half hour starts with bad data, so it may be even further out of line, and the next, and the next, until the model goes completely off the rails. Figure 2 shows a number of runs from the climateprediction.net climate model …

Figure 2. Simulations from climateprediction.net. Note that a significant number of the model runs plunge well below ice age temperatures … bad model, no cookies!

See how many of the runs go completely off the rails and head off into a snowball earth, or take off for stratospheric temperatures? That’s the accumulated error problem in action.

The second problem with iterative models is that often we have no idea how the model got the answer. A climate model is so complex and is iterated so many times that the internal workings of the model are often totally opaque. As a result, suppose that we get three very different answers from three different runs. We have no way to say that one of them is more likely right than the other … except for the one tried and true method that is often used in climate science, viz:

If it fits our expectations, it is clearly a good, valid, solid gold model run. And if it doesn’t fit our expectations, obviously we can safely ignore it.

So how many “bad” reanalysis runs end up on the cutting room floor because the modeler didn’t like the outcome? Lots and lots, but how many nobody knows.

With that as a prelude, let’s look at Trenberth’s reanalysis “data”, which of course isn’t data at all … Figure 3 compares the ORAS4 reanalysis model results to the Levitus data:

Figure 3. ORAS4 reanalysis results for the 0-2000 metre layer (blue) versus Levitus data for the same layer. ORAS4 results are digitized from Figure 1. Note that the ORAS4 “data” prior to about 1980 has error bars from floor to ceiling, and so is of little use (see Figure 1). The data is aligned to their common start in 1958 (1958=0).

In Figure 3, the shortcomings of the reanalysis model results are laid bare. The computer model predicts a large drop in OHC from the volcanoes … which obviously didn’t happen. But instead of building on that reality of no OHC change after the eruptions, the reanalysis model has simply warped the real data so that it can show the putative drop after the eruptions.

And this is the underlying problem with treating reanalysis results as real data—they are nothing of the sort. All that the reanalysis model is doing is finding the most effective way to reshape the data to meet the fantasies, preconceptions, and errors of the modelers. Let me re-post the plot with which I ended my last post. This shows all of the various measurements of oceanic temperature, from the surface down to the deepest levels that we have measured extensively, two kilometers deep.

Figure 4. Oceanic temperature measurements. There are two surface measurements, from ERSST and ICOADS, along with individual layer measurements for three separate levels, from Levitus. NOTE—Figure 4 is updated after Bob Tisdale pointed out that I was inadvertently using smoothed data for the SSTs.

Now for me, anyone who looks at Figure 4 and claims that they can see the effects of the eruptions of Pinatubo and El Chichon and Mt. Agung in that actual data is hallucinating. There is no effect visible. Yes, there is a drop in SST during the year after Pinatubo … but the previous two drops were larger, and there is no drop during the year after El Chichon or Mt. Agung. In addition, temperatures rose more in the two years before Pinatubo than they dropped in the two years after. All that taken together says to me that it’s just random chance that Pinatubo has a small drop after it.

But the poor climate modelers are caught. The only way that they can claim that CO2 will cause the dreaded Thermageddon is to set the climate sensitivity quite high.

The problem is that when the modelers use a very high sensitivity like 3°C/doubling of CO2, they end up way overestimating the effect of the volcanoes. We can see this clearly in Figure 3 above, showing the reanalysis model results that Trenberth speciously claims are “data”. Using the famous Procrustean Bed as its exemplar, the model has simply modified and adjusted the real data to fit the modeler’s fantasy of high climate sensitivity. In a nutshell, the reanalysis model simply moved around and changed the real data until it showed big drops after the volcanoes … and this is supposed to be science?

Now, does this mean that all reanalysis “data” is bogus?

Well, the real problem is that we don’t know the answer to that question. The difficulty is that it seems likely that some of the reanalysis results are good and some are useless, but in general we have no way to distinguish between the two. This ORAS4 case is an exception, because comparing it with the Levitus data around the volcanoes has highlighted the problems. But in many uses of reanalysis “data”, we have no way to tell if it is valid or not.

And as Trenberth et al. have proven, we certainly cannot depend on the scientists using the reanalysis “data” to make even the slightest pretense of investigating whether it is valid or not …

(In passing, let me point out one reason that computer climate models don’t do well at reanalyses—nature generally does edges and blotches, while climate models generally do smooth transitions. I’ve spent a good chunk of my life on the ocean. I can assure you that even in mid-ocean, you’ll often see a distinct line between two kinds of water, with one significantly warmer than the other. Nature does that a lot. Clouds have distinct edges, and they pop into and out of existence, without much in the way of “in-between”. The computer is not very good at that blotchy, patchy stuff. If you leave the computer to fill in the gap where we have no data between two observations, say 10°C and 15°C, the computer can do it perfectly—but it will generally do it gradually and evenly, 10, 11, 12, 13, 14, 15.

But when nature fills in the gap, you’re more likely to get something like 10, 10, 10, 14, 15, 15 … nature usually doesn’t do “gradually”. But I digress …)
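
Here is that gap-filling point as a toy sketch in R (made-up values):

# two observations five grid cells apart, 10 C and 15 C
obs_x <- c(1, 6)
obs_t <- c(10, 15)
approx(obs_x, obs_t, xout = 1:6)$y   # the computer's fill: 10 11 12 13 14 15
c(10, 10, 10, 14, 15, 15)            # what nature more often does: a sharp front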

Does this mean we should never use reanalyses? By no means. Kriging is an excellent example of a type of reanalysis which actually is of value.

What these results do mean is that we should stop calling the output of reanalysis models “data”, and that we should TEST THE REANALYSIS MODEL OUTPUTS EXTENSIVELY before use.

These results also mean that one should be extremely cautious when reanalysis “data” is used as the input to a climate model. If you do that, you are using the output of one climate model as the input to another climate model … which is generally a Very Bad Idea™ for a host of reasons.

In addition, in all cases where reanalysis model results are used, the exact same analysis should be done using the actual data. I have done this in Figure 3 above. Had Trenberth et al. presented that graph along with their results … well … if they’d done that, likely their paper would not have been published at all.

Which may or may not be related to why they didn’t present that comparative analysis, and to why they’re trying to claim that computer model results are “data” …

Regards to everyone,

w.

NOTES:

The Trenberth et al. paper identifies their deepest layer as from the surface to “total depth”. However, the reanalysis doesn’t have any changes below 2,000 metres, so that is their “total depth”.

DATA:

The data is from NOAA, except the ERSST and HadISST data, which are from KNMI.

The NOAA ocean depth data is here.

The R code to extract and calculate the volumes for the various Levitus layers is here.
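
For anyone wanting to check orders of magnitude, here is a rough back-of-the-envelope sketch in R (my own round numbers, not part of the script above) of how a layer-average temperature anomaly converts to ocean heat content in joules:

# OHC = density * specific heat * layer volume * temperature anomaly
rho        <- 1027               # seawater density, kg/m^3 (approximate)
cp         <- 3990               # specific heat of seawater, J/(kg K) (approximate)
ocean_area <- 3.6e14             # ocean surface area, m^2 (~70% of the globe)
vol_700    <- ocean_area * 700   # crude 0-700 m layer volume, ignoring bathymetry
dT <- 0.1                        # a 0.1 degree C layer-average anomaly
dT * rho * cp * vol_700          # ~1e23 joules, the scale of the changes in Figure 1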


The Ghost Of Big Jim Cooley

Another superb posting, Willis. I would love to hear what a scientist (who supports the idea of AGW) makes of the ‘science’ of reanalysis data. For how long do we have to put up with the denigration of real science? I used to tell the children of my family how science is ‘real’ and not like religion. I used to tell them that they can trust it completely, that the very nature of science meant that it had to be accurate – that it was our best guess on something after rigorous examination and testing. Well, those days are gone now.

James Evans

The UK Met Office continues to label Mt Pinatubo on its temperature graphs, for instance here:
http://www.metoffice.gov.uk/research/climate/seasonal-to-decadal/long-range/decadal-fc
I don’t understand why. For a start it looks as though the eruption happened just *after* temps had dropped. Perhaps this is due to an inaccurate graph. But more puzzling is that the dip in temperatures that happened around that time looks completely normal. There’s a very similar dip in the mid 50s. And one about 1964. Another one in the mid 70s. There’s one in the mid 80s, and one in the late 90s. Etc.

Trond A

Thanks for an illustrative post. I liked your allusion to the Procrustean Bed. So now we have procrustean data.

Good article.
I’d add one thing. It’s generally accepted that in order for a theory to be scientific, it must be ‘well articulated’. Which means someone with an appropriate background, with no prior knowledge of the theory, can take the theory and produce the same predictions from it that the originator, and anyone else correctly interpreting the theory, would.
This is the criterion the climate models fail, and why, I argue, their output isn’t science. Which is not to say a well articulated climate model isn’t possible, but the current crop don’t make the grade.

Lew Skannen

I am perfecting a model for the weekly “Six from Thirty Six” lottery. I ran it about sixty million times and averaged all the models.
I then bet on number 18 coming up six times…

Greg

Good work Willis.
As I posted in your last thread, overrating volcanics is the key to exaggerating CO2. Since the real data does not support either, it is now necessary to ‘reanalyse’ it until it does.
The other thing that looks pretty screwed in the model is that El Nino _boosts_ OHC. They rather foolishly highlight this feature too.

jc

Very good exposition of the True Nature Of Climate Models – And Modelers.
I can’t claim to have paid any meaningful attention to the internal workings of these things, but I am unaware of having come across a straightforward description before. Too much of “Climate Science” is lost in the earnestness of parsing details so as to establish the validity or not of a preferred interpretation, whilst not really paying attention to the actual validity of the whole damn process in the first place.
Lots of people quite rightly point out problematic aspects, or again rightly say that these are models not reality, but to illustrate the essentially manufactured nature of models in this simple way is rarely done.

Joe Public

Thanks for your lucid explanations.

Cees de Valk

Two comments.
Re-analysis of weather works to some extent because a lot of relevant data have been collected which can constrain the model quite effectively. It can be useful, if only to produce “complete” atmospheric states in a regular format including many poorly observed variables, so there is more to learn from and it can be done more easily.
Ocean reanalysis is much less effective: there is a severe lack of data which could effectively constrain the model (most data is at or near the surface, you can’t easily collect profiles), while the spatial scales of the processes are much smaller than in the atmosphere so you would need a lot more. People call it “ocean reanalysis” but this type of product is in no way comparable to an atmospheric reanalysis. This is not likely to change.
About all reanalysis: it is hard to verify a state-of-the-art product, since almost all data went into it (in some cases it can be done though). For the atmosphere, the same models are used for weather forecasting, so quite a lot is known about (forecast) skill, which helps. This is not the case with ocean models.

Greg

One thing that stands out in the 0-100m line in Figure 4 is that there are three notable events:
Big drops in OHC; as well as I can read off that scale they’re centred on 1971, 1985 and 1999. Even 14-year intervals. One of them coincidentally matches a volcanic eruption.
This needs to be examined in the annual or 3-monthly data to avoid getting fooled by runny mean distortions, but it has been noted elsewhere that the drop in SST around El Chichon actually starts well before the eruption.

Thank you, Willis. I learn something every time I read one of your posts. Excellent stuff and crystal clear.

Willis,
If what you describe is correct then Fig 1. in the Trenberth paper would be classified as fraud in any other field. For example a similar procedure applied in the search for the Higgs Boson at CERN would have generated a signal from nothing by “correcting” the raw data with a (complex) model that assumes its existence. Instead you can only compare the measured raw data with the simulated signal predicted by the model. Only if the two agree can you begin to claim a discovery. You show clearly that in fact the raw Levitus data indeed show no such volcanic signal.

lgl

“Now for me, anyone who looks at Figure 4 and claims that they can see the effects of the eruptions of Pinatubo and El Chichon in that actual data is hallucinating. There is no effect visible.”
Put in a 5 year filter and you will see it too.

Paywalled? Should be here.
[Thanks, Nick, I’ve added it to the head post and acknowledged you for finding it. -w.]

Willis Eschenbach

Cees de Valk says:
May 11, 2013 at 1:18 am

Two comments.
Re-analysis of weather works to some extent because a lot of relevant data have been collected which can constrain the model quite effectively. It can be useful, if only to produce “complete” atmospheric states in a regular format including many poorly observed variables, so there is more to learn from and it can be done more easily.
Ocean reanalysis is much less effective: there is a severe lack of data which could effectively constrain the model (most data is at or near the surface, you can’t easily collect profiles), while the spatial scales of the processes are much smaller than in the atmosphere so you would need a lot more. People call it “ocean reanalysis” but this type of product is in no way comparable to an atmospheric reanalysis. This is not likely to change.

Thanks for the thoughts, Cees. However, I disagree. Look at the problems in Figure 1 with the pre-1980 results from the five reanalysis model runs. That wide range in results is because the reanalyses are poorly constrained by the pre-1980 data. However, after 1980 this is much less the case, with the five model runs becoming very similar.
And since the introduction of the Argo data, the constraints have gotten even tighter.
So your claim, that the problem is that the data doesn’t constrain the reanalysis, is clearly untrue. The more recent results shown in Figure 1 are very close together, meaning that they are tightly constrained … but unfortunately, despite being well constrained they are also wrong …
w.

Willis: I will agree that I’ve never seen dips and rebounds from volcanic eruptions in global ocean heat content data, but it should be visible in sea surface temperature data. The sea surface temperature data in your Figure 4 appears to be smoothed with a 5-year filter… http://oi43.tinypic.com/2ztal54.jpg
…while the ocean heat content data looks as though it’s annual data. Please confirm.
The 1982/83 El Nino and the response to the eruption of El Chichon were comparable in size so they were a wash in sea surface temperature data, but Mount Pinatubo was strong enough to overcome the response to the 1991/92 El Nino, so there should be a dip then. The 5-year filter seems to suppress the response of the sea surface temperature data to Mount Pinatubo.
Also if you present the sea surface temperature data in annual form in your Figure 4, then the dip in the subsurface temperatures for 0-100 meters caused by the 1997/98 El Nino will oppose the rise in sea surface temperatures then.
Regards
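
To see how strongly a 5-year smoother can suppress a short dip, here is a toy sketch in R (an artificial flat series with a two-year dip, not the actual SST data):

# a two-year "volcanic" dip in an otherwise flat 30-year series
x <- rep(0, 30)
x[15:16] <- -0.3
sm <- stats::filter(x, rep(1/5, 5))   # centred 5-point (think 5-year) running mean
min(x)                                # -0.3
min(sm, na.rm = TRUE)                 # -0.12: over half the dip is smoothed away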

Willis Eschenbach

lgl says:
May 11, 2013 at 1:49 am

“Now for me, anyone who looks at Figure 4 and claims that they can see the effects of the eruptions of Pinatubo and El Chichon in that actual data is hallucinating. There is no effect visible.”
Put in a 5 year filter and you will see it too.

Been there, tried that with a 5-year centered Gaussian filter, and I still couldn’t see the slightest sign of an effect from the eruptions. Your move.
w.

Bloke down the pub

As Tamsin Edwards would say, ‘all models are wrong, but some can be useful’.

Kasuha

Overall I think most observations and conclusions in this article are correct. There are still things with which I don’t agree, though.
For instance in Figure 2, model runs “plunging to snowball earth” are not a significant part of the dataset. Their presence does not make the whole simulation invalid. A significant part, i.e. the majority of runs, actually holds to a constant value. The important thing there is that the object of the research was comparison of (simulated) situation with “normal CO2” versus situation with “doubled CO2” and the result definitely allows such a comparison to be performed. Of course the result is limited by the accuracy of the modelling itself and it is certain that these models are not perfect as there is no perfect climate model on the Earth yet. There’s of course no guarantee that even the average result is reliable, but that’s not because some runs diverge but because of an unknown amount of physics not simulated by the model which may have significant influence on climate.
Regarding Figure 3, Trenberth’s reanalysis is about the 0-700 m layer so comparing 0-2000 m data is somewhat irrelevant to it. Levitus surely also produced 0-700 m measurements, so I guess you could have compared these. But I guess I can see the problem. They actually have dents corresponding to Trenberth’s “reanalysis” data, don’t they? Maybe just not as big.
ftp://kakapo.ucsd.edu/pub/sio_220/e03%20-%20Global%20warming/Levitus_et_al.GRL12.pdf
Figure 4 makes up for Figure 3 a bit, except Trenberth’s data are not present in it for comparison (and in corresponding format). I agree that smoothing might obscure the data, but so does presenting data in different formats to make direct comparison hard. The data processing is different from Trenberth’s (annual means instead of smoothed monthly means), but it contains an observable signal for surface temperature and the 0-100 m layer (very noisy, probably statistically insignificant, but observable), and definitely no signal for greater depths.
It would be nice to have all three volcanic eruptions marked in your graphs, though.

QUESTION: “We need more money! How do we get more money without doing science?”
ANSWER: “Easy! When you run out of science, just baffle them with bullnalysis….”

lgl

Willis
If you give me your fig.4 data on .txt or .xls 🙂

Willis Eschenbach

Bob Tisdale says:
May 11, 2013 at 2:14 am

Willis: I will agree that I’ve never seen dips and rebounds from volcanic eruptions in global ocean heat content data, but it should be visible in sea surface temperature data. The sea surface temperature data in your Figure 4 appears to be smoothed with a 5-year filter… http://oi43.tinypic.com/2ztal54.jpg
…while the ocean heat content data looks as though it’s annual data. Please confirm.
The 1982/83 El Nino and the response to the eruption of El Chichon were comparable in size so they were a wash in sea surface temperature data, but Mount Pinatubo was strong enough to overcome the response to the 1991/92 El Nino, so there should be a dip then. The 5-year filter seems to suppress the response of the sea surface temperature data to Mount Pinatubo.
Also if you present the sea surface temperature data in annual form in your Figure 4, then the dip in the subsurface temperatures for 0-100 meters caused by the 1997/98 El Nino will oppose the rise in sea surface temperatures then.
Regards

Thanks, Bob. As usual, you are right. I was still inadvertently using the 5-year average data from the previous analysis. I’ve updated Figure 4 with the correct annual SST data.
I still don’t see any volcanic effect, though. The drop post 1991 is absolutely bog-standard and indistinguishable from half-a-dozen other such drops in the record.
w.

lgl

Bob
“the dip in the subsurface temperatures for 0-100 meters caused by the 1997/98 El Nino will oppose the rise in sea surface temperatures then.”
No Bob, the 1997/98 El Nino heated the ocean. That heat is found two years later in the 100-700m layer, which has misled you to believe La Nina is heating the ocean.
http://virakkraft.com/SST-SSST.png

Willis Eschenbach

Kasuha says:
May 11, 2013 at 2:27 am

Overall I think most observations and conclusions in this article are correct. There are still things with which I don’t agree, though.
For instance in Figure 2, model runs “plunging to snowball earth” are not a significant part of the dataset. Their presence does not make the whole simulation invalid.

First, I posted that graphic to show that the effect of accumulated error can send a model into a tailspin.
Second, by appearances about 1% of the models fell off of the rails. The earth (despite huge provocation) has not fallen off the rails in the last half a billion years. If the real earth had a 1% failure rate, it would have gone off the rails long, long ago. This means that there is some serious problem with the model.

A significant part, i.e. the majority of runs, actually holds to a constant value. The important thing there is that the object of the research was comparison of (simulated) situation with “normal CO2” versus situation with “doubled CO2” and the result definitely allows such a comparison to be performed.

Perhaps that impresses you. Me, I see that the earth doesn’t have a 1% failure rate, which means that the model contains some kind of fundamental errors. Does that affect the “comparison of (simulated) situation with ‘normal CO2’ versus situation with ‘doubled CO2’”?
Who knows … but it certainly doesn’t give me the slightest desire to draw any conclusion from the results.

Of course the result is limited by the accuracy of the modelling itself and it is certain that these models are not perfect as there is no perfect climate model on the Earth yet. There’s of course no guarantee that even the average result is reliable, but that’s not because some runs diverge but because of an unknown amount of physics not simulated by the model which may have significant influence on climate.

Oh, please. That’s splitting hairs. The model is going off of the rails because of the “physics not simulated by the model”, so what’s the difference between the model going off the rails and the physics not being properly represented in the model? End result is the same, it goes off the rails.

Regarding Figure 3, Trenberth’s reanalysis is about the 0-700 m layer so comparing 0-2000 m data is somewhat irrelevant to it. Levitus surely also produced 0-700 m measurements, so I guess you could have compared these. But I guess I can see the problem. They actually have dents corresponding to Trenberth’s “reanalysis” data, don’t they? Maybe just not as big.
ftp://kakapo.ucsd.edu/pub/sio_220/e03%20-%20Global%20warming/Levitus_et_al.GRL12.pdf

Please don’t make accusations that I’m avoiding graphs because of what they show. You might do that kind of thing, I have no idea.
I don’t do that, and I don’t appreciate your nasty insinuations. I have in fact shown the 0-700m Levitus measurements in Figure 4, and the Trenberth results are in Figure 1. But if you’d like them separated out, here they are:


Figure S1. Levitus and ORAS4 data for the 0-700 m layer.
As you can see, the 0-700 metre layer shows nothing more about the effects of the volcanoes than does Figure 3 showing the 0-2000 metre layer. I considered putting in the 0-700 metre data, but I left it out. However, I did so for the OPPOSITE REASON from what you speculate—not because it contradicted my thesis, but because it added no new information that was not shown in Figure 3. Which is hardly surprising, since the post-1980 correlation between the 0-700 and the 0-2000 m ORAS4 layers is about 0.9.

Figure 4 makes up for Figure 3 a bit, except Trenberth’s data are not present in it for comparison (and in corresponding format). I agree that smoothing might obscure the data, but so does presenting data in different formats to make direct comparison hard. The data processing is different from Trenberth’s (annual means instead of smoothed monthly means), but it contains an observable signal for surface temperature and the 0-100 m layer (very noisy, probably statistically insignificant, but observable), and definitely no signal for greater depths.

There is a limit to how much data I can put on one graph, and on the number of graphs folks will look at before dropping it. I try to balance them, so at times I leave things off of graphs.
Your complaint that Trenberth uses monthly means and I have processed it differently ignores the fact that the real data is annual, not monthly. So if there is a fault here it is not mine, I can’t manufacture monthly data the way that Trenberth did …

It would be nice to have all three volcanic eruptions marked in your graphs, though.

I put Mt Agung on Figure 4. Comparing it to Trenberth’s results is meaningless given the huge error bars. With error bars like that, we have no clue even as to whether the data is rising or falling, because in those early model results, one year is not statistically different from any other.
In any case, there is no sign of Mt. Agung in the actual records … so whether it shows up in the reanalysis nonsense is not particularly meaningful.
Thanks,
w.

Willis Eschenbach

lgl says:
May 11, 2013 at 2:42 am

Willis
If you give me your fig.4 data on .txt or .xls 🙂

Why go the long way around via txt or xls? Here’s the data in comma-separated (CSV) format:

YEAR,  0 to 100 m,  100 to 700 m,  700 to 2000 m,  Surface: ERSST,  Surface: ICOADS SST
1955.5, -0.106, -0.003, 0.005, -0.224, -0.225
1956.5, -0.096, 0.003, 0.005, -0.202, -0.193
1957.5, -0.063, -0.028, -0.003, -0.14, -0.18
1958.5, 0.000, 0.000, 0.000, 0.00, 0.00
1959.5, -0.044, -0.001, -0.001, -0.04, -0.05
1960.5, -0.020, 0.005, -0.002, -0.14, -0.15
1961.5, -0.028, -0.002, -0.001, -0.07, -0.08
1962.5, -0.043, 0.013, 0.000, -0.08, -0.08
1963.5, 0.008, -0.011, -0.003, -0.10, -0.16
1964.5, -0.116, 0.000, 0.002, -0.08, -0.11
1965.5, -0.088, -0.003, 0.003, -0.23, -0.31
1966.5, -0.067, -0.019, 0.004, -0.10, -0.13
1967.5, -0.135, -0.012, 0.000, -0.13, -0.18
1968.5, -0.110, -0.034, -0.003, -0.21, -0.28
1969.5, -0.042, -0.029, -0.001, 0.08, 0.07
1970.5, -0.116, -0.027, -0.001, 0.04, 0.02
1971.5, -0.232, 0.012, 0.004, -0.12, -0.19
1972.5, -0.107, -0.027, -0.002, -0.09, -0.16
1973.5, -0.063, -0.014, 0.002, 0.14, 0.11
1974.5, -0.116, 0.005, 0.004, -0.17, -0.19
1975.5, -0.129, 0.022, 0.008, -0.09, -0.18
1976.5, -0.112, 0.007, 0.008, -0.25, -0.39
1977.5, 0.054, 0.009, 0.008, 0.06, -0.04
1978.5, 0.047, 0.012, 0.009, 0.01, -0.05
1979.5, 0.059, -0.003, 0.006, 0.03, -0.02
1980.5, 0.103, 0.015, 0.009, 0.13, 0.07
1981.5, 0.054, 0.011, 0.005, 0.00, -0.08
1982.5, 0.025, -0.014, 0.001, 0.04, -0.01
1983.5, 0.091, -0.031, 0.007, 0.18, 0.13
1984.5, -0.009, 0.014, 0.006, 0.05, 0.00
1985.5, -0.015, 0.023, 0.011, -0.02, -0.10
1986.5, 0.016, 0.003, 0.008, -0.04, -0.10
1987.5, 0.159, -0.019, 0.005, 0.06, -0.02
1988.5, 0.085, 0.018, 0.006, 0.22, 0.19
1989.5, 0.069, 0.019, 0.006, 0.01, -0.07
1990.5, 0.160, -0.007, 0.007, 0.08, 0.03
1991.5, 0.167, 0.023, 0.003, 0.16, 0.12
1992.5, 0.162, -0.002, 0.003, 0.10, 0.03
1993.5, 0.155, 0.000, 0.009, 0.08, 0.03
1994.5, 0.105, 0.019, 0.009, 0.04, 0.00
1995.5, 0.142, 0.021, 0.009, 0.16, 0.12
1996.5, 0.120, 0.050, 0.012, 0.09, 0.02
1997.5, 0.165, 0.012, 0.010, 0.10, 0.02
1998.5, 0.242, 0.012, 0.009, 0.40, 0.34
1999.5, 0.070, 0.055, 0.007, 0.17, 0.10
2000.5, 0.102, 0.052, 0.011, 0.12, 0.09
2001.5, 0.167, 0.030, 0.008, 0.18, 0.12
2002.5, 0.233, 0.058, 0.011, 0.25, 0.25
2003.5, 0.254, 0.081, 0.020, 0.29, 0.25
2004.5, 0.286, 0.092, 0.024, 0.30, 0.27
2005.5, 0.274, 0.073, 0.019, 0.28, 0.26
2006.5, 0.264, 0.093, 0.024, 0.23, 0.20
2007.5, 0.217, 0.094, 0.026, 0.31, 0.29
2008.5, 0.176, 0.109, 0.027, 0.11, 0.07
2009.5, 0.282, 0.093, 0.027, 0.20, 0.16
2010.5, 0.294, 0.097, 0.031, 0.36, 0.29
2011.5, 0.224, 0.115, 0.033
2012.5, 0.242, 0.113, 0.037

Rock on …
w.

DennisA

Kevin Trenberth seems to have re-discovered the faith in climate models that deserted him in this Nature Climate Change blog post from June 2007. It has been posted many times in many places, but people forget:
http://blogs.nature.com/climatefeedback/2007/06/predictions_of_climate.html
“I have often seen references to predictions of future climate by the Intergovernmental Panel on Climate Change (IPCC), presumably through the IPCC assessments.
In fact, since the last report it is also often stated that the science is settled or done and now is the time for action. In fact there are no predictions by IPCC at all. And there never have been.
“None of the models used by IPCC are initialized to the observed state and none of the climate states in the models correspond even remotely to the current observed climate. In particular, the state of the oceans, sea ice, and soil moisture has no relationship to the observed state at any recent time in any of the IPCC models.
There is neither an El Niño sequence nor any Pacific Decadal Oscillation that replicates the recent past; yet these are critical modes of variability that affect Pacific rim countries and beyond.
The Atlantic Multi-decadal Oscillation, that may depend on the thermohaline circulation and thus ocean currents in the Atlantic, is not set up to match today’s state, but it is a critical component of the Atlantic hurricanes and it undoubtedly affects forecasts for the next decade from Brazil to Europe.
Moreover, the starting climate state in several of the models may depart significantly from the real climate owing to model errors.”
These were quite revealing statements because only some 3 months earlier he had presented the AR4 report conclusion to the Committee on Science and Technology of the US House of Representatives.
“The iconic summary statement of the observations section of the IPCC (2007) report is “Warming of the climate system is unequivocal, as is now evident from observations of increases in global average air and ocean temperatures, widespread melting of snow and ice, and rising global mean sea level.”
Sometimes models have to be changed to fit the political narrative as with Tom Wigley’s MAGICC model, part funded by the US EPA. You can download the manual here:
http://www.cgd.ucar.edu/cas/wigley/magicc/UserMan5.3.v2.pdf
“Changes have been made to MAGICC to ensure, as nearly as possible, consistency with the IPCC AR4.”
There is more on the politics behind models here: – “Undeniable Global Warming And Climate Models”- http://scienceandpublicpolicy.org/originals/undeniable_models.html

thingodonta

I give up. Why don’t these people just get on a time machine, and go back to the Soviet Union’s heyday, when they can make up whatever ‘reanalysis data’ they like and present it as true and sound?
Kriging has well-understood limitations, unlike what is used by Trenberth et al. above. Bendigo Gold had a $250 million write-off a few years ago, fooling everyone (including the banks) because some fancy statistician fudged the resource numbers, in this case the ‘nugget effect’ in the drilling data, which any 1850s miner could have told them the Bendigo gold field was famous for. The gold that was supposed to be between the drillholes just wasn’t there.
I would have thought a lot of well educated, out of work statisticians could make themselves a useful career auditing the shenanigans of climate science. (But of course, like in the field of mining, what usually happens is that the auditors (in the 3rd world that means the local government) just get their snouts in the trough and the whole regulatory process breaks down. Same as climate science, I suppose.)

Louis Hooffstetter

Willis, you have a gift. I admire (and slightly envy) your ability to grasp what’s relevant from what’s BS and clearly explain it to others. Thanks again.
Clive Best says:
“If what you describe is correct then Fig 1. in the Trenberth paper would be classified as fraud in any other field.”
Absolutely! Only climastrology warps data to fit models. Every other scientific discipline uses empirical data to test their models. As time progresses and IPCC climate model projections go farther and farther “off the rails”, climastrologists will resort more and more to this kind of fraud to try to convince ‘low information voters’ that they were really correct. This fraud should be pointed out at every opportunity.

Bill Illis

The actual Argo measurements show 0.46 W/m2 being absorbed into the 0-2000 metre ocean.
Trenberth says a climate model reanalysis provides an estimate of 1.1 W/m2.
I think we should just thank Dr. Trenberth, for finding yet another example of the climate models overestimating the warming rate / climate impacts by more than double.
So far, that makes about 12 out of 13 key climate aspects that the climate models miss by 50%:
– surface temperature;
– troposphere temperature;
– volcanic impact;
– Ocean Heat Content;
– water vapor;
– precipitation;
– CO2 growth rate feedback;
– cloud optical depth;
– OLR;
– Antarctic sea ice;
– stratosphere temps (after correcting for ozone loss from volcanoes);
– sea level increase;
I’ll give them the
– Arctic sea ice.
So Trenberth did not find (some of) the missing energy, he just pointed out where the missing energy error originates:
– in the climate models and in the theory.

Tom in Florida

The simplified version of this post is GIGO.

lgl

Thanks Willis
http://virakkraft.com/SSST-change.png
No you are not hallucinating. The change really goes negative after all three major eruptions (and the latest strong Ninas)

“…if you say that the outputs of the kriging process are “data”, you can be thrown in jail…”
Careful there Willis. You know how you can get into trouble for saying obvious things such as these. 😉 Ask Anthony and his Fox interview.

Frank K.

Thanks, Willis, for an excellent article.
As someone with years of experience in computational fluid dynamics, I can tell you there is in fact a third BIG problem with climate models, and that is that they are highly NON-LINEAR. What this means is that a seemingly small error in one variable can amplify (by quite a lot) as you march the numerical solution in time. Given that you are solving numerous coupled, non-linear differential equations with uncertainties in the boundary and initial conditions, the potential for producing erroneous solutions is large. And there is no way with non-linear equations to prove or ensure that the time step you are using and/or the spatial resolution of your mesh will yield a valid solution for a given problem definition.
All of this means that it is imperative that the modelers document their model equations, solution techniques and software design. And, actually, NCAR does a pretty good job of this. Others, like NASA/GISS, do a horrible job (because they really don’t care about model documentation…they’re more into blogging and tweeting).
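
That non-linear amplification can be illustrated with the textbook toy non-linear iteration, the logistic map (a sketch in R, not a climate model): two runs whose starting states differ in the sixth decimal place end up bearing no resemblance to each other within a few dozen steps.

r <- 3.9            # a parameter value in the map's chaotic regime
x <- 0.500000       # run 1
y <- 0.500001       # run 2, differing by one part in a million
for (i in 1:40) {
  x <- r * x * (1 - x)   # logistic map iteration
  y <- r * y * (1 - y)
}
c(x, y)             # completely different states: the tiny difference has amplified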

TimTheToolMan

“It is hard to make data where none exist.”
– Kevin Trenberth

David L. Hagen

Willis
Thanks for showing Fig. 2 with the very wide distribution in outputs from the same climate model. That shows both iterative errors and chaotic impacts.
S. Fred Singer modeled the errors and recommends 400 model years of output for the mean results to settle out the chaotic effects, e.g. 20 model runs for 20 years, or 10 model runs for 40 years, or 40 model runs for 10 years. This is much more than the 1-5 runs that the IPCC typically reports. See:
S. Fred Singer Overcoming Chaotic Behavior of Climate Models, SEPP July 2012

Clouds have distinct edges, and they pop into and out of existence, without much in the way of “in-between”. The computer is not very good at that blotchy, patchy stuff.
===========
Do we even know why this happens? Why does water clump together to form clouds? Why doesn’t it mix evenly with the air to form an even haze across the sky?

Björn

Willis, when I try to run your R script from the link at the end of the posting, it breaks down on the execution of the line:
“… mydepths=read.csv(“levitus depth.csv”,header=FALSE)…”
and spits out the error message in the following quote:
” Error in file(file, “rt”) : cannot open the connection
In addition: Warning message:
In file(file, “rt”) :
cannot open file ‘levitus depth.csv’: No such file or directory ”
Did a code line for the creation of the comma-separated file perhaps fall out of the script and through the rifts in the floorboard when you uploaded to Dropbox?
Here is how the script lines up to (and including) the offending line look when I click on the link given.
————————————————————————————–
#URL ftp://ftp.nodc.noaa.gov/pub/WOA09/MASKS/landsea.msk
url <- "ftp://ftp.nodc.noaa.gov/pub/WOA09/MASKS/landsea.msk&quot;
file <- "levitus depths.txt"
download.file(url, file)
surf_area=511207740688000 # earth surface, square metres
# depths by code from ftp://ftp.nodc.noaa.gov/pub/WOA09/DOC/woa09documentation.pdf
depthcode=c(0, 10, 20, 30, 50, 75, 100, 125, 150, 200, 250,
300, 400, 500, 600, 700, 800, 900, 1000, 1100, 1200,
1300, 1400, 1500, 1750, 2000, 2500, 3000, 3500, 4000,
4500, 5000, 5500, 6000, 6500, 7000, 7500,
8000, 8500, 9000) # depths in metres for codes 1-40, where 1 is land.
levitus_depths=as.matrix(read.fwf(file, widths=rep(8,10)))
depthmatrix=matrix(as.vector(aperm(levitus_depths)),byrow=TRUE,nrow=180)
mydepths=read.csv("levitus depth.csv",header=FALSE)
……
————————————————————————————–

RockyRoad

So what Trenberth is doing is making models of models? Have I got that right? Or is it models of models of models?
Oh, I see no problem there, considering how ineffective and non-robust their climate models are to begin with. Sure, models of models ad nauseam–that fixes everything. /sarc
I’d suggest to Trenberth that he dispense with his original models and quit daydreaming about his missing heat. It’s a dead end career move.
BTW, Good analysis, Willis. Always an education.

David L. Hagen says:
May 11, 2013 at 5:52 am
S. Fred Singer modeled the errors and recommends 400 model years of output for the mean results to settle out the chaotic effects
==============
you can’t settle out the chaotic effects, which is something completely misunderstood by climate science.
Say you use a pair of dice as your model of a pair of dice. This should be a perfect model – but in fact it isn’t. If you throw the dice 400 times you will get 7 as the most likely throw. So, this is your “climate prediction” as to what will happen when you roll the real dice.
However, when you roll the real dice, you will get a result between 2 and 12. 7 is the most likely, but this doesn’t mean 7 is what will happen in reality.
This is the same problem with trying to predict the future with climate models. No matter how perfect the model, you still can’t predict what will actually happen in the future.
Maybe the future temperature will be “7”, but it might also be “2” or “12” and there is no way at present given our understanding of mathematics and physics to say which it will be.
We can see this in the models above, where sometimes the model predicts heating, sometimes it predicts cooling, with no change in the forcings.
This is the fallacy of using models to predict the future. The universe is not a 19th century clockwork. The future is not written. There is no “ACTUAL” future to be predicted.
Our minds fool us into believing the future is a place at which we will arrive, because we assume that the future is like the present, only it is “ahead” of us in time. But this is not what the dice tell us.
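
The dice argument in runnable form, for anyone who wants to roll them (plain R, nothing climate-specific):

# "model" the dice by rolling a pair 400 times
set.seed(1)
rolls <- replicate(400, sum(sample(1:6, 2, replace = TRUE)))
table(rolls)   # 7 is the most common total, the "ensemble average" prediction
rolls[1]       # ...but any single actual roll can still be anything from 2 to 12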

Jim Cripwell

“If you torture data long enough, it will confess”. Ronald Coase.

RockyRoad

thingodonta says:
May 11, 2013 at 4:34 am


Kriging has well-understood limitations, unlike what is used by Trenberth et al. above. Bendigo Gold had a $250 million write-off a few years ago, fooling everyone (including the banks) because some fancy statistician fudged the resource numbers, in this case the ‘nugget effect’ in the drilling data, which any 1850s miner could have told them the Bendigo gold field was famous for. The gold that was supposed to be between the drillholes just wasn’t there.

High nugget effect is an immediate “red flag” that indicates a possible mix of problems, including: 1) poor analysis reproducibility (probably no controls in the assay samples); 2) spatial irregularities caused by down-hole drift with no survey tool to correct for it; 3) down-hole contamination from zones of mineralization above. If the data is of poor quality, any model will be of poor quality. The only fix in the real world is to go back to the data.
But you’re right–like many “climate scientists” that make things up to keep their jobs, the same thing happened at Bemdogp. I’ve known companies that will employ several geostatisticians and only keep those that give them the rosiest outlook. Of course, they find out later that those projections weren’t factual at all. Oops! Write-off! The guys that gave them the straight story were let go because they didn’t add to their company’s “reserve base” and stock value.
Write-offs should be assessed to mining CEOs who suffer from a “bling” mentality. That would help fix the problem.

RockyRoad

Sorry, it’s “Bendigo”; fingers got in my way.

Gary

Willis,
Excellent point about edges. The action is almost always on the margin. Like emergent properties, this topic deserves a whole post of its own.
Isn’t it somewhat of a problem with trying to find the volcanic effects in OHC that volcanic aerosols are primarily regional and this analysis is looking at the globally averaged ocean? The mesh size of the net may be too coarse to catch this fish.

What the models are telling us is quite a different story than what the modellers are telling us. Look at Figure 2. The model delivers a whole range of results. Without any change in the forcings, the model predicts both warming and cooling.
This is very important to understand. The model shows us that both warming and cooling are possible with a doubling of CO2. Now the assumption in climate science is that “the future” is some sort of “average” of the model runs, which gives a sensitivity of between 2 and 3 K on Figure 1. However, this is nonsense. The future is not an average of anything. We will not arrive at any sort of “average future”.
If the model is 100 percent perfect, then our future lies along one of the lines predicted by the model and there is no way to predict which one. We could have cooling with a doubling, or we could have warming, without making any change. This is what the model is actually telling us.
What is surprising is that more scientists don’t take the model builders to task on this point. In effect the models themselves are showing us that “natural variability” exists without any change in the atmosphere, or the sun, or the earth’s orbit. Rather, that even if we keep everything exactly the same, the models show us that climate will still change, and it may change dramatically.

phlogiston

ferdberple says:
May 11, 2013 at 6:10 am
Clouds have distinct edges, and they pop into and out of existence, without much in the way of “in-between”. The computer is not very good at that blotchy, patchy stuff.
===========
Do we even know why this happens? Why does water clump together to form clouds? Why doesn’t it mix evenly with the air to form an even haze across the sky?
Clouds must be attractors. When something is constrained in a phase-space for no apparent reason, e.g. smoke ribbons rising in still air, an attractor is at work. Here is a paper on the nonlinear dynamics of clouds (paywalled unfortunately).

Well, now we can add ‘data’ to the list.
Terry Oldberg recently wrote this:
“Climatologists often use polysemic terms. Some of these terms are words. Others are word pairs. The two words of a word pair sound alike and while they have different meanings climatologists treat the two words as though they were synonyms in making arguments. ”
See his guest-post at the Briggs blog to see the implications of this developed to a dramatic conclusion: http://wmbriggs.com/blog/?p=7923

Matthew Benefiel

The Ghost Of Big Jim Cooley says:
“I used to tell the children of my family how science is ‘real’ and not like religion. I used to tell them that they can trust it completely, that the very nature of science meant that it had to be accurate – that it was our best guess on something after rigorous examination and testing. Well, those days are gone now.”
To be honest that is part of the problem. We tend to treat science like it is an entity or something, like it holds weight in and of itself. It is a tool used by humans. Like all tools it can be used wrongly (or be flawed by manufacturing defects, but science is a process so I suppose it doesn’t have that). Math is real too, but look how many mistakes can be made using it. To trust science is to trust those that use it, the scientists. We know very well that these men/women are not perfect (neither are we). The process is supposed to help minimize the impact of these imperfections, but if history is any help we can never really remove it completely.
As far as science being ‘real’ and not like religion, that is an interesting comparison. When it really comes down to it, unless you are doing the study yourself you are placing faith in someone else’s work, kinda like a religion (even if you did the work yourself you’re placing faith in the peer review process). Sometimes we generalize religion to be a blind faith (that exists), but faith generally requires a basic understanding of something, like sitting in a chair. You wouldn’t sit in a chair if you didn’t believe it would hold your weight in the first place, yet you don’t test every chair you sit in (you have faith in the process that made that chair). Religion requires an initial belief in something unseen and non-provable, like God; but on the flip side some people aren’t content with putting any belief in the idea that we are creatures of chance and slow mutations (also non-provable without a time machine and that ever elusive missing link).
Long story short, I really think science will be scrutinized a little more as we continue to trust a little less and require more proof of this MGW monstrosity. Anyway, I’m done.

phlogiston

I wonder if the authors of this paper have any comment on why ocean warming apparently started only in 1975. And before that an apparent cooling trend. They try of course to dismiss the pre-1975 period with a “high uncertainty” comment. Why did rising CO2 only become effective in warming the oceans after 1975?

The error path began with the assumption that ‘science’ could be funded to give a desired result, rather than reality … it is, therefore, government-funded, commodity-speculation MARKETING. Since it is a demonstrable failure, it is amusing to review another, little known marketing failure.
When the suits at Ford Motor Company were preparing the Ford Taurus concept, they wanted to overcome the dull family sedan reality with ‘sports car’ pretensions, so they contacted racing legend, Scotsman, Jackie Stewart, who agreed to endorse the new auto, given a list of ‘performance’ and quality features. Corporate bean counters ‘re-adjusted’ this list, without notice and invited Jackie to the product launch at the Detroit airport.
Jackie had been paid a million dollars for this endorsement and flew over in his private jet. To capture the ‘spontaneity’ of the moment, an advertising video team was set up to document the impromptu review. Jackie walked up to the Taurus, lifted the flimsy plastic door handle and said…”What is this? This is crap”. He got inside, noticed the goofy interior, said, “What is this? This is crap.” Jackie drove the Taurus a few laps around the taxiway, repeating the same refrain. Finally he got out of the car, walked to the gas fill door, opened it up and the plastic cap was held by a plastic cable and dangled against the body of the car.
Jackie stepped back, saying….”This is crap, THIS IS ALL CRAP”….at which point, he got into his still running private jet and departed. The Taurus dropped their racing legend pretensions. We have examined the hypothesis, the real data, the altered data, the ridiculous predictions, the dire warnings. We can only conclude that AGW is another Wall Street created marketing failure deserving of the Jackie Stewart quote.
BTW, the MAGICC mentioned above is “Model for the Assessment of Greenhouse Induced Climate Change” … for the simpletons who chose ‘magic’ over science.

Jim G

Excellent example of how liars find new ways to lie about their lies to keep the lies going. Kind of like the Obama administration and Benghazi.