Why Reanalysis Data Isn’t …

Guest Post by Willis Eschenbach

I was reading through the recent Trenberth paper on ocean heat content that’s been discussed at various locations around the web. It’s called “Distinctive climate signals in reanalysis of global ocean heat content”,  paywalled, of course. [UPDATE: my thanks to Nick Stokes for locating the paper here.] Among the “distinctive climate signals” that they claim to find are signals from the massive eruptions of Mt. Pinatubo in mid-1991 and El Chichon in mid-1982. They show these claimed signals in my Figure 1 below, which is also Figure 1 in their paper.

ORIGINAL CAPTION: Figure 1. OHC integrated from 0 to 300 m (grey), 700 m (blue), and total depth (violet) from ORAS4, as represented by its 5 ensemble members. The time series show monthly anomalies smoothed with a 12 month running mean, with respect to the 1958–1965 base period. Hatching extends over the range of the ensemble members and hence the spread gives a measure of the uncertainty as represented by ORAS4 (which does not cover all sources of uncertainty). The vertical colored bars indicate a two year interval following the volcanic eruptions with a 6 month lead (owing to the 12 month running mean), and the 1997–98 El Niño event again with 6 months on either side. On lower right, the linear slope for a set of global heating rates (W m-2) is given.

I looked at that and I said “Whaaa???”. I’d never seen any volcanic signals like that in the ocean heat content data. What was I missing?

Well, what I was missing is that Trenberth et al. are using what is laughably called “reanalysis data”. But as the title says, reanalysis “data” isn’t data in any sense of the word. It is the output of a computer climate model masquerading as data.

Now, the basic idea of a “reanalysis” is not a bad one. If you have data with “holes” in it, if you are missing information about certain times and/or places, you can use some kind of “best guess” algorithm to fill in the holes. In mining, this procedure is quite common. You have spotty data about what is happening underground. So you use a kriging procedure employing all the available information, and it gives you the best guess about what is happening in the “holes” where you have no data. (Please note, however, that if you claim the results of your kriging model are real observations, if you say that the outputs of the kriging process are “data”, you can be thrown in jail for misrepresentation … but I digress, that’s the real world and this is climate “science” at its finest.)

The problems arise as you start to use more and more complex procedures to fill in the holes in the data. Kriging is straight math, and it gives you error bars on the estimates. But a global climate model is a horrendously complex creature, and gives no estimate of error of any kind.
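Kriging's virtue, an estimate plus an honest error bar, can be shown in a few lines. Here is a minimal 1-D ordinary kriging sketch in Python (illustrative only: the exponential variogram, its parameters, and the sample points are assumptions, not fitted to any real data):

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def gamma(h, sill=1.0, rng=3.0):
    """Exponential variogram: semivariance of two points a distance h apart."""
    return sill * (1.0 - math.exp(-abs(h) / rng))

def krige(xs, zs, x0):
    """Ordinary kriging estimate AND kriging variance at location x0."""
    n = len(xs)
    # Kriging system: variogram matrix bordered by the unbiasedness constraint.
    A = [[gamma(xs[i] - xs[j]) for j in range(n)] + [1.0] for i in range(n)]
    A.append([1.0] * n + [0.0])
    b = [gamma(x - x0) for x in xs] + [1.0]
    w = solve(A, b)  # n weights plus the Lagrange multiplier
    est = sum(w[i] * zs[i] for i in range(n))
    var = sum(w[i] * b[i] for i in range(n)) + w[n]  # the error bar, squared
    return est, var

xs = [0.0, 1.0, 4.0]     # sample locations (hypothetical)
zs = [10.0, 11.0, 15.0]  # observed values (hypothetical)
est, var = krige(xs, zs, 2.0)  # best guess in the "hole", with its variance
```

The kriging variance is exactly the "estimate of error" that the reanalysis models lack: at a sampled location it drops to zero, and it grows as you move away from the data.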

Now, as Steven Mosher is fond of pointing out, it’s all models. Even something as simple as

Force = Mass times Acceleration 

is a model. So in that regard, Steven is right.

The problem is that there are models and there are models. Some models, like kriging, are both well-understood and well-behaved. We have analyzed and tested the model called “kriging” to the point where we understand its strengths and weaknesses, and we can use it with complete confidence.

Then there is another class of models with very different characteristics. These are called “iterative” models. They differ from models like kriging or F = M A because at each time step, the previous output of the model is used as the new input for the model. Climate models are iterative models. A climate model, for example, starts with the present weather, and predicts where the weather will go at the next time step (typically a half hour).

Then that result, the prediction for a half hour from now, is taken as input to the climate model, and the next half-hour’s results are calculated. Do that about 17,500 times (48 half-hour steps a day for a year), and you’ve simulated a year of weather … lather, rinse, and repeat enough times, and voila! You now have predicted the weather, half-hour by half-hour, all the way to the year 2100.

There are two very, very large problems with iterative models. The first is that errors tend to accumulate. If you calculate one half hour even slightly incorrectly, the next half hour starts with bad data, so it may be even further out of line, and the next, and the next, until the model goes completely off the rails. Figure 2 shows a number of runs from the Climateprediction climate model …

Figure 2. Simulations from climateprediction.net. Note that a significant number of the model runs plunge well below ice age temperatures … bad model, no cookies!

See how many of the runs go completely off the rails and head off into a snowball earth, or take off for stratospheric temperatures? That’s the accumulated error problem in action.
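The accumulated-error problem is easy to reproduce with any feedback loop. The sketch below uses the chaotic logistic map as a hypothetical stand-in for an iterative model (it is not climate-model code): each output is fed back in as the next input, and an initial error of one part in ten billion ends up as large as the signal itself.

```python
# Toy iterative model: the chaotic logistic map, x -> 4x(1-x).
# Each output becomes the next input, just as in a climate model's time-stepping.

def run(x0, steps):
    """Iterate the map, keeping the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
    return xs

a = run(0.2, 100)          # "true" trajectory
b = run(0.2 + 1e-10, 100)  # same model, initial state off by 1e-10
sep = [abs(x - y) for x, y in zip(a, b)]
# The error roughly doubles each step, so after a few dozen iterations
# the two runs bear no resemblance to each other.
```

Nothing in the model is "wrong" in either run; a tiny difference in the starting state is enough to send the iteration off the rails.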

The second problem with iterative models is that often we have no idea how the model got the answer. A climate model is so complex and is iterated so many times that the internal workings of the model are often totally opaque. As a result, suppose that we get three very different answers from three different runs. We have no way to say that one of them is more likely right than the other … except for the one tried and true method that is often used in climate science, viz:

If it fits our expectations, it is clearly a good, valid, solid gold model run. And if it doesn’t fit our expectations, obviously we can safely ignore it.

So how many “bad” reanalysis runs end up on the cutting room floor because the modeler didn’t like the outcome? Lots and lots, but how many nobody knows.

With that as a prelude, let’s look at Trenberth’s reanalysis “data”, which of course isn’t data at all … Figure 3 compares the ORAS4 reanalysis model results to the Levitus data:

Figure 3. ORAS4 reanalysis results for the 0-2000 metre layer (blue) versus Levitus data for the same layer. ORAS4 results are digitized from Figure 1. Note that the ORAS4 “data” prior to about 1980 has error bars from floor to ceiling, and so is of little use (see Figure 1). The data is aligned to their common start in 1958 (1958=0).

In Figure 3, the shortcomings of the reanalysis model results are laid bare. The computer model predicts a large drop in OHC from the volcanoes … which obviously didn’t happen. But instead of building on that reality of no OHC change after the eruptions, the reanalysis model has simply warped the real data so that it can show the putative drop after the eruptions.

And this is the underlying problem with treating reanalysis results as real data—they are nothing of the sort. All that the reanalysis model is doing is finding the most effective way to reshape the data to meet the fantasies, preconceptions, and errors of the modelers. Let me re-post the plot with which I ended my last post. This shows all of the various measurements of oceanic temperature, from the surface down to the deepest levels that we have measured extensively, two kilometers deep.

Figure 4. Oceanic temperature measurements. There are two surface measurements, from ERSST and ICOADS, along with individual layer measurements for three separate levels, from Levitus. NOTE—Figure 4 is updated after Bob Tisdale pointed out that I was inadvertently using smoothed data for the SSTs.

Now for me, anyone who looks at Figure 4 and claims that they can see the effects of the eruptions of Pinatubo and El Chichon and Mt. Agung in that actual data is hallucinating. There is no effect visible. Yes, there is a drop in SST during the year after Pinatubo … but the previous two drops were larger, and there is no drop during the year after El Chichon or Mt. Agung. In addition, temperatures rose more in the two years before Pinatubo than they dropped in the two years after. All that taken together says to me that it’s just random chance that Pinatubo has a small drop after it.

But the poor climate modelers are caught. The only way that they can claim that CO2 will cause the dreaded Thermageddon is to set the climate sensitivity quite high.

The problem is that when the modelers use a very high sensitivity like 3°C/doubling of CO2, they end up way overestimating the effect of the volcanoes. We can see this clearly in Figure 3 above, showing the reanalysis model results that Trenberth speciously claims are “data”. Using the famous Procrustean Bed as its exemplar, the model has simply modified and adjusted the real data to fit the modeler’s fantasy of high climate sensitivity. In a nutshell, the reanalysis model simply moved around and changed the real data until it showed big drops after the volcanoes … and this is supposed to be science?

Now, does this mean that all reanalysis “data” is bogus?

Well, the real problem is that we don’t know the answer to that question. The difficulty is that it seems likely that some of the reanalysis results are good and some are useless, but in general we have no way to distinguish between the two. This case of Trenberth et al. is an exception, because the volcanoes have highlighted the problems. But in many uses of reanalysis “data”, we have no way to tell if it is valid or not.

And as Trenberth et al. have proven, we certainly cannot depend on the scientists using the reanalysis “data” to make even the slightest pretense of investigating whether it is valid or not …

(In passing, let me point out one reason that computer climate models don’t do well at reanalyses—nature generally does edges and blotches, while climate models generally do smooth transitions. I’ve spent a good chunk of my life on the ocean. I can assure you that even in mid-ocean, you’ll often see a distinct line between two kinds of water, with one significantly warmer than the other. Nature does that a lot. Clouds have distinct edges, and they pop into and out of existence, without much in the way of “in-between”. The computer is not very good at that blotchy, patchy stuff. If you leave the computer to fill in the gap where we have no data between two observations, say 10°C and 15°C, the computer can do it perfectly—but it will generally do it gradually and evenly, 10, 11, 12, 13, 14, 15.

But when nature fills in the gap, you’re more likely to get something like 10, 10, 10, 14, 15, 15 … nature usually doesn’t do “gradually”. But I digress …)
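In code, the computer’s “gradual and even” fill is just interpolation. A minimal hypothetical sketch:

```python
# The computer's "gradual and even" fill: straight linear interpolation
# between two observations (an illustrative sketch, not any model's actual code).

def fill_gap(left, right, n_missing):
    """Fill n_missing points evenly spaced between two observed values."""
    step = (right - left) / (n_missing + 1)
    return [left + step * (k + 1) for k in range(n_missing)]

fill_gap(10.0, 15.0, 4)  # -> [11.0, 12.0, 13.0, 14.0]
```

Nature’s blotchy version, 10, 10, 10, 14, 15, 15, is exactly what a smooth interpolator like this will never produce.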

Does this mean we should never use reanalyses? By no means. Kriging is an excellent example of a type of reanalysis which actually is of value.

What these results do mean is that we should stop calling the output of reanalysis models “data”, and that we should TEST THE REANALYSIS MODEL OUTPUTS EXTENSIVELY before use.

These results also mean that one should be extremely cautious when reanalysis “data” is used as the input to a climate model. If you do that, you are using the output of one climate model as the input to another climate model … which is generally a Very Bad Idea™ for a host of reasons.

In addition, in all cases where reanalysis model results are used, the exact same analysis should be done using the actual data. I have done this in Figure 3 above. Had Trenberth et al. presented that graph along with their results … well … if they’d done that, likely their paper would not have been published at all.

Which may or may not be related to why they didn’t present that comparative analysis, and to why they’re trying to claim that computer model results are “data” …

Regards to everyone,

w.

NOTES:

The Trenberth et al. paper identifies their deepest layer as from the surface to “total depth”. However, the reanalysis doesn’t have any changes below 2,000 metres, so that is their “total depth”.

DATA:

The data is from NOAA, except the ERSST and HadISST data, which are from KNMI.

The NOAA ocean depth data is here.

The R code to extract and calculate the volumes for the various Levitus layers is here.


113 thoughts on “Why Reanalysis Data Isn’t …”

  1. Another superb posting, Willis. I would love to hear what a scientist (who supports the idea of AGW) makes of the ‘science’ of reanalysis data. For how long do we have to put up with the denigration of real science? I used to tell the children of my family how science is ‘real’ and not like religion. I used to tell them that they can trust it completely, that the very nature of science meant that it had to be accurate – that it was our best guess on something after rigorous examination and testing. Well, those days are gone now.

  2. The UK Met Office continues to label Mt Pinatubo on its temperature graphs, for instance here:

    http://www.metoffice.gov.uk/research/climate/seasonal-to-decadal/long-range/decadal-fc

    I don’t understand why. For a start it looks as though the eruption happened just *after* temps had dropped. Perhaps this is due to an inaccurate graph. But more puzzling is that the dip in temperatures that happened around that time looks completely normal. There’s a very similar dip in the mid 50s. And one about 1964. Another one in the mid 70s. There’s one in the mid 80s, and one in the late 90s. Etc.

  3. Thanks for an illustrative post. I liked your allusion to the Procrustean Bed. So now we have procrustean data.

  4. Good article.

    I’d add one thing. It’s generally accepted that in order for a theory to be scientific, it must be ‘well articulated’. Which means that someone with an appropriate background and no prior knowledge of the theory can take the theory and produce the same predictions from it that the originator, and anyone else correctly interpreting the theory, would.

    This is the criterion the climate models fail, and why, I argue, their output isn’t science. Which is not to say a well articulated climate model isn’t possible, but the current crop don’t make the grade.

  5. I am perfecting a model for the weekly “Six from Thirty Six” lottery. I ran it about sixty million times and averaged all the models.
    I then bet on number 18 coming up six times…

  6. Good work Willis.
    As I posted in your last thread, overrating volcanics is the key to exaggerating CO2. Since the real data does not support either, it is now necessary to ‘reanalyse’ it until it does.

    The other thing that looks pretty screwed in the model is that El Nino _boosts_ OHC. They rather foolishly highlight this feature too.

  7. Very good exposition of the True Nature Of Climate Models – And Modelers.

    I can’t claim to have paid any meaningful attention to the internal workings of these things, but I am unaware of having come across a straightforward description before. Too much of “Climate Science” is lost in the earnestness of parsing details so as to establish the validity or not of a preferred interpretation, whilst not really paying attention to the actual validity of the whole damn process in the first place.

    Lots of people quite rightly point out problematic aspects, or again rightly say that these are models not reality, but to illustrate the essentially manufactured nature of models in this simple way is rarely done.

  8. Two comments.

    Re-analysis of weather works to some extent because a lot of relevant data have been collected which can constrain the model quite effectively. It can be useful, if only to produce “complete” atmospheric states in a regular format including many poorly observed variables, so there is more to learn from and it can be done more easily.

    Ocean reanalysis is much less effective: there is a severe lack of data which could effectively constrain the model (most data is at or near the surface, you can’t easily collect profiles), while the spatial scales of the processes are much smaller than in the atmosphere so you would need a lot more. People call it “ocean reanalysis” but this type of product is in no way comparable to an atmospheric reanalysis. This is not likely to change.

    About all reanalysis: it is hard to verify a state-of-the-art product, since almost all data went into it (in some cases it can be done though). For the atmosphere, the same models are used for weather forecasting, so quite a lot is known about (forecast) skill, which helps. This is not the case with ocean models.

  9. One thing that stands out in 0-100m line in figure 4 is that there are three notable events

    Big drops in OHC; as well as I can read off that scale, they’re centred on 1971, 1985, and 1999. Even 14-year intervals. One of them coincidentally matches a volcanic eruption.

    This needs to be examined in the annual or 3-monthly data to avoid getting fooled by running-mean distortions, but it has been noted elsewhere that the drop in SST around El Chichon actually starts well before the eruption.

  10. Willis,

    If what you describe is correct, then Fig 1. in the Trenberth paper would be classified as fraud in any other field. For example, a similar procedure applied in the search for the Higgs Boson at CERN would have generated a signal from nothing by “correcting” the raw data with a (complex) model that assumes its existence. Instead you can only compare the measured raw data with the simulated signal predicted by the model. Only if the two agree can you begin to claim a discovery. You show clearly that in fact the raw Levitus data indeed show no such volcanic signal.

  11. “Now for me, anyone who looks at Figure 4 and claims that they can see the effects of the eruptions of Pinatubo and El Chichon in that actual data is hallucinating. There is no effect visible.”

    Put in a 5 year filter and you will see it too.

  12. Cees de Valk says:
    May 11, 2013 at 1:18 am

    Two comments.

    Re-analysis of weather works to some extent because a lot of relevant data have been collected which can constrain the model quite effectively. It can be useful, if only to produce “complete” atmospheric states in a regular format including many poorly observed variables, so there is more to learn from and it can be done more easily.

    Ocean reanalysis is much less effective: there is a severe lack of data which could effectively constrain the model (most data is at or near the surface, you can’t easily collect profiles), while the spatial scales of the processes are much smaller than in the atmosphere so you would need a lot more. People call it “ocean reanalysis” but this type of product is in no way comparable to an atmospheric reanalysis. This is not likely to change.

    Thanks for the thoughts, Cees. However, I disagree. Look at the problems in Figure 1 with the pre-1980 results from the five reanalysis model runs. That wide range in results is because the reanalyses are poorly constrained by the pre-1980 data. However, after 1980 this is much less the case, with the five model runs becoming very similar.

    And since the introduction of the Argo data, the constraints have gotten even tighter.

    So your claim, that the problem is that the data doesn’t constrain the reanalysis, is clearly untrue. The more recent results shown in Figure 1 are very close together, meaning that they are tightly constrained … but unfortunately, despite being well constrained they are also wrong …

    w.

  13. Willis: I will agree that I’ve never seen dips and rebounds from volcanic eruptions in global ocean heat content data, but it should be visible in sea surface temperature data. The sea surface temperature data in your Figure 4 appears to be smoothed with a 5-year filter… http://oi43.tinypic.com/2ztal54.jpg
    …while the ocean heat content data looks as though it’s annual data. Please confirm.

    The 1982/83 El Nino and the response to the eruption of El Chichon were comparable in size so they were a wash in sea surface temperature data, but Mount Pinatubo was strong enough to overcome the response to the 1991/92 El Nino, so there should be a dip then. The 5-year filter seems to suppress the response of the sea surface temperature data to Mount Pinatubo.

    Also if you present the sea surface temperature data in annual form in your Figure 4, then the dip in the subsurface temperatures for 0-100 meters caused by the 1997/98 El Nino will oppose the rise in sea surface temperatures then.

    Regards

  14. lgl says:
    May 11, 2013 at 1:49 am

    “Now for me, anyone who looks at Figure 4 and claims that they can see the effects of the eruptions of Pinatubo and El Chichon in that actual data is hallucinating. There is no effect visible.”

    Put in a 5 year filter and you will see it too.

    Been there, tried that with a 5-year centered Gaussian filter, and I still couldn’t see the slightest sign of an effect from the eruptions. Your move.

    w.
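For anyone who wants to repeat the experiment, here is one way to build a centered Gaussian filter in Python. It is an illustrative sketch only: the half-width and sigma are assumptions, not necessarily the exact filter used in the exchange above.

```python
import math

def gaussian_smooth(series, sigma=1.5, half_width=2):
    """Centered Gaussian-weighted running mean; the window shrinks at the ends."""
    out = []
    for i in range(len(series)):
        lo = max(0, i - half_width)
        hi = min(len(series), i + half_width + 1)
        idx = range(lo, hi)
        # Weight each neighbour by a Gaussian of its distance from point i.
        w = [math.exp(-0.5 * ((j - i) / sigma) ** 2) for j in idx]
        out.append(sum(wj * series[j] for wj, j in zip(w, idx)) / sum(w))
    return out
```

Applied to annual data with a five-point window, this gives a 5-year centered smooth; widening `half_width` and `sigma` smooths harder.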

  15. As Tamsin Edwards would say, ‘all models are wrong, but some can be useful’.

  16. Overall I think most observations and conclusions in this article are correct. There are still things with which I don’t agree, though.

    For instance in Figure 2, model runs “plunging to snowball earth” are not a significant part of the dataset. Their presence does not make the whole simulation invalid. A significant part, i.e. the majority of runs, actually holds to a constant value. The important thing there is that the object of the research was comparison of the (simulated) situation with “normal CO2” versus the situation with “doubled CO2”, and the result definitely allows such a comparison to be performed. Of course the result is limited by the accuracy of the modelling itself, and it is certain that these models are not perfect, as there is no perfect climate model on Earth yet. There’s of course no guarantee that even the average result is reliable, but that’s not because some runs diverge but because of the unknown amount of physics not simulated by the model which may have significant influence on climate.

    Regarding Figure 3, Trenberth’s reanalysis is about the 0-700 m layer, so comparing 0-2000 m data is somewhat irrelevant to it. Levitus also produced 0-700 m measurements, so I guess you could have compared these. But I guess I can see the problem. They actually have dents corresponding to Trenberth’s “reanalysis” data, don’t they? Maybe just not as big.
    ftp://kakapo.ucsd.edu/pub/sio_220/e03%20-%20Global%20warming/Levitus_et_al.GRL12.pdf

    Figure 4 makes up for Figure 3 a bit, except Trenberth’s data are not present in it for comparison (and in corresponding format). I agree that smoothing might obscure the data, but so does presenting data in different formats to make direct comparison hard. The data processing is different from Trenberth’s (annual means instead of smoothed monthly means), but it contains an observable signal for surface temperature and the 0-100 m layer (very noisy, probably statistically insignificant, but observable), and definitely no signal for greater depths.

    It would be nice to have all three volcanic eruptions marked in your graphs, though.

  17. QUESTION: “We need more money! How do we get more money without doing science?”

    ANSWER: “Easy! When you run out of science, just baffle them with bullnalysis….”

  18. Bob Tisdale says:
    May 11, 2013 at 2:14 am

    Willis: I will agree that I’ve never seen dips and rebounds from volcanic eruptions in global ocean heat content data, but it should be visible in sea surface temperature data. The sea surface temperature data in your Figure 4 appears to be smoothed with a 5-year filter… http://oi43.tinypic.com/2ztal54.jpg
    …while the ocean heat content data looks as though it’s annual data. Please confirm.

    The 1982/83 El Nino and the response to the eruption of El Chichon were comparable in size so they were a wash in sea surface temperature data, but Mount Pinatubo was strong enough to overcome the response to the 1991/92 El Nino, so there should be a dip then. The 5-year filter seems to suppress the response of the sea surface temperature data to Mount Pinatubo.

    Also if you present the sea surface temperature data in annual form in your Figure 4, then the dip in the subsurface temperatures for 0-100 meters caused by the 1997/98 El Nino will oppose the rise in sea surface temperatures then.

    Regards

    Thanks, Bob. As usual, you are right. I was still inadvertently using the 5-year average data from the previous analysis. I’ve updated Figure 4 with the correct annual SST data.

    I still don’t see any volcanic effect, though. The drop post 1991 is absolutely bog-standard and indistinguishable from half-a-dozen other such drops in the record.

    w.

  19. Bob
    “the dip in the subsurface temperatures for 0-100 meters caused by the 1997/98 El Nino will oppose the rise in sea surface temperatures then.”

    No Bob, the 1997/98 El Nino heated the ocean. That heat is found two years later in the 100-700m layer, which has misled you to believe La Nina is heating the ocean.

  20. Kasuha says:
    May 11, 2013 at 2:27 am

    Overall I think most observations and conclusions in this article are correct. There are still things with which I don’t agree, though.
    For instance in Figure 2, model runs “plunging to snowball earth” are not a significant part of the dataset. Their presence does not make the whole simulation invalid.

    First, I posted that graphic to show that the effect of accumulated error can send a model into a tailspin.

    Second, by appearances about 1% of the models fell off of the rails. The earth (despite huge provocation) has not fallen off the rails in the last half a billion years. If the real earth had a 1% failure rate, it would have gone off the rails long, long ago. This means that there is some serious problem with the model.

    A significant part, i.e. the majority of runs, actually holds to a constant value. The important thing there is that the object of the research was comparison of the (simulated) situation with “normal CO2” versus the situation with “doubled CO2”, and the result definitely allows such a comparison to be performed.

    Perhaps that impresses you. Me, I see that the earth doesn’t have a 1% failure rate, which means that the model contains some kind of fundamental errors. Does that affect the comparison of the simulated situation with “normal CO2” versus the situation with “doubled CO2”?

    Who knows … but it certainly doesn’t give me the slightest desire to draw any conclusion from the results.

    Of course the result is limited by the accuracy of the modelling itself, and it is certain that these models are not perfect, as there is no perfect climate model on Earth yet. There’s of course no guarantee that even the average result is reliable, but that’s not because some runs diverge but because of the unknown amount of physics not simulated by the model which may have significant influence on climate.

    Oh, please. That’s splitting hairs. The model is going off of the rails because of the “physics not simulated by the model”, so what’s the difference between the model going off the rails and the physics not being properly represented in the model? End result is the same, it goes off the rails.

    Regarding Figure 3, Trenberth’s reanalysis is about the 0-700 m layer, so comparing 0-2000 m data is somewhat irrelevant to it. Levitus also produced 0-700 m measurements, so I guess you could have compared these. But I guess I can see the problem. They actually have dents corresponding to Trenberth’s “reanalysis” data, don’t they? Maybe just not as big.
    ftp://kakapo.ucsd.edu/pub/sio_220/e03%20-%20Global%20warming/Levitus_et_al.GRL12.pdf

    Please don’t make accusations that I’m avoiding graphs because of what they show. You might do that kind of thing, I have no idea.

    I don’t do that, and I don’t appreciate your nasty insinuations. I have in fact shown the 0-700m Levitus measurements in Figure 4, and the Trenberth results are in Figure 1. But if you’d like them separated out, here they are:


    Figure 3.


    Figure S1. Levitus and ORAS4 data for the 0-700 m layer.

    As you can see, the 0-700 metre layer shows nothing more about the effects of the volcanoes than does Figure 3 showing the 0-2000 metre layer. I considered putting in the 0-700 metre data, but I left it out. However, I did so for the OPPOSITE REASON from what you speculate—not because it contradicted my thesis, but because it added no new information that was not shown in Figure 3. Which is hardly surprising, since the post-1980 correlation between the 0-700 and the 0-2000 m ORAS4 layers is about 0.9.

    Figure 4 makes up for Figure 3 a bit, except Trenberth’s data are not present in it for comparison (and in corresponding format). I agree that smoothing might obscure the data, but so does presenting data in different formats to make direct comparison hard. The data processing is different from Trenberth’s (annual means instead of smoothed monthly means), but it contains an observable signal for surface temperature and the 0-100 m layer (very noisy, probably statistically insignificant, but observable), and definitely no signal for greater depths.

    There is a limit to how much data I can put on one graph, and on the number of graphs folks will look at before dropping it. I try to balance them, so at times I leave things off of graphs.

    Your complaint that Trenberth uses monthly means and I have processed it differently ignores the fact that the real data is annual, not monthly. So if there is a fault here it is not mine, I can’t manufacture monthly data the way that Trenberth did …

    It would be nice to have all three volcanic eruptions marked in your graphs, though.

    I put Mt Agung on Figure 4. Comparing it to Trenberth’s results is meaningless given the huge error bars. With error bars like that, we have no clue even as to whether the data is rising or falling, because in those early model results, one year is not statistically different from any other.

    In any case, there is no sign of Mt. Agung in the actual records … so whether it shows up in the reanalysis nonsense is not particularly meaningful.

    Thanks,

    w.

  21. lgl says:
    May 11, 2013 at 2:42 am

    Willis

    If you give me your fig.4 data on .txt or .xls :)

    Why go the long way around via txt or xls? Here’s the data in comma-separated (CSV) format:

    YEAR,  0 to 100 m,  100 to 700 m,  700 to 2000 m,  Surface: ERSST,  Surface: ICOADS SST
    1955.5, -0.106, -0.003, 0.005, -0.224, -0.225
    1956.5, -0.096, 0.003, 0.005, -0.202, -0.193
    1957.5, -0.063, -0.028, -0.003, -0.14, -0.18
    1958.5, 0.000, 0.000, 0.000, 0.00, 0.00
    1959.5, -0.044, -0.001, -0.001, -0.04, -0.05
    1960.5, -0.020, 0.005, -0.002, -0.14, -0.15
    1961.5, -0.028, -0.002, -0.001, -0.07, -0.08
    1962.5, -0.043, 0.013, 0.000, -0.08, -0.08
    1963.5, 0.008, -0.011, -0.003, -0.10, -0.16
    1964.5, -0.116, 0.000, 0.002, -0.08, -0.11
    1965.5, -0.088, -0.003, 0.003, -0.23, -0.31
    1966.5, -0.067, -0.019, 0.004, -0.10, -0.13
    1967.5, -0.135, -0.012, 0.000, -0.13, -0.18
    1968.5, -0.110, -0.034, -0.003, -0.21, -0.28
    1969.5, -0.042, -0.029, -0.001, 0.08, 0.07
    1970.5, -0.116, -0.027, -0.001, 0.04, 0.02
    1971.5, -0.232, 0.012, 0.004, -0.12, -0.19
    1972.5, -0.107, -0.027, -0.002, -0.09, -0.16
    1973.5, -0.063, -0.014, 0.002, 0.14, 0.11
    1974.5, -0.116, 0.005, 0.004, -0.17, -0.19
    1975.5, -0.129, 0.022, 0.008, -0.09, -0.18
    1976.5, -0.112, 0.007, 0.008, -0.25, -0.39
    1977.5, 0.054, 0.009, 0.008, 0.06, -0.04
    1978.5, 0.047, 0.012, 0.009, 0.01, -0.05
    1979.5, 0.059, -0.003, 0.006, 0.03, -0.02
    1980.5, 0.103, 0.015, 0.009, 0.13, 0.07
    1981.5, 0.054, 0.011, 0.005, 0.00, -0.08
    1982.5, 0.025, -0.014, 0.001, 0.04, -0.01
    1983.5, 0.091, -0.031, 0.007, 0.18, 0.13
    1984.5, -0.009, 0.014, 0.006, 0.05, 0.00
    1985.5, -0.015, 0.023, 0.011, -0.02, -0.10
    1986.5, 0.016, 0.003, 0.008, -0.04, -0.10
    1987.5, 0.159, -0.019, 0.005, 0.06, -0.02
    1988.5, 0.085, 0.018, 0.006, 0.22, 0.19
    1989.5, 0.069, 0.019, 0.006, 0.01, -0.07
    1990.5, 0.160, -0.007, 0.007, 0.08, 0.03
    1991.5, 0.167, 0.023, 0.003, 0.16, 0.12
    1992.5, 0.162, -0.002, 0.003, 0.10, 0.03
    1993.5, 0.155, 0.000, 0.009, 0.08, 0.03
    1994.5, 0.105, 0.019, 0.009, 0.04, 0.00
    1995.5, 0.142, 0.021, 0.009, 0.16, 0.12
    1996.5, 0.120, 0.050, 0.012, 0.09, 0.02
    1997.5, 0.165, 0.012, 0.010, 0.10, 0.02
    1998.5, 0.242, 0.012, 0.009, 0.40, 0.34
    1999.5, 0.070, 0.055, 0.007, 0.17, 0.10
    2000.5, 0.102, 0.052, 0.011, 0.12, 0.09
    2001.5, 0.167, 0.030, 0.008, 0.18, 0.12
    2002.5, 0.233, 0.058, 0.011, 0.25, 0.25
    2003.5, 0.254, 0.081, 0.020, 0.29, 0.25
    2004.5, 0.286, 0.092, 0.024, 0.30, 0.27
    2005.5, 0.274, 0.073, 0.019, 0.28, 0.26
    2006.5, 0.264, 0.093, 0.024, 0.23, 0.20
    2007.5, 0.217, 0.094, 0.026, 0.31, 0.29
    2008.5, 0.176, 0.109, 0.027, 0.11, 0.07
    2009.5, 0.282, 0.093, 0.027, 0.20, 0.16
    2010.5, 0.294, 0.097, 0.031, 0.36, 0.29
    2011.5, 0.224, 0.115, 0.033
    2012.5, 0.242, 0.113, 0.037

    Rock on …

    w.
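    For anyone pulling the CSV block above into code, here is a minimal Python sketch (hypothetical, not from Willis's script) showing one way to handle the last two rows, which omit the surface-temperature columns:

    ```python
    import csv
    import io

    # Minimal sketch (hypothetical, not from Willis's script): csv.DictReader
    # pads short rows with None (its default restval), so the 2011.5 and 2012.5
    # rows, which omit the two surface-temperature columns, still parse cleanly.
    sample = io.StringIO(
        "YEAR,  0 to 100 m,  100 to 700 m,  700 to 2000 m,  Surface: ERSST,  Surface: ICOADS SST\n"
        "2010.5, 0.294, 0.097, 0.031, 0.36, 0.29\n"
        "2011.5, 0.224, 0.115, 0.033\n"
    )
    reader = csv.DictReader(sample, skipinitialspace=True)
    rows = [{key: (float(val) if val is not None else None)
             for key, val in row.items()} for row in reader]

    print(rows[0]["Surface: ERSST"])  # 0.36
    print(rows[1]["Surface: ERSST"])  # None (column absent in that row)
    ```

    The same idea applies to the full table: treat the missing 2011.5/2012.5 surface values as absent rather than zero.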

  22. Kevin Trenberth seems to have re-discovered the faith in climate models that deserted him in this Nature Climate Change blog post from June 2007. It has been posted many times in many places, but people forget:

    http://blogs.nature.com/climatefeedback/2007/06/predictions_of_climate.html

    “I have often seen references to predictions of future climate by the Intergovernmental Panel on Climate Change (IPCC), presumably through the IPCC assessments.

    In fact, since the last report it is also often stated that the science is settled or done and now is the time for action. In fact there are no predictions by IPCC at all. And there never have been.

    “None of the models used by IPCC are initialized to the observed state and none of the climate states in the models correspond even remotely to the current observed climate. In particular, the state of the oceans, sea ice, and soil moisture has no relationship to the observed state at any recent time in any of the IPCC models.

    There is neither an El Niño sequence nor any Pacific Decadal Oscillation that replicates the recent past; yet these are critical modes of variability that affect Pacific rim countries and beyond.

    The Atlantic Multi-decadal Oscillation, that may depend on the thermohaline circulation and thus ocean currents in the Atlantic, is not set up to match today’s state, but it is a critical component of the Atlantic hurricanes and it undoubtedly affects forecasts for the next decade from Brazil to Europe.

    Moreover, the starting climate state in several of the models may depart significantly from the real climate owing to model errors.”

    These were quite revealing statements because only some 3 months earlier he had presented the AR4 report conclusion to the Committee on Science and Technology of the US House of Representatives.

    “The iconic summary statement of the observations section of the IPCC (2007) report is “Warming of the climate system is unequivocal, as is now evident from observations of increases in global average air and ocean temperatures, widespread melting of snow and ice, and rising global mean sea level.”

    Sometimes models have to be changed to fit the political narrative as with Tom Wigley’s MAGICC model, part funded by the US EPA. You can download the manual here:

    http://www.cgd.ucar.edu/cas/wigley/magicc/UserMan5.3.v2.pdf

    “Changes have been made to MAGICC to ensure, as nearly as possible, consistency with the IPCC AR4.”

    There is more on the politics behind models here: – “Undeniable Global Warming And Climate Models”- http://scienceandpublicpolicy.org/originals/undeniable_models.html

  23. I give up. Why don’t these people just get on a time machine, and go back to the Soviet Union’s heyday, when they can make up whatever ‘reanalysis data’ they like and present it as true and sound?

    Kriging has well-understood limitations, unlike what is used by Trenberth et al. above. Bendigo Gold had a $250 million write-off a few years ago, fooling everyone, including the banks, because some fancy statistician fudged the resource numbers, in this case the ‘nugget effect’ in the drilling data, which any 1850s miner could have told them the Bendigo gold field was famous for. The gold that was supposed to be between the drillholes just wasn’t there.

    I would have thought a lot of well-educated, out-of-work statisticians could make themselves a useful career auditing the shenanigans of climate science. (But of course, like in the field of mining, what usually happens is that the auditors, which in the 3rd world means the local government, just get their snouts in the trough and the whole regulatory process breaks down. Same as climate science, I suppose.)

  24. Willis you have a gift. I admire (and slightly envy) your ability to grasp what’s relevant from what’s BS and clearly explain it to others. Thanks again.

    Clive Best says:
    “If what you describe is correct then Fig 1. in the Trenberth paper would be classified as fraud in any other field.”
    Absolutely! Only climastrology warps data to fit models. Every other scientific discipline uses empirical data to test their models. As IPCC climate model projections go farther and farther “off the rails”, climastrologists will resort more and more to this kind of fraud to try to convince ‘low information voters’ that they were really correct. This fraud should be pointed out at every opportunity.

  25. The actual Argo measurements show 0.46 W/m2 being absorbed into the 0-2000 metre ocean.

    Trenberth says a climate model reanalysis provides an estimate of 1.1 W/m2.

    I think we should just thank Dr. Trenberth, for finding yet another example of the climate models overestimating the warming rate / climate impacts by more than double.

    So far, that makes about 12 out of 13 key climate aspects that the climate models miss by 50%:
    – surface temperature;
    – troposphere temperature;
    – volcanic impact;
    – Ocean Heat Content;
    – water vapor;
    – precipitation;
    – CO2 growth rate feedback;
    – cloud optical depth;
    – OLR;
    – Antarctic sea ice;
    – stratosphere temps (after correcting for ozone loss from volcanoes);
    – sea level increase;

    I’ll give them the
    – Arctic sea ice.

    So Trenberth did not find (some of) the missing energy, he just pointed out where the missing-energy error originates:

    – in the climate models and in the theory.

  26. Thanks Willis

    No you are not hallucinating. The change really goes negative after all three major eruptions (and the latest strong Ninas)

  27. “…if you say that the outputs of the kriging process are “data”, you can be thrown in jail…”

    Careful there Willis. You know how you can get into trouble for saying obvious things such as these. ;) Ask Anthony and his Fox interview.

  28. Thank Willis for an excellent article.

    As someone with years of experience in computational fluid dynamics, there is in fact a third BIG problem with climate models, and that is that they are highly NON-LINEAR. What this means is that a seemingly small error in one variable can amplify (by quite a lot) as you march the numerical solution forward in time. Given that you are solving numerous coupled, non-linear differential equations with uncertainties in the boundary and initial conditions, the potential for producing erroneous solutions is large. And there is no way with non-linear equations to prove or ensure that the time step you are using and/or the spatial resolution of your mesh will yield a valid solution for a given problem definition.

    All of this means that it is imperative that the modelers document their model equations, solution techniques and software design. And, actually, NCAR does a pretty good job of this. Others, like NASA/GISS, do a horrible job (because they really don’t care about model documentation…they’re more into blogging and tweeting).
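    The error-amplification point can be shown with a toy iteration. This is not a climate model, just the standard logistic map, which demonstrates how a nonlinear recurrence blows a microscopic initial error up to order one:

    ```python
    # Illustrative toy (not a climate model): the logistic map at r = 3.9 is a
    # standard chaotic iteration. Two trajectories that start one part in a
    # million apart diverge until they are completely uncorrelated, which is
    # the "accumulating error" failure mode of a free-running iterative model.
    def logistic(x, r=3.9):
        return r * x * (1.0 - x)

    a, b = 0.4, 0.4 + 1e-6  # identical runs except for a 1e-6 initial error
    gap = []
    for _ in range(60):
        a, b = logistic(a), logistic(b)
        gap.append(abs(a - b))

    print(gap[0] < 1e-5)    # True: the error is still microscopic at step 1
    print(max(gap) > 0.01)  # True: within 60 steps it has grown ~10,000-fold
    ```

    No choice of time step fixes this; sensitive dependence on initial conditions is a property of the equations themselves.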

  29. Willis
    Thanks for showing Fig. 2 with the very wide distribution in outputs from the same climate model. That shows both iterative errors and chaotic impacts.
    S. Fred Singer modeled the errors and recommends 400 model years of output for the mean results to settle out the chaotic effects, e.g. 20 model runs for 20 years, 10 model runs for 40 years, or 40 model runs for 10 years. This is much more than the 1-5 runs that the IPCC typically reports. See:
    S. Fred Singer Overcoming Chaotic Behavior of Climate Models, SEPP July 2012
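    As a rough illustration of why averaging over many model years helps, here is a toy sketch (an assumed trend-plus-noise setup, not Singer's actual analysis):

    ```python
    import random

    # Toy sketch (an assumed trend-plus-noise setup, not Singer's actual model):
    # each model year = small trend + large chaotic noise. Averaging over
    # 20 runs x 20 years = 400 model years shrinks the noise in the mean by
    # roughly sqrt(400) = 20 relative to a single model year.
    random.seed(0)

    def one_run(years, trend=0.02, noise=0.3):
        """One model run: a list of yearly anomalies."""
        return [trend * year + random.gauss(0.0, noise) for year in range(years)]

    runs = [one_run(20) for _ in range(20)]      # 400 model years in total
    grand_mean = sum(sum(r) for r in runs) / 400.0
    true_mean = 0.02 * sum(range(20)) / 20.0     # trend-only mean, 0.19
    print(abs(grand_mean - true_mean) < 0.1)     # True: the mean has settled
    ```

    With only 1-5 runs, the chaotic term dominates the ensemble mean; with 400 model years it is largely averaged out.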

  30. Clouds have distinct edges, and they pop into and out of existence, without much in the way of “in-between”. The computer is not very good at that blotchy, patchy stuff.
    ===========
    Do we even know why this happens? Why does water clump together to form clouds? Why doesn’t it mix evenly with the air to form an even haze across the sky?

  31. Willis , when I try to run your R script from the link at the end of the posting it breaks down on the execution of the line:
    “… mydepths=read.csv(“levitus depth.csv”,header=FALSE)…”

    and spits out the error message in the following quote:

    ” Error in file(file, “rt”) : cannot open the connection
    In addition: Warning message:
    In file(file, “rt”) :
    cannot open file ‘levitus depth.csv’: No such file or directory ”

    Did a code line for the creation of the comma-separated file perhaps fall out of the script and through the rifts in the floorboards when you uploaded to Dropbox?

    Here is how the script lines up to (and including) the offending line look when I click on the link given.
    ————————————————————————————–

    #URL ftp://ftp.nodc.noaa.gov/pub/WOA09/MASKS/landsea.msk

    url <- "ftp://ftp.nodc.noaa.gov/pub/WOA09/MASKS/landsea.msk"
    file <- "levitus depths.txt"
    download.file(url, file)
    surf_area=511207740688000 # earth surface, square metres
    # depths by code from ftp://ftp.nodc.noaa.gov/pub/WOA09/DOC/woa09documentation.pdf
    depthcode=c(0, 10, 20, 30, 50, 75, 100, 125, 150, 200, 250,
    300, 400, 500, 600, 700, 800, 900, 1000, 1100, 1200,
    1300, 1400, 1500, 1750, 2000, 2500, 3000, 3500, 4000,
    4500, 5000, 5500, 6000, 6500, 7000, 7500,
    8000, 8500, 9000) # depths in metres for codes 1-40, where 1 is land.

    levitus_depths=as.matrix(read.fwf(file, widths=rep(8,10)))
    depthmatrix=matrix(as.vector(aperm(levitus_depths)),byrow=TRUE,nrow=180)

    mydepths=read.csv("levitus depth.csv",header=FALSE)
    ……
    ————————————————————————————–

  32. So what Trenberth is doing is making models of models? Have I got that right? Or is it models of models of models?

    Oh, I see no problem there, considering how ineffective and non-robust their climate models are to begin with. Sure, models of models ad nauseam–that fixes everything. /sarc

    I’d suggest to Trenberth that he dispense with his original models and quit daydreaming about his missing heat. It’s a dead end career move.

    BTW, Good analysis, Willis. Always an education.

  33. David L. Hagen says:
    May 11, 2013 at 5:52 am
    S. Fred Singer modeled the errors and recommends 400 model years of output for the mean results to settle out the chaotic effects
    ==============
    you can’t settle out the chaotic effects, which is something completely misunderstood by climate science.

    Say you use a pair of dice as your model of a pair of dice. This should be a perfect model – but in fact it isn’t. If you throw the dice 400 times you will get 7 as the most likely throw. So, this is your “climate prediction” as to what will happen when you roll the real dice.

    However, when you roll the real dice, you will get a result between 2 and 12. 7 is the most likely, but this doesn’t mean 7 is what will happen in reality.

    This is the same problem with trying to predict the future with climate models. No matter how perfect the model, you still can’t predict what will actually happen in the future.

    Maybe the future temperature will be “7”, but it might also be “2” or “12” and there is no way at present given our understanding of mathematics and physics to say which it will be.

    We can see this in the models above, where sometimes the model predicts heating, sometimes it predicts cooling, with no change in the forcings.

    This is the fallacy of using models to predict the future. The universe is not a 19th century clockwork. The future is not written. There is no “ACTUAL” future to be predicted.

    Our minds fool us into believing the future is a place at which we will arrive, because we assume that the future is like the present, only it is “ahead” of us in time. But this is not what the dice tell us.
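    The dice argument above can be run directly. This sketch builds the 400-throw "climate prediction" and then rolls one real "future":

    ```python
    import random
    from collections import Counter

    # Sketch of the dice argument above: 400 throws give a distribution whose
    # peak is (almost always) 7, yet the single realized "future" throw can be
    # anything from 2 to 12. The distribution is the model; the throw is reality.
    random.seed(1)

    throws = [random.randint(1, 6) + random.randint(1, 6) for _ in range(400)]
    prediction = Counter(throws).most_common(1)[0][0]  # the ensemble "forecast"
    reality = random.randint(1, 6) + random.randint(1, 6)  # one actual future

    print(prediction)          # the peak of the 400-throw distribution, typically 7
    print(2 <= reality <= 12)  # True, but it need not equal the prediction
    ```

    Even with a perfect model of the dice, the distribution tells you the odds, not which face will actually come up.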

  34. thingodonta says:
    May 11, 2013 at 4:34 am


    Kriging has well-understood limitations, unlike what is used by Trenberth et al. above. Bendigo Gold had a $250 million write-off a few years ago, fooling everyone, including the banks, because some fancy statistician fudged the resource numbers, in this case the ‘nugget effect’ in the drilling data, which any 1850s miner could have told them the Bendigo gold field was famous for. The gold that was supposed to be between the drillholes just wasn’t there.

    High nugget effect is an immediate “red flag” that indicates a possible mix of problems, including: 1) poor analysis reproducibility (probably no controls in the assay samples); 2) spatial irregularities caused by down-hole drift with no survey tool to correct for it; 3) down-hole contamination from zones of mineralization above. If the data is of poor quality, any model will be of poor quality. The only fix in the real world is to go back to the data.

    But you’re right–like many “climate scientists” that make things up to keep their jobs, the same thing happened at Bendigo. I’ve known companies that will employ several geostatisticians and only keep those that give them the rosiest outlook. Of course, they find out later that those projections weren’t factual at all. Oops! Write-off! The guys that gave them the straight story were let go because they didn’t add to their company’s “reserve base” and stock value.

    Write-offs should be assessed to mining CEOs who suffer from a “bling” mentality. That would help fix the problem.

  35. Willis,
    Excellent point about edges. The action is almost always on the margin. Like emergent properties, this topic deserves a whole post of its own.

    Isn’t it somewhat of a problem with trying to find the volcanic effects in OHC that volcanic aerosols are primarily regional and this analysis is looking at the globally averaged ocean? The mesh size of the net may be too coarse to catch this fish.

  36. What the models are telling us is quite a different story than what the modellers are telling us. Look at Figure 2. The model delivers a whole range of results. Without any change in the forcings, the model predicts both warming and cooling.

    This is very important to understand. The model shows us that both warming and cooling are possible with a doubling of CO2. Now the assumption in climate science is that “the future” is some sort of “average” of the model runs, which gives a sensitivity of between 2 and 3 K on Figure 1. However, this is nonsense. The future is not an average of anything. We will not arrive at any sort of “average future”.

    If the model is 100 percent perfect, then our future lies along one of the lines predicted by the model and there is no way to predict which one. We could have cooling with a doubling, or we could have warming, without making any change. This is what the model is actually telling us.

    What is surprising is that more scientists don’t take the model builders to task on this point. In effect the models themselves are showing us that “natural variability” exists without any change in the atmosphere, or the sun, or the earth’s orbit. Rather, that even if we keep everything exactly the same, the models show us that climate will still change, and it may change dramatically.

  37. ferdberple says:
    May 11, 2013 at 6:10 am
    Clouds have distinct edges, and they pop into and out of existence, without much in the way of “in-between”. The computer is not very good at that blotchy, patchy stuff.
    ===========
    Do we even know why this happens? Why does water clump together to form clouds? Why doesn’t it mix evenly with the air to form an even haze across the sky?

    Clouds must be attractors. When something is constrained in a phase-space for no apparent reason, e.g. smoke ribbons rising in still air, an attractor is at work. Here is a paper on the nonlinear dynamics of clouds (paywalled unfortunately).

  38. Well, now we can add ‘data’ to the list.

    Terry Oldberg recently wrote this:
    “Climatologists often use polysemic terms. Some of these terms are words. Others are word pairs. The two words of a word pair sound alike and while they have different meanings climatologists treat the two words as though they were synonyms in making arguments. ”

    See his guest-post at the Briggs blog to see the implications of this developed to a dramatic conclusion: http://wmbriggs.com/blog/?p=7923

  39. The Ghost Of Big Jim Cooley says:
    “I used to tell the children of my family how science is ‘real’ and not like religion. I used to tell them that they can trust it completely, that the very nature of science meant that it had to be accurate – that it was our best guess on something after rigorous examination and testing. Well, those days are gone now.”

    To be honest, that is part of the problem. We tend to treat science like it is an entity or something, like it holds weight in and of itself. It is a tool used by humans. Like all tools it can be used wrongly (or flawed by manufacturing defects, but science is a process so I suppose it doesn’t have that). Math is real too, but look how many mistakes can be made using it. To trust science is to trust those that use it, the scientists. We know very well that these men/women are not perfect (neither are we). The process is supposed to help minimize the impact of these imperfections, but if history is any help we can never really remove it completely.

    As far as science being ‘real’ and not like religion goes, that is an interesting comparison. When it really comes down to it, unless you are doing the study yourself you are placing faith in someone else’s work, kinda like a religion (even if you did the work yourself, you’re placing faith in the peer review process). Sometimes we generalize religion to be a blind faith (that exists), but faith generally requires a basic understanding of something, like sitting in a chair. You wouldn’t sit in a chair if you didn’t believe it would hold your weight in the first place, yet you don’t test every chair you sit in (you have faith in the process that made that chair). Religion requires an initial belief in something unseen and non-provable like God, but on the flip side some people aren’t content with putting any belief in the idea that we are creatures of chance and slow mutations (also non-provable without a time machine and that ever-elusive missing link).

    Long story short, I really think science will be scrutinized a little more as we continue to trust a little less and require more proof of this MGW monstrosity. Anyway, I’m done.

  40. I wonder if the authors of this paper have any comment on why ocean warming apparently started only in 1975. And before that an apparent cooling trend. They try of course to dismiss the pre-1975 period with a “high uncertainty” comment. Why did rising CO2 only become effective in warming the oceans after 1975?

  41. The error path began with the assumption that ‘science’ could be funded to give a desired result, rather than reality… It is, therefore, government-funded, commodity-speculation MARKETING. Since it is a demonstrable failure, it is amusing to review another, little-known marketing failure.

    When the suits at Ford Motor Company were preparing the Ford Taurus concept, they wanted to overcome the dull family sedan reality with ‘sports car’ pretensions, so they contacted racing legend, Scotsman, Jackie Stewart, who agreed to endorse the new auto, given a list of ‘performance’ and quality features. Corporate bean counters ‘re-adjusted’ this list, without notice and invited Jackie to the product launch at the Detroit airport.

    Jackie had been paid a million dollars for this endorsement and flew over in his private jet. To capture the ‘spontaneity’ of the moment, an advertising video team was set up to document the impromptu review. Jackie walked up to the Taurus, lifted the flimsy plastic door handle and said…”What is this? This is crap”. He got inside, noticed the goofy interior, said, “What is this? This is crap.” Jackie drove the Taurus a few laps around the taxiway, repeating the same refrain. Finally he got out of the car, walked to the gas fill door, opened it up and the plastic cap was held by a plastic cable and dangled against the body of the car.

    Jackie stepped back, saying….”This is crap, THIS IS ALL CRAP”….at which point, he got into his still running private jet and departed. The Taurus dropped their racing legend pretensions. We have examined the hypothesis, the real data, the altered data, the ridiculous predictions, the dire warnings. We can only conclude that AGW is another Wall Street created marketing failure deserving of the Jackie Stewart quote.

    BTW, the MAGICC mentioned above is “Model for the Assessment of Greenhouse Induced Climate Change”… for the simpletons who chose ‘magic’ over science.

  42. Excellent example of how liars find new ways to lie about their lies to keep the lies going. Kind of like the Obama administration and Benghazi.

  43. Thanks Willis for a well written and well illustrated educational essay. Steve McIntyre often points out that there are often cases where adverse data is not reported in paleo reconstructions. It should probably be SOP that when model data is presented, the adverse model runs are reported too. I think that would be enlightening. We can start with the IPCC.

  44. lgl says:
    May 11, 2013 at 5:16 am
    “No you are not hallucinating. The change really goes negative after all three major eruptions (and the latest strong Ninas)”

    The time series of Levitus 0-2000m and ORAS4 still look wildly different after the eruptions. For instance, after El Chichon Levitus goes up while ORAS4 goes down.

  45. phlogiston says:
    May 11, 2013 at 7:12 am
    Clouds must be attractors.
    ================
    Assuming clouds are attractors yields results that somewhat match clouds. However, this doesn’t tell us what gives rise to the attractor.

    Why when one sails on the ocean, is there a very distinct line between runoff from the land and the ocean waters, many many miles from the land? Why hasn’t this long since mixed?

    It seems very curious to me that we have a whole bunch of scientists pretending to know a whole lot about the physical world, that apparently aren’t able to explain the clouds in the sky or water in the oceans. But feel it their duty to tell everyone else how to act.

  46. Willis Eschenbach says: “I still don’t see any volcanic effect, though. The drop post 1991 is absolutely bog-standard and indistinguishable from half-a-dozen other such drops in the record.”

    The drop in 1991 does not appear in the East Pacific data due to the strength of the ENSO signal:

    But the effects of Mount Pinatubo show up plain as day in the Atlantic, Indian and West Pacific data, especially when you consider there was a series of El Niños then:

    And with that in mind, it really does make its presence known in the monthly global data. There should’ve been a hump in the 1990s similar to the period from 2002 to 2007 due to the string of secondary El Niños from 1991/92 to 1994/1995:

    But you’re right—it’s indistinguishable from the other noise in the annual global data.

    Regards

  47. “Do we even know why this happens? Why does water clump together to form clouds? Why doesn’t it mix evenly with the air to form an even haze across the sky?”

    A discrete nucleation site and surface tension create clumping on a small scale. Rising and falling parcels of air create the boundaries. Ultimately though, the boundaries are created by the non-homogeneous surface of the earth. The earth is a “blotchy” radiator.

  48. Think of a drug company using similar data reanalysis of their previous clinical trials data to prove the beneficial effects of ingesting one of their products. And think that they would argue that changes in dosage affected the outcome in patients similarly to argument about the effect of Pinatubo, El Chichon and Mt. Agung eruptions on ocean heat content. And think they would present their paper and seek an approval of their drug.

  49. lgl says: “No Bob, the 1997/98 El Nino heated the ocean. That heat is found two years later in the 100-700m layer, which has misled you to believe La Nina is heating the ocean.

    http://virakkraft.com/SST-SSST.png”

    My apologies, lgl. I was apparently having trouble reading the 0-100 meter data without having had a cup of coffee this morning.

    But a clarification: La Niñas can and do cause ocean heat content to warm in the tropical Pacific. The 1954-57, 1973-76, 1995/96 and 1998-01 La Niñas are highlighted in the following graph:

    On the other hand, looking at ARGO-era data, the ocean heat content of the oceans remote to the tropical Pacific can warm in response to El Niños and cool due to La Niñas, like in the tropical North Atlantic:

    Then there’s the Indian Ocean during the ARGO era, which warms during El Niño events but doesn’t cool during La Niña events:

    Regards

  50. “Then that result, the prediction for a half hour from now, is taken as input to the climate model, and the next half-hour’s results are calculated. Do that about 9,000 times, and you’ve simulated a year of weather … lather, rinse, and repeat enough times, and voila! You now have predicted the weather, half-hour by half-hour, all the way to the year 2100.

    There are two very, very large problems with iterative models. The first is that errors tend to accumulate. If you calculate one half hour even slightly incorrectly, the next half hour starts with bad data, so it may be even further out of line, and the next, and the next, until the model goes completely off the rails. Figure 2 shows a number of runs from the Climateprediction climate model …

    ##############

    First off, I’m not convinced by the paper Willis discusses, but there is a potential misunderstanding with reanalysis data that bears some looking at.

    Reanalysis does not go off the rails as Willis suggests. The step you are missing is called data assimilation.

    For the model at issue here see the following

    http://climatedataguide.ucar.edu/guidance/oras4-ecmwf-ocean-reanalysis

    http://onlinelibrary.wiley.com/doi/10.1002/qj.2063/abstract

    In simple terms it works like this. You take a weather model, in this case ECMWF, and you iterate forward using a physics model one time step. Then, to avoid the problem Willis mentions above, you use data assimilation. So let’s say you have the value of the air temperature at midnight and 4 AM. At midnight it’s 10C and at 4 AM it’s 9C. You run your model to fill in the temporal gap. At 4 AM model time your model says it’s 9.2C. Do you let it run wild and accumulate errors? Nope, that’s where data assimilation comes in. You use every bit of observation data you have to keep the model on track. So you don’t get the kind of “runaway error” that Willis points out. Of course, you might get different types of errors, you always will, but not the accumulation-type errors as described in the post.
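    The nudging step described here can be sketched in a few lines (illustrative only: the numbers and gain are made up, and ORAS4's actual assimilation scheme is far more sophisticated):

    ```python
    # Toy sketch of the nudging idea described above (illustrative only: the
    # numbers and gain are made up, and ORAS4's real assimilation is far more
    # sophisticated). A biased model cools too fast, but relaxing each forecast
    # toward the observation keeps the error bounded instead of accumulating.

    def model_step(temp):
        return temp - 0.5  # "physics": this model cools 0.5 C per step (biased)

    def assimilate(forecast, obs, gain=0.5):
        return forecast + gain * (obs - forecast)  # pull forecast toward the obs

    truth = [10.0 - 0.2 * k for k in range(9)]  # the real air cools 0.2 C/step
    free = nudged = 10.0
    for obs in truth[1:]:
        free = model_step(free)                       # free-running: bias accrues
        nudged = assimilate(model_step(nudged), obs)  # assimilated: stays on track

    print(round(abs(free - truth[-1]), 2))    # 2.4: the accumulated error
    print(round(abs(nudged - truth[-1]), 2))  # 0.3: bounded by the nudging
    ```

    The free-running model drifts without limit; the assimilated one settles at a bounded offset set by the bias and the gain.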

    But please folks if you have issues with re analysis data, please be consistent.

    If you have issues with weather models and reanalysis data, please contact Ryan Maue and look closely at the chart below… NCEP is reanalysis data

    http://wattsupwiththat.com/2013/02/28/february-2013-global-surface-temperature-at-normal/

    Still have issues? Were you on that thread warning about runaway error?

    Still wary about reanalysis data? Contact the guys who make this chart. It is reanalysis data:

    http://ocean.dmi.dk/arctic/meant80n.uk.php

    Still have issues? Contact the guys who wrote this paper. They used reanalysis data:

    http://surfacestations.org/fall_etal_2011.htm

    ##########################

    for more details see here

    http://www.ecmwf.int/products/forecasts/ocean/oras4_documentation/Data_assim.html

  51. Björn says:
    May 11, 2013 at 6:16 am

    Willis , when I try to run your R script from the link at the end of the posting it breaks down on the execution of the line:
    “… mydepths=read.csv(“levitus depth.csv”,header=FALSE)…”

    and spits out the error message in the following quote:

    ” Error in file(file, “rt”) : cannot open the connection
    In addition: Warning message:
    In file(file, “rt”) :
    cannot open file ‘levitus depth.csv’: No such file or directory ”

    Sorry, Björn, I left out the “levitus depth.csv” file. It’s here

    Best regards,

    w.

  52. What about the effects of other volcanoes that have erupted over the same time interval? Shouldn’t their algorithm show similar effects for those eruptions as well?

  53. Is this problem the same with the recent Specific Humidity paper, i.e. why it correlates so well to global temperatures?

    Calculations, not data: I don’t mind those IF the equations don’t keep changing. What you are highlighting is that the equations are tweaked going forward, not just that the input data uses the results of the last calculation.

    Here’s something: we see the Scenarios from 1988 with the Actuals overlaid. Would it not be far more appropriate (and disturbing for the warmists) if we were to white out the Scenarios from 1988 to the present, leaving only the Scenario tracks of the future? Then we would see, for example, that to get the end-result of Scenario A, we would have to go from “here to there” in 87 years, a sudden uplift that is in no discussed Scenario?

    Why Scenarios and other model runs continue to show original options when observation has eliminated many of them, I do not know. Is it like the wife of a dullard who keeps telling you she could have been the wife of a Carnegie-Mellon she met at college but went with her heart (even though we suspect he had no real interest in her)?

    Most of climatological strutting is just that, repeated statements about how smart someone is, not about how good what he did.

  54. Thanks Bob, no problem
    Well, the Indian Ocean is among the reasons I have no confidence in the ARGO data

    Is the President of the Maldives in charge of some of the floats perhaps :)

  55. Willis: Good detective work. The abstract says: “Volcanic eruptions and El Niño events are identified [in this reanalysis] as sharp cooling events.” The observed changes in the ocean heat content after the Pinatubo eruption do not show a “sharp cooling event”. The authors don’t address this glaring inconsistency between observations and their re-analysis.

    Your criticism of re-analyses is somewhat inaccurate. As you note, with time errors gradually creep into the output from climate/weather models, but the re-analysis protocol forces the re-analysis output to return to observed data at places and times where we have data. Surface temperatures reported by the re-analysis, for example, are presumably properly constrained to match SSTs reported by satellites. The question is whether the reanalysis output is totally out of touch with reality in most of the ocean because there aren’t enough observations to properly constrain the reanalysis. This appears to be the case after AND before Pinatubo. The re-analysis introduced a huge warming in the years before Pinatubo and a cooling in the years after Pinatubo that is not apparent in the Levitus observations. This presumably is taking place at locations where we don’t have observational data. However, the re-analysis was performed with and without the massive amount of data added by Argo after 2003 and this additional data

  56. Willis, I don’t think we really disagree. What I meant with “constrain” is fix the model state to a suitably small neighborhood of the real state of the ocean (given a certain resolution, etc.). That is why I stressed the importance of having sufficient coverage with relevant measurements. You can always devise a “robust” data-assimilation method which smears out data to something which is insensitive to the exact input data, initial condition, etc. But that does not mean that the problem has been solved. The ocean is still largely a black box and will probably remain so for some time.

    “Deep ocean heat uptake is linked to wind variability” / “surface wind variability is largely responsible for the changing ocean heat vertical distribution” / “changes in the atmospheric circulation are instrumental for the penetration of the warming into the ocean, although the mechanisms at work are still to be established” / “changes in surface winds play a major role, and although the exact nature of the wind influence still needs to be understood, the changes are consistent with the intensification of the trades in subtropical gyres” / “changes in the atmospheric circulation play an important role in the heat uptake”

    http://people.oregonstate.edu/~schmita2/ATS421-521/2013/papers/balmaseda13grl_inpress.pdf

    I appreciate Trenberth’s appreciation of nature and I can agree with Trenberth on at least that much.

    Improvement of the narrative will hinge on better awareness of the following:

    a) The interannual variations aren’t spatially uniform and the coupling isn’t unidirectional, so correlations necessarily (in the strictest mathematical sense) won’t be linear across all possible aggregation criteria and pairs of variables.

    b) What’s controlling the changepoints illuminated by Figure 2 here?

    Trenberth, K.E.; & Stepaniak, D.P. (2001). Indices of El Nino evolution.

    http://www.cgd.ucar.edu/staff/trenbert/trenberth.papers/i1520-0442-014-08-1697.pdf

    Answer:
    systematic solar heliographic asymmetry (N-S) timing shifts relative to coherently shifting solar activity (N+S) & volatility (|N-S|) timing:

    http://img13.imageshack.us/img13/5691/911k.gif (green = blend)

    http://img267.imageshack.us/img267/8476/rrankcmz.png (see Mursula & Zieger (2001))
    http://img829.imageshack.us/img829/2836/volcano911.png (volcanic indices in italics; Cy = Chandler wobble y phase; SOST = southern ocean surface temperature; ISW = integral of solar wind)

    Supplementary:
    http://img201.imageshack.us/img201/4995/sunspotarea.png – Note well: N+S ~= 3 |N-S| (Remember that heliospheric current sheet tilt angle varies with solar cycle phase.)

    ___
    “[UPDATE: my thanks to Nick Stokes for locating the paper here.]“

    Nick or anyone else:
    Do you have a link to the supplementary material (S)?

    “There is also a net poleward heat transport during the discharge phase of ENSO as can be seen by the exchange of heat between tropics and extratropics, which is likely favored by the intensification of the trades after 1998 (Figure S04).”

    “After 1998, there was a rapid exchange of heat between the regions above and below 700 m (Figure S01 in suplementary material).”

    “[...] changes in the subtropical gyres resulting from changes of the trade winds in the tropics (Figure S04), but whether as low frequency variability or a longer term trend remains an open question”

  58. Frank says:
    May 11, 2013 at 10:18 am


    Your criticism of re-analyses is somewhat inaccurate. As you note, with time errors gradually creep into the output from climate/weather models, but the re-analysis protocol forces the re-analysis output to return to observed data at places and times where we have data. Surface temperatures reported by the re-analysis, for example, are presumably properly constrained to match SSTs reported by satellites. The question is whether the reanalysis output is totally out of touch with reality in most of the ocean because there aren’t enough observations to properly constrain the reanalysis. This appears to be the case after AND before Pinatubo.

    Thanks, Frank. I fear that you are just repeating the feel-good line of the reanalysis modelers. IF the results were actually “force[d] to return to observed data” as you claim, then we wouldn’t see such a huge divergence between data and reanalysis as we see in Figure 1.

    Clearly, this problem is NOT from the lack of data as you claim. If it were, all of the five different runs would not line up so nicely post 1980. Prior to that, Trenberth et al. agree that the lack of data is an issue, and that’s why the early results are all over the map. From the paper:

    A large uncertainty (more than 5×10^22 J) in the first 2 decades in the total OHC arises from the sparse observations that do not constrain the values well.

    But by 1991 there was plenty of data to constrain the reanalysis model.

    w.

  59. Bob

    Btw, La Niña does not warm the Pacific either. Extending to 50S-50N, the two-year lag and step warming after the ’87 and ’98 El Niños become very visible.

  60. ::sigh::

    I wish that people would realize that a computer is a wonderful tool for modeling a fairly simple closed system such as a car engine or a computer circuit but that it isn’t a good tool to accurately model a complex system.

  61. Thanks, Willis. Good work!
    Reanalysis is not data and should not be used as such.
    Reanalysis is torturing the data until it confesses what you want it to say.

  62. Since reanalysis is used by several well-known skeptics, I thought it might be useful for folks to read informative stuff.

    https://reanalyses.org/ocean/overview-current-reanalyses

    For folks who have worked in areas like target prediction, you can think of reanalysis-type systems as being very much like Kalman filters. Of course not perfect, but we use them to shoot down bad guys. You will find differences between an “observational” data set, say Levitus, and a reanalysis output in part because the reanalysis data can be used to correct for spurious errors and data coverage issues in “observational” datasets.

    For example, we are all aware of the problems with ARGO:

    http://wattsupwiththat.com/2011/12/31/krige-the-argo-probe-data-mr-spock/

    In other words, Levitus “heat content” really is not observed. It is modelled.

    So when you compare Levitus “heat content” with reanalysis “heat content” you are not comparing observations with models. You are comparing two models. One has very few physical parameters for estimating heat content (Levitus), and the other, the reanalysis, uses all the data and all the physics you know. So, for example, you’d use both the ARGO data and satellite data.

    It’s not as simple as saying “here is the curve for a model, and here is a curve for observations”
    They are both modelled.
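    To make the Kalman analogy concrete, here is a toy one-dimensional sketch (all numbers invented for illustration; this is not any actual reanalysis code): the filter blends a persistence-model forecast with sparse, noisy observations, and falls back on pure model output wherever an observation is missing.

```python
# Toy 1-D Kalman filter: blend a persistence-model forecast with sparse,
# noisy observations. Where there is no observation, the "analysis" is
# pure model output -- the very point at issue in this thread.
# All numbers here are invented for illustration.

def kalman_step(x_est, p_est, obs, q=0.1, r=0.5):
    """One predict/update cycle.

    x_est, p_est -- prior state estimate and its variance
    obs          -- observation, or None where no data exists
    q, r         -- process noise and observation noise variances
    """
    x_pred, p_pred = x_est, p_est + q  # predict: uncertainty grows
    if obs is None:
        return x_pred, p_pred          # no data: model output only
    k = p_pred / (p_pred + r)          # Kalman gain
    return x_pred + k * (obs - x_pred), (1 - k) * p_pred

# Observations exist at some times only (None = a "hole" in the data)
observations = [10.0, None, None, 10.4, None, 10.2, None, None]
x, p = 9.0, 1.0  # deliberately wrong initial state
trajectory = []
for ob in observations:
    x, p = kalman_step(x, p, ob)
    trajectory.append(x)
print(["%.2f" % v for v in trajectory])
```

    Notice that the estimate gets pulled toward each observation when one exists, and simply coasts (with growing variance) through the holes. A full reanalysis does the same thing, but with a huge state vector and an iterative physical model in place of persistence.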

  63. fredberple
    Thanks for your comments on chaos.
    Any suggestions then on why Singer’s results show declining variations with increasing run years?

    Geological evidence of glacial versus interglacial temperatures shows a non-uniform distribution, suggesting colder temperatures during glacial periods are more common. Temperature was both higher and lower with higher CO2 levels, and the temperature appears to vary between warmer and colder bounds.

    To me that indicates missing physics and missing feedbacks in the models.
    Proper recognition of Milankovitch cycles with Hurst-Kolmogorov dynamics appears to improve predictability over multiple scales. See:
    Markonis, Y., and D. Koutsoyiannis, Climatic variability over time scales spanning nine orders of magnitude: Connecting Milankovitch cycles with Hurst–Kolmogorov dynamics, Surveys in Geophysics, 34 (2), 181–207, 2013.

  64. Willis said: “Clearly, this problem is NOT from the lack of data as you claim. If it were, all of the five different runs would not line up so nicely post 1980. Prior to that, Trenberth et al. agree that the lack of data is an issue, and that’s why the early results are all over the map.”

    If you plot the difference between Levitus and the reanalysis, I think you will find that the biggest differences come just before and after Pinatubo, a time when data was better, but nowhere near as good as with Argo. The authors’ claim that volcanoes produce significant cooling in the deeper ocean could be an artifact of the reanalysis, not a discovery made by re-analysis. I have no idea when there was enough data to trust this re-analysis, because we have little idea of how accurately models describe “diffusion” of heat below the mixed layer. Trustworthy observations of diffusion into the deeper ocean may come from measurements of CFCs, but I’ve heard little about how well climate models reproduce this data. For that matter, I’m not sure how well they even reproduce seasonal changes in the mixed layer (which are lost when one works with temperature anomalies).

    Does your BS detector understand how the initial reports from ARGO of no warming have changed into the rapid warming seen above?

  65. Steven Mosher says:
    May 11, 2013 at 12:30 pm
    “It’s not as simple as saying “here is the curve for a model, and here is a curve for observations”
    They are both modelled.”

    And one of them is an iterative model and one of them isn’t. Which is what Willis started with.

  66. @ Steven Mosher says: May 11, 2013 at 12:30 pm

    “So when you compare Levitus “heat content’ with re analysis “heat content” you are not comparing observations with models. You are comparing two models. One has very few
    physical parameters for estimating heat content ( Levitus ) and the other Re Analysis uses
    all the data and all the physics you know.. So for example you’d use both the ARGO data
    and satellite data.

    It’s not as simple as saying “here is the curve for a model, and here is a curve for observations”
    They are both modelled.”

    But the models still differ wildly in complexity. Enough so that one is an apple, the other an orange. I would trust the Levitus data much more than the output from a complex climate model – even if it corrects its erroneous output with real temperature data.

    The fact that all measurements involve the application of theory does not imply that all models are created equal. That is a false argument.

  67. lgl says: “b t w La Nina does not warm the Pacific either…”

    I didn’t say the Pacific, lgl. My earlier comment reads “tropical Pacific”:
    La Niñas can and do cause ocean heat content to warm in the tropical Pacific. The 1954-57, 1973-76, 1995/96 and 1998-01 La Niñas are highlighted in the following graph:

    There’s a significant difference between the Pacific (50S-50N) and the tropical Pacific (24S-24N). With the latitudes you’re using, it appears you’re also capturing the 1988/89 shift in OHC in the extratropical North Pacific, lgl:

    And you’re including a portion of the South Pacific, which has so little source data before 2003 that it’s not worth plotting. All it provides is noise.

  68. lgl says: “Well, Indian ocean is among the reasons I have no confidence in the ARGO data

    Is the President of the Maldives in charge of some of the floats perhaps :)”

    Thanks. That’s a remarkable shift there in the early 2000s. It’s similar to the shift in OHC in the Northern North Atlantic:

  69. The huge 2003-2004 jump in OHC (~10^23 J) is an artifact in both the Levitus data & reanalysis, due to the xbt/mbt – argo transition.

  70. Willis, you wrote:
    “We have analyzed and tested the model called “kriging”, to the point where we understand its strengths and weakness, and we can use it with complete confidence.”

    Um. Speaking as a geologist who spent a lot of time and effort doing ore-reserve models, “it depends”. The devil is in the details, as always, and there are all kinds of things waiting to bite you in the ass, even if you are a careful and conscientious mining engineer. For one example, coarse gold and diamond deposits are notoriously hard to model, because the statistics of sampling are guaranteed to be lousy — for diamonds, you need to do pretty large-scale test-mining to get a half-reliable sample. Gold deposits aren’t much easier. So we ore-reserve guys are really careful and conservative when we deliver the model to the company — and remain nervous until mining is well underway.

    Which doesn’t detract from your point, of course, that the climate modellers are almost insanely optimistic about what their mathematics can deliver.

    Best regards, Pete Tillman
    Professional geologist, amateur climatologist

  71. As a real scientist I find this disgusting. They know it’s wrong so it’s fraud. This should be investigated by the ORI (office for research integrity http://ori.dhhs.gov/). Please report them as you see fit.

  72. Steven Mosher says:
    May 11, 2013 at 12:30 pm

    … For folks who have worked in areas like target prediction, you can think of re analysis type systems as being very much like Kalman Filters.

    You can think of them like that if you wish, but a reanalysis done using an iterative climate model is as far from a Kalman Filter as you can get. That’s like saying that an elephant is “very much like” a hyrax. Yes, they are in the same family … but you don’t want to put a howdah on a hyrax, and you don’t want to trust a reanalysis to a climate model.

    … In other words Levitus “heat Content” really is not observed. It is modelled.

    So when you compare Levitus “heat content’ with re analysis “heat content” you are not comparing observations with models. You are comparing two models.

    Duh. I said that in the head post, viz:

    Now, as Steven Mosher is fond of pointing out, it’s all models. Even something as simple as

    Force = Mass times Acceleration

    is a model. So in that regard, Steven is right.

    So why are you here to say exactly what I said all over again, and then act like you are revealing something to the unknowing?

    Steven, I mentioned you in the head post because I knew you would show up and bring this “it is all models” bullshit with you. I guess that schtick must fool the rubes, since you use it enough to be totally predictable, here you are … and not only totally predictable, but totally wrong.

    As I said above, there are models, and then there are models. You are asserting a false equivalence between say doing a reanalysis with the model we call kriging on the one hand, and doing a reanalysis with a climate model on the other hand. Your claim that it’s all models is like saying “house cats and lions are both felines, so they both must make excellent pets”. Yes, it’s all models … but in the world of models you are mistaking lions for house cats.

    I don’t care if everything is a model, Steven. That doesn’t make all models interchangeable. And it assuredly doesn’t make all models valuable for some specific task.

    One has very few physical parameters for estimating heat content ( Levitus ) and the other Re Analysis uses all the data and all the physics you know.

    Nonsense. One of them (Levitus) uses all the actual observations for estimating heat content.

    The other reanalysis uses the observations plus all of the imaginary physics and tuned parameters that you can jam into the GIGO box known as a computer climate model. As a result, and as I showed above, we end up with total garbage out in the form of an imaginary temperature rise and fall involving volcanoes.

    Steven, I had hoped that when you saw how this reanalysis ends up with results that have no relationship to reality, you might actually notice that not all reanalyses are created equal … but heck, I didn’t even get you to notice that not all models are created equal. For example, I’d trust my life to the model

    Force = Mass times Acceleration

    So would you, we do that every day. It is a model that is so good that we depend on it unconditionally.

    But if you trust your life to a computer climate model, you’re mad. Not all models are useful, and in particular, even those that are useful for one thing might be worthless for another.

    Now, I’ve heard your “everything is models” speech a couple dozen times now, and as I said in the head post, I agree with you. F=MA is a model. So is a climate model with a million lines of code.

    But so what?

    What on earth does that show about the suitability of a particular model for a particular task?

    The part you don’t seem to have noticed is that despite the fact that they are all models, some models are valuable and some are crap, some models are good for one thing and bad for another, and in particular that iterative models have a host of problems that make their use highly problematic for things like a re-analysis.

    Regards,

    w.

  73. The myriad of potential causal variables and their intercorrelations as determinants of climate is a chaotic soup. This includes volcanic activity. There are some interesting papers and research regarding the determination of, and relationships between, “signals” within different chaotic systems. Most deal with communications. One of the more recent is:

    Chaotic signal detection and estimation based on attractor sets:
    Applications to secure communications
    G. K. Rohde
    NRC Postdoctoral Research Associate, U. S. Naval Research Laboratory, Optical Sciences Division, Washington, D.C. 20375, USA
    J. M. Nichols and F. Bucholtz
    U. S. Naval Research Laboratory, Optical Sciences Division, Washington, D.C. 20375, USA
    Received 21 September 2007; accepted 10 January 2008; published online 10 March 2008

    There are many more.

    I continue to believe that predicting even some of the constituent causal variables within climate, let alone climate itself, other than at the grossest level, such as semi-approximation of the timing of the onset of glaciations with the Milankovitch cycles, is a fool’s errand, based upon the chaotic nature of the subject.

    The warmists have the complexity of the subject on their side as they continue their “settled science” nonsensical obfuscation of the climate debate.

  74. Excellent insight and a very instructional post Willis! Kudos to everyone that helped you on this.

    Thinking about those temperature dips for the volcanos which are apparently used by the paper’s authors to bolster their claim of constructed temperature accuracy…

    If the model has parameters in the modeling array for volcanic aerosol forcing, then the model likely has other forcing parameters in the calculation as well. Seems likely to me.

    And somehow, I doubt the paper’s authors included thunderstorms as a natural limiting mechanism on ocean heat, as you’ve ably demonstrated using ARGO data.

    Given that the paper’s model runs (re-analyzed data, seriously?) have ocean heat content rising, perhaps there is a forcing parameter for CO2 included in their code iteration array? Releasing all code and mathematical formulas is really essential to understand what Trenberth and others are modeling.

    I do wonder after reading through your post above, just how much of Trenberth’s increased ocean heat content is model forcings, not actual temperatures.

  75. David L. Hagen says:
    May 11, 2013 at 12:36 pm
    fredberple
    Thanks for your comments on chaos.
    Any suggestions then on why Singer’s results show declining variations with increasing run years?
    ============
    I haven’t studied his results. At a guess, the most likely reason is that they are programmed to do so, or your interpretation is in error.

    Clearly there is no physical reason why one would see declining variations in any single model run. It makes no physical sense. The longer you observe the ocean, the more likely you are to see a wave bigger than all you have seen before. The longer you record temperatures, the more likely you are to see a minimum or maximum that exceeds all you have seen before.

    We see this effect all the time in climate science. Folks start recording weather, and a big storm comes along. Oh no, the world is at an end, it is bigger than anything ever seen before. Humans must be the cause. Hardly. Much bigger storms have been seen before. Only those people are dead and the records are buried out of sight. So we get nonsense reporting.
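    The record-breaking effect is easy to verify numerically. For n independent observations, the expected number of record maxima is the harmonic number H_n = 1 + 1/2 + … + 1/n, which never stops growing, just more and more slowly. A quick simulation on synthetic random data (purely illustrative, not climate data) shows the agreement:

```python
# Expected count of record maxima in n i.i.d. observations is the
# harmonic number H_n -- it grows without bound, so "unprecedented"
# events keep arriving no matter how long the record is.
# Synthetic uniform random data, purely illustrative.
import random

def count_records(series):
    """Count observations that exceed every earlier observation."""
    records, best = 0, float("-inf")
    for x in series:
        if x > best:
            records, best = records + 1, x
    return records

random.seed(42)
n, trials = 1000, 2000
avg = sum(count_records([random.random() for _ in range(n)])
          for _ in range(trials)) / trials
harmonic = sum(1.0 / k for k in range(1, n + 1))
print("simulated %.2f vs theoretical %.2f" % (avg, harmonic))
```

    For n = 1000 the theoretical value is only about 7.5 records, but double the record length and you still expect to see more.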

  76. pdtillman says:
    May 11, 2013 at 2:35 pm

    Willis, you wrote:
    “We have analyzed and tested the model called “kriging”, to the point where we understand its strengths and weakness, and we can use it with complete confidence.”

    Um. Speaking as a geologist who spent a lot of time and effort doing ore-reserve models, “it depends”. The devil is in the details, as always, and there are all kinds of things waiting to bite you in the ass, even if you are a careful and conscientious mining engineer. For one example, coarse gold and diamond deposits are notoriously hard to model, because the statistics of sampling are guaranteed to be lousy — for diamonds, you need to do pretty large-scale test-mining to get a half-reliable sample. Gold deposits aren’t much easier. So we ore-reserve guys are really careful and conservative when we deliver the model to the company — and remain nervous until mining is well underway.

    Which doesn’t detract from your point, of course, that the climate modellers are almost insanely optimistic about what their mathematics can deliver.

    Thanks, Pete. The guys in the field, the engineers, the people who actually do the work on the ground, are the people I tend to listen to. Modeling an ore-body from limited information, where millions of dollars can be made or lost depending on the accuracy of your model, tends to focus a man’s mind wonderfully … one of the huge problems with scientists as opposed to engineers is that far too often the scientists have no skin in the game.

    Your obvious understanding of the model we call kriging reinforces my point about different kinds of models. You are clear, not only that kriging has strengths and weaknesses, but you know where most of the traps and pitfalls lie. You can recognize up front the kinds of situations where results can be less than optimal.

    I’ve actually been quite surprised that some variety of kriging isn’t used more in climate science for those times when it’s desirable to have a complete data field. At least it wouldn’t create imaginary drops in OHC after volcanic eruptions …

    Having said that, in general, I prefer to invent analysis methods that don’t require me to have complete datasets.
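    For anyone curious what the model we call kriging actually looks like, here is a minimal one-dimensional sketch. The exponential covariance, sill, and range below are assumed for illustration; as Pete says, a real study fits a variogram to the data before estimating anything.

```python
# Minimal 1-D ordinary kriging with an assumed exponential covariance.
# Illustrative only: real ore-reserve or climate kriging fits a variogram
# to the data first.
from math import exp

def solve(A, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def cov(h, sill=1.0, rng=3.0):
    """Exponential covariance model (parameters assumed, not fitted)."""
    return sill * exp(-abs(h) / rng)

def krige(xs, zs, x0):
    """Ordinary kriging estimate at x0 from samples (xs, zs).

    Solves [C 1; 1' 0][w; mu] = [c0; 1], so the weights sum to one.
    """
    n = len(xs)
    A = [[cov(xs[i] - xs[j]) for j in range(n)] + [1.0] for i in range(n)]
    A.append([1.0] * n + [0.0])
    w = solve(A, [cov(x0 - xi) for xi in xs] + [1.0])[:n]
    return sum(wi * zi for wi, zi in zip(w, zs))

xs, zs = [0.0, 1.0, 4.0, 5.0], [2.0, 2.5, 4.0, 3.8]
print(krige(xs, zs, 2.5))  # best linear unbiased guess in the "hole"
```

    Note that with no nugget effect the estimator returns the data exactly at the sample points themselves, and between them it gives a weighted blend of the nearby observations, nothing more.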

    Regards,

    w.

  77. Willis writes “Not all models are useful”

    This is an understated point. All global warming enthusiasts seem to agree that all models are wrong but some models are useful.

    Then they seem to assume that their model (eg GISS for Gavin) is one of the useful ones. There appears to be no acceptance that their particular model of interest is a dud. It must be the “other ones” that aren’t useful.

  78. A few weeks ago, WUWT had an article about the air portion of the re-analysis, “Global Warming Over Land Is Real: CU-Boulder, NOAA Study”

    http://wattsupwiththat.com/2013/04/13/global-warming-over-land-is-real-cu-boulder-noaa-study/

    The CU Boulder study is titled “Independent Confirmation of Global Land Warming without the Use of Station Temperatures” at http://onlinelibrary.wiley.com/doi/10.1002/grl.50425/pdf
    On page 4 it describes the analysis method…
    “We use the 20th Century Reanalysis (20CR), a physically-based state-of-the-art data assimilation system, to infer TL2m given only CO2, solar and volcanic radiative forcing agents; monthly-averaged sea surface temperature (SST) and sea ice concentration fields; and hourly and synoptic barometric pressure observations (from the International Surface Pressure Databank).”
    Physically based? Given “only” CO2 and a few other things?
    No wonder volcanoes and CO2 show up in the “data” – these things go into the “data”, which is nothing more than another model.

  79. Mr. Steven Mosher, FerdBerple

    You’re saying that models are okay because they are controlled in an ONGOING manner, based on ONGOING accumulation of data. You tweak the models as you go. This seems to raise other problems. If you are taking data all along as you go, why even bother with the models at all? I must guess why: Climactic dudes wish to build themselves FUTUREBOTS.

    See FerdBerple at 6:49 AM on predicting the future. On to something there.

    Anyway, Steve, what I’m thinking is that you’re hoping to get a machine to eventually guess the “FUTURE” for you, but there is too much data you would need to get; in fact you would need all the data on everything in the universe and more. You just can’t do it. And if you somehow had the dang data, the curious might crunch it, make a hypothesis or two, and not think twice about building a dang FUTUREBOT in their own image.

    Maybe if you could enter 10,000 years of data you could have something. But still, not the FUTURE. I don’t think that modelers understand their own limitations. Perhaps training in classical philosophy or logic would help put modeling in a realistic perspective.

  80. Thanks for an interesting article. I looked up kriging to read more about the method. I find one curious anomaly in chart #4 and yes I did hallucinate a few times back in the 60s. If you look approximately 6 years out past each eruption, the graph shows a large upward heat spike on the surface:ICOADS SST line. That spike following all three eruptions gains approximately 2.5C from the point where the ICOADS line crosses the eruption event to the peak of the ICOADS line 6 years out. Is this just a coincidence? Also, there are 16 peaks in that time span or slightly over 3.5 years between spikes on the surface:ICOADS. It seems so regular, but what would cause that? The surface:ERSST closely follows the same pattern.

  81. Berényi Péter says, May 11, 2013 at 2:29 pm:

    “The huge 2003-2004 jump in OHC (~10^23 J) is an artifact in both the Levitus data & reanalysis, due to the xbt/mbt – argo transition.”

    This is something that should’ve been addressed a lot more often. Because it’s quite striking and clearly, as you say, nothing but an artifact of the ‘stitching together’ of the pre-ARGO and post-ARGO data-collating regimes. There is no way you can justify such a jump in global OHC during 2002-03, a year, year-and-a-half after the three-year La Niña of 1998-2001. Where would such a massive pulse of extra heat to the ocean come from? There are no known globally scaled mechanisms outside the ENSO process that could account for it. And the TOA fluxes surely do not show anything unusual during the period in question, if anything, rather a drop in net incoming:

    Here’s the global OHC 100-700m from NOAA:

    I’ve identified the noteworthy La Niña and El Niño events from 1970 to 2012.

    Starting from the beginning, you can clearly see the heat-storing work done by the 1970-72 and 1973-76 La Niñas, only interrupted by the conspicuous and sudden drainage by the 1972/73 El Niño. After the La Niñas, there’s a little secondary El Niño drop and from then on it’s pretty flat going until the enormous El Niño of 1982/83 draws a huge amount of the accumulated deep heat up towards the surface of the ocean, from where a large part of it is released into the troposphere.

    By this stage, the pattern is pretty clear – it’s mostly about the ENSO process.

    Following the 1982/83 El Niño is a new sequence of La Niñas separated by a solitary El Niño. This time it’s the on-and-off 1983-86 La Niña and the severe 1988/89 La Niña that’s storing the heat, while the 1986-88 El Niño is draining in between. Had it not been for the El Niño 1982/83 pulling out so much heat ahead of this particular La Niña sequence, there would have been a clear step up in OHC during this period also, just as with the equivalent sequences before (1970-76) and after (1996-2001). Now the general rise is hard to spot. There is pretty much zero trend during the 20 years from 1976 to 1996. But the individual La Niñas and El Niños are still doing what they’re supposed to be doing.

    The next sequence starts with the 1995-96 La Niña and follows through with the 1998-2001 event directly on the heels of the mighty 1997/98 El Niño. A new step change in general OHC level is established.

    Now, in the chart above, I’ve deliberately adjusted the OHC down in 2002/03. The official curve shows an extra major upward shift during this year, a year with neutral ENSO conditions leading up to a secondary El Niño. It should not go up here. According to the observed distinct pattern, it should go flat and then somewhat down. There simply is no justification for a significant rise in mean level during this period.

    From 2000/01 to 2007/08 global OHC once again proceeds more or less without further increase, until the latest La Niña sequence sets in by 2008. We have the same thing going all over again, only now the La Niñas don’t seem as powerful anymore. They seem to have lost steam. The buildup is there, but it’s less than in earlier times. Well, that is to say, if the ARGO data is directly comparable to the xbt/mbt data.

    Here is global OHC 0-100m BTW:

    A staircase if ever there was one.

  82. Pretty much the entire accumulated 0-700m ocean heat during ‘the ARGO era’ (2003-13) is to be found within the red area on this map:

    Let’s call it ‘The Extended Indian-Pacific Warm Pool’. It is basically the heat reservoir of the ENSO process. This is where La Niñas deliver their solar-generated heat. It is also where El Niños draw their gigantic volumes of warm water from to spread out across the tropical central and east Pacific and, after a few distinctive, large and solitary events, where the leftover heat is brought back when the circulation eventually turns.

    You can clearly see the extension of the SPCZ in the South Pacific and the KOE in the North Pacific, both recipients of heat from similar oceanic conveyor systems. Interesting is the western (NW Indian Ocean) and especially the southern extension (S and W of Australia) of the heat-storing region.

    Mind you, there is a big range in absolute accumulation of heat also within the red area. It is far from even. Most is in fact occurring in the central region, the actual tropical Indian-Pacific Warm Pool and even there, the West Pacific part (N of New Guinea) is by far the greatest contributor.

    Anyway, the red area constitutes a little bit more than one fifth of the global ocean, the rest makes up a bit less than four fifths. Weighted against each other, it then comes out like this:

    You can easily see how the OHC evolution in the two opposing ‘basins’ of the global ocean tightly follows the NINO3.4 ups and downs, only the one in a direct fashion and the other in an inverted manner. So, nearly 80% of the global ocean is strongly cooling during ‘the ARGO era’. But this is more than offset by the prodigious accumulation in the extended Indian-Pacific Warm Pool region. Either way, it’s pretty hard to filter out a CO2 warming signal from this. If the magical molecule hasn’t somehow struck a special deal with the Warm Pool, that is …

    Look at these two maps comparing annual global OHC anomalies in the year 2003 (starting with an El Niño) and the year 2012 (starting with a La Niña):

    Notice how there is a complete or near anomaly reversal in most corners of the world between the two years, not just in the oceanic ENSO core region. And these are not even full-fledged ‘ENSO years’.

  83. This is how the OHC (0-700m) has evolved globally in ‘the ARGO era’ (2003-13) when divided into subsets (area weighted to show relative significance):

    Southern Extratropics (Pacific):

    Southern Extratropics (Atlantic):

    Southern Extratropics (Indian):

    Northern Extratropics (Pacific+Atlantic+Arctic):

    Arctic Ocean:

    (note that this is incorporated into the region above)

    Tropics (WPa/EIn):

    Tropics (EPa/At/WIn):

  84. Poems of Our Climate says:
    May 11, 2013 at 10:05 pm
    See FerdBerple at 6:49 AM on predicting the future. On to something there.
    ========
    Thanks 6:39 AM

    What I demonstrated in the very simple example I gave is that the future is not simply difficult to predict. Rather, it is impossible to predict from first principles under our current understanding of the physical world.

    Our imagination assumes that the future is simply the present displaced in time. Somewhere that we can “arrive at” and thus predict. What the simple example of the dice shows is that this view of the future is nonsense.

    The predictability we see in simple examples such as F = MA is a byproduct of nature’s ability to always select the least-energy path to the future, which suggests that nature knows something we don’t. However, this predictability quickly goes off the rails; consider, for example, the 3-body problem.

    Rather than trying to model the future from first principles, there is only one technique that has been shown to work when dealing with complexity: looking for patterns in it. Early humans learned to predict the future by studying the cycles in nature, long before they understood what drove those cycles.

    We use this same technique to provide highly accurate predictions of the tides on earth for dozens, even hundreds of years into the future. These tides are extremely complex, much too complex to calculate from first principles. Yet we ignore this proven body of work when it comes to predicting climate.
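    The tidal technique described here, fitting known cycles to past observations and extrapolating the pattern, can be sketched like so (the frequencies and data are synthetic stand-ins, not real tidal constituents):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(0.0, 500.0, 0.5)            # "hours" of past observations
freqs = [1 / 12.42, 1 / 12.00]            # stand-ins for M2/S2-like periods

def design(times):
    # one sin and one cos column per constituent frequency
    cols = []
    for f in freqs:
        cols += [np.sin(2 * np.pi * f * times), np.cos(2 * np.pi * f * times)]
    return np.column_stack(cols)

def truth_fn(times):
    # the "real" tide we are pretending not to know the formula for
    return (1.2 * np.sin(2 * np.pi * freqs[0] * times)
            + 0.5 * np.cos(2 * np.pi * freqs[1] * times))

obs = truth_fn(t) + 0.1 * rng.standard_normal(t.size)   # noisy "gauge" record

coef, *_ = np.linalg.lstsq(design(t), obs, rcond=None)  # fit the pattern

t_future = np.arange(2000.0, 2050.0, 0.5)               # far beyond the data
pred = design(t_future) @ coef
err = np.max(np.abs(pred - truth_fn(t_future)))
print(err)   # small: the fitted cycles extrapolate cleanly
```

    Note that nothing in the fit "understands" the physics; knowing the cycle periods is enough to predict far outside the observation window.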

  85. Willis>

    Come on, you can do better. You’re generally good at thinking different. One can indeed see a signal from each volcano in the data as soon as you stop assuming the signal must be negative, as per climate science’s official wisdom. Pinatubo clearly corresponds to a drop, but El Chichón and Agung both correspond to clear temperature increases.

    goldminor: “If you look approximately 6 years out past each eruption, the graph shows a large upward heat spike on the surface ICOADS SST line. That spike following all three eruptions gains approximately 2.5 C from the point where the ICOADS line crosses the eruption event to the peak of the ICOADS line 6 years out. Is this just a coincidence?”

    Thank you. This is something I have been drawing attention to for a couple of years. I call it volcanic rebound.

    I think there is evidence that climate feedbacks effectively recover the heat lost to the masking effects of ash/aerosols: the lower SST causes more of the available solar energy to be absorbed.

    This ties in with Bob Tisdale’s hypothesis of the asymmetric effects of Nino/Nina and Willis’ tropical governor.

    PS The corollary of that argument is that without volcanic cooling there is no need to suggest +ve feedbacks on top of the known CO2 forcing. In fact, if climate counteracts volcanoes it most likely counteracts CO2 too, which would lead to -ve feedback cancelling both.

  88. Frank says:
    May 11, 2013 at 10:18 am
    the re-analysis protocol forces the re-analysis output to return to observed data at places and times where we have data.
    ================
    This is a form of the Gambler’s Fallacy. You are in effect assuming that our observed reality is “correct”, and that the other possible realities predicted by the model are “wrong” and can thus be eliminated from consideration.

    Just because the observed data matches one of the predictions of the model does not make any of the other possibilities less likely. In effect the protocol forces the reanalysis to ignore the other possibilities, which leads to an incorrect estimation of the odds.

    Think of it this way. We have a data point, call it 1900. Looking at 1901, we have 3 possibilities: temps go up, go down, or stay the same. We don’t have any data for 1901, but we have data for 1902. 1902 has the same temp as 1900.

    Our model tells us that on this basis, since 1900 = 1902, temps in 1901 must also have been unchanged. And on this basis we build a theory of how temperature changes.

    But our underlying assumption may be wrong. Temps might also have gone up or down in 1901, so our theory is based on faulty data, namely the assumption that 1901 = 1900 and 1901 = 1902. From this we conclude that temperature has low natural variability.

    But in reality it is our model that has low variability. Depending on what actually happened in 1901, variability might be low or high; we simply cannot say with any degree of confidence. The reanalysis protocol, however, tells us quite the opposite.

    The reanalysis tells us that we have low variability, which gives us a false idea of the odds. The Gambler’s Fallacy.
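    A toy illustration of the variance-suppression point (synthetic numbers, not an actual reanalysis scheme): fill every second "year" by averaging its neighbours, and the reconstructed series comes out smoother than the truth.

```python
import numpy as np

rng = np.random.default_rng(1)
truth = rng.standard_normal(1001)          # "true" annual anomalies, fully observed
filled = truth.copy()
# Pretend every second year is unobserved and must be filled from its neighbours
filled[1::2] = 0.5 * (truth[0:-1:2] + truth[2::2])

print(truth.std())    # ~1.0
print(filled.std())   # smaller: the gap-filled series understates variability
```

    The averaged fill-ins have roughly 70% of the true standard deviation at those points, so any variability statistic computed from the filled series is biased low.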

  89. Greg Goodman says:
    May 12, 2013 at 9:07 am
    In fact, if climate counteracts volcanoes it most likely counteracts CO2 too
    ========
    This has been demonstrated by applying economic theory (unit-root analysis) to climate: the effects of CO2 on temps are transient, and the climate adjusts to eliminate them. However, the presence of a near unit root in the temp data gives the misleading statistical appearance that the change is permanent.

  90. Fred: “this has been demonstrated by applying economic theory (unit root) to climate. the effects of CO2 on temps are transient. The climate adjusts to eliminate them. ”

    Could you expand on that a bit? Where has it been shown?

    “However, the presence of a near unit root in the temp data give the misleading statistical appearance that the change is permanent.”

    Don’t follow. If its not covered by response to previous qu. could you explain?

    Thx

  91. Greg Goodman says:
    May 12, 2013 at 9:57 am
    Could you expand on that a bit? Where has it been shown?
    =====
    Can’t locate the paper. One of the authors was perhaps from the University of Tel Aviv, economics?

    As I recall, the paper showed that it was not temperature that varied with CO2, but rather the rate of change, once you differenced the data to correct for the unit root.

    Which would appear to support:

    ” In fact, if climate counteracts volcanoes it most likely counteracts CO2 too, which would lead to -ve f/b cancelling both.”

  92. “As I recall, the paper showed that it was not temperature that varied with CO2, but rather the rate of change once you differenced the data to correct for unit root. ”

    A unit-root test is basically a test for stationarity (a simplistic explanation being that a stationary series has a mean that is not drifting up or down in time). Temperature time series are autoregressive (the current value is strongly influenced by the previous one). Taking the difference of successive values, akin to differentiation, can often remove this. This is necessary before applying some data-processing techniques such as the FFT. Something Grant “Tamino” Foster stupidly ignored in his recent disingenuous attempts to “school me”.
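    A minimal numerical sketch of that point (synthetic data, not the temperature record): a strongly autoregressive series wanders in a way that mimics a lasting change, while its first difference is stationary noise.

```python
import numpy as np

rng = np.random.default_rng(2)
n, phi = 5000, 0.99                      # phi close to 1: a near unit root
eps = rng.standard_normal(n)
x = np.zeros(n)
for i in range(1, n):
    x[i] = phi * x[i - 1] + eps[i]       # AR(1): strongly autoregressive

dx = np.diff(x)                          # first difference of successive values

print(x.std())    # large: the level wanders, mimicking a permanent change
print(dx.std())   # ~1: the differenced series is stationary noise
```

    The level series can spend centuries above or below its long-run mean, which is why trends fitted to it are so easy to over-interpret.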

    However, unless I’m missing your point, I don’t see how this relates to volcanoes, CO2 and feedbacks.

    Sure, it’s the rate of change of temp that relates to CO2; that was the subject of a recent thread here on WUWT.

    Since dT/dt is a power term and the CO2 radiative effect is in power/m2, that seems perfectly sensible. Both of these are reasons why I constantly say we should be studying the rate of change of temperature (or ice cover, for that matter) and not the simple time series.

    So far I don’t see many either mainstream or outside picking up on that.

  93. Lunar-solar influence on SST

    http://climategrog.wordpress.com/2013/03/01/61/

    Comparing _rate of change_ of land and sea temperatures

    http://climategrog.wordpress.com/?attachment_id=219

    rate of change of El Nino and Length of day

    http://climategrog.wordpress.com/?attachment_id=136

    rate of change of Arctic ice cover

    http://climategrog.wordpress.com/2013/03/11/open-mind-or-cowardly-bigot/ddt_arctic_ice/

    If we are interested in climate change we should be looking at _rate of change_ not the time series.

  94. “… it’s rate of change of temp that relates to CO2 , that was the subject of a recent thread here on WUWT. ”

    Wrong way around: what the recent thread discussed was the rate of change of CO2 being a function of SST. This ties in well, both in short-term variations and in the full Keeling MLO record since 1958.

    Here is a plot I contributed to that discussion.

    http://climategrog.wordpress.com/?attachment_id=207

    I may have more detail on that shortly.
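    The claimed relationship, rate of change of CO2 following SST, can be mocked up synthetically (assumed coefficients, not a fit to the MLO record): if dCO2/dt tracks the SST anomaly, then CO2 itself integrates the SST signal, and the clean relationship only appears after differencing.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 600                                   # "months"
# ENSO-ish SST anomaly: a ~44-month oscillation plus noise (illustrative only)
sst = np.sin(2 * np.pi * np.arange(n) / 44) + 0.2 * rng.standard_normal(n)
dco2 = 0.12 + 0.05 * sst                  # assumed: CO2 growth rate follows SST
co2 = 340 + np.cumsum(dco2)               # CO2 level is the running integral

r_rate = np.corrcoef(np.diff(co2), sst[1:])[0, 1]
r_level = np.corrcoef(co2, sst)[0, 1]
print(r_rate)    # ~1 (exact by construction here): the rate tracks SST
print(r_level)   # much weaker: the level mostly shows the accumulated trend
```

    This is why comparing CO2 *levels* against SST finds little, while comparing the *rate of change* finds the relationship directly.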

    @ ferd berple…wouldn’t the Gambler’s Fallacy be a perfect description for the hardcore AGW believers? I have a good understanding of the Gambler’s Fallacy.

  96. @ Greg Goodman…then the likely interaction should have something to do with a diminished cloud cover over the ocean at 6 years out? Could the fallout of the last particles draw other particles out of a region of the atmosphere and so lead to a clearer than normal atmosphere for a period of time?

    Yes, I think something like that may be at play. IIRC the inverse reaction seen in stratospheric temps also shows a similar rebound. I need to find a graph that shows that.

    Couldn’t this be the smoking gun for why temperatures rose so high above average in the first place? Global warming has been volcano-induced. Whatever natural forces were in play, with a {slight boost?} from CO2, were then amplified by this volcanic aftereffect. The Pinatubo event, in particular, strikes with perfect timing, in that the 6-year atmospheric effect coincides with the solar rise after the minimum. So you have a ‘hot’ sun and a Windex-clear atmosphere for it to penetrate, causing the great El Niño of 1997/98. All of the heat from that event has been dissipating ever since. Isn’t that why the extra warmth in the northern Atlantic encompassed so much of Greenland as well as warming Europe for the 10 years after 1998?

    “Couldn’t this be the smoking gun for why temperatures rose so high above average in the first place? Global warming has been volcano-induced. Whatever natural forces were in play, with a {slight boost?} from CO2, were then amplified by this volcanic aftereffect.”

    I’ve considered that possibility but it would need substantive evidence. Climate’s ability to auto-correct by negative feedbacks seems a reasonable suggestion. That volcanoes actually end up removing cloud-seeding nuclei, leaving a clearer atmosphere and hence causing a net warming, is not impossible but would need clear evidence.

    However, I think there is a strong possibility that negative feedback reaction to Mt Pinatubo did at least contribute to the size of the 1998 El Nino.

    I should have said ‘add to’ instead of ‘cause’, regarding the effect on the El Niño. Considering the “evidence” that CAGW believers are using, I would think that the volcano scenario offers a better foundation for an explanation.

Comments are closed.