Guest Post by Willis Eschenbach
I was reading through the recent Trenberth paper on ocean heat content that’s been discussed at various locations around the web. It’s called “Distinctive climate signals in reanalysis of global ocean heat content”, paywalled, of course. [UPDATE: my thanks to Nick Stokes for locating the paper here.] Among the “distinctive climate signals” that they claim to find are signals from the massive eruptions of Mt. Pinatubo in mid-1991 and El Chichon in mid-1982. They show these claimed signals in my Figure 1 below, which is also Figure 1 in their paper.
ORIGINAL CAPTION: Figure 1. OHC integrated from 0 to 300 m (grey), 700 m (blue), and total depth (violet) from ORAS4, as represented by its 5 ensemble members. The time series show monthly anomalies smoothed with a 12 month running mean, with respect to the 1958–1965 base period. Hatching extends over the range of the ensemble members and hence the spread gives a measure of the uncertainty as represented by ORAS4 (which does not cover all sources of uncertainty). The vertical colored bars indicate a two year interval following the volcanic eruptions with a 6 month lead (owing to the 12 month running mean), and the 1997–98 El Niño event again with 6 months on either side. On lower right, the linear slope for a set of global heating rates (W m-2) is given.
I looked at that and I said “Whaaa???”. I’d never seen any volcanic signals like that in the ocean heat content data. What was I missing?
Well, what I was missing is that Trenberth et al. are using what is laughably called “reanalysis data”. But as the title says, reanalysis “data” isn’t data in any sense of the word. It is the output of a computer climate model masquerading as data.
Now, the basic idea of a “reanalysis” is not a bad one. If you have data with “holes” in it, if you are missing information about certain times and/or places, you can use some kind of “best guess” algorithm to fill in the holes. In mining, this procedure is quite common. You have spotty data about what is happening underground. So you use a kriging procedure employing all the available information, and it gives you the best guess about what is happening in the “holes” where you have no data. (Please note, however, that if you claim the results of your kriging model are real observations, if you say that the outputs of the kriging process are “data”, you can be thrown in jail for misrepresentation … but I digress, that’s the real world and this is climate “science” at its finest.)
The problems arise as you start to use more and more complex procedures to fill in the holes in the data. Kriging is straight math, and it gives you error bars on the estimates. But a global climate model is a horrendously complex creature, and gives no estimate of error of any kind.
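To make that contrast concrete, here is a minimal sketch of ordinary kriging in one dimension. This is an illustrative toy, not anyone's production code: the exponential variogram and its sill/range parameters are assumptions for the example, not values fitted to any real dataset. The point to notice is that the kriging system returns both the estimate for the "hole" and its variance, i.e. the error bar the text mentions:

```python
import numpy as np

def ordinary_kriging(x_obs, y_obs, x_new, sill=1.0, rng=10.0):
    """Toy 1-D ordinary kriging with an assumed exponential variogram.
    Returns the best estimate AND its kriging variance (the error bar)."""
    gamma = lambda h: sill * (1.0 - np.exp(-np.abs(h) / rng))  # variogram model
    n = len(x_obs)
    # Kriging system: variogram between every pair of observations, plus a
    # Lagrange-multiplier row/column enforcing that the weights sum to 1.
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = gamma(x_obs[:, None] - x_obs[None, :])
    A[n, n] = 0.0
    b = np.ones(n + 1)
    b[:n] = gamma(x_obs - x_new)
    w = np.linalg.solve(A, b)
    estimate = w[:n] @ y_obs
    variance = w @ b  # kriging variance at x_new
    return estimate, variance

# Fill a "hole" at x = 5 using observations on either side of it
x = np.array([0.0, 2.0, 8.0, 10.0])
y = np.array([10.0, 11.0, 14.0, 15.0])
est, var = ordinary_kriging(x, y, 5.0)
```

By symmetry the estimate here lands halfway between the neighbouring values, and the variance tells you how much to trust it; a complex climate model hands you neither of those guarantees.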
Now, as Steven Mosher is fond of pointing out, it’s all models. Even something as simple as
Force = Mass times Acceleration
is a model. So in that regard, Steven is right.
The problem is that there are models and there are models. Some models, like kriging, are both well-understood and well-behaved. We have analyzed and tested the model called “kriging” to the point where we understand its strengths and weaknesses, and we can use it with complete confidence.
Then there is another class of models with very different characteristics. These are called “iterative” models. They differ from models like kriging or F = M A because at each time step, the previous output of the model is used as the new input for the model. Climate models are iterative models: a climate model starts with the present weather and predicts where the weather will go at the next time step (typically a half hour).
Then that result, the prediction for a half hour from now, is taken as input to the climate model, and the next half-hour’s results are calculated. Do that about 17,500 times and you’ve simulated a year of weather … lather, rinse, and repeat enough times, and voila! You now have predicted the weather, half-hour by half-hour, all the way to the year 2100.
There are two very, very large problems with iterative models. The first is that errors tend to accumulate. If you calculate one half hour even slightly incorrectly, the next half hour starts with bad data, so it may be even further out of line, and the next, and the next, until the model goes completely off the rails. Figure 2 shows a number of runs from the Climateprediction climate model …
Figure 2. Simulations from climateprediction.net. Note that a significant number of the model runs plunge well below ice age temperatures … bad model, no cookies!
See how many of the runs go completely off the rails and head off into a snowball earth, or take off for stratospheric temperatures? That’s the accumulated error problem in action.
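A toy demonstration of that accumulation problem (emphatically not a climate model, just the standard chaotic logistic map) shows how a disturbance in the tenth decimal place of the starting state eventually swamps the whole simulation:

```python
# Two runs of the chaotic logistic map x -> r*x*(1-x), identical except
# for a 1e-10 difference in the starting value.
r = 3.9
a, b = 0.4, 0.4 + 1e-10
gaps = []
for step in range(100):
    a = r * a * (1 - a)
    b = r * b * (1 - b)
    gaps.append(abs(a - b))

# The two runs track each other closely at first, then diverge completely:
# the tiny initial error is amplified at every iteration.
```

After the first step the two runs still agree to nine decimal places; well before step 100 they bear no resemblance to each other at all. That is the off-the-rails behaviour in Figure 2, in miniature.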
The second problem with iterative models is that often we have no idea how the model got the answer. A climate model is so complex and is iterated so many times that the internal workings of the model are often totally opaque. As a result, suppose that we get three very different answers from three different runs. We have no way to say that one of them is more likely right than the other … except for the one tried and true method that is often used in climate science, viz:
If it fits our expectations, it is clearly a good, valid, solid gold model run. And if it doesn’t fit our expectations, obviously we can safely ignore it.
So how many “bad” reanalysis runs end up on the cutting room floor because the modeler didn’t like the outcome? Lots and lots, but how many nobody knows.
With that as a prelude, let’s look at Trenberth’s reanalysis “data”, which of course isn’t data at all … Figure 3 compares the ORAS4 reanalysis model results to the Levitus data:
Figure 3. ORAS4 reanalysis results for the 0-2000 metre layer (blue) versus Levitus data for the same layer. ORAS4 results are digitized from Figure 1. Note that the ORAS4 “data” prior to about 1980 has error bars from floor to ceiling, and so is of little use (see Figure 1). The data is aligned to their common start in 1958 (1958 = 0).
In Figure 3, the shortcomings of the reanalysis model results are laid bare. The computer model predicts a large drop in OHC from the volcanoes … which obviously didn’t happen. But instead of building on that reality of no OHC change after the eruptions, the reanalysis model has simply warped the real data so that it can show the putative drop after the eruptions.
And this is the underlying problem with treating reanalysis results as real data—they are nothing of the sort. All that the reanalysis model is doing is finding the most effective way to reshape the data to meet the fantasies, preconceptions, and errors of the modelers. Let me re-post the plot with which I ended my last post. This shows all of the various measurements of oceanic temperature, from the surface down to the deepest levels that we have measured extensively, two kilometers deep.
Figure 4. Oceanic temperature measurements. There are two surface measurements, from ERSST and ICOADS, along with individual layer measurements for three separate levels, from Levitus. NOTE—Figure 4 is updated after Bob Tisdale pointed out that I was inadvertently using smoothed data for the SSTs.
Now for me, anyone who looks at Figure 4 and claims that they can see the effects of the eruptions of Pinatubo and El Chichon and Mt. Agung in that actual data is hallucinating. There is no effect visible. Yes, there is a drop in SST during the year after Pinatubo … but the previous two drops were larger, and there is no drop during the year after El Chichon or Mt. Agung. In addition, temperatures rose more in the two years before Pinatubo than they dropped in the two years after. All that taken together says to me that it’s just random chance that Pinatubo has a small drop after it.
But the poor climate modelers are caught. The only way that they can claim that CO2 will cause the dreaded Thermageddon is to set the climate sensitivity quite high.
The problem is that when the modelers use a very high sensitivity like 3°C/doubling of CO2, they end up way overestimating the effect of the volcanoes. We can see this clearly in Figure 3 above, showing the reanalysis model results that Trenberth speciously claims are “data”. Using the famous Procrustean Bed as its exemplar, the model has simply modified and adjusted the real data to fit the modeler’s fantasy of high climate sensitivity. In a nutshell, the reanalysis model simply moved around and changed the real data until it showed big drops after the volcanoes … and this is supposed to be science?
Now, does this mean that all reanalysis “data” is bogus?
Well, the real problem is that we don’t know the answer to that question. The difficulty is that it seems likely that some of the reanalysis results are good and some are useless, but in general we have no way to distinguish between the two. This case of the ORAS4 reanalysis is an exception, because the volcanoes have highlighted the problems. But in many uses of reanalysis “data”, we have no way to tell if it is valid or not.
And as Trenberth et al. have proven, we certainly cannot depend on the scientists using the reanalysis “data” to make even the slightest pretense of investigating whether it is valid or not …
(In passing, let me point out one reason that computer climate models don’t do well at reanalyses—nature generally does edges and blotches, while climate models generally do smooth transitions. I’ve spent a good chunk of my life on the ocean. I can assure you that even in mid-ocean, you’ll often see a distinct line between two kinds of water, with one significantly warmer than the other. Nature does that a lot. Clouds have distinct edges, and they pop into and out of existence, without much in the way of “in-between”. The computer is not very good at that blotchy, patchy stuff. If you leave the computer to fill in the gap where we have no data between two observations, say 10°C and 15°C, the computer can do it perfectly—but it will generally do it gradually and evenly, 10, 11, 12, 13, 14, 15.
But when nature fills in the gap, you’re more likely to get something like 10, 10, 10, 14, 15, 15 … nature usually doesn’t do “gradually”. But I digress …)
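That smooth-ramp behaviour is easy to see in even the simplest interpolator. A sketch using NumPy’s linear interpolation, with the 10-to-15 numbers from the text:

```python
import numpy as np

# Two observations, at positions 0 and 5, with a gap between them.
positions = [0, 5]
observed = [10.0, 15.0]

# A linear interpolator fills the gap gradually and evenly ...
smooth_fill = np.interp(np.arange(6), positions, observed)
# ... whereas nature often fills it with a sharp front instead,
# something more like 10, 10, 10, 14, 15, 15.
```

The interpolator produces the even staircase 10, 11, 12, 13, 14, 15 every time; it has no way to know a front sits in the gap.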
Does this mean we should never use reanalyses? By no means. Kriging is an excellent example of a type of reanalysis which actually is of value.
What these results do mean is that we should stop calling the output of reanalysis models “data”, and that we should TEST THE REANALYSIS MODEL OUTPUTS EXTENSIVELY before use.
These results also mean that one should be extremely cautious when reanalysis “data” is used as the input to a climate model. If you do that, you are using the output of one climate model as the input to another climate model … which is generally a Very Bad Idea™ for a host of reasons.
In addition, in all cases where reanalysis model results are used, the exact same analysis should be done using the actual data. I have done this in Figure 3 above. Had Trenberth et al. presented that graph along with their results … well … if they’d done that, likely their paper would not have been published at all.
Which may or may not be related to why they didn’t present that comparative analysis, and to why they’re trying to claim that computer model results are “data” …
Regards to everyone,
w.
NOTES:
The Trenberth et al. paper identifies their deepest layer as from the surface to “total depth”. However, the reanalysis doesn’t have any changes below 2,000 metres, so that is their “total depth”.
DATA:
The data is from NOAA, except the ERSST and HadISST data, which are from KNMI.
The NOAA ocean depth data is here.
The R code to extract and calculate the volumes for the various Levitus layers is here.
Fred: “this has been demonstrated by applying economic theory (unit root) to climate. the effects of CO2 on temps are transient. The climate adjusts to eliminate them. ”
Could you expand on that a bit? Where has it been shown?
“However, the presence of a near unit root in the temp data give the misleading statistical appearance that the change is permanent.”
Don’t follow. If it’s not covered by the response to the previous question, could you explain?
Thx
Greg Goodman says:
May 12, 2013 at 9:57 am
Could you expand on that a bit? Where has it been shown?
=====
Can’t locate the paper. One of the authors was perhaps from the University of Tel Aviv, economics?
As I recall, the paper showed that it was not temperature that varied with CO2, but rather the rate of change once you differenced the data to correct for unit root.
Which would appear to support:
” In fact, if climate counteracts volcanoes it most likely counteracts CO2 too, which would lead to -ve f/b cancelling both.”
“As I recall, the paper showed that it was not temperature that varied with CO2, but rather the rate of change once you differenced the data to correct for unit root. ”
A unit-root test is basically a test for stationarity (a simplistic explanation being that a stationary series has a mean that is not drifting up or down in time, while a series with a unit root does drift). Temperature time series are autoregressive (the current value is strongly influenced by the previous one). Taking the difference of successive values, i.e. differencing, can often remove this. This is necessary when doing some data-processing techniques like the FFT. Something Grant “Tamino” Foster stupidly ignored in his recent disingenuous attempts to “school me”.
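The differencing point can be sketched numerically. In this hypothetical example a simulated random walk stands in for a near-unit-root temperature series; differencing it recovers the stationary shocks underneath:

```python
import numpy as np

rng = np.random.default_rng(0)
shocks = rng.standard_normal(5000)
walk = np.cumsum(shocks)   # a unit-root (random-walk) series: the mean drifts
diffs = np.diff(walk)      # first differences recover the stationary shocks

def lag1_autocorr(x):
    """Correlation between each value and the previous one."""
    x = x - x.mean()
    return (x[:-1] @ x[1:]) / (x @ x)

r_walk = lag1_autocorr(walk)   # close to 1: each value hugs the previous one
r_diff = lag1_autocorr(diffs)  # close to 0: differencing removed the drift
```

The raw walk looks like a persistent trend even though it is pure noise, which is exactly the “misleading statistical appearance that the change is permanent” quoted above; the differenced series shows no such illusion.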
However, unless I’m missing your point, I don’t see how this relates to volcanoes, CO2 and feedbacks.
Sure, it’s the rate of change of temp that relates to CO2; that was the subject of a recent thread here on WUWT.
Since dT/dt is a power term and the CO2 radiative effect is power/m2, that seems perfectly sensible. Both of these things are why I constantly say we should be studying the rate of change of temperature (or ice cover, for that matter) and not the simple time series.
So far I don’t see many either mainstream or outside picking up on that.
Lunar-solar influence on SST
http://climategrog.wordpress.com/2013/03/01/61/
Comparing _rate of change_ of land and sea temperatures
http://climategrog.wordpress.com/?attachment_id=219
rate of change of El Nino and Length of day
http://climategrog.wordpress.com/?attachment_id=136
rate of change of Arctic ice cover
http://climategrog.wordpress.com/2013/03/11/open-mind-or-cowardly-bigot/ddt_arctic_ice/
If we are interested in climate change we should be looking at _rate of change_ not the time series.
Somehow this whole process of models all the way down reminded me of this:
“… it’s rate of change of temp that relates to CO2 , that was the subject of a recent thread here on WUWT. ”
Wrong way around: what the recent thread discussed was the rate of change of CO2 being a function of SST. This ties in well, in both the short-term variations and the full Keeling MLO record since 1958.
Here is a plot I contributed to that discussion.
http://climategrog.wordpress.com/?attachment_id=207
I may have more detail on that shortly.
@ferd berple…wouldn’t the Gambler’s Fallacy be a perfect description for the hardcore AGW believers? I have a good understanding of the Gambler’s Fallacy.
@Greg Goodman…then the likely interaction should have something to do with a diminished cloud cover over the ocean at 6 years out? Could the fallout of the last particles draw other particles out of a region of the atmosphere and so lead to a clearer-than-normal atmosphere for a period of time?
Yes, I think something like that may be at play. IIRC the inverse reaction that is seen in stratospheric temps also show a similar rebound. I need to find a graph that shows that.
Couldn’t this be the smoking gun for why temperatures rose so high above average in the first place? Global warming has been volcano-induced. Whatever natural forces were in play, with a {slight boost?} from CO2, were then amplified by this volcanic aftereffect. The Pinatubo event, in particular, strikes with perfect timing: the 6-year atmospheric effect coincides with the solar rise after the minimum. So you have a ‘hot’ sun and a Windex-clear atmosphere for it to penetrate and cause the great El Niño of 1997/98. All of the heat from that event has been dissipating ever since. Isn’t that why the extra warmth in the northern Atlantic encompassed so much of Greenland as well as warming Europe for the 10 years after 1998?
Is the ‘game afoot’ with this line of reasoning?
“Couldn’t this be the smoking gun for why temperatures rose so high above average in the first place? Global warming has been volcano-induced. For what ever natural forces were in play, with a {slight boost?} from CO2, was then amplified by this volcanic aftereffect.”
I’ve considered that possibility but it would need substantive evidence. Climate’s ability to auto-correct by negative feedbacks seems a reasonable suggestion. That volcanoes actually eventually remove cloud seeding nuclei to leave a clearer atmosphere and hence cause a net warming is not impossible but would need clear evidence.
However, I think there is a strong possibility that negative feedback reaction to Mt Pinatubo did at least contribute to the size of the 1998 El Nino.
I should have said ‘add to’ instead of ‘cause’, regarding the effects on El Niño. Considering the “evidence” that CAGW believers are using, I would think that the volcano scenario offers a better foundation for explanation.