Guest Post by Willis Eschenbach
I was reading through the recent Trenberth paper on ocean heat content that’s been discussed at various locations around the web. It’s called “Distinctive climate signals in reanalysis of global ocean heat content”, paywalled, of course. [UPDATE: my thanks to Nick Stokes for locating the paper here.] Among the “distinctive climate signals” that they claim to find are signals from the massive eruptions of Mt. Pinatubo in mid-1991 and El Chichon in mid-1982. They show these claimed signals in my Figure 1 below, which is also Figure 1 in their paper.
ORIGINAL CAPTION: Figure 1. OHC integrated from 0 to 300 m (grey), 700 m (blue), and total depth (violet) from ORAS4, as represented by its 5 ensemble members. The time series show monthly anomalies smoothed with a 12 month running mean, with respect to the 1958–1965 base period. Hatching extends over the range of the ensemble members and hence the spread gives a measure of the uncertainty as represented by ORAS4 (which does not cover all sources of uncertainty). The vertical colored bars indicate a two year interval following the volcanic eruptions with a 6 month lead (owing to the 12 month running mean), and the 1997–98 El Niño event again with 6 months on either side. On lower right, the linear slope for a set of global heating rates (W m-2) is given.
I looked at that and I said “Whaaa???”. I’d never seen any volcanic signals like that in the ocean heat content data. What was I missing?
Well, what I was missing is that Trenberth et al. are using what is laughably called “reanalysis data”. But despite the name, reanalysis “data” isn’t data in any sense of the word. It is the output of a computer climate model masquerading as data.
Now, the basic idea of a “reanalysis” is not a bad one. If you have data with “holes” in it, if you are missing information about certain times and/or places, you can use some kind of “best guess” algorithm to fill in the holes. In mining, this procedure is quite common. You have spotty data about what is happening underground. So you use a kriging procedure employing all the available information, and it gives you the best guess about what is happening in the “holes” where you have no data. (Please note, however, that if you claim the results of your kriging model are real observations, if you say that the outputs of the kriging process are “data”, you can be thrown in jail for misrepresentation … but I digress, that’s the real world and this is climate “science” at its finest.)
The problems arise as you start to use more and more complex procedures to fill in the holes in the data. Kriging is straight math, and it gives you error bars on the estimates. But a global climate model is a horrendously complex creature, and gives no estimate of error of any kind.
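For readers who haven’t met kriging, here is a minimal sketch of the idea in Python. The “borehole” numbers, the Gaussian covariance model, and the sill and length parameters are all made up for illustration; real geostatistical work fits the covariance model (the variogram) from the data rather than assuming one. The point to notice is that the estimate comes with an error variance, and that variance balloons inside the data holes:

```python
import numpy as np

def ordinary_krige(x_obs, y_obs, x_new, sill=1.0, length=2.0):
    """1-D ordinary kriging with an (assumed) Gaussian covariance model.
    Returns the best linear unbiased estimate AND its error variance."""
    def cov(h):
        return sill * np.exp(-(h / length) ** 2)

    n = len(x_obs)
    # Covariance among observations, augmented with the unbiasedness
    # constraint (the Lagrange-multiplier row and column of ones).
    K = np.ones((n + 1, n + 1))
    K[:n, :n] = cov(np.abs(x_obs[:, None] - x_obs[None, :]))
    K[n, n] = 0.0

    est = np.empty(len(x_new))
    var = np.empty(len(x_new))
    for i, x0 in enumerate(x_new):
        k = np.ones(n + 1)
        k[:n] = cov(np.abs(x_obs - x0))
        w = np.linalg.solve(K, k)      # kriging weights plus multiplier
        est[i] = w[:n] @ y_obs         # the "best guess"
        var[i] = sill - w @ k          # the kriging (error) variance
    return est, var

# Sparse made-up observations with a large hole in the middle
x_obs = np.array([0.0, 1.0, 2.0, 8.0, 9.0, 10.0])
y_obs = np.array([10.0, 10.5, 11.0, 14.0, 14.5, 15.0])
x_new = np.array([1.0, 5.0])  # one point we sampled, one deep in the hole
est, var = ordinary_krige(x_obs, y_obs, x_new)
```

At the sampled point the estimate reproduces the observation exactly and the error variance is essentially zero; in the middle of the hole the estimate is a sensible in-between value, but the variance is large, and kriging tells you so. That honest error bar is exactly what a climate-model reanalysis does not give you.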
Now, as Steven Mosher is fond of pointing out, it’s all models. Even something as simple as
Force = Mass times Acceleration
is a model. So in that regard, Steven is right.
The problem is that there are models and there are models. Some models, like kriging, are both well-understood and well-behaved. We have analyzed and tested the model called “kriging”, to the point where we understand its strengths and weaknesses, and we can use it with complete confidence.
Then there is another class of models with very different characteristics. These are called “iterative” models. They differ from models like kriging or F = MA because at each time step, the previous output of the model is used as the new input for the model. Climate models are iterative models. A climate model, for example, starts with the present weather and predicts where the weather will go at the next time step (typically a half hour).
Then that result, the prediction for a half hour from now, is taken as input to the climate model, and the next half-hour’s results are calculated. Do that about 17,500 times (48 half-hour steps a day for 365 days), and you’ve simulated a year of weather … lather, rinse, and repeat enough times, and voila! You now have predicted the weather, half-hour by half-hour, all the way to the year 2100.
There are two very, very large problems with iterative models. The first is that errors tend to accumulate. If you calculate one half hour even slightly incorrectly, the next half hour starts with bad data, so it may be even further out of line, and the next, and the next, until the model goes completely off the rails. Figure 2 shows a number of runs from the Climateprediction climate model …
Figure 2. Simulations from climateprediction.net. Note that a significant number of the model runs plunge well below ice age temperatures … bad model, no cookies!
See how many of the runs go completely off the rails and head off into a snowball earth, or take off for stratospheric temperatures? That’s the accumulated error problem in action.
The second problem with iterative models is that often we have no idea how the model got the answer. A climate model is so complex and is iterated so many times that the internal workings of the model are often totally opaque. As a result, suppose that we get three very different answers from three different runs. We have no way to say that one of them is more likely right than the other … except for the one tried and true method that is often used in climate science, viz:
If it fits our expectations, it is clearly a good, valid, solid gold model run. And if it doesn’t fit our expectations, obviously we can safely ignore it.
So how many “bad” reanalysis runs end up on the cutting room floor because the modeler didn’t like the outcome? Lots and lots, but nobody knows how many.
With that as a prelude, let’s look at Trenberth’s reanalysis “data”, which of course isn’t data at all … Figure 3 compares the ORAS4 reanalysis model results to the Levitus data:
Figure 3. ORAS4 reanalysis results for the 0-2000 metre layer (blue) versus Levitus data for the same layer. ORAS4 results are digitized from Figure 1. Note that the ORAS4 “data” prior to about 1980 has error bars from floor to ceiling, and so is of little use (see Figure 1). The data is aligned to their common start in 1958 (1958 = 0).
In Figure 3, the shortcomings of the reanalysis model results are laid bare. The computer model predicts a large drop in OHC from the volcanoes … which obviously didn’t happen. But instead of building on that reality of no OHC change after the eruptions, the reanalysis model has simply warped the real data so that it can show the putative drop after the eruptions.
And this is the underlying problem with treating reanalysis results as real data—they are nothing of the sort. All that the reanalysis model is doing is finding the most effective way to reshape the data to meet the fantasies, preconceptions, and errors of the modelers. Let me re-post the plot with which I ended my last post. This shows all of the various measurements of oceanic temperature, from the surface down to the deepest levels that we have measured extensively, two kilometers deep.
Figure 4. Oceanic temperature measurements. There are two surface measurements, from ERSST and ICOADS, along with individual layer measurements for three separate levels, from Levitus. NOTE—Figure 4 is updated after Bob Tisdale pointed out that I was inadvertently using smoothed data for the SSTs.
Now for me, anyone who looks at Figure 4 and claims that they can see the effects of the eruptions of Pinatubo and El Chichon and Mt. Agung in that actual data is hallucinating. There is no effect visible. Yes, there is a drop in SST during the year after Pinatubo … but the previous two drops were larger, and there is no drop during the year after El Chichon or Mt. Agung. In addition, temperatures rose more in the two years before Pinatubo than they dropped in the two years after. All that taken together says to me that it’s just random chance that Pinatubo has a small drop after it.
But the poor climate modelers are caught. The only way that they can claim that CO2 will cause the dreaded Thermageddon is to set the climate sensitivity quite high.
The problem is that when the modelers use a very high sensitivity like 3°C/doubling of CO2, they end up way overestimating the effect of the volcanoes. We can see this clearly in Figure 3 above, showing the reanalysis model results that Trenberth speciously claims are “data”. Using the famous Procrustean Bed as its exemplar, the model has simply modified and adjusted the real data to fit the modeler’s fantasy of high climate sensitivity. In a nutshell, the reanalysis model simply moved around and changed the real data until it showed big drops after the volcanoes … and this is supposed to be science?
Now, does this mean that all reanalysis “data” is bogus?
Well, the real problem is that we don’t know the answer to that question. The difficulty is that it seems likely that some of the reanalysis results are good and some are useless, but in general we have no way to distinguish between the two. This case of Trenberth et al. is an exception, because the volcanoes have highlighted the problems. But in many uses of reanalysis “data”, we have no way to tell if it is valid or not.
And as Trenberth et al. have proven, we certainly cannot depend on the scientists using the reanalysis “data” to make even the slightest pretense of investigating whether it is valid or not …
(In passing, let me point out one reason that computer climate models don’t do well at reanalyses—nature generally does edges and blotches, while climate models generally do smooth transitions. I’ve spent a good chunk of my life on the ocean. I can assure you that even in mid-ocean, you’ll often see a distinct line between two kinds of water, with one significantly warmer than the other. Nature does that a lot. Clouds have distinct edges, and they pop into and out of existence, without much in the way of “in-between”. The computer is not very good at that blotchy, patchy stuff. If you leave the computer to fill in the gap where we have no data between two observations, say 10°C and 15°C, the computer can do it perfectly—but it will generally do it gradually and evenly, 10, 11, 12, 13, 14, 15.
But when nature fills in the gap, you’re more likely to get something like 10, 10, 10, 14, 15, 15 … nature usually doesn’t do “gradually”. But I digress …)
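A caricature of the difference, with made-up temperatures:

```python
import numpy as np

# Two known readings, 10 °C and 15 °C, with four missing points between.
x_known = np.array([0, 5])
y_known = np.array([10.0, 15.0])
x_all = np.arange(6)

# The computer's instinct: a smooth, even ramp across the gap.
computer_fill = np.interp(x_all, x_known, y_known)

# Nature's habit: two water masses separated by a sharp front.
nature_fill = np.where(x_all < 3, 10.0, 15.0)

print(computer_fill.tolist())  # [10.0, 11.0, 12.0, 13.0, 14.0, 15.0]
print(nature_fill.tolist())    # [10.0, 10.0, 10.0, 15.0, 15.0, 15.0]
```

Both fills honor the two observations perfectly; only one of them looks like the ocean.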
Does this mean we should never use reanalyses? By no means. Kriging is an excellent example of a type of reanalysis which actually is of value.
What these results do mean is that we should stop calling the output of reanalysis models “data”, and that we should TEST THE REANALYSIS MODEL OUTPUTS EXTENSIVELY before use.
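What would such testing look like? One simple, standard approach is holdout validation: hide some of the real observations from the gap-filling procedure, then score the filled field only against the points it never saw. A sketch, with a synthetic “truth” and plain linear interpolation standing in for the reanalysis (both are my own stand-ins, not anything from the Trenberth paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "truth": a gentle trend plus a sharp front (the kind nature likes).
x = np.arange(100)
truth = 10.0 + 0.02 * x + np.where(x > 60, 2.0, 0.0)

# Pretend we observe only 30 scattered points, and hold 10 of them back.
shuffled = rng.permutation(100)
obs_idx = np.sort(shuffled[:20])     # points the filling method may use
held_idx = np.sort(shuffled[20:30])  # points reserved for scoring only

# Stand-in for the reanalysis: fill the whole field from the observations.
filled = np.interp(x, obs_idx, truth[obs_idx])

# Score the fill ONLY at points it never saw.
rmse = np.sqrt(np.mean((filled[held_idx] - truth[held_idx]) ** 2))
```

The key is that the score comes from observations the filling procedure never touched. A reanalysis that had quietly warped the data to fit a preconception would fail exactly this kind of test.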
These results also mean that one should be extremely cautious when reanalysis “data” is used as the input to a climate model. If you do that, you are using the output of one climate model as the input to another climate model … which is generally a Very Bad Idea™ for a host of reasons.
In addition, in all cases where reanalysis model results are used, the exact same analysis should be done using the actual data. I have done this in Figure 3 above. Had Trenberth et al. presented that graph along with their results … well … if they’d done that, likely their paper would not have been published at all.
Which may or may not be related to why they didn’t present that comparative analysis, and to why they’re trying to claim that computer model results are “data” …
Regards to everyone,
w.
NOTES:
The Trenberth et al. paper identifies their deepest layer as from the surface to “total depth”. However, the reanalysis doesn’t have any changes below 2,000 metres, so that is their “total depth”.
DATA:
The data is from NOAA, except the ERSST and HadISST data, which are from KNMI.
The NOAA ocean depth data is here.
The R code to extract and calculate the volumes for the various Levitus layers is here.
Steven Mosher says: May 11, 2013 at 12:30 pm
“So when you compare Levitus ‘heat content’ with reanalysis ‘heat content’ you are not comparing observations with models. You are comparing two models. One has very few physical parameters for estimating heat content (Levitus) and the other, reanalysis, uses all the data and all the physics you know. So for example you’d use both the ARGO data and satellite data.
It’s not as simple as saying ‘here is the curve for a model, and here is a curve for observations.’ They are both modelled.”
But the models still differ wildly in complexity, enough so that one is an apple and the other an orange. I would trust the Levitus data much more than the output from a complex climate model, even if it corrects its erroneous output with real temperature data.
The fact that all measurements involve the application of theory does not imply that all models are created equal. That is a false argument.
lgl says: “b t w La Nina does not warm the Pacific either…”
I didn’t say the Pacific, lgl. My earlier comment reads “tropical Pacific”:
La Niñas can and do cause ocean heat content to warm in the tropical Pacific. The 1954-57, 1973-76, 1995/96 and 1998-01 La Niñas are highlighted in the following graph:
http://i40.tinypic.com/9l97wh.jpg
There’s a significant difference between the Pacific (50S-50N) and the tropical Pacific (24S-24N). With the latitudes you’re using, it appears you’re also capturing the 1988/89 shift in OHC in the extratropical North Pacific, lgl:
http://oi43.tinypic.com/fuqhja.jpg
And you’re including a portion of the South Pacific, which has so little source data before 2003 that it’s not worth plotting. All it provides is noise.
lgl says: “Well, Indian ocean is among the reasons I have no confidence in the ARGO data
http://climexp.knmi.nl/data/inodc_heat700_20-120E_-30-30N_na.png
Is the President of the Maldives in charge of some of the floats perhaps :)”
Thanks. That’s a remarkable shift there in the early 2000s. It’s similar to the shift in OHC in the Northern North Atlantic:
http://bobtisdale.files.wordpress.com/2012/10/4-northern-no-atl.png
The huge 2003-2004 jump in OHC (~10^23 J) is an artifact in both the Levitus data & reanalysis, due to the xbt/mbt – argo transition.
Willis, you wrote:
“We have analyzed and tested the model called “kriging”, to the point where we understand its strengths and weaknesses, and we can use it with complete confidence.”
Um. Speaking as a geologist who spent a lot of time and effort doing ore-reserve models, “it depends”. The devil is in the details, as always, and there are all kinds of things waiting to bite you in the ass, even if you are a careful and conscientious mining engineer. For one example, coarse gold and diamond deposits are notoriously hard to model, because the statistics of sampling are guaranteed to be lousy — for diamonds, you need to do pretty large-scale test-mining to get a half-reliable sample. Gold deposits aren’t much easier. So we ore-reserve guys are really careful and conservative when we deliver the model to the company — and remain nervous until mining is well underway.
Which doesn’t detract from your point, of course, that the climate modellers are almost insanely optimistic about what their mathematics can deliver.
Best regards, Pete Tillman
Professional geologist, amateur climatologist
As a real scientist I find this disgusting. They know it’s wrong, so it’s fraud. This should be investigated by the ORI (Office of Research Integrity, http://ori.dhhs.gov/). Please report them as you see fit.
Steven Mosher says:
May 11, 2013 at 12:30 pm
You can think of them like that if you wish, but a reanalysis done using an iterative climate model is as far from a Kalman Filter as you can get. That’s like saying that an elephant is “very much like” a hyrax. Yes, they are in the same family … but you don’t want to put a howdah on a hyrax, and you don’t want to trust a reanalysis to a climate model.
Duh. I said that in the head post, viz:
So why are you here to say exactly what I said all over again, and then act like you are revealing something to the unknowing?
Steven, I mentioned you in the head post because I knew you would show up and bring this “it is all models” bullshit with you. I guess that schtick must fool the rubes, since you use it enough to be totally predictable, here you are … and not only totally predictable, but totally wrong.
As I said above, there are models, and then there are models. You are asserting a false equivalence between say doing a reanalysis with the model we call kriging on the one hand, and doing a reanalysis with a climate model on the other hand. Your claim that it’s all models is like saying “house cats and lions are both felines, so they both must make excellent pets”. Yes, it’s all models … but in the world of models you are mistaking lions for house cats.
I don’t care if everything is a model, Steven. That doesn’t make all models interchangeable. And it assuredly doesn’t make all models valuable for some specific task.
Nonsense. One of them (Levitus) uses all the actual observations for estimating heat content.
The other reanalysis uses the observations plus all of the imaginary physics and tuned parameters that you can jam into the GIGO box known as a computer climate model. As a result, and as I showed above, we end up with total garbage out in the form of an imaginary temperature rise and fall involving volcanoes.
Steven, I had hoped that when you saw how this reanalysis ends up with results that have no relationship to reality, you might actually notice that not all reanalyses are created equal … but heck, I didn’t even get you to notice that not all models are created equal. For example, I’d trust my life to the model
Force = Mass times Acceleration
So would you, we do that every day. It is a model that is so good that we depend on it unconditionally.
But if you trust your life to a computer climate model, you’re mad. Not all models are useful, and in particular, even those that are useful for one thing might be worthless for another.
Now, I’ve heard your “everything is models” speech a couple dozen times now, and as I said in the head post, I agree with you. F=MA is a model. So is a climate model with a million lines of code.
But so what?
What on earth does that show about the suitability of a particular model for a particular task?
The part you don’t seem to have noticed is that despite the fact that they are all models, some models are valuable and some are crap, some models are good for one thing and bad for another, and in particular that iterative models have a host of problems that make their use highly problematic for things like a re-analysis.
Regards,
w.
The myriad potential causal variables, and their intercorrelations as determinants of climate, form a chaotic soup. This includes volcanic activity. There are some interesting papers and research regarding the detection of, and relationships between, “signals” within different chaotic systems. Most deal with communications. One of the more recent is:
G. K. Rohde (NRC Postdoctoral Research Associate), J. M. Nichols and F. Bucholtz, “Chaotic signal detection and estimation based on attractor sets: Applications to secure communications”, U.S. Naval Research Laboratory, Optical Sciences Division, Washington, D.C. 20375, USA. Received 21 September 2007; accepted 10 January 2008; published online 10 March 2008.
There are many more.
I continue to believe that predicting even some of the constituent causal variables within climate, let alone climate itself (other than at the grossest level, such as roughly timing the onset of glaciations from the Milankovitch cycles), is a fool’s errand, given the chaotic nature of the subject.
The warmists have the complexity of the subject on their side as they continue their “settled science” nonsensical obfuscation of the climate debate.
Excellent insight and a very instructional post Willis! Kudos to everyone that helped you on this.
Thinking about those temperature dips for the volcanoes, which the paper’s authors apparently use to bolster the claimed accuracy of their reconstructed temperatures …
If the model has parameters for volcanic aerosol forcing, then it seems likely to have other forcing parameters in the calculation as well.
And somehow, I doubt the paper’s authors included thunderstorms as a natural limiting mechanism on ocean heat, as you’ve ably demonstrated using Argo data.
Given that the paper’s model runs (re-analyzed data, seriously?) have ocean heat content rising, perhaps there is a CO2 forcing parameter included in their model? Releasing all of the code and mathematical formulas is really essential to understanding what Trenberth and others are modeling.
I do wonder, after reading through your post above, just how much of Trenberth’s increased ocean heat content comes from model forcings rather than actual temperatures.
David L. Hagen says:
May 11, 2013 at 12:36 pm
fredberple
Thanks for your comments on chaos.
Any suggestions then on why Singer’s results show declining variations with increasing run years?
============
I haven’t studied his results. At a guess, the most likely reason is that they are programmed to do so, or your interpretation is in error.
Clearly there is no physical reason why one would see declining variations in any single model run; it makes no physical sense. The longer you observe the ocean, the more likely you are to see a wave bigger than any you have seen before. The longer you record temperatures, the more likely you are to see a minimum or maximum that exceeds all you have seen before.
We see this effect all the time in climate science. Folks start recording weather, and a big storm comes along. Oh no, the world is at an end, it is bigger than anything ever seen before, humans must be the cause. Hardly. Much bigger storms have been seen before; only those people are dead and the records are buried out of sight. So we get nonsense reporting.
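The record-keeping point above is actually a theorem: in a stationary series of independent observations, the chance that observation number n sets a new record is 1/n, so records never stop arriving, they just thin out. The expected count after n observations is the harmonic sum 1 + 1/2 + … + 1/n. A quick simulation (my own illustration, not from the thread) confirms it:

```python
import random

random.seed(42)

def count_records(n):
    """Count the new all-time highs in n independent random observations."""
    best = float("-inf")
    records = 0
    for _ in range(n):
        x = random.random()
        if x > best:
            best = x
            records += 1
    return records

n, trials = 100, 2000
avg = sum(count_records(n) for _ in range(trials)) / trials

# Theory for a stationary series: expected records = 1 + 1/2 + ... + 1/n
harmonic = sum(1.0 / k for k in range(1, n + 1))  # about 5.19 for n = 100
```

Even with no trend at all, a century of records should contain about five “unprecedented” years. New records are not, by themselves, evidence of anything.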
More interesting on chaos predictions:
Arslan Basharat (Kitware Inc., Clifton Park, NY, USA; arslan.basharat@kitware.com) and Mubarak Shah (University of Central Florida, Orlando, FL, USA; shah@cs.ucf.edu), “Time Series Prediction by Chaotic Modeling of Nonlinear Dynamical Systems”.
pdtillman says:
May 11, 2013 at 2:35 pm
Thanks, Pete. The guys in the field, the engineers, the people who actually do the work on the ground, are the people I tend to listen to. Modeling an ore-body from limited information, where millions of dollars can be made or lost depending on the accuracy of your model, tends to focus a man’s mind wonderfully … one of the huge problems with scientists as opposed to engineers is that far too often the scientists have no skin in the game.
Your obvious understanding of the model we call kriging reinforces my point about different kinds of models. You are clear not only that kriging has strengths and weaknesses, but also about where most of the traps and pitfalls lie. You can recognize up front the kinds of situations where results can be less than optimal.
I’ve actually been quite surprised that some variety of kriging isn’t used more in climate science for those times when it’s desirable to have a complete data field. At least it wouldn’t create imaginary drops in OHC after volcanic eruptions …
Having said that, in general, I prefer to invent analysis methods that don’t require me to have complete datasets.
Regards,
w.
Willis writes “Not all models are useful”
This is an understated point. All global warming enthusiasts seem to agree that all models are wrong but some models are useful.
Then they seem to assume that their model (eg GISS for Gavin) is one of the useful ones. There appears to be no acceptance that their particular model of interest is a dud. It must be the “other ones” that aren’t useful.
A few weeks ago, WUWT had an article about the air portion of the re-analysis, “Global Warming Over Land Is Real: CU-Boulder, NOAA Study”
http://wattsupwiththat.com/2013/04/13/global-warming-over-land-is-real-cu-boulder-noaa-study/
The CU Boulder study is titled “Independent Confirmation of Global Land Warming without the Use of Station Temperatures” at http://onlinelibrary.wiley.com/doi/10.1002/grl.50425/pdf
On page 4 it describes the analysis method…
“We use the 20th Century Reanalysis (20CR), a physically-based state-of-the-art data assimilation system, to infer TL2m given only CO2, solar and volcanic radiative forcing agents; monthly-averaged sea surface temperature (SST) and sea ice concentration fields; and hourly and synoptic barometric pressure observations (from the International Surface Pressure Databank).”
Physically based? Given “only” CO2 and a few other things?
No wonder volcanoes and CO2 show up in the “data” – these things go into the “data”, which is nothing more than another model.
Mr. Steven Mosher, FerdBerple
You’re saying that models are okay because they are controlled in an ONGOING manner, based on ONGOING accumulation of data. You tweak the models as you go. This seems to raise other problems. If you are taking data all along as you go, why even bother with the models at all? I must guess why: Climactic dudes wish to build themselves FUTUREBOTS.
See FerdBerple as 6:49 AM on predicting the future. On to something there.
Anyway, Steve, what I’m thinking is that you’re hoping to get a machine to eventually guess the “FUTURE” for you, but there is too much data you would need to get; in fact, you would need all the data on everything in the universe and more. You just can’t do it. And if you somehow had the dang data, the curious might crunch it, make a hypothesis or two, and not think twice about building a dang FUTUREBOT in their own image.
Maybe if you could enter 10,000 years of data you could have something. But still, not the FUTURE. I don’t think that modelers understand their own limitations. Perhaps training in classical philosophy or logic would help put modeling in a realistic perspective.
Thanks for an interesting article. I looked up kriging to read more about the method. I find one curious anomaly in chart #4, and yes, I did hallucinate a few times back in the 60s. If you look approximately 6 years out past each eruption, the graph shows a large upward heat spike on the surface:ICOADS SST line. That spike following all three eruptions gains approximately 2.5°C from the point where the ICOADS line crosses the eruption event to the peak of the ICOADS line 6 years out. Is this just a coincidence? Also, there are 16 peaks in that time span, or slightly over 3.5 years between spikes, on the surface:ICOADS. It seems so regular, but what would cause that? The surface:ERSST closely follows the same pattern.
Berényi Péter says, May 11, 2013 at 2:29 pm:
“The huge 2003-2004 jump in OHC (~10^23 J) is an artifact in both the Levitus data & reanalysis, due to the xbt/mbt – argo transition.”
This is something that should’ve been addressed a lot more often, because it’s quite striking and clearly, as you say, nothing but an artifact of the ‘stitching together’ of the pre-ARGO and post-ARGO data-collating regimes. There is no way you can justify such a jump in global OHC during 2002-03, a year to a year-and-a-half after the three-year La Niña of 1998-2001. Where would such a massive pulse of extra heat to the ocean come from? There are no known globally scaled mechanisms outside the ENSO process that could account for it. And the TOA fluxes surely do not show anything unusual during the period in question; if anything, rather a drop in net incoming:
http://i1172.photobucket.com/albums/r565/Keyell/OverlappERBS-CERES-1_zpse75139e9.png
http://i1172.photobucket.com/albums/r565/Keyell/NettoTOAfluksCERES-1_zpsd5478e0d.png
http://i1172.photobucket.com/albums/r565/Keyell/BAMS-glTOA-flukserCERES-Kopi-1_zps6ef1fca6.png
Here’s the global OHC 100-700m from NOAA:
http://i1172.photobucket.com/albums/r565/Keyell/GlOHC100-700mvsLNampEN_zpsd7aec889.png
I’ve identified the noteworthy La Niña and El Niño events from 1970 to 2012.
Starting from the beginning, you can clearly see the heat-storing work done by the 1970-72 and 1973-76 La Niñas, only interrupted by the conspicuous and sudden drainage by the 1972/73 El Niño. After the La Niñas, there’s a little secondary El Niño drop and from then on it’s pretty flat going until the enormous El Niño of 1982/83 draws a huge amount of the accumulated deep heat up towards the surface of the ocean, from where a large part of it is released into the troposphere.
By this stage, the pattern is pretty clear – it’s mostly about the ENSO process.
Following the 1982/83 El Niño is a new sequence of La Niñas separated by a solitary El Niño. This time it’s the on-and-off 1983-86 La Niña and the severe 1988/89 La Niña that’s storing the heat, while the 1986-88 El Niño is draining in between. Had it not been for the El Niño 1982/83 pulling out so much heat ahead of this particular La Niña sequence, there would have been a clear step up in OHC during this period also, just as with the equivalent sequences before (1970-76) and after (1996-2001). Now the general rise is hard to spot. There is pretty much zero trend during the 20 years from 1976 to 1996. But the individual La Niñas and El Niños are still doing what they’re supposed to be doing.
The next sequence starts with the 1995-96 La Niña and follows through with the 1998-2001 event directly on the heels of the mighty 1997/98 El Niño. A new step change in general OHC level is established.
Now, in the chart above, I’ve deliberately adjusted the OHC down in 2002/03. The official curve shows an extra major upward shift during this year, a year with neutral ENSO conditions leading up to a secondary El Niño. It should not go up here. According to the observed distinct pattern, it should go flat and then somewhat down. There simply is no justification for a significant rise in mean level during this period.
From 2000/01 to 2007/08 global OHC once again proceeds more or less without further increase, until the latest La Niña sequence sets in by 2008. We have the same thing going all over again, only now the La Niñas don’t seem as powerful anymore. They seem to have lost steam. The buildup is there, but it’s less than in earlier times. Well, that is to say, if the ARGO data is directly comparable to the xbt/mbt data.
Here is global OHC 0-100m BTW:
http://i1172.photobucket.com/albums/r565/Keyell/GlOHC0-100m_zpsfa4c816b.png
A staircase if ever there was one.
Pretty much the entire accumulated 0-700m ocean heat during ‘the ARGO era’ (2003-13) is to be found within the red area on this map:
http://i1172.photobucket.com/albums/r565/Keyell/world-map-2_zpsf491f95c.png
Let’s call it ‘The Extended Indian-Pacific Warm Pool’. It is basically the heat reservoir of the ENSO process. This is where La Niñas deliver their solar-generated heat. It is also where El Niños draw their gigantic volumes of warm water from to spread out across the tropical central and east Pacific, and, after a distinctive few large and solitary ones, where the leftover heat is brought back when the circulation eventually turns.
You can clearly see the extension of the SPCZ in the South Pacific and the KOE in the North Pacific, both recipients of heat from similar oceanic conveyor systems. Interesting is the western (NW Indian Ocean) and especially the southern extension (S and W of Australia) of the heat-storing region.
Mind you, there is a big range in absolute accumulation of heat also within the red area. It is far from even. Most is in fact occurring in the central region, the actual tropical Indian-Pacific Warm Pool and even there, the West Pacific part (N of New Guinea) is by far the greatest contributor.
Anyway, the red area constitutes a little bit more than one fifth of the global ocean, the rest makes up a bit less than four fifths. Weighted against each other, it then comes out like this:
http://i1172.photobucket.com/albums/r565/Keyell/ARGOOHCtapvsvinst_zps43f1739c.png
You can easily see how the OHC evolution in the two opposing ‘basins’ of the global ocean tightly follows the NINO3.4 ups and downs, only the one in a direct fashion and the other in an inverted manner. So, nearly 80% of the global ocean is strongly cooling during ‘the ARGO era’. But this is more than offset by the prodigious accumulation in the extended Indian-Pacific Warm Pool region. Either way, it’s pretty hard to filter out a CO2 warming signal from this. If the magical molecule hasn’t somehow struck a special deal with the Warm Pool, that is …
Look at these two maps comparing annual global OHC anomalies in the year 2003 (starting with an El Niño) and the year 2012 (starting with a La Niña):
http://i1172.photobucket.com/albums/r565/Keyell/Varmeinnhold2003kart_zps3cb15251.png
Notice how there is a complete or near-complete anomaly reversal in most corners of the world between the two years, not just in the oceanic ENSO core region. And these are not even full-fledged ‘ENSO years’.
This is how the OHC (0-700m) has evolved globally in ‘the ARGO era’ (2003-13) when divided into subsets (area weighted to show relative significance):
Southern Extratropics (Pacific):
http://i1172.photobucket.com/albums/r565/Keyell/ARGO6_zpse2c52071.png
Southern Extratropics (Atlantic):
http://i1172.photobucket.com/albums/r565/Keyell/ARGO5_zps65b11868.png
Southern Extratropics (Indian):
http://i1172.photobucket.com/albums/r565/Keyell/ARGO4_zps4d4f7fb1.png
Northern Extratropics (Pacific+Atlantic+Arctic):
http://i1172.photobucket.com/albums/r565/Keyell/ARGO1_zps28dcd70f.png
Arctic Ocean:
http://i1172.photobucket.com/albums/r565/Keyell/ARGO7_zpsf49c1c91.png
(note that this is incorporated into the region above)
Tropics (WPa/EIn):
http://i1172.photobucket.com/albums/r565/Keyell/ARGO3_zps19795c14.png
Tropics (EPa/At/WIn):
http://i1172.photobucket.com/albums/r565/Keyell/ARGO2_zps94570136.png
Poems of Our Climate says:
May 11, 2013 at 10:05 pm
See FerdBerple at 6:49 AM on predicting the future. On to something there.
========
Thanks 6:39 AM
What I demonstrated in the very simple example I gave is that the future is not simply difficult to predict. Rather, it is impossible to predict from first principles under our current understanding of the physical world.
Our imagination assumes that the future is simply the present displaced in time, somewhere we can “arrive at” and thus predict. What the simple example of the dice shows is that this view of the future is nonsense.
The predictability we see in simple examples such as F = MA is a byproduct of nature’s ability to always select the least-energy path to the future, which suggests that nature knows something we don’t. However, this predictability quickly goes off the rails. Consider the three-body problem, for example.
Rather than trying to model the future from first principles, there is only one technique that has been shown to work when dealing with complexity: looking for patterns in it. Early humans learned to predict the future by studying the cycles in nature, long before they understood what drove those cycles.
We use this same technique to provide highly accurate predictions of the tides on earth for dozens, even hundreds of years into the future. These tides are extremely complex, much too complex to calculate from first principles. Yet we ignore this proven body of work when it comes to predicting climate.
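A toy version of that tide-table technique: fit known astronomical cycles to a stretch of past observations, then extrapolate. The constituent periods below are the real M2/S2 tidal periods, but the amplitudes and the “gauge” data are synthetic, invented just to show the method:

```python
import numpy as np

# Pattern-based prediction: fit known cycles to past data, extrapolate.
# M2 (lunar) and S2 (solar) are real tidal constituent periods in hours;
# everything else here is synthetic illustration.
PERIODS_H = [12.4206, 12.0]

def design_matrix(t):
    """Columns: constant, then cos/sin at each constituent frequency."""
    cols = [np.ones_like(t)]
    for p in PERIODS_H:
        w = 2 * np.pi / p
        cols += [np.cos(w * t), np.sin(w * t)]
    return np.column_stack(cols)

rng = np.random.default_rng(0)
t_obs = np.arange(0.0, 24 * 30, 1.0)            # one month of hourly data
true_obs = (1.2 * np.cos(2 * np.pi * t_obs / 12.4206)
            + 0.5 * np.sin(2 * np.pi * t_obs / 12.0))
obs = true_obs + rng.normal(0, 0.05, t_obs.size)  # noisy "tide gauge"

# Least-squares fit of the cycle amplitudes/phases to the observations.
coef, *_ = np.linalg.lstsq(design_matrix(t_obs), obs, rcond=None)

# Predict a couple of days one year ahead, far outside the fit window.
t_fut = np.arange(24 * 365, 24 * 365 + 48, 1.0)
pred = design_matrix(t_fut) @ coef
truth = (1.2 * np.cos(2 * np.pi * t_fut / 12.4206)
         + 0.5 * np.sin(2 * np.pi * t_fut / 12.0))
print(np.max(np.abs(pred - truth)))  # small: the cycles extrapolate cleanly
```

No first-principles fluid dynamics anywhere in that: the prediction rides entirely on the stability of the cycles, which is exactly the comment’s point.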
Willis>
Come on, you can do better. You’re generally good at thinking different. One can indeed see a signal from each volcano in the data as soon as you stop assuming the signal must be negative, as per official climate-science wisdom. Pinatubo clearly corresponds to a drop, but El Chichón and Agung both correspond to clear temperature increases.
goldminor: “If you look approximately 6 years out past each eruption, the graph shows a large upward heat spike on the surface ICOADS SST line. That spike following all three eruptions gains approximately 2.5C from the point where the ICOADS line crosses the eruption event to the peak of the ICOADS line 6 years out. Is this just a coincidence?”
Thank you. This is something I have been drawing attention to for a couple of years. I call it volcanic rebound.
I think there is evidence that climate feedbacks effectively recover the heat lost to the masking effects of ash/aerosols: the lower SST causes more of the available solar energy to be absorbed.
This ties in with Bob Tisdale’s hypothesis of the asymmetric effects of Nino/Nina and Willis’ tropical governor.
PS The corollary of that argument is that without volcanic cooling there is no need to posit +ve feedbacks to the known CO2 forcing. In fact, if climate counteracts volcanoes it most likely counteracts CO2 too, which would lead to -ve feedback cancelling both.
Frank says:
May 11, 2013 at 10:18 am
the re-analysis protocol forces the re-analysis output to return to observed data at places and times where we have data.
================
This is a form of the Gambler’s Fallacy. You are in effect assuming that our observed reality is “correct” and that the other possible realities predicted by the model are “wrong” and can thus be eliminated from consideration.
Just because the observed data matches one of the predictions of the model does not make any of the other possibilities less likely. In effect the protocol forces the data to ignore the other possibilities which leads to an incorrect estimation of the odds.
Think of it this way. We have a data point; call it 1900. Looking at 1901, we have three possibilities: temps go up, go down, or stay the same. We don’t have any data for 1901, but we do have data for 1902, and 1902 has the same temp as 1900.
Our model tells us, on this basis, that since 1900 = 1902, temps in 1901 must also have been unchanged. And on this basis we build a theory of how temperature changes.
But our reconstructed reality may be wrong. Temps might also have gone up or down in 1901, so our theory is based on faulty data, namely the assumption that 1901 = 1900 and 1901 = 1902. From this we conclude that temperature has low natural variability.
But in reality it is our model that has low variability. Depending on what actually happened in 1901, variability might be low or high; we simply cannot say with any degree of confidence. The reanalysis protocol, however, tells us quite the opposite.
The reanalysis tells us that variability is low, which gives us a false idea of the odds. The Gambler’s Fallacy.
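The variance-suppression effect of gap-filling is easy to demonstrate numerically. A toy Monte Carlo, with entirely synthetic anomalies: generate a temperature series, “lose” every second year, fill each gap by splitting the difference between its neighbours, and compare the apparent year-to-year variability with the true one:

```python
import random
import statistics

# Toy version of the 1900/1901/1902 argument: filling missing years by
# interpolating between neighbours shrinks the apparent variability.
# All numbers are synthetic; nothing here is real temperature data.
random.seed(42)
true_temps = [random.gauss(0, 1) for _ in range(1001)]  # annual anomalies

filled = list(true_temps)
for i in range(1, len(filled) - 1, 2):                  # "missing" years
    filled[i] = 0.5 * (true_temps[i - 1] + true_temps[i + 1])

def step_sd(series):
    """Standard deviation of year-to-year changes."""
    return statistics.stdev(b - a for a, b in zip(series, series[1:]))

print(step_sd(true_temps))  # true year-to-year variability (~1.41 here)
print(step_sd(filled))      # markedly smaller: interpolation smooths it away
```

The filled series is not wrong at any observed point, yet its year-to-year variability is roughly half the true value, which is the sense in which the filled “data” misstates the odds.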
Greg Goodman says:
May 12, 2013 at 9:07 am
In fact, if climate counteracts volcanoes it most likely counteracts CO2 too
========
This has been demonstrated by applying economic theory (unit-root analysis) to climate. The effects of CO2 on temps are transient; the climate adjusts to eliminate them. However, the presence of a near unit root in the temp data gives the misleading statistical appearance that the change is permanent.
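The near-unit-root point can be sketched with a deterministic AR(1) toy model. The persistence parameter and pulse size below are illustrative choices, not estimates from any real temperature series:

```python
# Sketch of the near-unit-root idea: an AR(1) process with phi close to 1
# forgets a one-off pulse only slowly, so over a short window the shift
# looks permanent even though it is transient. phi = 0.95 is an arbitrary
# illustrative value, not fitted to climate data.
PHI = 0.95

def ar1_response(pulse, steps):
    """Deterministic AR(1) decay x[t] = PHI * x[t-1] after a unit pulse."""
    x, path = pulse, []
    for _ in range(steps):
        path.append(x)
        x *= PHI
    return path

path = ar1_response(1.0, 200)
print(path[9])    # after 10 steps ~63% of the pulse remains: looks permanent
print(path[199])  # after 200 steps it is essentially gone: it was transient
```

On a window of a decade or two the response is barely distinguishable from a level shift, which is exactly the “misleading statistical appearance” the comment describes; only a much longer record reveals the decay.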