Why Reanalysis Data Isn't …

Guest Post by Willis Eschenbach

I was reading through the recent Trenberth paper on ocean heat content that’s been discussed at various locations around the web. It’s called “Distinctive climate signals in reanalysis of global ocean heat content”, paywalled, of course. [UPDATE: my thanks to Nick Stokes for locating the paper here.] Among the “distinctive climate signals” that they claim to find are signals from the massive eruptions of Mt. Pinatubo in mid-1991 and El Chichon in mid-1982. They show these claimed signals in my Figure 1 below, which is also Figure 1 in their paper.

ORIGINAL CAPTION: Figure 1. OHC integrated from 0 to 300 m (grey), 700 m (blue), and total depth (violet) from ORAS4, as represented by its 5 ensemble members. The time series show monthly anomalies smoothed with a 12 month running mean, with respect to the 1958–1965 base period. Hatching extends over the range of the ensemble members and hence the spread gives a measure of the uncertainty as represented by ORAS4 (which does not cover all sources of uncertainty). The vertical colored bars indicate a two year interval following the volcanic eruptions with a 6 month lead (owing to the 12 month running mean), and the 1997–98 El Niño event again with 6 months on either side. On lower right, the linear slope for a set of global heating rates (W m-2) is given.

I looked at that and I said “Whaaa???”. I’d never seen any volcanic signals like that in the ocean heat content data. What was I missing?

Well, what I was missing is that Trenberth et al. are using what is laughably called “reanalysis data”. But as the title says, reanalysis “data” isn’t data in any sense of the word. It is the output of a computer climate model masquerading as data.

Now, the basic idea of a “reanalysis” is not a bad one. If you have data with “holes” in it, if you are missing information about certain times and/or places, you can use some kind of “best guess” algorithm to fill in the holes. In mining, this procedure is quite common. You have spotty data about what is happening underground. So you use a kriging procedure employing all the available information, and it gives you the best guess about what is happening in the “holes” where you have no data. (Please note, however, that if you claim the results of your kriging model are real observations, if you say that the outputs of the kriging process are “data”, you can be thrown in jail for misrepresentation … but I digress, that’s the real world and this is climate “science” at its finest.)
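To make the kriging idea concrete, here is a minimal one-dimensional sketch of simple kriging. The exponential covariance function, its parameters, and the “borehole” values are all invented for illustration; real geostatistical work fits the covariance (the variogram) to the data first. The point is the last line of the function: kriging hands back an error variance along with every estimate.

```python
import numpy as np

def simple_kriging(xs, ys, x0, mean, sigma2=1.0, length=2.0):
    """Simple-kriging estimate and error variance at x0.

    Assumes a known mean and an exponential covariance
    C(h) = sigma2 * exp(-|h| / length); both are illustrative choices.
    """
    cov = lambda h: sigma2 * np.exp(-np.abs(h) / length)
    K = cov(xs[:, None] - xs[None, :])  # covariance among the observations
    k = cov(xs - x0)                    # covariance between observations and target
    w = np.linalg.solve(K, k)           # kriging weights
    estimate = mean + w @ (ys - mean)
    error_var = sigma2 - w @ k          # this is what gives you the error bars
    return estimate, error_var

# Fill the "hole" at x = 2.5 between four spotty readings (invented numbers):
xs = np.array([0.0, 1.0, 4.0, 5.0])
ys = np.array([10.0, 10.5, 14.5, 15.0])
est, var = simple_kriging(xs, ys, x0=2.5, mean=12.5)
```

At an observed location the error variance collapses to zero; in the middle of a data hole it grows toward the full field variance. That built-in honesty about uncertainty is exactly what a black-box model lacks.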

The problems arise as you start to use more and more complex procedures to fill in the holes in the data. Kriging is straight math, and it gives you error bars on the estimates. But a global climate model is a horrendously complex creature, and gives no estimate of error of any kind.

Now, as Steven Mosher is fond of pointing out, it’s all models. Even something as simple as

Force = Mass times Acceleration 

is a model. So in that regard, Steven is right.

The problem is that there are models and there are models. Some models, like kriging, are both well-understood and well-behaved. We have analyzed and tested the model called “kriging” to the point where we understand its strengths and weaknesses, and we can use it with complete confidence.

Then there is another class of models with very different characteristics. These are called “iterative” models. They differ from models like kriging or F = M A because at each time step, the previous output of the model is used as the new input for the model. Climate models are iterative models. A climate model, for example, starts with the present weather and predicts where the weather will go at the next time step (typically a half hour).

Then that result, the prediction for a half hour from now, is taken as input to the climate model, and the next half-hour’s results are calculated. Do that about 9,000 times, and you’ve simulated a year of weather … lather, rinse, and repeat enough times, and voila! You now have predicted the weather, half-hour by half-hour, all the way to the year 2100.
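The feedback loop described above is easy to sketch. In the toy model below, the per-step error and the starting value are invented for illustration; the point is only that feeding each output back in as the next input lets even a 0.1% per-step error compound into nonsense:

```python
def iterate(state, steps, growth=1.001):
    """Step a state forward, feeding each output back in as the next input.

    The 0.1% per-step multiplicative error (growth=1.001) is an invented
    illustration, not a number from any real climate model.
    """
    for _ in range(steps):
        state = state * growth
    return state

start = 15.0                       # say, a global temperature in deg C
year = iterate(start, steps=9000)  # ~9,000 half-hour steps, as above
# After one simulated year the 0.1% error has compounded roughly 8,000-fold.
```

A model with a compensating error of the opposite sign would drift just as badly in the other direction, which is why unconstrained runs fan out in both directions.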

There are two very, very large problems with iterative models. The first is that errors tend to accumulate. If you calculate one half hour even slightly incorrectly, the next half hour starts with bad data, so it may be even further out of line, and the next, and the next, until the model goes completely off the rails. Figure 2 shows a number of runs from the climateprediction.net climate model …

Figure 2. Simulations from climateprediction.net. Note that a significant number of the model runs plunge well below ice age temperatures … bad model, no cookies!

See how many of the runs go completely off the rails and head off into a snowball earth, or take off for stratospheric temperatures? That’s the accumulated error problem in action.

The second problem with iterative models is that often we have no idea how the model got the answer. A climate model is so complex and is iterated so many times that the internal workings of the model are often totally opaque. As a result, suppose that we get three very different answers from three different runs. We have no way to say that one of them is more likely right than the other … except for the one tried and true method that is often used in climate science, viz:

If it fits our expectations, it is clearly a good, valid, solid gold model run. And if it doesn’t fit our expectations, obviously we can safely ignore it.

So how many “bad” reanalysis runs end up on the cutting room floor because the modeler didn’t like the outcome? Lots and lots, but how many nobody knows.

With that as a prelude, let’s look at Trenberth’s reanalysis “data”, which of course isn’t data at all … Figure 3 compares the ORAS4 reanalysis model results to the Levitus data:

Figure 3. ORAS4 reanalysis results for the 0-2000 metre layer (blue) versus Levitus data for the same layer. ORAS4 results are digitized from Figure 1. Note that the ORAS4 “data” prior to about 1980 has error bars from floor to ceiling, and so is of little use (see Figure 1). The data is aligned to their common start in 1958 (1958 = 0).

In Figure 3, the shortcomings of the reanalysis model results are laid bare. The computer model predicts a large drop in OHC from the volcanoes … which obviously didn’t happen. But instead of building on that reality of no OHC change after the eruptions, the reanalysis model has simply warped the real data so that it can show the putative drop after the eruptions.

And this is the underlying problem with treating reanalysis results as real data—they are nothing of the sort. All that the reanalysis model is doing is finding the most effective way to reshape the data to meet the fantasies, preconceptions, and errors of the modelers. Let me re-post the plot with which I ended my last post. This shows all of the various measurements of oceanic temperature, from the surface down to the deepest levels that we have measured extensively, two kilometers deep.

Figure 4. Oceanic temperature measurements. There are two surface measurements, from ERSST and ICOADS, along with individual layer measurements for three separate levels, from Levitus. NOTE—Figure 4 is updated after Bob Tisdale pointed out that I was inadvertently using smoothed data for the SSTs.

Now for me, anyone who looks at Figure 4 and claims that they can see the effects of the eruptions of Pinatubo and El Chichon and Mt. Agung in that actual data is hallucinating. There is no effect visible. Yes, there is a drop in SST during the year after Pinatubo … but the previous two drops were larger, and there is no drop during the year after El Chichon or Mt. Agung. In addition, temperatures rose more in the two years before Pinatubo than they dropped in the two years after. All that taken together says to me that it’s just random chance that Pinatubo has a small drop after it.

But the poor climate modelers are caught. The only way that they can claim that CO2 will cause the dreaded Thermageddon is to set the climate sensitivity quite high.

The problem is that when the modelers use a very high sensitivity like 3°C/doubling of CO2, they end up way overestimating the effect of the volcanoes. We can see this clearly in Figure 3 above, showing the reanalysis model results that Trenberth speciously claims are “data”. Using the famous Procrustean Bed as its exemplar, the model has simply modified and adjusted the real data to fit the modeler’s fantasy of high climate sensitivity. In a nutshell, the reanalysis model simply moved around and changed the real data until it showed big drops after the volcanoes … and this is supposed to be science?

Now, does this mean that all reanalysis “data” is bogus?

Well, the real problem is that we don’t know the answer to that question. The difficulty is that it seems likely that some of the reanalysis results are good and some are useless, but in general we have no way to distinguish between the two. This ORAS4 case is an exception, because the volcanoes and the comparison with the Levitus observations have highlighted the problems. But in many uses of reanalysis “data”, we have no way to tell if it is valid or not.

And as Trenberth et al. have proven, we certainly cannot depend on the scientists using the reanalysis “data” to make even the slightest pretense of investigating whether it is valid or not …

(In passing, let me point out one reason that computer climate models don’t do well at reanalyses—nature generally does edges and blotches, while climate models generally do smooth transitions. I’ve spent a good chunk of my life on the ocean. I can assure you that even in mid-ocean, you’ll often see a distinct line between two kinds of water, with one significantly warmer than the other. Nature does that a lot. Clouds have distinct edges, and they pop into and out of existence, without much in the way of “in-between”. The computer is not very good at that blotchy, patchy stuff. If you leave the computer to fill in the gap where we have no data between two observations, say 10°C and 15°C, the computer can do it perfectly—but it will generally do it gradually and evenly, 10, 11, 12, 13, 14, 15.

But when nature fills in the gap, you’re more likely to get something like 10, 10, 10, 14, 15, 15 … nature usually doesn’t do “gradually”. But I digress …)
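That smooth, even fill is exactly what linear interpolation produces; a hypothetical one-liner makes the contrast plain:

```python
# The computer's version of filling the gap between 10 C and 15 C: a smooth ramp.
smooth = [10.0 + 5.0 * i / 5 for i in range(6)]   # [10, 11, 12, 13, 14, 15]

# Nature's version, per the post: flat water masses with a sharp front between them.
blotchy = [10.0, 10.0, 10.0, 14.0, 15.0, 15.0]
```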

Does this mean we should never use reanalyses? By no means. Kriging is an excellent example of a type of reanalysis which actually is of value.

What these results do mean is that we should stop calling the output of reanalysis models “data”, and that we should TEST THE REANALYSIS MODEL OUTPUTS EXTENSIVELY before use.

These results also mean that one should be extremely cautious when reanalysis “data” is used as the input to a climate model. If you do that, you are using the output of one climate model as the input to another climate model … which is generally a Very Bad Idea™ for a host of reasons.

In addition, in all cases where reanalysis model results are used, the exact same analysis should be done using the actual data. I have done this in Figure 3 above. Had Trenberth et al. presented that graph along with their results … well … if they’d done that, likely their paper would not have been published at all.

Which may or may not be related to why they didn’t present that comparative analysis, and to why they’re trying to claim that computer model results are “data” …

Regards to everyone,

w.

NOTES:

The Trenberth et al. paper identifies their deepest layer as from the surface to “total depth”. However, the reanalysis doesn’t have any changes below 2,000 metres, so that is their “total depth”.

DATA:

The data is from NOAA, except the ERSST and HadISST data, which are from KNMI.

The NOAA ocean depth data is here.

The R code to extract and calculate the volumes for the various Levitus layers is here.

113 Comments
Admin
May 11, 2013 7:43 am

Thanks Willis for a well written and well illustrated educational essay. Steve McIntyre often points out cases where adverse data is not reported in paleo reconstructions. It should probably be SOP that when model data is presented, the adverse model runs are reported too. I think that would be enlightening. We can start with the IPCC.

DirkH
May 11, 2013 8:17 am

lgl says:
May 11, 2013 at 5:16 am
“No you are not hallucinating. The change really goes negative after all three major eruptions (and the latest strong Ninas)”
The time series of Levitus 0-2000m and ORAS4 still look wildly different after the eruptions. For instance, after El Chichon Levitus goes up while ORAS4 goes down.

ferdberple
May 11, 2013 8:20 am

phlogiston says:
May 11, 2013 at 7:12 am
Clouds must be attractors.
================
Assuming clouds are attractors yields results that somewhat match clouds. However, this doesn’t tell us what gives rise to the attractor.
Why, when one sails on the ocean, is there a very distinct line between runoff from the land and the ocean waters, many many miles from the land? Why hasn’t this long since mixed?
It seems very curious to me that we have a whole bunch of scientists pretending to know a whole lot about the physical world, who apparently aren’t able to explain the clouds in the sky or the water in the oceans, but feel it their duty to tell everyone else how to act.

Editor
May 11, 2013 8:30 am

Willis Eschenbach says: “I still don’t see any volcanic effect, though. The drop post 1991 is absolutely bog-standard and indistinguishable from half-a-dozen other such drops in the record.”
The drop in 1991 does not appear in the East Pacific data due to the strength of the ENSO signal:
http://oi47.tinypic.com/hv8lcx.jpg
But the effects of Mount Pinatubo show up plain as day in the Atlantic, Indian and West Pacific data, especially when you consider there was a series of El Niños then:
http://oi49.tinypic.com/29le06e.jpg
And with that in mind, it really does make its presence known in the monthly global data. There should’ve been a hump in the 1990s similar to the period from 2002 to 2007 due to the string of secondary El Niños from 1991/92 to 1994/1995:
http://bobtisdale.files.wordpress.com/2013/05/01-global.png
But you’re right—it’s indistinguishable from the other noise in the annual global data.
Regards

Ummer
May 11, 2013 8:32 am

Climate Hoaxers… what won’t you do!

May 11, 2013 8:41 am

“Do we even know why this happens? Why does water clump together to form clouds? Why doesn’t it mix evenly with the air to form an even haze across the sky?”
A discrete nucleation site and surface tension create clumping on a small scale. Rising and falling parcels of air create the boundaries. Ultimately, though, the boundaries are created by the non-homogeneous surface of the earth. The earth is a “blotchy” radiator.

Colorado Wellington
May 11, 2013 8:41 am

Think of a drug company using a similar data reanalysis of their previous clinical trials data to prove the beneficial effects of ingesting one of their products. And think that they would argue that changes in dosage affected the outcome in patients similarly to the argument about the effect of the Pinatubo, El Chichon and Mt. Agung eruptions on ocean heat content. And think they would present their paper and seek approval of their drug.

Editor
May 11, 2013 9:04 am

lgl says: “No Bob, the 1997/98 El Nino heated the ocean. That heat is found two years later in the 100-700m layer, which has misled you to believe La Nina is heating the ocean.
http://virakkraft.com/SST-SSST.png”
My apologies, lgl. I was apparently having trouble reading the 0-100 meter data without having had a cup of coffee this morning.
But a clarification: La Niñas can and do cause ocean heat content to warm in the tropical Pacific. The 1954-57, 1973-76, 1995/96 and 1998-01 La Niñas are highlighted in the following graph:
http://i40.tinypic.com/9l97wh.jpg
On the other hand, looking at ARGO era data, the ocean heat content of the oceans remote to the tropical Pacific can warm in response to El Niños and cool due to La Niñas, as in the tropical North Atlantic:
http://oi42.tinypic.com/2liwuix.jpg
Then there’s the Indian Ocean during the ARGO era, which warms during El Niño events but doesn’t cool during La Niña events:
http://bobtisdale.files.wordpress.com/2013/03/20-argo-era-indian-ohc-v-nino3-4.png
Regards

Justthinkin
May 11, 2013 9:07 am

How the heck can you reanalyze something that never existed?

May 11, 2013 9:24 am

“Then that result, the prediction for a half hour from now, is taken as input to the climate model, and the next half-hour’s results are calculated. Do that about 9,000 times, and you’ve simulated a year of weather … lather, rinse, and repeat enough times, and voila! You now have predicted the weather, half-hour by half-hour, all the way to the year 2100.
There are two very, very large problems with iterative models. The first is that errors tend to accumulate. If you calculate one half hour even slightly incorrectly, the next half hour starts with bad data, so it may be even further out of line, and the next, and the next, until the model goes completely off the rails. Figure 2 shows a number of runs from the Climateprediction climate model …
##############
First off, I’m not convinced by the paper Willis discusses, but there is a potential misunderstanding with reanalysis data that bears some looking at.
Reanalysis does not go off the rails as Willis suggests. The step you are missing is called data assimilation.
For the model at issue here see the following
http://climatedataguide.ucar.edu/guidance/oras4-ecmwf-ocean-reanalysis
http://onlinelibrary.wiley.com/doi/10.1002/qj.2063/abstract
In simple terms it works like this. You take a weather model, in this case ECMWF, and you iterate forward using a physics model one time step. Then, to avoid the problem Willis mentions above, you use data assimilation. So let’s say you have the value of the air temperature at midnight and 4 AM. At midnight it’s 10C and at 4 AM it’s 9C.
You run your model to fill in the temporal gap. At 4 AM model time your model says it’s 9.2C. Do you let it run wild and accumulate errors? Nope, that’s where data assimilation comes in. You use every bit of observation data you have to keep the model on track.
So you don’t get the kind of “runaway error” that Willis points out. Of course, you might get different types of errors, you always will, but not the accumulation type errors as described in the post.
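The nudging idea can be sketched in a few lines. The 0.3 blend weight below is invented for illustration; real schemes like the ECMWF’s derive the weighting from the estimated errors of model and observation:

```python
def nudge(model_value, observation, weight=0.3):
    """Pull a drifting model state partway back toward an observation.

    The blend weight of 0.3 is an invented illustration; a Kalman filter
    computes it from the error estimates of model and observation.
    """
    return model_value + weight * (observation - model_value)

# The model drifted to 9.2 C where the 4 AM observation says 9.0 C:
corrected = nudge(9.2, 9.0)   # pulled back toward 9.0; drift is damped each step
```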
But please folks, if you have issues with reanalysis data, please be consistent.
If you have issues with weather models and reanalysis data, please contact Ryan Maue and look closely at the chart below… NCEP is reanalysis data
http://wattsupwiththat.com/2013/02/28/february-2013-global-surface-temperature-at-normal/
Still have issues? Were you on that thread warning about runaway error?
Still wary about reanalysis data? Contact the guys who make this chart. Reanalysis data
http://ocean.dmi.dk/arctic/meant80n.uk.php
Still have issues? Contact the guys who wrote this paper. They used reanalysis data
http://surfacestations.org/fall_etal_2011.htm
##########################
for more details see here
http://www.ecmwf.int/products/forecasts/ocean/oras4_documentation/Data_assim.html

noaaprogrammer
May 11, 2013 9:58 am

What about the effects of other volcanoes that have erupted over the same time interval? Shouldn’t their algorithm show similar effects for those eruptions as well?

May 11, 2013 10:01 am

Is this the same problem as with the recent Specific Humidity paper, and why it correlates so well to global temperatures?
Calculations, not data: I don’t mind those IF the equations don’t keep changing. What you are highlighting is that the equations are tweaked going forward, not just that the input data uses the results of the last calculation.
Here’s something: we see the Scenarios from 1988 with the Actuals overlaid. Would it not be far more appropriate (and disturbing for the warmists) if we were to white out the Scenarios from 1988 to the present, leaving only the Scenario tracks of the future? Then we would see, for example, that to get the end-result of Scenario A, we would have to go from “here to there” in 87 years, a sudden uplift that is in no discussed Scenario?
Why Scenarios and other model runs continue to show original options when observation has eliminated many of them, I do not know. Is it like the wife of a dullard who keeps telling you she could have been the wife of a Carnegie-Mellon she met at college but went with her heart (even though we suspect he had no real interest in her)?
Most of climatological strutting is just that, repeated statements about how smart someone is, not about how good what he did.

lgl
May 11, 2013 10:17 am

Thanks Bob, no problem
Well, Indian ocean is among the reasons I have no confidence in the ARGO data
http://climexp.knmi.nl/data/inodc_heat700_20-120E_-30-30N_na.png
Is the President of the Maldives in charge of some of the floats perhaps 🙂

Frank
May 11, 2013 10:18 am

Willis: Good detective work. The abstract says: “Volcanic eruptions and El Niño events are identified [in this reanalysis] as sharp cooling events.” The observed changes in ocean heat content after the Pinatubo eruption do not show a “sharp cooling event”. The authors don’t address this glaring inconsistency between observations and their re-analysis.
Your criticism of re-analyses is somewhat inaccurate. As you note, with time errors gradually creep into the output from climate/weather models, but the re-analysis protocol forces the re-analysis output to return to observed data at places and times where we have data. Surface temperatures reported by the re-analysis, for example, are presumably properly constrained to match SSTs reported by satellites. The question is whether the reanalysis output is totally out of touch with reality in most of the ocean because there aren’t enough observations to properly constrain the reanalysis. This appears to be the case after AND before Pinatubo. The re-analysis introduced a huge warming in the years before Pinatubo and cooling in the years after Pinatubo that is apparent in the Levitus observations. This presumably is taking place at locations where we don’t have observational data. However, the re-analysis was performed with and without the massive amount of data added by Argo after 2003 and this additional data

Cees de Valk
May 11, 2013 10:26 am

Willis, I don’t think we really disagree. What I meant with “constrain” is fix the model state to a suitably small neighborhood of the real state of the ocean (given a certain resolution, etc.). That is why I stressed the importance of having sufficient coverage with relevant measurements. You can always devise a “robust” data-assimilation method which smears out data to something which is insensitive to the exact input data, initial condition, etc. But that does not mean that the problem has been solved. The ocean is still largely a black box and will probably remain so for some time.

Paul Vaughan
May 11, 2013 11:06 am

“Deep ocean heat uptake is linked to wind variability” / “surface wind variability is largely responsible for the changing ocean heat vertical distribution” / “changes in the atmospheric circulation are instrumental for the penetration of the warming into the ocean, although the mechanisms at work are still to be established” / “changes in surface winds play a major role, and although the exact nature of the wind influence still needs to be understood, the changes are consistent with the intensification of the trades in subtropical gyres” / “changes in the atmospheric circulation play an important role in the heat uptake”
http://people.oregonstate.edu/~schmita2/ATS421-521/2013/papers/balmaseda13grl_inpress.pdf
I appreciate Trenberth’s appreciation of nature and I can agree with Trenberth on at least that much.
Improvement of the narrative will hinge on better awareness of the following:
a) The interannual variations aren’t spatially uniform and the coupling isn’t unidirectional, so correlations necessarily (in the strictest mathematical sense) won’t be linear across all possible aggregation criteria and pairs of variables.
b) What’s controlling the changepoints illuminated by Figure 2 here?
Trenberth, K.E.; & Stepaniak, D.P. (2001). Indices of El Nino evolution.
http://www.cgd.ucar.edu/staff/trenbert/trenberth.papers/i1520-0442-014-08-1697.pdf
Answer:
systematic solar heliographic asymmetry (N-S) timing shifts relative to coherently shifting solar activity (N+S) & volatility (|N-S|) timing:
http://img13.imageshack.us/img13/5691/911k.gif (green = blend)
http://img268.imageshack.us/img268/8272/sjev911.png
http://img267.imageshack.us/img267/8476/rrankcmz.png (see Mursula & Zieger (2001))
http://img829.imageshack.us/img829/2836/volcano911.png (volcanic indices in italics; Cy = Chandler wobble y phase; SOST = southern ocean surface temperature; ISW = integral of solar wind)
Supplementary:
http://img201.imageshack.us/img201/4995/sunspotarea.png – Note well: N+S ~= 3 |N-S| (Remember that heliospheric current sheet tilt angle varies with solar cycle phase.)
http://tallbloke.files.wordpress.com/2013/03/scd_sst_q.png
___
“[UPDATE: my thanks to Nick Stokes for locating the paper here.]”
Nick or anyone else:
Do you have a link to the supplementary material (S)?
“There is also a net poleward heat transport during the discharge phase of ENSO as can be seen by the exchange of heat between tropics and extratropics, which is likely favored by the intensification of the trades after 1998 (Figure S04).”
“After 1998, there was a rapid exchange of heat between the regions above and below 700 m (Figure S01 in suplementary material).”
“[…] changes in the subtropical gyres resulting from changes of the trade winds in the tropics (Figure S04), but whether as low frequency variability or a longer term trend remains an open question”

lgl
May 11, 2013 11:44 am

Bob
BTW, La Nina does not warm the Pacific either. Extending to -50/50, the two-year lag and step warming after the 87 and 98 Ninos become very visible.
http://virakkraft.com/Pac-OHC.png

LamontT
May 11, 2013 12:20 pm

::sigh::
I wish that people would realize that a computer is a wonderful tool for modeling a fairly simple closed system such as a car engine or a computer circuit but that it isn’t a good tool to accurately model a complex system.

May 11, 2013 12:24 pm

Thanks, Willis. Good work!
Reanalysis is not data and should not be used as such.
Reanalysis is torturing the data until it confesses what you want it to say.

May 11, 2013 12:30 pm

Since reanalysis is used by several well-known skeptics, I thought it might be useful for folks to read some informative material.
https://reanalyses.org/ocean/overview-current-reanalyses
For folks who have worked in areas like target prediction, you can think of reanalysis-type systems as being very much like Kalman filters. Of course not perfect, but we use them to shoot down bad guys. You will find differences between an “observational” data set, say Levitus, and a reanalysis output, in part because the reanalysis data can be used to correct for spurious errors and data coverage issues in “observational” datasets.
For example, we are all aware of the problems with Argo:
http://wattsupwiththat.com/2011/12/31/krige-the-argo-probe-data-mr-spock/
In other words, Levitus “heat content” really is not observed. It is modelled.
So when you compare Levitus “heat content” with reanalysis “heat content” you are not comparing observations with models. You are comparing two models. One has very few physical parameters for estimating heat content (Levitus), and the other, reanalysis, uses all the data and all the physics you know. So, for example, you’d use both the ARGO data and satellite data.
It’s not as simple as saying “here is the curve for a model, and here is a curve for observations.”
They are both modelled.

David L. Hagen
May 11, 2013 12:36 pm

ferdberple
Thanks for your comments on chaos.
Any suggestions then on why Singer’s results show declining variations with increasing run years?
Geological evidence of glacial versus interglacial temperatures show a non-uniform distribution, suggesting colder temperatures during glacial periods are more common. Temperature was both higher and lower with higher CO2 levels and the temperature appears to vary between warmer and colder bounds.
To me that indicates missing physics and missing feedbacks in the models.
Proper recognition of Milankovitch cycles with Hurst-Kolmogorov dynamics appears to improve predictability over multiple scales. See:
Markonis, Y., and D. Koutsoyiannis, Climatic variability over time scales spanning nine orders of magnitude: Connecting Milankovitch cycles with Hurst–Kolmogorov dynamics, Surveys in Geophysics, 34 (2), 181–207, 2013.

Frank
May 11, 2013 12:44 pm

Willis said: “Clearly, this problem is NOT from the lack of data as you claim. If it were, all of the five different runs would not line up so nicely post 1980. Prior to that, Trenberth et al. agree that the lack of data is an issue, and that’s why the early results are all over the map.”
If you plot the difference between Levitus and the reanalysis, I think you will find that the biggest differences come just before and after Pinatubo, a time when data was better, but nowhere near as good as with Argo. The authors’ claim that volcanoes produce significant cooling in the deeper ocean could be an artifact of the reanalysis, not a discovery made by reanalysis. I have no idea when there was enough data to trust this reanalysis, because we have little idea of how accurately models describe “diffusion” of heat below the mixed layer. Trustworthy observations of diffusion into the deeper ocean may come from measurements of CFCs, but I’ve heard little about how well climate models reproduce this data. For that matter, I’m not sure how well they even reproduce seasonal changes in the mixed layer (which are lost when one works with temperature anomalies).
Does your BS detector understand how the initial reports from ARGO of no warming have changed into the rapid warming seen above?

DirkH
May 11, 2013 12:52 pm

Steven Mosher says:
May 11, 2013 at 12:30 pm
“It’s not as simple as saying “here is the curve for a model, and here is a curve for observations”
They are both modelled.”
And one of them is an iterative model and one of them isn’t. Which is what Willis started with.