Why Reanalysis Data Isn't …

Guest Post by Willis Eschenbach

I was reading through the recent Trenberth paper on ocean heat content that’s been discussed at various locations around the web. It’s called “Distinctive climate signals in reanalysis of global ocean heat content”, paywalled, of course. [UPDATE: my thanks to Nick Stokes for locating the paper here.] Among the “distinctive climate signals” that they claim to find are signals from the massive eruptions of Mt. Pinatubo in mid-1991 and El Chichon in mid-1982. They show these claimed signals in my Figure 1 below, which is also Figure 1 in their paper.

ORIGINAL CAPTION: Figure 1. OHC integrated from 0 to 300 m (grey), 700 m (blue), and total depth (violet) from ORAS4, as represented by its 5 ensemble members. The time series show monthly anomalies smoothed with a 12 month running mean, with respect to the 1958–1965 base period. Hatching extends over the range of the ensemble members and hence the spread gives a measure of the uncertainty as represented by ORAS4 (which does not cover all sources of uncertainty). The vertical colored bars indicate a two year interval following the volcanic eruptions with a 6 month lead (owing to the 12 month running mean), and the 1997–98 El Niño event again with 6 months on either side. On lower right, the linear slope for a set of global heating rates (W m-2) is given.

I looked at that and I said “Whaaa???”. I’d never seen any volcanic signals like that in the ocean heat content data. What was I missing?

Well, what I was missing is that Trenberth et al. are using what is laughably called “reanalysis data”. But as the title says, reanalysis “data” isn’t data in any sense of the word. It is the output of a computer climate model masquerading as data.

Now, the basic idea of a “reanalysis” is not a bad one. If you have data with “holes” in it, if you are missing information about certain times and/or places, you can use some kind of “best guess” algorithm to fill in the holes. In mining, this procedure is quite common. You have spotty data about what is happening underground. So you use a kriging procedure employing all the available information, and it gives you the best guess about what is happening in the “holes” where you have no data. (Please note, however, that if you claim the results of your kriging model are real observations, if you say that the outputs of the kriging process are “data”, you can be thrown in jail for misrepresentation … but I digress, that’s the real world and this is climate “science” at its finest.)
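To make the kriging idea concrete, here is a minimal sketch of the simplest textbook variant ("simple kriging", which assumes a zero-mean field and a known covariance model). This is not any particular mining package's procedure, just an illustration, with invented numbers. The thing to notice is the last line of the function: kriging hands back an error estimate along with every filled-in value.

```python
# A minimal sketch of simple kriging in one dimension. This is an
# illustration only: it assumes a zero-mean field and an exponential
# covariance model, and the observations below are invented.
import numpy as np

def simple_krige(x_obs, y_obs, x_new, sill=1.0, rng=2.0):
    """Estimate the field at x_new from scattered observations,
    returning both the estimate and its kriging variance."""
    cov = lambda h: sill * np.exp(-np.abs(h) / rng)  # exponential covariance
    C = cov(x_obs[:, None] - x_obs[None, :])         # data-to-data covariances
    c = cov(x_obs - x_new)                           # data-to-target covariances
    w = np.linalg.solve(C, c)                        # kriging weights
    estimate = w @ y_obs
    variance = sill - w @ c                          # the built-in error estimate
    return estimate, variance

x = np.array([0.0, 1.0, 3.0, 4.0])      # where we have data
y = np.array([10.0, 12.0, 11.0, 15.0])  # what we observed there
est, var = simple_krige(x, y, 2.0)      # fill the "hole" at x = 2
```

Evaluating at x = 2 fills the hole between the observations, with a non-zero variance; evaluating at an observation point returns the observed value with zero variance, as it should.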

The problems arise as you start to use more and more complex procedures to fill in the holes in the data. Kriging is straight math, and it gives you error bars on the estimates. But a global climate model is a horrendously complex creature, and gives no estimate of error of any kind.

Now, as Steven Mosher is fond of pointing out, it’s all models. Even something as simple as

Force = Mass times Acceleration 

is a model. So in that regard, Steven is right.

The problem is that there are models and there are models. Some models, like kriging, are both well-understood and well-behaved. We have analyzed and tested the model called “kriging”, to the point where we understand its strengths and weaknesses, and we can use it with complete confidence.

Then there is another class of models with very different characteristics. These are called “iterative” models. They differ from models like kriging or F = M A because at each time step, the previous output of the model is used as the new input for the model. Climate models are iterative models. A climate model, for example, starts with the present weather, and predicts where the weather will go at the next time step (typically a half hour).

Then that result, the prediction for a half hour from now, is taken as input to the climate model, and the next half-hour’s results are calculated. Do that about 17,500 times, and you’ve simulated a year of weather … lather, rinse, and repeat enough times, and voila! You now have predicted the weather, half-hour by half-hour, all the way to the year 2100.
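The half-hour-by-half-hour looping described above can be sketched like this. To be clear, the "physics" inside step() here is a made-up one-line relaxation, a stand-in for the enormously complex calculations in a real model; the structure of the loop is the point.

```python
# A toy illustration of the iterative structure described above: each
# half-hour step takes the previous step's output as its input.
# The 'physics' is an invented relaxation toward a mean temperature,
# NOT anything from an actual climate model.

def step(state, dt_hours=0.5):
    """Advance the toy 'weather' one time step."""
    temp, mean = state
    return (temp + 0.01 * dt_hours * (mean - temp), mean)

state = (15.0, 14.0)           # starting 'weather': 15 C, relaxing toward 14 C
steps_per_year = 2 * 24 * 365  # about 17,500 half-hour steps in a year
for _ in range(steps_per_year):
    state = step(state)        # output of one step is input to the next
```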

There are two very, very large problems with iterative models. The first is that errors tend to accumulate. If you calculate one half hour even slightly incorrectly, the next half hour starts with bad data, so it may be even further out of line, and the next, and the next, until the model goes completely off the rails. Figure 2 shows a number of runs from the climateprediction.net climate model …

Figure 2. Simulations from climateprediction.net. Note that a significant number of the model runs plunge well below ice age temperatures … bad model, no cookies!

See how many of the runs go completely off the rails and head off into a snowball earth, or take off for stratospheric temperatures? That’s the accumulated error problem in action.
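Here is a minimal demonstration of that accumulating-error problem, using the chaotic logistic map as a stand-in for a climate model. It is not a climate model, of course, but it iterates the same way, each step feeding the next, and it shows the same runaway behaviour.

```python
# Two runs of the chaotic logistic map, started one part in a billion
# apart. The map is a stand-in for illustration only: the point is how
# a tiny initial error swamps an iterative calculation.

def iterate(x0, n=120, r=3.9):
    """Iterate the chaotic logistic map n times from x0."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = iterate(0.200000000)   # the 'true' starting state
b = iterate(0.200000001)   # same state, off by one part in a billion
divergence = [abs(p - q) for p, q in zip(a, b)]
# early steps agree closely; later steps bear no resemblance to each other
```

Start two runs a billionth apart, and within a few dozen iterations they have nothing to do with each other.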

The second problem with iterative models is that often we have no idea how the model got the answer. A climate model is so complex and is iterated so many times that the internal workings of the model are often totally opaque. As a result, suppose that we get three very different answers from three different runs. We have no way to say that one of them is more likely right than the others … except for the one tried and true method that is often used in climate science, viz:

If it fits our expectations, it is clearly a good, valid, solid gold model run. And if it doesn’t fit our expectations, obviously we can safely ignore it.

So how many “bad” reanalysis runs end up on the cutting room floor because the modeler didn’t like the outcome? Lots and lots, but nobody knows how many.

With that as a prelude, let’s look at Trenberth’s reanalysis “data”, which of course isn’t data at all … Figure 3 compares the ORAS4 reanalysis model results to the Levitus data:

Figure 3. ORAS4 reanalysis results for the 0-2000 metre layer (blue) versus Levitus data for the same layer. ORAS4 results are digitized from Figure 1. Note that the ORAS4 “data” prior to about 1980 has error bars from floor to ceiling, and so is of little use (see Figure 1). The data is aligned to their common start in 1958 (1958 = 0).

In Figure 3, the shortcomings of the reanalysis model results are laid bare. The computer model predicts a large drop in OHC from the volcanoes … which obviously didn’t happen. But instead of building on that reality of no OHC change after the eruptions, the reanalysis model has simply warped the real data so that it can show the putative drop after the eruptions.

And this is the underlying problem with treating reanalysis results as real data—they are nothing of the sort. All that the reanalysis model is doing is finding the most effective way to reshape the data to meet the fantasies, preconceptions, and errors of the modelers. Let me re-post the plot with which I ended my last post. This shows all of the various measurements of oceanic temperature, from the surface down to the deepest levels that we have measured extensively, two kilometers deep.

Figure 4. Oceanic temperature measurements. There are two surface measurements, from ERSST and ICOADS, along with individual layer measurements for three separate levels, from Levitus. NOTE: Figure 4 is updated after Bob Tisdale pointed out that I was inadvertently using smoothed data for the SSTs.

Now for me, anyone who looks at Figure 4 and claims that they can see the effects of the eruptions of Pinatubo and El Chichon and Mt. Agung in that actual data is hallucinating. There is no effect visible. Yes, there is a drop in SST during the year after Pinatubo … but the previous two drops were larger, and there is no drop during the year after El Chichon or Mt. Agung. In addition, temperatures rose more in the two years before Pinatubo than they dropped in the two years after. All that taken together says to me that it’s just random chance that Pinatubo has a small drop after it.

But the poor climate modelers are caught. The only way that they can claim that CO2 will cause the dreaded Thermageddon is to set the climate sensitivity quite high.

The problem is that when the modelers use a very high sensitivity like 3°C/doubling of CO2, they end up way overestimating the effect of the volcanoes. We can see this clearly in Figure 3 above, showing the reanalysis model results that Trenberth speciously claims are “data”. Using the famous Procrustean Bed as its exemplar, the model has simply modified and adjusted the real data to fit the modeler’s fantasy of high climate sensitivity. In a nutshell, the reanalysis model simply moved around and changed the real data until it showed big drops after the volcanoes … and this is supposed to be science?

Now, does this mean that all reanalysis “data” is bogus?

Well, the real problem is that we don’t know the answer to that question. The difficulty is that it seems likely that some of the reanalysis results are good and some are useless, but in general we have no way to distinguish between the two. This ORAS4 case is an exception, because the volcanoes have highlighted the problems. But in many uses of reanalysis “data”, we have no way to tell if it is valid or not.

And as Trenberth et al. have proven, we certainly cannot depend on the scientists using the reanalysis “data” to make even the slightest pretense of investigating whether it is valid or not …

(In passing, let me point out one reason that computer climate models don’t do well at reanalyses—nature generally does edges and blotches, while climate models generally do smooth transitions. I’ve spent a good chunk of my life on the ocean. I can assure you that even in mid-ocean, you’ll often see a distinct line between two kinds of water, with one significantly warmer than the other. Nature does that a lot. Clouds have distinct edges, and they pop into and out of existence, without much in the way of “in-between”. The computer is not very good at that blotchy, patchy stuff. If you leave the computer to fill in the gap where we have no data between two observations, say 10°C and 15°C, the computer can do it perfectly—but it will generally do it gradually and evenly, 10, 11, 12, 13, 14, 15.

But when nature fills in the gap, you’re more likely to get something like 10, 10, 10, 14, 15, 15 … nature usually doesn’t do “gradually”. But I digress …)
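That contrast can be shown numerically: the gap-fill a computer does by default (plain linear interpolation here) versus the kind of sharp front nature tends to produce. The values are the invented ones from the paragraph above.

```python
# The computer's default gap-fill versus nature's, using the invented
# 10 C / 15 C example from the text. Linear interpolation is just the
# simplest stand-in for 'gradual and even'.
import numpy as np

x_obs = [0, 5]          # two observation points with a gap between them
t_obs = [10.0, 15.0]    # observed temperatures, deg C

x_fill = [1, 2, 3, 4]
smooth = np.interp(x_fill, x_obs, t_obs)   # gradual: 11, 12, 13, 14
blotchy = [10.0, 10.0, 14.0, 15.0]         # nature's version: a sharp front
```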

Does this mean we should never use reanalyses? By no means. Kriging is an excellent example of a type of reanalysis which actually is of value.

What these results do mean is that we should stop calling the output of reanalysis models “data”, and that we should TEST THE REANALYSIS MODEL OUTPUTS EXTENSIVELY before use.

These results also mean that one should be extremely cautious when reanalysis “data” is used as the input to a climate model. If you do that, you are using the output of one climate model as the input to another climate model … which is generally a Very Bad Idea™ for a host of reasons.

In addition, in all cases where reanalysis model results are used, the exact same analysis should be done using the actual data. I have done this in Figure 3 above. Had Trenberth et al. presented that graph along with their results … well … if they’d done that, likely their paper would not have been published at all.

Which may or may not be related to why they didn’t present that comparative analysis, and to why they’re trying to claim that computer model results are “data” …

Regards to everyone,

w.

NOTES:

The Trenberth et al. paper identifies their deepest layer as from the surface to “total depth”. However, the reanalysis doesn’t have any changes below 2,000 metres, so that is their “total depth”.

DATA:

The data is from NOAA, except the ERSST and HadISST data, which are from KNMI.

The NOAA ocean depth data is here.

The R code to extract and calculate the volumes for the various Levitus layers is here.

DennisA
May 11, 2013 4:02 am

Kevin Trenberth seems to have re-discovered the faith in climate models that deserted him in this Nature Climate Change blog post from June 2007. It has been posted many times in many places, but people forget:
http://blogs.nature.com/climatefeedback/2007/06/predictions_of_climate.html
“I have often seen references to predictions of future climate by the Intergovernmental Panel on Climate Change (IPCC), presumably through the IPCC assessments.
In fact, since the last report it is also often stated that the science is settled or done and now is the time for action. In fact there are no predictions by IPCC at all. And there never have been.
“None of the models used by IPCC are initialized to the observed state and none of the climate states in the models correspond even remotely to the current observed climate. In particular, the state of the oceans, sea ice, and soil moisture has no relationship to the observed state at any recent time in any of the IPCC models.
There is neither an El Niño sequence nor any Pacific Decadal Oscillation that replicates the recent past; yet these are critical modes of variability that affect Pacific rim countries and beyond.
The Atlantic Multi-decadal Oscillation, that may depend on the thermohaline circulation and thus ocean currents in the Atlantic, is not set up to match today’s state, but it is a critical component of the Atlantic hurricanes and it undoubtedly affects forecasts for the next decade from Brazil to Europe.
Moreover, the starting climate state in several of the models may depart significantly from the real climate owing to model errors.”
These were quite revealing statements because only some 3 months earlier he had presented the AR4 report conclusion to the Committee on Science and Technology of the US House of Representatives.
“The iconic summary statement of the observations section of the IPCC (2007) report is “Warming of the climate system is unequivocal, as is now evident from observations of increases in global average air and ocean temperatures, widespread melting of snow and ice, and rising global mean sea level.”
Sometimes models have to be changed to fit the political narrative as with Tom Wigley’s MAGICC model, part funded by the US EPA. You can download the manual here:
http://www.cgd.ucar.edu/cas/wigley/magicc/UserMan5.3.v2.pdf
“Changes have been made to MAGICC to ensure, as nearly as possible, consistency with the IPCC AR4.”
There is more on the politics behind models here: – “Undeniable Global Warming And Climate Models”- http://scienceandpublicpolicy.org/originals/undeniable_models.html

thingodonta
May 11, 2013 4:34 am

I give up. Why don’t these people just get on a time machine, and go back to the Soviet Union’s heyday, when they can make up whatever ‘reanalysis data’ they like and present it as true and sound?
Kriging has well-understood limitations, unlike what is used by Trenberth et al. above. Bendigo Gold had a $250 million write off a few years ago, fooling everyone-including the banks- because some fancy statistician fudged the resource numbers-in this case the ‘nugget effect’ in the drilling data, which any 1850s miner could have told them the Bendigo gold field was famous for. The gold that was supposed to be between the drillholes just wasn’t there.
I would have thought a lot of well educated, out of work statisticians could make themselves a useful career auditing the shenanigans of climate science. (But of course, like in the field of mining, what usually happens is that the auditors-in the 3rd world that means the local government- usually just get their snouts in the trough and the whole regulatory process breaks down. Same as climate science, I suppose).

Louis Hooffstetter
May 11, 2013 4:53 am

Willis you have a gift. I admire (and slightly envy) your ability to grasp what’s relevant from what’s BS and clearly explain it to others. Thanks again.
Clive Best says:
“If what you describe is correct then Fig 1. in the Trenberth paper would be classified as fraud in any other field.”
Absolutely! Only climastrology warps data to fit models. Every other scientific discipline uses empirical data to test its models. As IPCC climate model projections go farther and farther “off the rails”, climastrologists will resort more and more to this kind of fraud to try to convince ‘low information voters’ that they were really correct. This fraud should be pointed out at every opportunity.

Bill Illis
May 11, 2013 5:06 am

The actual Argo measurements show 0.46 W/m2 being absorbed into the 0-2000 metre ocean.
Trenberth says a climate model reanalysis provides an estimate of 1.1 W/m2.
I think we should just thank Dr. Trenberth, for finding yet another example of the climate models overestimating the warming rate / climate impacts by more than double.
So far, that makes about 12 out of 13 key climate aspects that the climate models miss by 50%:
– surface temperature;
– troposphere temperature;
– volcanic impact;
– Ocean Heat Content;
– water vapor;
– precipitation;
– CO2 growth rate feedback;
– cloud optical depth;
– OLR;
– Antarctic sea ice;
– stratosphere temps (after correcting for ozone loss from volcanoes);
– sea level increase.
I’ll give them the
– Arctic sea ice.
So Trenberth did not find (some of) the missing energy, he just pointed out where the missing energy error originates:
– in the climate models and in the theory.

Tom in Florida
May 11, 2013 5:12 am

The simplified version of this post is GIGO.

lgl
May 11, 2013 5:16 am

Thanks Willis
http://virakkraft.com/SSST-change.png
No you are not hallucinating. The change really goes negative after all three major eruptions (and the latest strong Ninas)

Shub Niggurath
May 11, 2013 5:19 am

“…if you say that the outputs of the kriging process are “data”, you can be thrown in jail…”
Careful there Willis. You know how you can get into trouble for saying obvious things such as these. 😉 Ask Anthony and his Fox interview.

Frank K.
May 11, 2013 5:26 am

Thanks, Willis, for an excellent article.
As someone with years of experience in computational fluid dynamics, I can tell you there is in fact a third BIG problem with climate models, and that is that they are highly NON-LINEAR. What this means is that a seemingly small error in one variable can amplify (by quite a lot) as you march the numerical solution in time. Given that you are solving numerous coupled, non-linear differential equations with uncertainties in the boundary and initial conditions, the potential for producing erroneous solutions is large. And there is no way with non-linear equations to prove or ensure that the time step you are using and/or the spatial resolution of your mesh will yield a valid solution for a given problem definition.
All of this means that it is imperative that the modelers document their model equations, solution techniques and software design. And, actually, NCAR does a pretty good job of this. Others, like NASA/GISS, do a horrible job (because they really don’t care about model documentation…they’re more into blogging and tweeting).
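The time-step point can be shown with the simplest possible example: explicit Euler on a linear decay equation. Real CFD and climate equations are non-linear, which makes the safe step size far harder to know in advance, so this linear toy is only a sketch of the failure mode, not CFD.

```python
# Explicit Euler on dy/dt = -50*y, whose true solution decays smoothly
# to zero. With too large a time step the numerical solution oscillates
# and blows up instead of decaying. A toy linear illustration only.

def euler(dt, n_steps, lam=50.0, y0=1.0):
    """March y' = -lam*y forward n_steps with explicit Euler."""
    y = y0
    for _ in range(n_steps):
        y = y + dt * (-lam * y)   # one explicit Euler step
    return y

good = euler(dt=0.01, n_steps=1000)   # stable: decays toward zero
bad  = euler(dt=0.1,  n_steps=100)    # unstable: grows without bound
```

For this equation the stability limit is known (dt < 2/lam = 0.04); for a non-linear system there is no such simple formula, which is exactly the problem.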

May 11, 2013 5:50 am

“It is hard to make data where none exist.”
– Kevin Trenberth

David L. Hagen
May 11, 2013 5:52 am

Willis
Thanks for showing Fig. 2 with the very wide distribution in outputs from the same climate model. That shows both iterative errors and chaotic impacts.
S. Fred Singer modeled the errors and recommends 400 model years of output for the mean results to settle out the chaotic effects, e.g. 20 model runs for 20 years, 10 model runs for 40 years, or 40 model runs for 10 years. This is much more than the 1-5 runs that the IPCC typically reports. See:
S. Fred Singer Overcoming Chaotic Behavior of Climate Models, SEPP July 2012

ferdberple
May 11, 2013 6:10 am

Clouds have distinct edges, and they pop into and out of existence, without much in the way of “in-between”. The computer is not very good at that blotchy, patchy stuff.
===========
Do we even know why this happens? Why does water clump together to form clouds? Why doesn’t it mix evenly with the air to form an even haze across the sky?

Björn
May 11, 2013 6:16 am

Willis , when I try to run your R script from the link at the end of the posting it breaks down on the execution of the line:
“… mydepths=read.csv(“levitus depth.csv”,header=FALSE)…”
and spits out the error message in the following quote:
” Error in file(file, “rt”) : cannot open the connection
In addition: Warning message:
In file(file, “rt”) :
cannot open file ‘levitus depth.csv’: No such file or directory ”
Did a code line for the creation of the comma-separated file perhaps fall out of the script and through the rifts in the floorboard when you uploaded to Dropbox?
Here is how the script lines up to (and including) the offending line look when I click on the link given.
————————————————————————————–
#URL ftp://ftp.nodc.noaa.gov/pub/WOA09/MASKS/landsea.msk
url <- "ftp://ftp.nodc.noaa.gov/pub/WOA09/MASKS/landsea.msk"
file <- "levitus depths.txt"
download.file(url, file)
surf_area=511207740688000 # earth surface, square metres
# depths by code from ftp://ftp.nodc.noaa.gov/pub/WOA09/DOC/woa09documentation.pdf
depthcode=c(0, 10, 20, 30, 50, 75, 100, 125, 150, 200, 250,
300, 400, 500, 600, 700, 800, 900, 1000, 1100, 1200,
1300, 1400, 1500, 1750, 2000, 2500, 3000, 3500, 4000,
4500, 5000, 5500, 6000, 6500, 7000, 7500,
8000, 8500, 9000) # depths in metres for codes 1-40, where 1 is land.
levitus_depths=as.matrix(read.fwf(file, widths=rep(8,10)))
depthmatrix=matrix(as.vector(aperm(levitus_depths)),byrow=TRUE,nrow=180)
mydepths=read.csv("levitus depth.csv",header=FALSE)
……
————————————————————————————–

RockyRoad
May 11, 2013 6:24 am

So what Trenberth is doing is making models of models? Have I got that right? Or is it models of models of models?
Oh, I see no problem there, considering how ineffective and non-robust their climate models are to begin with. Sure, models of models ad nauseam–that fixes everything. /sarc
I’d suggest to Trenberth that he dispense with his original models and quit daydreaming about his missing heat. It’s a dead end career move.
BTW, Good analysis, Willis. Always an education.

ferdberple
May 11, 2013 6:39 am

David L. Hagen says:
May 11, 2013 at 5:52 am
S. Fred Singer modeled the errors and recommends 400 model years of output for the mean results to settle out the chaotic effects
==============
you can’t settle out the chaotic effects, which is something completely misunderstood by climate science.
Say you use a pair of dice as your model of a pair of dice. This should be a perfect model – but in fact it isn’t. If you throw the dice 400 times you will get 7 as the most likely throw. So, this is your “climate prediction” as to what will happen when you roll the real dice.
However, when you roll the real dice, you will get a result between 2 and 12. 7 is the most likely, but this doesn’t mean 7 is what will happen in reality.
This is the same problem with trying to predict the future with climate models. No matter how perfect the model, you still can’t predict what will actually happen in the future.
Maybe the future temperature will be “7”, but it might also be “2” or “12” and there is no way at present given our understanding of mathematics and physics to say which it will be.
We can see this in the models above, where sometimes the model predicts heating, sometimes it predicts cooling, with no change in the forcings.
This is the fallacy of using models to predict the future. The universe is not a 19th century clockwork. The future is not written. There is no “ACTUAL” future to be predicted.
Our minds fool us into believing the future is a place at which we will arrive, because we assume that the future is like the present, only it is “ahead” of us in time. But this is not what the dice tell us.
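The dice argument above in code form (an invented illustration, not anything from the paper): a "perfect model" of two dice can tell you the distribution, and that 7 is the most likely total, but it cannot tell you what the next real roll will be.

```python
# A 'perfect model' of a pair of dice: simulate many rolls and find
# the most likely total. Illustration of the comment's point only.
import random

random.seed(42)
rolls = [random.randint(1, 6) + random.randint(1, 6) for _ in range(100_000)]

counts = {total: rolls.count(total) for total in range(2, 13)}
most_likely = max(counts, key=counts.get)   # the 'prediction': 7
# ...but any single real roll can still come up anywhere from 2 to 12
```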

Jim Cripwell
May 11, 2013 6:44 am

“If you torture data long enough, it will confess”. Ronald Coase.

RockyRoad
May 11, 2013 6:46 am

thingodonta says:
May 11, 2013 at 4:34 am


Kriging has well-understood limitations, unlike what is used by Trenberth et al. above. Bendigo Gold had a $250 million write off a few years ago, fooling everyone-including the banks- because some fancy statistician fudged the resource numbers-in this case the ‘nugget effect’ in the drilling data, which any 1850s miner could have told them the Bendigo gold field was famous for. The gold that was supposed to be between the drillholes just wasn’t there.

High nugget effect is an immediate “red flag” that indicates a possible mix of problems, including: 1) poor analysis reproducibility (probably no controls in the assay samples); 2) spatial irregularities caused by down-hole drift with no survey tool to correct for it; 3) down-hole contamination from zones of mineralization above. If the data is of poor quality, any model will be of poor quality. The only fix in the real world is to go back to the data.
But you’re right–like many “climate scientists” that make things up to keep their jobs, the same thing happened at Bemdogp. I’ve known companies that will employ several geostatisticians and only keep those that give them the rosiest outlook. Of course, they find out later that those projections weren’t factual at all. Oops! Writeoff! The guys that gave them the straight story were let go because they didn’t add to their company’s “reserve base” and stock value.
Write-offs should be assessed to mining CEOs who suffer from a “bling” mentality. That would help fix the problem.

RockyRoad
May 11, 2013 6:48 am

Sorry, it’s “Bendigo”; fingers got in my way.

Gary
May 11, 2013 7:06 am

Willis,
Excellent point about edges. The action is almost always on the margin. Like emergent properties, this topic deserves a whole post of its own.
Isn’t it somewhat of a problem with trying to find the volcanic effects in OHC that volcanic aerosols are primarily regional and this analysis is looking at the globally averaged ocean? The mesh size of the net may be too coarse to catch this fish.

ferdberple
May 11, 2013 7:07 am

What the models are telling us is quite a different story than what the modellers are telling us. Look at Figure 2. The model delivers a whole range of results. Without any change in the forcings, the model predicts both warming and cooling.
This is very important to understand. The model shows us that both warming and cooling are possible with a doubling of CO2. Now the assumption in climate science is that “the future” is some sort of “average” of the model runs, which gives a sensitivity of between 2 and 3 K on Figure 1. However, this is nonsense. The future is not an average of anything. We will not arrive at any sort of “average future”.
If the model is 100 percent perfect, then our future lies along one of the lines predicted by the model and there is no way to predict which one. We could have cooling with a doubling, or we could have warming, without making any change. This is what the model is actually telling us.
What is surprising is that more scientists don’t take the model builders to task on this point. In effect the models themselves are showing us that “natural variability” exists without any change in the atmosphere, or the sun, or the earth’s orbit. Rather, that even if we keep everything exactly the same, the models show us that climate will still change, and it may change dramatically.
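A toy version of that point: give every run identical "forcing" plus random internal variability, and some runs still end up cooler. The numbers here are invented purely for illustration.

```python
# An ensemble of toy 'runs' with identical forcing. Each year adds the
# same small forcing plus random internal variability; despite the
# identical forcing, individual runs scatter widely and some cool.
import random

random.seed(0)

def run(years=100, forcing=0.005, noise=0.1):
    """One toy run: final temperature anomaly after 'years' steps."""
    temp = 0.0
    for _ in range(years):
        temp += forcing + random.gauss(0.0, noise)
    return temp

finals = [run() for _ in range(50)]
# the ensemble mean sits near the forced answer (+0.5),
# but individual runs range from clearly cooler to much warmer
```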

phlogiston
May 11, 2013 7:12 am

ferdberple says:
May 11, 2013 at 6:10 am
Clouds have distinct edges, and they pop into and out of existence, without much in the way of “in-between”. The computer is not very good at that blotchy, patchy stuff.
===========
Do we even know why this happens? Why does water clump together to form clouds? Why doesn’t it mix evenly with the air to form an even haze across the sky?
Clouds must be attractors. When something is constrained in a phase-space for no apparent reason, e.g. smoke ribbons rising in still air, an attractor is at work. Here is a paper on the nonlinear dynamics of clouds (paywalled unfortunately).

May 11, 2013 7:13 am

Well, now we can add ‘data’ to the list.
Terry Oldberg recently wrote this:
“Climatologists often use polysemic terms. Some of these terms are words. Others are word pairs. The two words of a word pair sound alike and while they have different meanings climatologists treat the two words as though they were synonyms in making arguments. ”
See his guest-post at the Briggs blog to see the implications of this developed to a dramatic conclusion: http://wmbriggs.com/blog/?p=7923

Matthew Benefiel
May 11, 2013 7:15 am

The Ghost Of Big Jim Cooley says:
“I used to tell the children of my family how science is ‘real’ and not like religion. I used to tell them that they can trust it completely, that the very nature of science meant that it had to be accurate – that it was our best guess on something after rigorous examination and testing. Well, those days are gone now.”
To be honest that is part of the problem. We tend to treat science like it is an entity or something, like it holds weight in and of itself. It is a tool used by humans. Like all tools it can be used wrong (or flawed by manufacturing defects, though science is a process, so I suppose it doesn’t have that). Math is real too, but look how many mistakes can be made using it. To trust science is to trust those that use it, the scientists. We know very well that these men/women are not perfect (neither are we). The process is supposed to help minimize the impact of these imperfections, but if history is any help we can never really remove it completely.
As far as science being ‘real’ and not like religion, that is an interesting comparison. When it really comes down to it, unless you are doing the study yourself you are placing faith in someone else’s work, kinda like a religion (even if you did the work yourself, you’re placing faith in the peer review process). Sometimes we generalize religion to be a blind faith (that exists), but faith generally requires a basic understanding of something, like sitting in a chair. You wouldn’t sit in a chair if you didn’t believe it would hold your weight in the first place, yet you don’t test every chair you sit in (you have faith in the process that made that chair). Religion requires an initial belief in something unseen and non-provable like God, but on the flip side some people aren’t content with putting any belief in the idea that we are creatures of chance and slow mutations (also non-provable without a time machine and that ever elusive missing link).
Long story short, I really think science will be scrutinized a little more as we continue to trust a little less and require more proof of this MGW monstrosity. Anyway, I’m done.

phlogiston
May 11, 2013 7:19 am

I wonder if the authors of this paper have any comment on why ocean warming apparently started only in 1975, with an apparent cooling trend before that. They try of course to dismiss the pre-1975 period with a “high uncertainty” comment. Why did rising CO2 only become effective in warming the oceans after 1975?

May 11, 2013 7:25 am

The error path began with the assumption that ‘science’ could be funded to give a desired result, rather than reality….it is therefore, government funded, commodity speculation MARKETING. Since it is a demonstrable failure, it is amusing to review another, little known marketing failure.
When the suits at Ford Motor Company were preparing the Ford Taurus concept, they wanted to overcome the dull family sedan reality with ‘sports car’ pretensions, so they contacted racing legend, Scotsman, Jackie Stewart, who agreed to endorse the new auto, given a list of ‘performance’ and quality features. Corporate bean counters ‘re-adjusted’ this list, without notice and invited Jackie to the product launch at the Detroit airport.
Jackie had been paid a million dollars for this endorsement and flew over in his private jet. To capture the ‘spontaneity’ of the moment, an advertising video team was set up to document the impromptu review. Jackie walked up to the Taurus, lifted the flimsy plastic door handle and said…”What is this? This is crap”. He got inside, noticed the goofy interior, said, “What is this? This is crap.” Jackie drove the Taurus a few laps around the taxiway, repeating the same refrain. Finally he got out of the car, walked to the gas fill door, opened it up and the plastic cap was held by a plastic cable and dangled against the body of the car.
Jackie stepped back, saying….”This is crap, THIS IS ALL CRAP”….at which point, he got into his still running private jet and departed. The Taurus dropped their racing legend pretensions. We have examined the hypothesis, the real data, the altered data, the ridiculous predictions, the dire warnings. We can only conclude that AGW is another Wall Street created marketing failure deserving of the Jackie Stewart quote.
BTW, the MAGICC mentioned above is “Model for the Assessment of Greenhouse-gas Induced Climate Change” … for the simpletons who chose ‘magic’ over science.

Jim G
May 11, 2013 7:33 am

Excellent example of how liars find new ways to lie about their lies to keep the lies going. Kind of like the Obama administration and Benghazi.