Rahmstorf et al (2012) Insist on Prolonging a Myth about El Niño and La Niña
Guest post by Bob Tisdale
Anthony Watts of WattsUpWithThat forwarded a link to a newly published peer-reviewed paper by Stefan Rahmstorf, Grant Foster (aka Tamino of the blog OpenMind) and Anny Cazenave. Thanks, Anthony. The title of the paper is Comparing climate projections to observations up to 2011. My Figure 1 is Figure 1 from Rahmstorf et al (2012).
The authors of the paper have elected to prolong the often-portrayed myth about El Niño-Southern Oscillation (ENSO):
Global temperature data can be adjusted for solar variations, volcanic aerosols and ENSO using multivariate correlation analysis…
With respect to ENSO, that, of course, is nonsense.
Figure 1
The Rahmstorf et al (2012) text for Figure 1 reads:
Figure 1. Observed annual global temperature, unadjusted (pink) and adjusted for short-term variations due to solar variability, volcanoes and ENSO (red) as in Foster and Rahmstorf (2011). 12-months running averages are shown as well as linear trend lines, and compared to the scenarios of the IPCC (blue range and lines from the third assessment, green from the fourth assessment report). Projections are aligned in the graph so that they start (in 1990 and 2000, respectively) on the linear trend line of the (adjusted) observational data.
INITIAL NOTE
Under the heading of “2. Global temperature evolution”, in the first paragraph, Rahmstorf et al (2012) write:
To compare global temperature data to projections, we need to consider that IPCC projections do not attempt to predict the effect of solar variability, or specific sequences of either volcanic eruptions or El Niño events. Solar and volcanic forcing are routinely included only in ‘historic’ simulations for the past climate evolution but not for the future, while El Niño–Southern Oscillation (ENSO) is included as a stochastic process where the timing of specific warm or cool phases is random and averages out over the ensemble of projection models. Therefore, model-data comparisons either need to account for the short-term variability due to these natural factors as an added quasi-random uncertainty, or the specific short-term variability needs to be removed from the observational data before comparison. Since the latter approach allows a more stringent comparison it is adopted here.
In the first sentence in the above quote, Rahmstorf et al (2012) forgot to mention that the climate models used in the IPCC projections simulate ENSO so poorly that the authors of Guilyardi et al (2009) Understanding El Niño in Ocean-Atmosphere General Circulation Models: progress and challenges noted:
Because ENSO is the dominant mode of climate variability at interannual time scales, the lack of consistency in the model predictions of the response of ENSO to global warming currently limits our confidence in using these predictions to address adaptive societal concerns, such as regional impacts or extremes (Joseph and Nigam 2006; Power et al. 2006).
Refer to my post Guilyardi et al (2009) “Understanding El Niño in Ocean-Atmosphere General Circulation Models: progress and challenges”, which introduces that paper. That paper was discussed in much more detail in Chapter 5.8 Scientific Studies of the IPCC’s Climate Models Reveal How Poorly the Models Simulate ENSO Processes of my book Who Turned on the Heat?
THE MYTH CONTINUED
The second paragraph of Rahmstorf et al (2012) under that heading of “2. Global temperature evolution” reads:
Global temperature data can be adjusted for solar variations, volcanic aerosols and ENSO using multivariate correlation analysis (Foster and Rahmstorf 2011, Lean and Rind 2008, 2009, Schönwiese et al2010), since independent data series for these factors exist. We here use the data adjusted with the method exactly as described in Foster and Rahmstorf, but using data until the end of 2011. The contributions of all three factors to global temperature were estimated by linear correlation with the multivariate El Niño index for ENSO, aerosol optical thickness data for volcanic activity and total solar irradiance data for solar variability (optical thickness data for the year 2011 were not yet available, but since no major volcanic eruption occurred in 2011 we assumed zero volcanic forcing). These contributions were computed separately for each of the five available global (land and ocean) temperature data series (including both satellite and surface measurements) and subtracted. The five thus adjusted data sets were averaged in order to avoid any discussion of what is ‘the best’ data set; in any case the differences between the individual series are small (Foster and Rahmstorf 2011). We show this average as a 12-months running mean in figure 1, together with the unadjusted data (likewise as average over the five available data series). Comparing adjusted with unadjusted data shows how the adjustment largely removes e.g. the cold phase in 1992/1993 following the Pinatubo eruption, the exceptionally high 1998 temperature maximum related to the preceding extreme El Niño event, and La Niña-related cold in 2008 and 2011.
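For readers unfamiliar with the technique, the adjustment described above is, at its core, a multiple linear regression: the temperature series is regressed on an ENSO index, a volcanic aerosol series and a TSI series (plus a trend term), and the fitted ENSO, volcanic and solar contributions are subtracted. The snippet below is my own minimal sketch of that idea, not the authors' code; the real method also fits optimized lags for each predictor.

```python
import numpy as np

def regression_adjust(temp, mei, aod, tsi):
    """Sketch of a Foster/Rahmstorf-style adjustment: regress temperature
    on ENSO (MEI), volcanic aerosols (AOD) and solar (TSI), then subtract
    the fitted natural contributions. Inputs are equal-length 1-D arrays
    of monthly values; lags are ignored here for simplicity."""
    n = len(temp)
    t = np.arange(n)
    # Design matrix: intercept, linear trend, ENSO, volcanic, solar.
    X = np.column_stack([np.ones(n), t, mei, aod, tsi])
    coeffs, *_ = np.linalg.lstsq(X, temp, rcond=None)
    # Subtract only the ENSO, volcanic and solar contributions.
    natural = X[:, 2:] @ coeffs[2:]
    return temp - natural
```

Note that a single regression coefficient on the ENSO index imposes a linear response that is symmetric between El Niño and La Niña. That is precisely the assumption I dispute below.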
IT IS IMPOSSIBLE TO REMOVE THE EFFECTS OF ENSO IN THAT FASHION
Rahmstorf et al (2012) assume the effects of La Niña events on global surface temperatures are proportional to the effects of El Niño events. They are not. Anyone who is capable of reading a graph can see and understand this.
But first: For 33% of the surface area of the global oceans, the East Pacific Ocean (90S-90N, 180-80W), it may be possible to remove much of the linear effects of ENSO from the sea surface temperature record, because the East Pacific Ocean mimics the ENSO index (NINO3.4 sea surface temperature anomalies). See Figure 2. But note how the East Pacific Ocean has not warmed significantly in 30+ years. A linear trend of 0.007 deg C/decade is basically flat.
Figure 2
However, for the other 67% of the surface area of the global oceans, the Atlantic, Indian and West Pacific Oceans (90S-90N, 80W-180), which we’ll call the Rest of the World, the sea surface temperature anomalies do not mimic the ENSO index. We can see this by detrending the Rest-of-the-World data. Refer to Figure 3. Note how the Rest-of-the-World sea surface temperature anomalies diverge from the ENSO index during four periods. The two divergences highlighted in green are caused by the volcanic eruptions of El Chichon in 1982 and Mount Pinatubo in 1991. Rahmstorf et al (2012) are likely successful at removing most of the effects of those volcanic eruptions, using an aerosol optical depth dataset. But they have not accounted for and cannot account for the divergences highlighted in brown.
Figure 3
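For readers who want to reproduce this kind of comparison, both the trend figure quoted above (0.007 deg C/decade for the East Pacific) and the detrending shown in Figure 3 are straightforward least-squares operations. The snippet below is a generic sketch assuming monthly anomaly series as NumPy arrays; the names are illustrative, not the exact code behind my figures.

```python
import numpy as np

def decadal_trend(anoms):
    """Least-squares linear trend of a monthly anomaly series,
    returned in deg C per decade."""
    t = np.arange(len(anoms))
    slope = np.polyfit(t, anoms, 1)[0]   # deg C per month
    return slope * 12 * 10               # deg C per decade

def detrend(anoms):
    """Remove the least-squares linear trend from a series."""
    t = np.arange(len(anoms))
    slope, intercept = np.polyfit(t, anoms, 1)
    return anoms - (slope * t + intercept)
```

Plotting detrend(rest_of_world_sst) against a scaled NINO3.4 index is, in essence, how comparisons like Figures 3 and 4 are built; the divergences discussed here are simply the stretches where the two curves separate.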
Those two divergences are referred to in Trenberth et al (2002) “Evolution of El Nino–Southern Oscillation and global atmospheric surface temperatures” as ENSO residuals. Trenberth et al write:
Although it is possible to use regression to eliminate the linear portion of the global mean temperature signal associated with ENSO, the processes that contribute regionally to the global mean differ considerably, and the linear approach likely leaves an ENSO residual.
Again, the divergences in Figure 3 shown in brown are those ENSO residuals. They result because the naturally created warm water released from below the surface of the West Pacific Warm Pool by the El Niño events of 1986/87/88 and 1997/98 is not “consumed” by those El Niño events. In other words, there’s warm water left over from those El Niño events, and that leftover warm water directly impacts the sea surface temperatures of the East Indian and West Pacific Oceans, preventing them from cooling during the trailing La Niñas. The leftover warm water, tending to initially accumulate in the South Pacific Convergence Zone (SPCZ) and in the Kuroshio-Oyashio Extension (KOE), also counteracts the indirect (teleconnection) impacts of the La Niña events on remote areas, like land surface temperatures and the sea surface temperatures of the North Atlantic. See the detrended sea surface temperature anomalies for the North Atlantic, Figure 4, which show the same ENSO-related divergences even though the North Atlantic is isolated from the tropical Pacific Ocean and, therefore, not directly impacted by the ENSO events.
Figure 4
There’s something blatantly obvious in the graph of the detrended Rest-of-the-World sea surface temperature anomalies (Figure 3): If the Rest-of-the-World data responded proportionally during the 1988/89 and 1998-2001 La Niña events, the Rest-of-the-World data would appear similar to the East Pacific data (Figure 2) and would have no warming trend.
Because those divergences exist—that is, because the Rest-of-the-World data does not cool proportionally during those La Niña events—the Rest-of-the-World data acquires a warming trend, as shown in Figure 5. In other words, the warming trend, the appearance of upward shifts, is caused by the failure of the Rest-of-the-World sea surface temperature anomalies to cool proportionally during those La Niña events.
Figure 5
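The effect of those upward shifts on a linear trend can be illustrated with a toy example: a series that rises with each El Niño but does not fall back proportionally during the trailing La Niña steps upward, and a straight line fitted through it acquires a positive slope even though nothing in it is a steady warming. The numbers below are synthetic, chosen only to mimic the shape of the argument, not the actual data.

```python
import numpy as np

# Toy ENSO-like index: two El Nino spikes, each followed by a La Nina dip.
enso = np.zeros(360)                 # 30 years of monthly values
enso[84:102] = 1.5                   # first El Nino
enso[102:126] = -1.0                 # trailing La Nina
enso[216:234] = 2.0                  # second El Nino
enso[234:270] = -1.2                 # trailing La Nina

# A response that warms with each El Nino but fails to cool proportionally
# afterwards simply steps upward instead of oscillating about zero.
response = 0.1 * np.maximum.accumulate(np.where(enso > 0.0, enso, 0.0))

t = np.arange(len(enso))
trend_per_decade = np.polyfit(t, response, 1)[0] * 120
print(f"linear trend of the step-like response: {trend_per_decade:.3f} deg C/decade")
```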
I find it difficult to believe that something so obvious is simply overlooked by climate scientists and by those who peer review papers such as Rahmstorf et al (2012). Some readers might think the authors are intentionally being misleading.
FURTHER INFORMATION
The natural processes that cause the global oceans to warm were described in Part 1 of the YouTube video series “The Natural Warming of the Global Oceans”. It also describes and illustrates the impacts of ENSO on Ocean Heat Content for the tropical Pacific and the tropics as a whole.
Part 2 provides further explanation of the natural warming of the Ocean Heat Content and details the problems associated with Ocean Heat Content data in general. Part 2 should be viewed after Part 1.
And, of course, the natural processes that cause the oceans to warm were detailed with numerous datasets in my recently published ebook. It’s titled Who Turned on the Heat? with the subtitle The Unsuspected Global Warming Culprit, El Niño Southern Oscillation. It is intended for persons (with or without technical backgrounds) interested in learning about El Niño and La Niña events and in understanding the natural causes of the warming of our global oceans for the past 30 years. Because land surface air temperatures simply exaggerate the natural warming of the global oceans over annual and multidecadal time periods, the vast majority of the warming taking place on land is natural as well. The book is the product of years of research into the satellite-era sea surface temperature data that’s available to the public via the internet. It shows how the data itself accounts for the warming, and there are no indications the warming was caused by manmade greenhouse gases. None at all.
Who Turned on the Heat? was introduced in the blog post Everything You Ever Wanted to Know about El Niño and La Niña… …Well Just about Everything. The Updated Free Preview includes the Table of Contents; the Introduction; the beginning of Section 1, with the cartoon-like illustrations; the discussion About the Cover; and the Closing. The book was updated recently to correct a few typos.
Please buy a copy. (Credit/Debit Card through PayPal. You do NOT need to open a PayPal account.). It’s only US$8.00.
CLOSING
Rahmstorf et al (2012) begin their Conclusions with:
In conclusion, the rise in CO2 concentration and global temperature has continued to closely match the projections over the past five years…
As discussed and illustrated above, ENSO is a process whose effects cannot simply be removed from the global surface temperature record in the way Rahmstorf et al (2012) have attempted. The sea surface temperature records contradict the findings of Rahmstorf et al (2012). There is no evidence of a CO2-driven anthropogenic global warming component in the satellite-era sea surface temperature records. Each time climate scientists (and statisticians) attempt to continue this myth, they lose more and more…and more…credibility. Of course, that’s a choice they’ve clearly made.
And as long as papers such as Rahmstorf et al (2012) continue to pass through peer review and find publication, I will be more than happy to repeat my message about their blatantly obvious failings.
SOURCE
The Sea Surface Temperature anomaly data used in this post is available through the NOAA NOMADS website:
http://nomad1.ncep.noaa.gov/cgi-bin/pdisp_sst.sh
or:
http://nomad3.ncep.noaa.gov/cgi-bin/pdisp_sst.sh?lite=
=================================================================
Richard Tol is not impressed:
#Doha: Sea levels to rise by more than 1m by 2100 http://t.co/h2cNEMo7 Rahmstorff strikes again with his subpar statistics
http://twitter.com/RichardTol/status/273691430101323776
trafamadore says: “So, if you were to recalculate/revise the red line in Fig 1 of the Rahmstorf paper, what would it look like? Isn’t that the bottom line?”
No. You apparently missed something. Rahmstorf et al assume CO2 warmed the oceans, but there’s no anthropogenic global warming component in the sea surface temperature data for the past 30 years.
Matt G says: November 28, 2012 at 9:25 am
…That’s the difference between statisticians and science in this case, with the latter not understanding the process.
___________________________
Please do not say that. A true scientist has at least enough background in statistics to understand the basics, to know when he is over his head, and to go get help from a trained statistician. CRU’s Dr. Phil Jones, world renowned climatologist, can’t even plot a trend in Excel!
I do not have a PhD or computer training, and I taught myself to use Excel as soon as they let me have a computer in the lab. For these people to be in charge of global temp data and climate models is surely ‘a travesty’. They are in universities, for crying out loud. They should be knee deep in statisticians, and stat courses should be readily available.
They have absolutely no excuse for their demonstrated poor statistics.
With respect I disagree with Bob.
As an AGW proponent I do not recognize either figure 2 or 3 as describing how I portray the warming of sea surface temperatures, even though they are labelled as such.
The Foster and Rahmstorf analysis, which I agree with, is looking at the impact of ENSO on global temperature, including land, not the impact of ENSO on 66% of the oceans (the area designated the Rest of the World in the post above), which is less than 50% of the earth’s surface.
The claim FR (and also me and others) make is that ENSO adds noise onto global temperature (the transitory 1998 spike for example) and so it would be useful to attempt to adjust this out and see what remains. Same with the solar cycle. To not do so requires just as much assumption that there is no ENSO or solar cycle noise in the data. If anything I think that’s a bigger assumption.
I live in the UK where the AGW agenda is not only political reality, but our fuel bills, our holidays and our taxes are all subjected to the monetary surcharges, all based on the AGW nonsense.
I noticed a reader from the Federal Energy Regulatory Commission, Washington, District of Columbia, United States
Well Sir/Madam it is my hope you are a person of some influence. If so I would like to bring this to your attention:
http://www.vukcevic.talktalk.net/MidSummer-MidWinter.htm
as you can see, there is no excessive warming in the summer months; green and lush England is not going to turn into the Sahara desert.
You will also notice there is a modest but steady warming in the winter. This is extremely beneficial, not only to the elderly, for whom cold is a killer, but to general prosperity, since fewer work days are lost to bad weather, the outdoor building industry benefits greatly, and finally it supports the green agenda: less fuel used for heating, more oil and coal left in the ground for future generations.
In my simple view, global warming, such as it was and is, is very beneficial to the economy and society in general. Worth another look:
http://www.vukcevic.talktalk.net/MidSummer-MidWinter.htm
Summers are now as they were three centuries ago; winters are now warmer than they were three centuries ago.
Gail Combs says:
November 28, 2012 at 11:47 am
I didn’t mean that quote; I changed it a moment later when I realized my mistake. (See my post after.)
S Green says:
November 28, 2012 at 11:53 am
IOW, when the data do not fit the models, adjust the data.
How do you know what is signal, and what is “noise”? Answer: you don’t. You are putting forward the thesis that, if you can imagine a way in which something can occur, then the burden is on those who disagree to prove you wrong. That is completely bass-ackwards to science.
On another note, as a general comment to all, besides all the other guff, the error bars on these projections are so large as to be useless. You could fall within them a great majority of the time assuming purely random increments of change.
S Green says:
November 28, 2012 at 11:53 am
So you don’t understand how energy is stored in the Western Pacific Warm Pool during a La Niña and then released during an El Niño to the Eastern Pacific, from whence it is smeared over the Indian & North Western Pacific in the following La Niña, but you understand the F&R paper?
By adjusting this out you are cheating. You are removing energy from the system which has been stored for a time and is now released into the wider system. Think of it as Earth’s KERS system! 😛
DaveE.
The warmists’ mantra of continually rising temperatures lies in Fig. 1 for all to see. The “good” fit with IPCC/Hansen scenarios is dependent on the period of 2003 – 2012 in two ways:
1) the adjustments for ENSO etc. being appropriate, and 2) the period not showing us a quasi-sinusoidal (polynomial/curvy) “top” to a stalling or halting warming. Even if only the 2008-2012 adjustments are deemed partly invalid, the actual observed trend falls to the bottom of the scenarios. Any further readjustments (as for UHIE) bring the IPCC narrative to its knees.
In the short term, there is a fairly clear 5 or 6 year up-and-down cycle; we had the “up” from 2008, and we should expect a “down” in 2013, or perhaps 2014. Beyond 2015, there should be a recovery, and this is crucial for the warmists: if the recovery does not shoot up to the 0.42C* level in 2015/2016, then the trend for 2003 onward has to be dropped out of the IPCC scenario range. An unadjusted temp of <0.3C in 2015/2016 kills the continued warming story. Considering the importance of the Arctic in pushing global temperatures up, all we need to see is a colder than recent summer or two above the 66th parallel to do this.
I look forward to the next two years. I'm concerned that the "adjustments" continue to be made while the mainstream doesn't seem to recognize that the alleged warming of 0.8C is all in the adjustments. Fig. 1 here reveals just how precarious the warmists' hold on the IPCC/Hansen scenarios is.
S Green:
I quote all your post at November 28, 2012 at 11:53 am so it cannot be thought that I am replying out of context. You say
I respectfully submit that you are missing two basic – and very important – points.
Firstly, in any proper science, when an understanding is shown to be wrong by the failure of a prediction, the understanding is revisited. But in ‘climate science’ that is not done.
In ‘climate science’ when an understanding is shown to be wrong by failure of a prediction then it is common practice to make post hoc adjustments to the prediction. The FR paper is a clear example of such a post hoc adjustment.
The IPCC AR4 predicted (n.b. predicted and NOT projected) “committed warming” of 0.2 deg.C/decade global warming averaged over the first two decades after year 2000. This “committed warming” was inevitable because of GHGs already in the system. But there has been no discernible warming since 2000, indeed no discernible warming since 1997.
That is a clear failure of the prediction. The predicted average trend is 0.2 deg.C per decade between 2000 and 2020, but there has been no warming since 2000, so fulfilling the prediction now requires about 0.4 deg.C of warming over the next 8 years, i.e. a rate of roughly 0.5 deg.C per decade. OK, hypothetically such a rise could occur, but it is so improbable as to be risible: it is far faster than any warming rate sustained during the twentieth century. And if such rapid warming happened then there would still be the problem of explaining where the “committed warming” has been hiding over the last 12 years.
The FR paper is an attempt to show that the “committed warming” has been hidden by other effects. That is a clear attempt to make a post hoc adjustment to the prediction as an excuse for the failure of the understanding which predicted the “committed warming” instead of revisiting the understanding.
Secondly, the excuse is self-defeating. If natural forces have overwhelmed the “committed warming” then those same natural forces may have caused the warming of the twentieth century. Therefore, far from supporting the AGW-hypothesis, it removes any need for that hypothesis.
In other words, the understanding summarised in the AGW-hypothesis requires revisiting. And the FR paper – whatever its merits – is unwitting evidence that the understanding needs to be revisited.
Richard
Let me see if I understand this paper. IPCC projections have not been able to accurately predict global temps. So this paper attempts to remove the ENSO, volcanic, and solar variations from the recent temps to prove that the IPCC was actually right. So they use a multivariate analysis and then a linear regression analysis looking for correlation. I didn’t see the correlation coefficients stated, but even if they were fairly robust… Questions:
1) I am not familiar with multivariate analysis, but I do understand linear regression analysis, and I doubt it’s going to be very accurate, as the combined forcings would have to be linear. Can anyone explain why I would be wrong?
2) How accurate can the input data be? How well can we measure and calculate the global forcing of the solar activity and ENSO (Bob covered this)?
3) Do the IPCC projections really omit ENSO, volcanoes, and solar?
4) This paper really seems like an attempt at making an argument against skeptics instead of trying to advance science; does anybody else see it this way?
That shade of “don’t look at me pink” is simply ridiculous. I’ve tried a few times to flip to the AGW side, and stuff like that is just a killer.
S Green says:
November 28, 2012 at 11:53 am
With respect I disagree with Bob.
Only a crimatologist would think it ‘OK’ to adjust out features they don’t like. I have said this more times than I can remember: ‘You never, never, never adjust raw data.’ It is a crime to tamper with raw data, and you should be ashamed to have even suggested it.
What am I? A fully qualified engineer in radio, electronics, electricity and telecommunications, and an ex-chartered physicist. Adjusting data for any reason is totally unacceptable UNLESS you have well defined, unequivocal proof and mathematical reasons for so doing.
The Warmists love indulging in sophistry. And with regard to ocean temperatures their argument is particularly threadbare.
It’s an absolute fact that ice is an insulating material, and you only have to look at a weather map to see that the area of open ocean in the Arctic is losing heat to the atmosphere…but hang about, the atmosphere is demonstrating a distinct ‘lack of warming’…
Couldn’t they just try and be a bit consistent?
trafamadore says: “So, if you were to recalculate/revise the red line in Fig 1 of the Rahmstorf paper, what would it look like? Isn’t that the bottom line?”
Bob Tisdale says: “No. You apparently missed something. Rahmstorf et al assume CO2 warmed the oceans, but there’s no anthropogenic global warming component in the sea surface temperature data for the past 30 years.”
Yes. I _did_ apparently miss something. I went and read the Rahmstorf paper and I found it perfectly understandable even though I am not a climate specialist (I am used to technical papers, but I don’t think it matters; the paper was written very simply). But I seem to have severe trouble understanding the logic of your post. Sorry.
Okay. So pick one or more:
-> The ocean is not warming (and reports like Kennedy et al 2011 are just wrong) (but the oceans rising is then a mystery, what with not enough water being added just yet.)
-> The ocean is warming by natural causes that just by chance matches the GW on land. (Is this the basis for your myth, that scientists think the ocean is warming due to CO2 like on land but you think that the warming is due to natural oscillations?) (If so, one does not preclude the other, right?)
-> I still have the choices wrong and fail miserably at reading blog posts.
-> Add a choice, but speak simply.
Global temperature data can be adjusted for solar variations, volcanic aerosols and ENSO using multivariate correlation analysis
How does this jibe with Santer’s 17 years? Are they also based on adjustments or are they based on what actually happens? Will these people and Santer come up with a unified statement when we reach the 17 year mark with no warmth?
P. Solar says:
November 28, 2012 at 9:36 am
What a fraud this Rahmstorf is!
They’ve been saying for years it’s NOTHING to do with the sun; now they’re adjusting for TSI!
Tacitly admitting that the models totally fail to model the major climate variations, they now make post hoc adjustments TO THE DATA to make it fit the models.
##################
On the contrary nobody who believes in AGW thinks the sun has no effect.
The effect of the sun is represented by TSI.
The problem is this. Let’s take AR4. The models are run with historical forcing. That means certain values for TSI, values for methane, CO2, aerosols, etc. AND during the forecast period
the models are run with “projected forcings”: scenarios, what-if sensitivity studies.
To do this the modelers assume that certain values will stay constant and they vary other values.
For AR3 and AR4, solar input was held constant out to the future. With AR4 that means they assumed higher solar forcing than actually happened. For volcanoes they assumed no volcanic forcing.
Let’s do a simple example. Suppose I write a model to predict how far a golf ball will travel if it is hit by a club going 120 mph at impact. Pretty simple physics. I set up my assumptions:
clubhead speed = 120 mph, windspeed = 0, standard atmosphere, altitude sea level,
ball size etc., drag, blah, blah, blah.
And I predict that the ball will carry 290.45 yards. GIVEN the assumptions are met, that is, GIVEN the test goes off as planned.
Now I run a field test and collect observations to test my model. I do 10 hits, but it’s hard to control the speed of the club, so actually my tests happened at 116 to 118 mph.
And the windspeed wasn’t zero for every shot; there was a faint 0.5 meter per second breeze in my face, and so forth. It’s not a lab, I cannot control the conditions. I can get close, but instead of testing 120 mph, I actually ended up testing 117, let’s say.
My test results have a ball landing distance of 287 yards.
What do I do?
1. retest until I can absolutely control all the conditions ( hehe, right )
2. Rerun my model with the ACTUAL parameters.
3. Account for the differences by adjusting the observations, removing noise, outliers, etc.
4. Invoke Popper and claim that the model is wrong and Newton’s laws of gravity,
which the model relies on, are suddenly disproven.
Err. [snip. You know better than that. ~ mod.] will do 4.
Most of the time you will do #2, or #1 IFF it is feasible and cost effective.
But if you can’t do 1 (we can’t control the sun) and you can’t do 2 (run the model over),
then the only thing you can do is 3.
3 is ugly. 3 is hard. 3 is prone to confirmation bias, because you tend to only correct those things that go in your favor to bring results in line with predictions, but option 3 is sometimes the only thing you can do for the present time. It’s a band-aid.
What they need to do is run the old models OVER with the actual forcings as observed. This would mean:
A) they would have to “freeze” models and keep “frozen” versions available for retest.
B) allocate computer resources to do this.
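To put numbers on the golf-ball example, here is a toy sketch. The carry formula and the figures in it are made up purely for illustration; it only shows the difference between option 2 (rerun the model with the conditions that actually occurred) and option 3 (adjust the observation back to the planned conditions).

```python
def carry_yards(club_speed_mph, headwind_ms=0.0):
    """Toy carry model: ~2.42 yards per mph of clubhead speed,
    minus ~2 yards per m/s of headwind. Made up for illustration."""
    return 2.42 * club_speed_mph - 2.0 * headwind_ms

predicted = carry_yards(120)               # prediction under planned conditions (~290.4 yd)
observed = 287.0                           # field result, hit at ~117 mph into a 0.5 m/s breeze

# Option 2: rerun the model with the conditions that actually occurred.
rerun = carry_yards(117, headwind_ms=0.5)

# Option 3: adjust the observation back to the planned conditions.
adjusted_obs = observed + (carry_yards(120) - carry_yards(117, headwind_ms=0.5))

print(f"predicted {predicted:.1f} yd, rerun {rerun:.1f} yd, adjusted obs {adjusted_obs:.1f} yd")
```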
Gail Combs says:
… CRU’s Dr. Phil Jones, world renowned climatologist, can’t even plot a trend in Excel!
Gail, there is a saying that circulates around the internet regarding spreadsheets and statistics, “don’t!” Run a search on “friends don’t let friends use Excel/spreadsheets” for statistics. You’ll find entries such as:
http://www.statisticalengineering.com/Weibull/excel.html
Personally, I quit using Excel the first time it offered a negative number for a variance. I downloaded R and started seriously trying to learn its syntax. To be fair, all spreadsheets have similar weaknesses, and at present Excel is no worse than others. It used to be one of the very worst, though, and given its odd history could become so again at any time.
Mosh.
I really must go through my back e-mails.
My late friend and I (mainly Jan; he was the brains, I supplied the intuitive suggestions) devised a model using electronic analogues which simulated the c20th temperature increase using TSI from Leif’s own data.
No need for any GHG forcing at all.
DaveE.
5. We’ve missed something, like perhaps the weight of the club head or maybe something we don’t even know about. Maybe we just don’t know.
What I do know is someone’s got to Mosh.
DaveE.
[Reply: already snipped. ~ mod.]
Steven Mosher says:
November 28, 2012 at 2:39 pm
You naughty man, you snuck in the “d” word. That’ll have your warmist friends licking honey from your navel no doubt.
Yes, the whole AGW versus AGW-skeptic debate will boil down to an inductive versus deductive show-down. Your representation of Popper is incorrect.
The philosopher of science Karl Popper argued for science to be deductive, based on economic interpretation of measured facts, which can readily be experimentally falsified. However, the age of cheap computing power has caused researchers to fall into the alluring trap of inductive “science”, in which assumptions and hypotheses are built up on each other like a house of cards.
I looked at some dictionary definitions and other reference sources about these two words, inductive and deductive, since their meanings might be slipping and blurring. There was an interesting visual thesaurus linking words in a map of proximity and connectivity. Inductive was linked to synthetic and synthesis while deductive was linked to analysis and analytic. I like to think of it in terms of the length of the paths that one draws between observation and conclusion. Short and economic (“parsimonious”) = deductive; long and convoluted involving multiple serial assumptions = inductive.
Two teams of scientists, team inductive and team deductive, were given a task: design a speedometer for a car – a device for measuring and displaying the speed that a car is travelling.
So team inductive got to work. This team included a fair number of physicists with computational and modelling skills. It became immediately clear to them that this was a task requiring the processing of multiple factors all impacting on speed: what was the energy and force driving the car forward, and what was the origin of this energy? Chemical and thermodynamic energy from the combustion of fuel needed to be carefully evaluated and modelled. What was the efficiency of this conversion from chemical to kinetic energy – how much was lost in the inefficiency of the motor? Several team members were assigned to modelling these processes. How much energy was lost as friction and heat through the gas exhaust? Simulation of the turbulent fluid flow and associated heat fluxes along the exhaust pipe was clearly called for.
Then of course there were hours of immense fun to be had modelling and evaluating the fluid friction of the air passing over the car. This of course was modified by the dynamics of the air itself – what was the prevailing wind direction? Then of course there was the friction between the tyre and the road. An important input here was the curvature of the path of the travelling car and the associated sideways force and geometric distortion of the tyre, adding heat to the tyre and affecting its friction, and whether or not this induced tyre-to-road shear and slippage, each in turn calling for further modelling inputs. Of course tyre dynamics were temperature-related, so local climate was again a critical factor and another useful variable.
So it became clear to team inductive that to have any hope whatsoever of measuring speed in a credible way, to give an output that would be accepted by internationally recognised car speed scientists associated with the high profile journals and societies, a large number of data inputs were needed: chemical measurement probes in the fuel tank to assess the fuel’s chemical potential energy; probes within the ignition chamber to assess, on a millisecond basis, pressures and temperatures to elucidate combustion energy. Then multiple sensors were required in the exhaust pipe to provide input for fluid flow modelling of the exhaust gases. Sensors were also required at many locations on the car’s surface to assess airflow and boundary layer turbulence, as the exact location of the laminar-turbulent transition was a key factor in getting the drag models to work reliably. Sensors were needed within the tyres also. Other factors and associated sensor inputs were also identified and subject to in-depth research and computer simulation.
Then team deductive got to work. They measured the circumference of the wheels. And set up a sensor to measure the rate of rotation of the wheels. From this they got a speedometer.
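In code, team deductive’s entire “model” is a couple of lines (the wheel size and rotation rate below are made up for illustration):

```python
import math

def speed_kmh(wheel_diameter_m, wheel_rpm):
    """Deductive speedometer: distance per wheel revolution times
    revolutions per unit time. No fuel chemistry, no aerodynamics."""
    circumference_m = math.pi * wheel_diameter_m
    metres_per_hour = circumference_m * wheel_rpm * 60
    return metres_per_hour / 1000.0

print(speed_kmh(0.65, 800))   # a 0.65 m wheel at 800 rpm -> roughly 98 km/h
```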
trafamadore says: “The ocean is warming by natural causes that just by chance matches the GW on land.”
Your choice here is pretty close to reality, but it’s wrong.
The oceans warm naturally. That is, there is no apparent anthropogenic global warming component in the warming of the oceans. Land surface air temperatures simply mimic and exaggerate the warming of the oceans. If there is a measurable CO2-driven anthropogenic component in the warming of global land surface air temperatures, it’s on top of the response to the natural warming of the oceans, and it also shares the warming responsibilities with other forcings and factors, like land-use change, black soot on snow, poorly sited surface stations, overly zealous corrections to temperature records, etc.
Sorry. I’ve discussed this in numerous blog posts and videos, but I didn’t mention it this time. The reason: it basically wasn’t necessary. This was a discussion of the failings of the methods Rahmstorf et al used in their attempt to remove ENSO from the surface temperature record. This post showed very basically why they couldn’t do it.
Werner Brozek says:
“How does this jibe with Santer’s 17 years? Are they also based on adjustments or are they based on what actually happens? Will these people and Santer come up with a unified statement when we reach the 17 year mark with no warmth?”
We both know what will happen: at 17 years they will move the goal posts.
Mosh, if the models weren’t continually changed, then your proposed actions would make sense, however the models being used today are continually modified trying to backfit parameters that suddenly appear as, gasp, part of the nature of the climate. Low clouds, high clouds, aerosols, water particle sizes, vulcanism, the list goes on. You continue to pretend that the code being executed is unchanged, with only parametric changes, and we both know that isn’t the case. There have repeatedly been massive rewrites of the code, and not only to add additional parameters, but to eliminate bugs that have been there, hmm how long? You and I both know that the paleo-computations are still limited in their resolution, and couldn’t predict the worldwide weather for a month out, let alone for a year. You could even have the certain knowledge of the boundary conditions (now) and predict from now to 1 year from now – we both know how well that will work. Since climate is weather over time, don’t you think that in all honesty, the inability to predict the weather makes predicting the climate a “little” suspect. It’s even more suspect when people go to great lengths to claim that the world wasn’t ever this warm. Wheat grew in Greenland. Steve, it takes 110 frost free days for wheat to germinate and ripen. Or maybe you believe the Vikings had Russian Winter Red. They raised wheat and grapes in Greenland for years.
So tell me, when will the electrical fields within the atmosphere be included in the predictions? What effects do electrical discharges like the Aurora Borealis have on stratospheric clouds? Can sprite discharges modify the path of the jet streams? What about the low frequency resonances in the ionosphere; what is their contribution? Are these second order, third order effects? Who’s studied it?
There is so much we don’t know about our world, and the hubris of those who think they know it all, then hack the data to make it fit their views is unbelievable.
Don’t bother to give one of your typical cryptic, content free answers. JP and the gang will eat it up.
S Green says: “As an AGW proponent I do not recognize either figure 2 or 3 as describing how I portray the warming of sea surface temperatures, even though they are labelled as such.”
Figure 2 represents the sea surface temperature of the East Pacific Ocean from pole to pole, and from the dateline to the Isthmus of Panama, using the NOAA Reynolds OI.v2 dataset. It’s the best sea surface temperature dataset available. In fact, it’s been called the “truth.” The coordinates used are 90S-90N, 180-80W. The sea surface temperature anomalies of the East Pacific have not warmed in 30 years. That’s reality.
Figure 3 represents the sea surface temperature anomalies of the Rest of the World, 90S-90N, 80W-180, with the trend removed. Same dataset. It shows the sea surface temperatures for the Rest of the World diverging from the ENSO index during the 1988/89 and 1998-01 La Niña events. Because those divergences exist, and because they are ENSO related, Rahmstorf et al and similar papers do not account for the impact of those divergences on the warming of the Rest-of-the-World data when they attempt to remove ENSO from the surface temperature records. Hence, their claim that the warming is caused by CO2 is blatantly and obviously wrong.
S Green says: “The claim FR (and also me and others) make is that ENSO adds noise onto global temperature (the transitory 1998 spike for example) and so it would be useful to attempt to adjust this out and see what remains.”
Your claims and assumptions are obviously and fatally flawed. Figure 3 illustrates the reason. ENSO is a process that creates and releases heat naturally. The divergences result from warm water that’s left over from the 1986/87/88 and 1997/98 El Niño events. There is no ENSO index that can account for that warm water.
It’s pretty obvious. I can’t understand why that’s so difficult to recognize, S Green.