By Dr. Sebastian Lüning, Prof. Fritz Vahrenholt and Pierre Gosselin
One of the main points of criticism of the CO2-dominated climate models is that they fail to reproduce the temperature fluctuations of the last 10,000 years. This surprises no one, as these models assign scant climate impact to major natural factors such as the sun. As numerous IPCC-ignored studies show, the post-Ice-Age temperature curve for the most part ran synchronously with fluctuations in solar activity. The obvious discrepancy between modeled theory and measured reality has been brought up time and again.
The journal Climate of the Past Discussions has published a new paper written by a team led by Gerrit Lohmann of the Alfred Wegener Institute (AWI) in Bremerhaven, Germany. The group compared geologically reconstructed ocean-temperature data over the last 6000 years to results from modeling. If the models were indeed reliable, as is often claimed, then there would be good agreement. Unfortunately in Lohmann’s case, agreement was non-existent.
Lohmann et al plotted the geologically reconstructed temperatures and compared them to modeled temperature curves from the ECHO-G Model. What did they find? The modeled trends underestimated the geologically reconstructed temperature trend by a factor of two to five. Other scientists have come up with similar results (e.g. Lorenz et al. 2006, Brewer et al. 2007, Schneider et al. 2010).
The comprehensive temperature data collection of the Lohmann team distinctly shows the characteristic millennial-scale temperature cycle for many of the regions investigated; see Figure 1 below. Temperatures fluctuated rhythmically over a range of one to three degrees Celsius. In many cases these are suspected to be solar-synchronous cycles, like those the American Gerard Bond demonstrated more than 10 years ago using sediment cores from the North Atlantic. And here’s an even more astonishing observation: in more than half of the regions investigated, temperatures have actually fallen over the last 6000 years.
Figure 1: Temperature reconstructions based on Mg/Ca method and trends with error bars. From Lohmann et al. (2012).
What can we conclude from all this? Obviously the models do not even come close to properly reproducing the reconstructed temperatures of the past. This brings us to a fork in the road, with each path leading to a completely different destination: 1) geologists would likely trust their temperatures and have doubts concerning the reliability of the climate model. Or 2) mathematicians and physicists think the reconstructions are wrong and their models correct. The latter is the view the Lohmann group initially leans toward. We have to point out that Gerrit Lohmann studied mathematics and physics and is not a geoscientist. Lohmann et al prefer to speculate on whether the dynamics between ocean conditions and the organisms could have skewed the temperature reconstructions, and so they conclude:
“These findings challenge the quantitative comparability of climate model sensitivity and reconstructed temperature trends from proxy data.”
Now comes the unexpected. The scientists then contemplate out loud whether the long-term climate sensitivity has perhaps been set too low. In that case additional positive feedback mechanisms would have to be assumed. A higher climate sensitivity would then amplify the Milankovitch cycles to the extent that the observed discrepancy would disappear, according to Lohmann and colleagues. If this were the case, then one would also have to calculate an even higher climate sensitivity for CO2, which on a century scale would produce even greater future warming than what the IPCC has assumed up to now. An amazing interpretation.
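To make that scaling argument concrete, here is a minimal sketch (our illustration, not from the paper; the sensitivity values are hypothetical, and only the 3.7 W/m² forcing for doubled CO2 is a standard figure) of why a single sensitivity number amplifies every forcing alike:

```python
# Illustrative sketch: in the simple equilibrium picture, one sensitivity S
# applies to any forcing dF, so raising S to fit the reconstructions also
# raises the projected CO2 warming. Numbers are illustrative, not the paper's.
def equilibrium_warming(sensitivity, forcing):
    """dT (K) = sensitivity (K per W/m^2) * forcing (W/m^2)."""
    return sensitivity * forcing

F_2XCO2 = 3.7  # canonical radiative forcing for doubled CO2, W/m^2

# 0.8 K/(W/m^2) gives the familiar ~3 K per doubling; 1.2 is a hypothetical
# raised value of the kind the discrepancy argument would imply.
for S in (0.8, 1.2):
    print(f"S = {S}: 2xCO2 warming ~ {equilibrium_warming(S, F_2XCO2):.1f} K")
```

With the raised sensitivity, the same 3.7 W/m² forcing yields roughly 4.4 K per doubling instead of roughly 3.0 K, which is the point the article finds so remarkable.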
The thought that the climate model might be fundamentally faulty regarding the weighting of individual climate factors does not even occur to Lohmann. There’s a lot that indicates that some important factors have been completely under-estimated (e.g. sun) and other climate factors have been grossly over-estimated (e.g. CO2). Indeed the word “solar” is not mentioned once in the entire paper.
So where does their thought-blockage come from? For one, physicist Lohmann comes from the modeling side and stands firmly behind the CO2-centred IPCC climate models. In their introduction, Lohmann & colleagues write:
“Numerical climate models are clearly unequalled in their ability to simulate a broad suite of phenomena in the climate system […]”
Lohmann’s priorities are made clear in the very first sentence of the paper:
“A serious problem of future environmental conditions is how increasing human industrialisation with growing emissions of greenhouse gases will induce a significant impact on the Earth’s climate.”
Here Lohmann makes it clear that alternative interpretations are excluded. This is hardly a scientific approach. A look at Lohmann’s resume sheds more light on how he thinks. From 1996 to 2000 Lohmann worked at the Max Planck Institute for Meteorology in Hamburg with warmists Klaus Hasselmann and Mojib Latif, both of whom feel very much at home at the IPCC. So in the end what we have here is a paper that proposes using modeled theory to dismiss real, observed data. Science turned on its head.
[Added: “SL wants to apologize to the authors of the discussed article for the lack of scientific preciseness in the retracted sentences.” ]
[Note the above text was changed on 4/16/12 at 1:30 PM PST at the request of Dr. Sebastian Lüning – Anthony]
The model must be wrong then.
Keep trying.
Since Lohmann et al are already convinced they know the answer to climate change, its causes and its effects, it is clear that making the stupid models echo their beliefs serves nothing but propaganda. Sorry for stating the obvious.
I would call Lohmann an amateur modeler and a True Believer. He should spend some time with meteorologists.
I am not one but I can use google:
Search for “Convection parameterization”. A goldmine. An example:
http://www.met.tamu.edu/class/metr452/models/2001/convection.html
“Just as all people have unique skills and abilities, the convective parameterization schemes of the various models will do different things well and different things poorly. Often, the skill of the model depends on the exact location and time for which it is forecasting. For example, the AVN under predicts mesoscale convective events across the Great Plains during the warmest part of the year. The ETA parameterizes convection differently over water than land, thus it often overestimates precipitation along the Gulf and Atlantic coasts. The ETA is also plagued by a greater amount of convective feedback than the NGM. The MRF now accounts for factors such as evaporation of falling rain like the RUC and NGM. […]”
“This brings us to a fork in the road, with each path leading to a completely different destination: 1) geologists would likely trust their temperatures and have doubts concerning the reliability of the climate model. Or 2) mathematicians and physicists think the reconstructions are wrong and their models correct.”
Classic ‘climate science’ in action – nothing else needs to be said.
But how do we know what climate was 6,000 years ago? I thought paleoclimate reconstructions were all proxy-pseudo science?
This is an unjustified generalisation. Often it is mathematicians who are the most sceptical about the accuracy of these models because they are the ones with the greatest appreciation of the difficulties involved.
“Science turned on its head”. Any similarity to the Jesuit defence of a universe centred on a flat Earth, including the use of an Inquisition to convince sceptics of the error of their ways, is entirely in the mind of the observer.
I’ve never understood why so many on both sides of the climate debate seem to think that climate sensitivity is static, i.e. that the feedback effects to warming (like increased CO2, changes to albedo, etc.) are the same at every level of global temperature. I would assume just the opposite: the effect of a change in one forcing during an Ice Age could be very different than the same change made during an Interglacial.
There is a popular phrase that neatly describes the state of being so absorbed in one’s own constructions that one’s direct view of external reality is occluded.
Eklund says:
April 15, 2012 at 12:25 pm
“But how do we know what climate was 6,000 years ago? I thought paleoclimate reconstructions were all proxy-pseudo science?”
Eklund, you should read your link before making assumptions. Pat Frank thoroughly explains the difference between scientific proxies and pseudo-scientific proxies.
I decided to read the full article.
Basically, it is a bunch of guys flailing around, desperately trying to find excuses for why their models don’t match the realities of observed data. Bottom line: the data has to be wrong.
Anyhow, it is funded: “within the priority programme Interdynamik of the German Science Foundation (DFG).” So it’s just another case of grant addiction BS.
The computer models don’t fit recorded historical data because they are written to forecast future climate. Has anyone considered using the historical data as the basis for a computer program, to reproduce that data and form the basis for predicting future climate? Seems to me it would be more accurate than the garbage being used today.
Well written, Anthony. The surprise ending was hilarious. That which is assumed, no matter how unlikely, must be true.
A classic example of the first rule of climate science: if the model and reality differ in value, it’s reality that is wrong.
Classic pathological science.
http://thepointman.wordpress.com/2011/10/14/global-warming-and-pathological-science/
Pointman
Schitzree says:
April 15, 2012 at 12:30 pm
“I’ve never understood why so many on both sides of the climate debate seem to think that climate sensitivity is static, i.e. that the feedback effects to warming (like increased CO2, changes to albedo, etc.) are the same at every level of global temperature. I would assume just the opposite: the effect of a change in one forcing during an Ice Age could be very different than the same change made during an Interglacial.”
Schitzree, what you’re describing is a nonlinear feedback, and everyone agrees that at least some of the feedbacks are nonlinear. Start with the “default negative feedback” that even the most hardline warmists don’t deny: the blackbody (or graybody) radiation of the surface, described by the Stefan-Boltzmann Law, is proportional to the 4th power of the absolute temperature, so it is non-linear.
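As a minimal numerical sketch of that point (our illustration, not from the comment): because outgoing blackbody flux goes as T⁴, the Planck response 4σT³ is noticeably larger at today’s mean surface temperature than at ice-age-like temperatures.

```python
# Sketch of the Planck (Stefan-Boltzmann) nonlinearity: F = sigma * T^4,
# so the feedback dF/dT = 4 * sigma * T^3 grows with temperature rather
# than being one fixed constant for all climate states.
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def planck_feedback(T):
    """dF/dT in W m^-2 K^-1 for a blackbody at absolute temperature T (K)."""
    return 4.0 * SIGMA * T**3

# Illustrative temperatures: a glacial-like value, Earth's effective
# radiating temperature, and the modern mean surface temperature.
for label, T in [("glacial-ish", 250.0), ("Earth effective", 255.0), ("surface mean", 288.0)]:
    print(f"{label:>15}: dF/dT = {planck_feedback(T):.2f} W/m^2/K at {T:.0f} K")
```

The response rises from roughly 3.5 W/m²/K at 250 K to roughly 5.4 W/m²/K at 288 K, so even this most basic feedback is state-dependent.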
The fundamental error in all models and here again, is that they assume that every change of 1 W/m2 is the same, whatever the source. As if 1 W/m2 change in solar (specifically in the UV spectrum in the stratosphere on ozone) has the same effect as 1 W/m2 change in IR by CO2 in the lower troposphere.
Nobody who ever worked with a real process in the real world would assume such a thing, but that is exactly what happens with the climate models…
but, but, but…
Have they tried an ensemble of models?
You know, even if all models are wrong, maybe the ensemble is right.
/sarc
xtron says:
April 15, 2012 at 12:59 pm
“The computer models don’t fit recorded historical data because they are written to forecast future climate. Has anyone considered using the historical data as the basis for a computer program, to reproduce that data and form the basis for predicting future climate? Seems to me it would be more accurate than the garbage being used today.”
The GCMs are tested against the instrument record of global temperatures. They don’t perform well even there, especially given that they are designed to deliver the catastrophic warming in 2100 that the IPCC demands and that first-stage-thinker physicists like Lohmann expect. They get it roughly right; the rest is papered over by assuming a history of aerosol forcing that explains the deviation.
“Aerosols and “cloud lifetime effect” cited as “enormous uncertainty” in global radiation balance”
http://wattsupwiththat.com/2009/10/06/aerosols-and-cloud-lifetime-effect-cited-as-enormous-uncertainty-in-global-radiation-balance/
…they actually USE this uncertainty to “fix” the half-broken hindcasting of the models in the test runs…
AnonyMoose says:
April 15, 2012 at 1:01 pm
“Well written, Anthony. The surprise ending was hilarious. That which is assumed, no matter how unlikely, must be true.”
The compliment should go to Pierre Gosselin, who translated for Vahrenholt and Lüning, I assume… and runs his own blog,
http://notrickszone.com/
REPLY: True, I didn’t write it, nor translate it. The kudos go to them not me – Anthony
I’ve been in the den of the lion. In a previous life I was responsible for a visualization product which could get virtually arbitrarily dense pixel resolution provided the data was there. In pitching the system to MPI and KRZH, we adjourned to lunch. I made the palpable mistake of asking how good their resolution was on atmospheric phenomena – “For paleo-reconstructions we can do cubes 100 Km on a side.” I bit my tongue and finished my lunch.
They didn’t buy the supercomputer from us, but they did buy my viz product. Now they can see their fantasies in UEBER RESOLUTION. Not much science though…
Lohmann & colleagues write: “Numerical climate models are clearly unequalled in their ability to simulate a broad suite of phenomena in the climate system […]”
——————-
Sure Lohmann, but your problem is that your simulations do not agree with the observed facts, so your simulations are just fantasies. When did you forget this basic rule for scientists?
Clearly Lohmann fails due to his own cognitive biases, which he allows to get in the way of considering all the possibilities. He really should consider handing in his doctorate and hanging up his hat, as he is not well suited to a career in science.
People like Lohmann are not scientists. They are technical hacks who lack the critical thinking skills needed to be real scientists.
Eklund says: “But how do we know what climate was 6,000 years ago? I thought paleoclimate reconstructions were all proxy-pseudo science?”
Thanks for the great link, Eklund. It was fascinating, well written, and another nail in the coffin of AGW (as if we needed any more). Pity you didn’t read it all the way to the end. Oh, you just read the headline? That would explain it.
Yesterday this site noted that all of the Hadley predictions and proclamations were available for a retroactive review of accuracy: http://wattsupwiththat.com/2012/04/14/met-office-coping-to-predictions/
I note that in COP4, Hadley states unequivocally that its models are accurate in a 6,000-year retrospective model. Because these are PDFs, I cannot copy the paragraph, but it is on page 4, under the heading Uncertainty In Climate Change Predictions.
http://wattsupwiththat.files.wordpress.com/2012/04/cop4.pdf
On the whole climate sensitivity thing: the modellers appear to rely on the correlation between Antarctic temperature and CO2 implying cause and effect in order to diagnose the overall sensitivity. But what if the temperature fluctuations in the Arctic and Antarctic are caused almost entirely by changes in the north-to-south (equator-to-pole) rates of heat/energy flow, rather than by the local effects of greenhouse gases? If that is the case, then the whole climate sensitivity argument is turned on its head.
If the annual mean temperature changes at the Vostok core site (for instance) result from hemispheric scale changes in equator to pole insolation gradients then the role for greenhouse gases is reduced and there is no need to invoke outrageous amounts of positive feedback.