New AWI Research Confirms: Climate Models Cannot Reproduce Temperatures Of The Last 6000 Years

By Dr. Sebastian Lüning, Prof. Fritz Vahrenholt and Pierre Gosselin

One of the main points of criticism of the CO2-dominated climate models is that they fail to reproduce the temperature fluctuations of the last 10,000 years. This surprises no one, as these models assign scant climate impact to major factors such as the sun. As numerous IPCC-ignored studies show, the post-Ice Age temperature curve for the most part ran synchronously with fluctuations in solar activity. The obvious discrepancy between modeled theory and measured reality has been brought up time and again.

The journal Climate of the Past Discussions has published a new paper written by a team led by Gerrit Lohmann of the Alfred Wegener Institute (AWI) in Bremerhaven, Germany. The group compared geologically reconstructed ocean-temperature data over the last 6000 years to results from modeling. If the models were indeed reliable, as is often claimed, then there would be good agreement. Unfortunately in Lohmann’s case, agreement was non-existent.

Lohmann et al. plotted the geologically reconstructed temperatures and compared them to modeled temperature curves from the ECHO-G model. What did they find? The modeled trends underestimated the geologically reconstructed temperature trend by a factor of two to five. Other scientists have come up with similar results (e.g. Lorenz et al. 2006, Brewer et al. 2007, Schneider et al. 2010).
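To illustrate what such a trend comparison involves, here is a minimal sketch in Python, assuming made-up reconstructed and modeled SST series rather than Lohmann et al.'s actual data or code: fit a linear trend to each series over the 6000-year interval and compare the slopes.

```python
import numpy as np

# Purely illustrative: hypothetical series, NOT Lohmann et al.'s data or method.
np.random.seed(0)
years_bp = np.linspace(6000, 0, 61)  # 6000 years before present, 100-year steps

# A reconstructed series cooling ~0.5 C over the interval, and a modeled series
# cooling only ~0.15 C, each with a little noise added.
reconstructed = 0.5 * (years_bp / 6000.0) + 0.10 * np.random.randn(years_bp.size)
modeled = 0.15 * (years_bp / 6000.0) + 0.05 * np.random.randn(years_bp.size)

def trend_per_kyr(t_bp, series):
    """Least-squares slope in degrees C per 1000 years of elapsed time."""
    slope, _intercept = np.polyfit(-t_bp / 1000.0, series, 1)  # time runs toward the present
    return slope

recon_trend = trend_per_kyr(years_bp, reconstructed)
model_trend = trend_per_kyr(years_bp, modeled)
print(f"reconstructed trend: {recon_trend:+.3f} C/kyr")
print(f"modeled trend:       {model_trend:+.3f} C/kyr")
print(f"ratio (recon/model): {recon_trend / model_trend:.1f}")
```

With numbers like these the ratio of the two slopes comes out around three, which is the kind of mismatch the "factor of two to five" statement refers to.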

The comprehensive temperature data collection of the Lohmann team distinctly shows the characteristic millennial-scale temperature cycle for many of the regions investigated (see Figure 1 below). Temperatures fluctuated rhythmically over a range of one to three degrees Celsius. In many cases these are suspected to be solar-synchronous cycles, like the ones the American Gerard Bond demonstrated using sediment cores from the North Atlantic more than ten years ago. And here is an even more astonishing observation: in more than half of the regions investigated, temperatures have actually fallen over the last 6000 years.


Figure 1: Temperature reconstructions based on Mg/Ca method and trends with error bars. From Lohmann et al. (2012).

What can we conclude from all this? Obviously the models do not even come close to properly reproducing the reconstructed temperatures of the past. This brings us to a fork in the road, with each path leading to a completely different destination: 1) geologists would likely trust their reconstructed temperatures and doubt the reliability of the climate model, or 2) mathematicians and physicists would suspect the reconstructions are wrong and their models correct. The latter is the view the Lohmann group is initially leaning toward. We have to point out that Gerrit Lohmann studied mathematics and physics, and is not a geo-scientist. Lohmann et al. prefer to speculate about whether the dynamics between ocean conditions and the proxy organisms could have distorted the temperature reconstructions, and so they conclude:

These findings challenge the quantitative comparability of climate model sensitivity and reconstructed temperature trends from proxy data.

Now comes the unexpected. The scientists then contemplate out loud whether the long-term climate sensitivity has perhaps been set too low. In that case additional positive feedback mechanisms would have to be assumed. A higher climate sensitivity would then amplify the Milankovitch cycles to the extent that the observed discrepancy would disappear, according to Lohmann and colleagues. If this were the case, then one would also have to calculate an even higher climate sensitivity for CO2, which on a century scale would produce an even greater future warming than what the IPCC has assumed up to now. An amazing interpretation.

The thought that the climate model might be fundamentally faulty in its weighting of the individual climate factors does not even occur to Lohmann. There is much to indicate that some important factors have been badly under-estimated (e.g. the sun) and others grossly over-estimated (e.g. CO2). Indeed the word "solar" is not mentioned once in the entire paper.

So where does this thought-blockage come from? For one, physicist Lohmann comes from the modeling side and stands firmly behind the CO2-centred IPCC climate models. In their introduction Lohmann and colleagues write:

Numerical climate models are clearly unequalled in their ability to simulate a broad suite of phenomena in the climate system […]

Lohmann’s priorities are made clear already in the very first sentence of their paper:

A serious problem of future environmental conditions is how increasing human industrialisation with growing emissions of greenhouse gases will induce a significant impact on the Earth’s climate.

Here Lohmann makes it clear that alternative interpretations are excluded. This is hardly the scientific approach. A look at Lohmann’s resume sheds more light on how he thinks. From 1996 to 2000 Lohmann worked at the Max Planck Institute for Meteorology in Hamburg with warmists Klaus Hasselmann and Mojib Latif, both of whom feel very much at home at the IPCC. So in the end what we have here is a paper whose science proposes using modeled theory to dismiss real, observed data. Science turned on its head.

 

[Added: “SL wants to apologize to the authors of the discussed article for the lack of scientific preciseness in the retracted sentences.”  ]

 


[Note: the above text was changed on 4/16/12 at 1:30 PM PST at the request of Dr. Sebastian Lüning – Anthony]

60 Comments
LazyTeenager
April 16, 2012 6:54 am

From the paper abstract
“Alkenone-based SST records show a similar pattern as the simulated annual mean SSTs, but the simulated SST trends underestimate the alkenone-based SST trends by a factor of two to five. ”
So I think this is interpreted as the alkenone and model values match, except for a wobbly scaling factor, but the Mg/Ca ratio values do not. This also implies that the Mg/Ca values do not match the alkenone values.
The proxies do not agree with each other. So it could in fact be plausible that the Mg/Ca proxies are measuring something else and the modelling is not as bad as represented above.
Probably need to look at the paper and not just the abstract.

Dr. Lurtz
April 16, 2012 7:00 am

RHS says:
April 15, 2012 at 9:24 pm
If the models are wrong, let's just get a new model. Claudia Schiffer, Kathy Ireland, Kate Moss, etc. See, there are plenty of models to choose from!
/Cheap Humor…
I can’t afford a new model; I am paying for new windmills.

rgbatduke
April 16, 2012 7:09 am

Wow. That was a good one. Double interface error. The formula was F = mg = ma, only F, g and a were vectors. And I have no idea why the thing posted as it did, in mid-typing, as I hit enter to start a new line. Oh, well.
…according to my wind-chimes in the back yard) it is basically impossible to predict the precise motion of that piece of paper. If one drops it many times, one drop might go straight down, the next it might be blown two meters laterally on the way down. Yet I teach Newton’s Laws and gravitation, and believe them to be true in context and useful as well. It’s just a mistake to think that the simple model is really adequate to predict a much more complex reality.
Models are therefore enormously useful. On the one hand, they give us a quick and dirty way to predict and understand reality, one that often works pretty well in spite of the idealizations. On the other, where they fail it is a clear sign that there is more going on than is included in the models. If one drops a round smooth marble a thousand times and carefully records its position as a function of time, analysis of the motion and comparison with the model of uniform gravity might permit you to infer linear (and/or nonlinear) drag forces! A failed model teaches us things!
Or sometimes, it works the other way. A simple model is “suddenly” seen to be a special case of a more general model that has greater explanatory power. Newton’s discovery of gravitation worked like that — he compared the acceleration of his apple at the earth’s surface at radius R_e to the acceleration of the moon in a circular orbit of radius R_m and discovered that they were in the ratio R_e^2/R_m^2.
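To put rough numbers on that ratio, here is a back-of-the-envelope check using standard textbook values for the moon's orbital radius and period (values assumed for illustration, not taken from the comment itself):

```latex
% Moon's centripetal acceleration from its orbital radius and period
% (R_m ~ 3.84e8 m, T ~ 27.3 d ~ 2.36e6 s):
a_{\mathrm{moon}} = \frac{4\pi^{2} R_m}{T^{2}}
  \approx \frac{4\pi^{2}\,(3.84\times 10^{8}\,\mathrm{m})}{(2.36\times 10^{6}\,\mathrm{s})^{2}}
  \approx 2.7\times 10^{-3}\ \mathrm{m\,s^{-2}}

% Compare with the apple's acceleration g at the Earth's surface (R_e ~ 6.37e6 m):
\frac{g}{a_{\mathrm{moon}}} \approx \frac{9.8}{2.7\times 10^{-3}} \approx 3600
  \approx \left(\frac{R_m}{R_e}\right)^{2}
  = \left(\frac{3.84\times 10^{8}}{6.37\times 10^{6}}\right)^{2} \approx 3640
```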
It is sad, though, when a physicist or mathematician believes in their toy model so much that they forget that it is a toy, or worse, forget that in the end the model must be compared to reality or else you will not ever learn where your model fails, where some key piece of physics you ignored in your idealization turns out not to be ignorable. It is even worse when a piece of supposed science is prefaced with social platitudes explaining how the work will help save the planet from evil instead of simply presenting the data, the analysis, and the results without making the claim that the models are correct and it is the data that must be at fault. That’s simply absurd.
So sad. Anthony summarizes the paper so succinctly above — climate models cannot reproduce the proxy-derived geothermal history of the planet. Forget 6000 years — they cannot reproduce it on any significant timescale. It would have sufficed for them to have simply presented the data, the computation, noted that the latter doesn’t fit the former and stopped. It would have been reasonable to assert that the failure implies a probable failure of the toy model (given their many other failings on shorter timescales than this). But to assert that the data must be incorrect because the toy model is correct is simply laughable.
BTW, regular readers who have been following the other threads will be pleased by my report on the Durham weather yesterday. The NWS was predicting a high of 91F and a low of 61F. When I woke up, the low Sunday morning was around 57F. The 24-hour high recorded by my thermometer outside was 82F, and that was for around a five-minute window when the sun was actually on the thermometer housing (I don't have a big fancy fan-blowing weather station, only a simple hanging wireless thermometer located arguably too close to the house on the northeast side). The air temperature on that side of the house — with a steady breeze blowing all day — probably never exceeded 81F.
Again, people almost never actually check the NWS forecast against reality — their perception of the day's temperatures is whatever was forecast, since they were in an office or indoors most of the day and never actually experienced the outdoor temperature during the warmest part of the day.
10F is an enormous error. They're calling for 91 again today and tomorrow. Today I might believe it — it is 71 and quite sunny, and if the humidity climbs just right it could warm up (humid air but no clouds). OTOH, if it clouds up as it is supposed to, 91 is perhaps dubious. Still, I would be very interested in seeing whether NWS forecasts for high and low temperatures have a systematic error on the warm side, on average. What a sublime way to "subtly" create the perception of warming in a society that is largely out of touch with the outdoors, living in air-conditioned environments! The prediction is the reality!
Sadly, I have no real data on this — it is just anecdotal. It seems to me that the NWS forecast here errs far more often on the warm side than on the cool side. In fact, it hardly ever errs on the cool side, predicts a high or low temperature that is too low. Could a “warming bias” have crept somehow into the NWS computers? Do they not have any sort of feedback loop that simply increments or decrements a correction factor empirically to maintain a mean error of zero? Or is their output the result of running climate models that at this point are hopelessly biased on the warming side (and nobody is checking to notice this)?
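A minimal sketch of the kind of empirical zero-mean-error feedback described here; the variable names, learning rate and numbers are invented for illustration and do not describe any actual NWS procedure:

```python
# Hypothetical illustration only: nudge a correction term after each observation
# so that the mean forecast error drifts toward zero over time.
correction = 0.0      # degrees F added to each raw forecast
learning_rate = 0.1   # how strongly to respond to each day's error

def corrected_forecast(raw_forecast):
    """Apply the current empirical correction to a raw forecast."""
    return raw_forecast + correction

def update(raw_forecast, observed):
    """After the observation comes in, adjust the correction against the error."""
    global correction
    error = corrected_forecast(raw_forecast) - observed  # positive = forecast too warm
    correction -= learning_rate * error
    return error

# A string of forecasts that run warm gets pulled back toward reality.
for raw, obs in [(91, 82), (91, 83), (90, 81), (89, 82)]:
    err = update(raw, obs)
    print(f"raw {raw}F  observed {obs}F  error {err:+.1f}F  correction now {correction:+.1f}F")
```

If such a loop, or anything like it, were in place, a persistent ten-degree warm bias could not survive for long.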
rgb

rgbatduke
April 16, 2012 7:14 am

BTW, is everybody else seeing all posts in italics, or is it just my browser? 1/3 of the way through my first post above (at the point it went in spontaneously without my clicking post comment) italics turned on and now everybody’s posts appear to be in italics, but only from that point on. Sigh.
rgb
[FIXED. Thanks. -REP]

DirkH
April 16, 2012 1:11 pm

LazyTeenager says:
April 16, 2012 at 6:40 am
“Went looking
Rather than “models” we have the one model: ECHO-G. This is described here: http://www-pcmdi.llnl.gov/ipcc/model_documentation/ECHO-G.pdf
It is a 2001 vintage model and used as one input for AR4. In short quite old.
Don’t know how it compares to others leaving open the question of whether this model typically underestimates temperatures.”
So we agree that AR4 is outdated, and its projections were incorrect? Great! We’re making progress.
Maybe in 2025 we can come back together and assess whether AR5 was any better.

April 16, 2012 1:24 pm

Lazy teenager said: “It is a 2001 vintage model and used as one input for AR4. In short quite old.”
So the old models were total bollocks. But they are good now. eh?

April 16, 2012 11:12 pm

LazyTeenager says:
April 16, 2012 at 6:34 am
Older models have more semi-empirical relationships built in, while newer models use more ab initio calculations. This means that the newer models behave better over a wider range of physical conditions.

Children are supposed to behave — computer models are supposed to produce accurate results.

Bill Wood
April 17, 2012 8:01 am

If two models disagree significantly, they must both be correct. Perhaps this is the proper way to model a chaotic system.
/sarc off
What happened to Popper’s view that proper science required falsifiability? If all the components of a model are accepted as proven science and the model does not track with historic data or provide reasonably accurate predictions as future data becomes available, either the model lacks some necessary components or the components have been improperly assembled in constructing the model.

Guillaume Leduc
April 23, 2012 10:55 am

“There’s a lot that indicates that some important factors have been completely under-estimated (e.g. sun) and other climate factors have been grossly over-estimated (e.g. CO2). Indeed the word “solar” is not mentioned once in the entire paper.”
In this study we have done a sensitivity analysis of the Holocene SST trends to changes in insolation associated with orbital parameters. In the models we used, only the orbital parameters were modified, as these have been the first-order climate forcing over the time interval studied. BY DESIGN, solar activity and CO2 cannot have been under- or over-estimated, as they were prescribed constant in the model.
It is very clear that "Dr. Sebastian Lüning, Prof. Fritz Vahrenholt and Pierre Gosselin" did not even grasp the fundamental basis of what was done in the article. This blog and its audience are probably the very last outpost where "Dr. Sebastian Lüning, Prof. Fritz Vahrenholt and Pierre Gosselin" can write such crap without feeling any shame. Thanks for them, idiots!