Peer-reviewed pocket-calculator climate model exposes serious errors in complex computer models and reveals that Man’s influence on the climate is negligible

What went wrong?

A major peer-reviewed climate physics paper in the first issue (January 2015: vol. 60 no. 1) of the prestigious Science Bulletin (formerly Chinese Science Bulletin), the journal of the Chinese Academy of Sciences and, as the Orient’s equivalent of Science or Nature, one of the world’s top six learned journals of science, exposes elementary but serious errors in the general-circulation models relied on by the UN’s climate panel, the IPCC. The errors were the reason for concern about Man’s effect on climate. Without them, there is no climate crisis.

Thanks to the generosity of the Heartland Institute, the paper is open-access. It may be downloaded free from http://www.scibull.com:8080/EN/abstract/abstract509579.shtml. Click on “PDF” just above the abstract.

The IPCC has long predicted that doubling the CO2 in the air might eventually warm the Earth by 3.3 C°. However, the new, simple model presented in the Science Bulletin predicts no more than 1 C° warming instead – and possibly much less. The model, developed over eight years, is so easy to use that a high-school math teacher or undergrad student can get credible results in minutes running it on a pocket scientific calculator.
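The arithmetic behind a pocket-calculator estimate of this kind can be sketched as follows. This is the standard zero-dimensional greenhouse calculation with textbook coefficients (the Myhre et al. 1998 logarithmic forcing formula and a Planck response of about 0.31 K per W/m²), assumed here for illustration rather than taken verbatim from the paper:

```python
import math

# Sketch of the standard zero-dimensional greenhouse arithmetic.
# Coefficients are textbook values (Myhre et al. 1998 for the forcing;
# ~0.31 K per W/m^2 for the Planck response), assumed for illustration.

def co2_forcing(c_new, c_old):
    """Radiative forcing in W/m^2 from a change in CO2 concentration."""
    return 5.35 * math.log(c_new / c_old)

def direct_warming(forcing, planck=0.3125):
    """Equilibrium warming in K before any feedbacks are applied."""
    return forcing * planck

f2x = co2_forcing(2.0, 1.0)           # forcing from a doubling: ~3.71 W/m^2
print(round(f2x, 2))                  # 3.71
print(round(direct_warming(f2x), 2))  # direct warming: ~1.16 K
```

The roughly 1.2 K of direct, feedback-free warming per doubling is the uncontroversial starting point; the dispute in the paper is over how feedbacks scale that number.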

The paper, Why models run hot: results from an irreducibly simple climate model, by Christopher Monckton of Brenchley, Willie Soon, David Legates and Matt Briggs, survived three rounds of tough peer review in which two of the reviewers had at first opposed the paper on the ground that it questioned the IPCC’s predictions.

When the paper’s four authors first tested the finished model’s global-warming predictions against those of the complex computer models and against observed real-world temperature change, their simple model was closer to the measured rate of global warming than all the projections of the complex “general-circulation” models:


Next, the four researchers applied the model to studying why the official models concur in over-predicting global warming. In 1990, the UN’s climate panel predicted with “substantial confidence” that the world would warm at twice the rate that has been observed since.

 

The very greatly exaggerated predictions (orange region) of atmospheric global warming in the IPCC's 1990 First Assessment Report, compared with the mean anomalies (dark blue) and trend (bright blue straight line) of three terrestrial and two satellite monthly global mean temperature datasets since 1990. The measured, real-world rate of global warming over the past 25 years, equivalent to less than 1.4 C° per century, is about half the IPCC's central prediction in 1990.
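A trend figure of this kind is just an ordinary least-squares slope fitted to monthly anomalies and rescaled to degrees per century. A minimal sketch on synthetic data with a known trend (the real five-dataset mean is not reproduced here):

```python
import numpy as np

# How a trend like "1.4 C per century" is extracted: an ordinary
# least-squares fit to monthly anomalies. The anomalies below are
# synthetic, built with a known 1.4 C/century trend plus noise.
rng = np.random.default_rng(0)
months = np.arange(25 * 12)                # 25 years of monthly data
trend_per_month = 1.4 / (100.0 * 12.0)     # 1.4 C/century in C/month
anoms = trend_per_month * months + rng.normal(0.0, 0.1, months.size)

slope = np.polyfit(months, anoms, 1)[0]    # OLS slope, C per month
print(round(slope * 12 * 100, 1))          # recovers roughly 1.4 C/century
```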

The new, simple climate model helps to expose the errors in the complex models the IPCC and governments rely upon. Those errors caused the over-predictions on which concern about Man’s influence on the climate was needlessly built.

Among the errors of the complex climate models that the simple model exposes are the following:

  • The assumption that “temperature feedbacks” would double or triple direct manmade greenhouse warming is the largest error made by the complex climate models. Feedbacks may well reduce warming, not amplify it.
  • The Bode system-gain equation models mutual amplification of feedbacks in electronic circuits, but, when complex models erroneously apply it to the climate on the IPCC’s false assumption of strongly net-amplifying feedbacks, it greatly over-predicts global warming. They are using the wrong equation.
  • Modellers have failed to cut their central estimate of global warming in line with a new, lower feedback estimate from the IPCC. They still predict 3.3 C° of warming per CO2 doubling, when on this ground alone they should only be predicting 2.2 C° – about half from direct warming and half from amplifying feedbacks.
  • Though the complex models say there is 0.6 C° manmade warming “in the pipeline” even if we stop emitting greenhouse gases, the simple model – confirmed by almost two decades without any significant global warming – shows there is no committed but unrealized manmade warming still to come.
  • There is no scientific justification for the IPCC’s extreme RCP 8.5 global warming scenario that predicts up to 12 Cº global warming as a result of our industrial emissions of greenhouse gases.
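The feedback arithmetic in the first three bullets can be checked directly. A minimal sketch of the Bode system-gain relation, λ = λ₀ / (1 − λ₀ f), where λ₀ is the Planck sensitivity (about 0.3125 K per W/m², an assumed textbook value) and f is the feedback sum in W/m² per K; this is an illustration of the relation being criticized, not the paper's exact parameterization:

```python
import math

# Bode system-gain relation as applied to climate sensitivity:
# lambda = lambda_0 / (1 - lambda_0 * f). Values are illustrative.
F2X = 5.35 * math.log(2.0)   # ~3.71 W/m^2 forcing from doubled CO2
LAMBDA0 = 0.3125             # K per W/m^2, zero-feedback Planck sensitivity

def sensitivity(feedback_sum):
    """Equilibrium warming per CO2 doubling, in K."""
    return F2X * LAMBDA0 / (1.0 - LAMBDA0 * feedback_sum)

print(round(sensitivity(2.0), 1))   # ~3.1 K: near the models' ~3.2-3.3 K
print(round(sensitivity(1.5), 1))   # ~2.2 K: with the lower feedback sum
print(round(sensitivity(-0.5), 1))  # ~1.0 K: if net feedbacks slightly damp
```

Note how the denominator makes the output hypersensitive to f as λ₀f approaches 1: that steep nonlinearity near strongly positive feedback is the heart of the paper's objection to applying the equation to climate.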

Once errors like these are corrected, the most likely global warming in response to a doubling of CO2 concentration is not 3.3 Cº but 1 Cº or less. Even if all available fossil fuels were burned, less than 2.2 C° warming would result.

Lord Monckton, the paper’s lead author, created the new model on the basis of earlier research by him published in journals such as Physics and Society, UK Quarterly Economic Bulletin, Annual Proceedings of the World Federation of Scientists’ Seminars on Planetary Emergencies, and Energy & Environment. He said: “Our irreducibly simple climate model does not replace more complex models, but it does expose major errors and exaggerations in those models, such as the over-emphasis on positive or amplifying temperature feedbacks. For instance, take away the erroneous assumption that strongly net-positive feedback triples the rate of manmade global warming and the imagined climate crisis vanishes.”

Dr Willie Soon, an eminent solar physicist at the Harvard-Smithsonian Center for Astrophysics, said: “Our work suggests that Man’s influence on climate may have been much overstated. The role of the Sun has been undervalued. Our model helps to present a more balanced view.”

Dr David Legates, Professor of Geography at the University of Delaware and formerly the State Climatologist, said: “This simple model is an invaluable teaching aid. Our paper is, in effect, the manual for the model, discussing appropriate values for the input parameters and demonstrating by examples how the model works.”

Dr Matt Briggs, “Statistician to the Stars”, said: “A high-school student with a pocket scientific calculator can now use this remarkable model and obtain credible estimates of global warming simply and quickly, as well as acquiring a better understanding of how climate sensitivity is determined. As a statistician, I know the value of keeping things simple and the dangers in thinking that more complex models are necessarily better. Once people can understand how climate sensitivity is determined, they will realize how little evidence for alarm there is.”

833 Comments
Phil Clarke
January 22, 2015 2:09 am

I am sure we all wish the peer-reviewed literature to be accurate. This paper is flawed. Here is some free peer-review:
The only section in the Policymaker’s Summary of the FAR that meets Lord Monckton’s description (‘We predict’) is the one I quoted above. Here it is again
Based on current model results, we predict:
• under the IPCC Business-as-Usual (Scenario A) emissions of greenhouse gases, a rate of increase of global mean temperature during the next century of about 0.3°C per decade (with an uncertainty range of 0.2°C to 0.5°C per decade); this is greater than that seen over the past 10,000 years. This will result in a likely increase in global mean temperature of about 1°C above the present value by 2025 and 3°C before the end of the next century. The rise will not be steady because of the influence of other factors.
• under the other IPCC emission scenarios which assume progressively increasing levels of controls, rates of increase in global mean temperature of about 0.2°C per decade (Scenario B), just above 0.1°C per decade (Scenario C) and about 0.1°C per decade (Scenario D).
(Policymakers Summary, page xii)
This, despite his Lordship’s insistence that ‘we did not use any of the scenarios’. The paper has
“In 1990, FAR predicted with ‘substantial confidence’ that, in the 35 years 1991–2025, global temperature would rise by 1.0 [0.7, 1.5] K, equivalent to 2.8 [1.9, 4.2] K century⁻¹.”
This is wrong: the words ‘substantial confidence’ have been borrowed from another paragraph, which refers only to ‘broad scale features of climate change’, and the 1°C-by-2025 prediction was clearly conditional, based on a particular scenario (Scenario A); under Scenarios B–D the prediction was for 0.1–0.2°C per decade, which is what actually occurred. The report later gives the forcing trajectories for each scenario: Scenario A had CO2 forcing at 1.85 W/m² in 2000 and 2.88 W/m² in 2025 (Scenarios B–D all had around 1.75 and 2.3 respectively). This information is in Table 2.7, page 57.
Monckton et al state that the 2011 value for CO2 forcing was 1.82 W/m²; in other words, by 2011 we were still below the value that Scenario A, the one used by Monckton et al, had reached 11 years earlier! The forcings for Scenarios B–C were closer to the mark, as were their temperature projections. No doubt one can find empirical evidence of model-observation discrepancies, but this is not it. The IPCC gave four predictions for four scenarios; this paper presents one of those scenarios, one that did not come to pass, as the IPCC prediction. It was not. This should be removed.
Also, Fig 1 in the paper has the following flaws,
– As proven above, it plots just 1 of the 4 FAR scenarios. One which did not materialise.
– It is wrongly captioned. The caption says the blue line is ‘RSS, UAH, NCDC, HadCRUT4 and GISS monthly global anomalies’. It is not, it is RSS and UAH only, as the graph legend correctly states (the most basic peer review should surely have spotted this?)
– There are no error bars on the observed temperatures
– The FAR trend line should start with the 95% confidence interval bounds associated with the uncertainties of the underlying time series (in other words, not a point start but an interval start). It does not.
You’re welcome.

Reply to  Phil Clarke
January 22, 2015 5:45 am

At last, some genuine and valid criticism. I had not recalled that IPCC had made its 1 K by 2025 prediction under Scenario A. However, Scenario A was its business-as-usual scenario, and it had incorrectly predicted a far greater rate of forcing, and hence of temperature change, than actually occurred.
Figure 1 is indeed incorrectly captioned, though the graph itself is explicit. And the difference between RSS+UAH and those two plus all three terrestrial datasets is 0.01 K over the period.
There are no error bars on the temperatures because (apart from HadCRUT4) none are supplied.
The FAR trend line commences at the same point, relative to the data, at which the projection was made. In the absence of uncertainty values for four of the five datasets, that is a respectable starting point. And wherever one starts it, it is the slope of the prediction that matters, and that slope (on all datasets, or just on RSS and UAH) is approximately twice the observed outturn.
In short, though there was business as usual after IPCC 1990, in that CO2 concentration continued to rise at ever-faster rates, the prediction under the business-as-usual scenario was wildly wrong. And that prediction, which was of course the principal feature of the climate system that the models were expected to capture correctly with “substantial confidence”, ought not to have been made with “substantial confidence”, for it was substantially incorrect.

Phil Clarke
Reply to  Monckton of Brenchley
January 22, 2015 9:31 am

“At last, some genuine and valid criticism. I had not recalled that IPCC had made its 1 k by 2025 prediction under Scenario A. However, Scenario A was its business-as-usual scenario, and it had incorrectly predicted a far greater rate of forcing, and hence of temperature change, than actually occurred.”
Thanks, but that was my point. The forcing scenario on which the projection was made did not transpire, the other scenarios B through D turned out to be closer and yet the paper uses the temperature plot only for Scenario A, even though you must have known that the forcings were unrealistic, as ’empirical evidence of the models running hot’. This is plain wrong, it is no such thing; scenario A was never tested with real world data whereas the temperature trends that the models predicted under Scenarios B-D, were 0.1-0.2C / decade which is in line with the 1.34C/decade plotted. It is evidence that the models work, but that one of the four IPCC forcings predictions was unduly pessimistic, but forcings are not the point of the paper, are they?
As an aside, you are making too much of the term ‘Business as usual’, it was defined as “the energy supply is coal intensive and on the demand side only modest efficiency increases are achieved. [..]”)
And once again, the IPCC were clear that they had ‘substantial confidence’ only in the ability of the models to capture ‘broad scale’ features of climate change …
“climate models are only as good as our understanding of the processes which they describe, and this is far from perfect. The ranges in the climate predictions given above reflect the uncertainties due to model imperfections; the largest of these is cloud feedback (those factors affecting the cloud amount and distribution and the interaction of clouds with solar and terrestrial radiation), which leads to a factor of two uncertainty in the size of the warming. Others arise from the transfer of energy between the atmosphere and ocean, the atmosphere and land surfaces, and between the upper and deep layers of the ocean. The treatment of sea-ice and convection in the models is also crude. Nevertheless, for reasons given in the box overleaf, we have substantial confidence that models can predict at least the broad scale features of climate change”

Matthew R Marler
Reply to  Monckton of Brenchley
January 22, 2015 11:57 am

Phil Clarke: scenario A was never tested with real world data whereas the temperature trends that the models predicted under Scenarios B-D, were 0.1-0.2C / decade which is in line with the 1.34C/decade plotted.
Is there a typo? How is 0.1-0.2C / decade in line with 1.34C/decade?
we have substantial confidence that models can predict at least the broad scale features of climate change
Are the “broad scale features” defined so vaguely that no misfit between models and temperature can discredit the models? With the CO2 trajectory that has actually evolved, the modeled values of temperature are too hot. Are the models and data broadly consistent in the sense that the model inaccuracies are not yet 10C?

Reply to  Monckton of Brenchley
January 23, 2015 1:05 pm

In response to Mr Clarke, the business-as-usual forcing scenario was the one predicted by the IPCC if business went on as usual. Which it did. Power generation remains coal-intensive – indeed, with China’s huge coal-fired expansion, more coal-intensive than in 1990. And on the demand side only modest efficiency advances have been achieved. It is not the slightest use defending the IPCC on the ground that its 1990 forcing predictions were unrealistic. It should not have made unrealistic predictions and then claimed it had captured the essential features of the climate – not the least of which, in the context of Man’s supposed alteration of it, were the predicted rates of forcing and of temperature change, both of which were wild exaggerations.
How much warming we’re going to get is a broad-scale feature of the climate. The plain intention of the IPCC was to convey the impression that they knew what they were talking about and were confident of their predictions. Their substantial confidence, however, has turned out to be substantially misplaced, and no amount of semantical prestidigitation will alter that fact. They made business-as-usual predictions; business carried on pretty much as usual; CO2 concentration continued to rise; but neither the forcing nor the warming they predicted came to pass.
In 2007 IPCC made further predictions, a little more modest than those of 1990. But those, too, have not come to pass. Compared with the 2005 baseline, we have now had ten years – but there has been practically no global warming over those ten years, contrary to the IPCC’s then prediction. Over and over again, the predictions have been prodigiously exaggerated. Just look at RCP 8.5 in IPCC’s 2013 report. Expert reviewers (this one, at any rate) told them RCP 8.5 was absurd, based on exaggerated population growth estimates that the UN has long abandoned. But they pressed ahead anyway, and it is the RCP 8.5 graph – nonsense from top to bottom – that the Leftosphere reproduces time and again.
These exaggerations are now being noticed. Apologists for them – paid or unpaid – are inexorably losing ground to those who can see perfectly well that the various predictions of doom that were made have not come to pass and have not the slightest probability of coming to pass. In 1990 the IPCC wrote that the ranges (what real mathematicians would call intervals) were designed to allow for the uncertainties in the climate. But the observed rate of warming since 1990 does not fall anywhere on the IPCC’s business-as-usual interval. It is significantly below even the least value there. Whichever way one looks at it, the IPCC got it wrong.
Which is why the IPCC itself has been reducing some of its predictions in recent reports. Indeed, between the pre-final and final drafts of the 2013 report, at my insistence among others, it cut its near-term predictions by well over a third. But, as our paper points out, even though the IPCC reduced its estimate of the feedback sum from 2 to 1.5 Watts per square meter per Kelvin, it failed to reduce its central estimate of equilibrium climate sensitivity correspondingly from 3.2 to 2.2 K as it should have done. Instead, in a multi-thousand-page report whose principal task was to tell us how much global warming would occur at a CO2 doubling, it said it did not propose to give an estimate any more (though, buried deep in the report, it did mention that the CMIP5 models’ central estimate was 3.2 K, much the same as the CMIP3 projection of 3.26 K).
Frankly, it is simply disingenuous to seek to maintain that the IPCC’s track record of predictions has been a success. It has been disastrous. Why? Because there is a huge financial incentive to predict disasters, and none to tell the truth, which is that our contribution to global temperature over the next 500 years will be unexciting even on the business-as-usual scenario.

Phil Clarke
January 23, 2015 12:37 am

Is there a typo?
Yes. Apologies. Should be 1.34/century.
Are the “broad scale features” defined so vaguely that no misfit between models and temperature can discredit the models? With the CO2 trajectory that has actually evolved, the modeled values of temperature are too hot. Are the models and data broadly consistent in the sense that the model inaccuracies are not yet 10C?
No. The period under discussion in the paper was 1990-2025. To date, if one uses the forcings that actually transpired, the IPCC FAR projections were bang on the money. Monckton’s Fig.1 should have included the more realistic Scenarios B-D, but this would have revealed that his chosen exemplar actually demonstrates the exact opposite of his representation, that is, empirical evidence of the IPCC models being accurate.

Matthew R Marler
Reply to  Phil Clarke
January 23, 2015 10:28 am

Phil Clarke, thank you for your reply.

Reply to  Phil Clarke
January 23, 2015 12:45 pm

Well, Scenario A is the IPCC’s business-as-usual scenario. That was the one, just like Hansen’s business-as-usual scenario A a couple of years previously, that got all the headlines. That was the one they ought to be judged on. For it has been business as usual. CO2 emissions are not declining. CO2 concentrations, therefore, are rising at an ever-faster rate, after a brief respite during the recession years. And yet the global warming they predicted has not transpired. Their “business-as-usual” predictions were wrong.

Phil Clarke
Reply to  Monckton of Brenchley
January 24, 2015 4:37 am

No. The paper is about model performance. The scenario on which the IPCC should be judged is thus the one which most resembles how forcings actually evolved, which would be any of B, C or D, but not A, as the paper itself confirms that the 2011 forcings were lower than the 2000 value in that Scenario. But the temperature trends predicted under Scenarios B-D turned out to be within observational bounds…..
As the paper stands, Figs 1 and 2 are grossly misleading and should be removed or amended.
Oh, and you’re wrong about Hansen, also.
“Scenario A, since it is exponential, must eventually be on the high side of reality”
“Scenario B is perhaps the most plausible of the three cases”
Congressional Testimony.

Reply to  Monckton of Brenchley
January 24, 2015 5:26 pm

The paper is about model performance. That means predicting not only temperature change but the forcings that bring it about. The IPCC’s business-as-usual case was incorrect. It forecast twice the warming that has been observed since.

January 23, 2015 1:19 pm

Many thanks to all who have commented on this thread. If anyone has the time to emulate the indefatigable Mr Marler and to read through the thread from beginning to end, he will see that every time I was sneered at or was accused of scientific malpractice (such as “curve-fitting” to make our model represent past temperature change) I replied vigorously. Every time someone posted something infantile – another climate-Communist habit – I told him or her not to be childish.
It is time to demand that those who want to maintain the Party Line in the teeth of the scientific evidence should do so in a civilized, adult, and scientific fashion, in which event I shall respond – as I have here – with scientific discussion, straightforwardly expressed.
What is evident, when all that is said, is that there is a great deal of interest in our model, from both sides of the debate. There have been some 9000 downloads of either the abstract in Chinese or the full text in English from the Science Bulletin website – an almost unheard-of number for a scientific paper. There have been leading articles in the largest national daily paper in Germany (climate-Communist) and in the Daily Mail in the UK (climate-realist). The leftosphere has gone ballistic with sneering but scientifically limp responses.
I am looking forward to the inevitable rebuttal paper by various climate bigwigs, but am hoping that the Science Bulletin will not be bullied, as the craven publishers of the journal Physics and Society were seven years ago, into failing to print our refutation of any such rebuttal. We have only a few months left before a totalitarian, unelected climate “government” is put in place at Paris, with nearly all nations sleep-walking into tyranny. There will be further papers supporting our argument this year – one of them, intriguingly, by a former true-believer who found himself in debate with me at a European university a couple of years ago. He was – and I was proud to be able to say he was – an honourable scientist of the old school, who was genuinely interested in the scientific concerns I raised during the debate. He has kept in touch since, and is now on the verge of publishing a major paper that will come to much the same conclusions as ours – if, that is, the Western climate journal that is now reviewing it will dare to allow it to be published. He now says, as we do, that climate sensitivity will be 1 K or less. But I cannot say who he is, for if he were known to have discussed the science in his paper with climate skeptics he would not be able to publish it in the West.
That, above all, is why I am so very angry at the trivialization and denial of science by so many climate-Communists. The scientist, as al-Haytham said, should be a seeker after truth. Whether the thermo-Fascists like it or not, the road to the truth is long and hard, and that is the road we must follow.
Many thanks to all genuine and scientifically-minded contributors (whatever their viewpoint) for their contributions.

Reply to  Monckton of Brenchley
January 23, 2015 1:21 pm

Many thanks to all genuine and scientifically-minded contributors (whatever their viewpoint) for their contributions.
You are welcome.

Phil Clarke
January 25, 2015 4:59 am

No. Forcings are an input to the models. Scenario A indeed turned out to be an overestimate of the way emissions would develop, in no small part due to the collapse of the Soviet Union. Now few people foresaw that event, least of all the IPCC, and it is precisely because forcings are unpredictable that the IPCC run the models with a range of scenario inputs, high to low.
To take just a single scenario once we know it did not transpire, to ignore the more realistic scenarios whose predicted trends turned out to be accurate, and to label this as ‘empirical evidence’ of the models running hot falls well short of the level of accuracy, not to say honesty, expected in the academic literature.
But I tire of repeating myself.

Ray
January 29, 2015 9:39 pm

On the mistake of using classical Voigt line shapes in climate models:
http://marshall.org/wp-content/uploads/2013/10/Happer-Speech-8-10-13.pdf
http://www.sealevel.info/Happer_UNC_2014-09-08/
https://www.youtube.com/watch?v=gMdYmAo08O4
If I understand correctly, a key element of failure in the climate models is that the classical expressions for Lorentzian and Voigt line shapes are wrong, and it is not just CO2 but all the EM-affected gases. The far wings of the Voigt lines on the absorption bands do not exist in the real world. Simple straight-line formulas relating a change in CO2 to a change in temperature don't work, and it means positive-feedback theories are in even more trouble. A discovery that appears to date to 2008:
J.-M. Hartmann, C. Boulet, D. Robert, “Collisional Effects on Molecular Spectra”, Elsevier, 2008.
http://www.researchandmarkets.com/reports/1758603/collisional_effects_on_molecular_spectra.pdf
and
http://lmsd.chem.elte.hu/hrms/Tran.pdf
“… It is now well known that the widely used Voigt profile does not well describe the measured absorption shapes of molecular gases. For an isolated optical transition, this is due to the neglect of the collision-induced velocity changes and of the speed dependences of the collisional parameters. Examples of the influence of these non-Voigt effects on the extraction of spectral line parameters from laboratory measured spectra as well as on atmospheric spectra analysis will be presented. …”
I suppose this information will reach them eventually, but it looks as if “it is now well known that the widely used Voigt profile does not well describe the measured absorption shapes of molecular gases” does not yet apply to the climate modelers.
The ramifications: the CO2 effect is logarithmic, and we are well into the flat of the curve. The same thing affects the water-vapour lines too:
Pressure effects on water vapour lines: beyond the Voigt profile
Phil. Trans. R. Soc. A (2012) 370, 2495–2508
http://rsta.royalsocietypublishing.org/content/roypta/370/1968/2495.full.pdf
So what if the AMO has rolled over and we head back toward the conditions of the 1970s ice-age scare?
http://climateaudit.files.wordpress.com/2014/05/mann14_1_amo_noaa_kaplansst2.png
http://stevengoddard.wordpress.com/1970s-ice-age-scare/
http://www.populartechnology.net/2013_02_01_archive.html
Detailed analysis of classical versus wing-suppressed Voigt profiles is a bit above my pay grade, but I think I understand the ramifications of the recent work: the 2008 paper appears to be one of the data points bringing Professor Happer to his conclusion, and other derivatives have been popping out of peer review since.
So perhaps we have some clues to how the models went wrong, beyond the mistake of curve-fitting the rising slope of the AMO et al. Too bad we won't be able to see the modelers frantically attempting to curve-fit the down slope of the AMO in combination with the logarithmic realities of mixed EM-active gases. It looks to me as if it will turn them into a clutch of screaming, frustrated, poo-tossing monkeys. Perhaps they will post the best parts on YouTube; it would get a lot of hits.
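For readers wanting to see the classical profile under discussion, here is a minimal sketch via SciPy's Faddeeva function. This evaluates only the textbook Voigt profile; it does not implement the wing-suppression corrections described in the papers above:

```python
import numpy as np
from scipy.special import wofz

def voigt(x, sigma, gamma):
    """Classical Voigt profile: a Gaussian (Doppler, width sigma) convolved
    with a Lorentzian (pressure, width gamma), via the Faddeeva function."""
    z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
    return wofz(z).real / (sigma * np.sqrt(2.0 * np.pi))

# In the far wings the classical profile decays like the Lorentzian,
# ~ gamma / (pi * x^2); it is this slow decay that the measured line
# shapes are reported to suppress.
far = voigt(np.array([100.0]), 1.0, 1.0)[0]
print(far)   # close to 1/(pi * 100**2), i.e. roughly 3.2e-5
```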
