From Press Release: Four skeptical researchers’ new Chinese Academy paper devastatingly refutes climate campaigners’ attempt to criticize their simple model…
In January 2015, a paper by four leading climate researchers published in the prestigious Science Bulletin of the Chinese Academy of Sciences was downloaded more than 30,000 times from the website at scibull.com. By a factor of 10, it is the most-read paper in the journal’s 60-year archive. The paper presented a simple climate model that anyone with a pocket calculator can use to make more reliable estimates of future manmade global warming than the highly complex, billion-dollar general-circulation models previously used by governments and weather bureaux worldwide.
The irreducibly simple climate model not only showed there would be less than 1 C° global warming this century, rather than the 2-6 C° the “official” models are predicting: it also revealed why they are wrong.
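The press release does not reproduce the model’s equation (that is set out in the open-access paper linked below), but a minimal sketch of the kind of pocket-calculator estimate it describes, with purely illustrative parameter values, might look like this:

```python
import math

def simple_warming(co2_ratio, lambda_0=0.3125, feedback_sum=0.0, transience=1.0):
    """Closed-form warming estimate from a CO2 concentration ratio.

    co2_ratio    -- C/C0 (2.0 = a doubling); illustrative input
    lambda_0     -- zero-feedback (Planck) sensitivity, K per W/m^2 (assumed value)
    feedback_sum -- net temperature feedback, W/m^2 per K (assumed; 0 = no net feedback)
    transience   -- fraction of equilibrium warming realised by the target date
    """
    forcing = 5.35 * math.log(co2_ratio)                  # standard CO2 forcing expression, W/m^2
    system_gain = 1.0 / (1.0 - lambda_0 * feedback_sum)   # closed-loop amplification
    return forcing * lambda_0 * system_gain * transience

print(round(simple_warming(2.0), 2))                      # ~1.16 K for a doubling, zero net feedback
print(round(simple_warming(2.0, feedback_sum=2.0), 2))    # ~3.09 K with a strongly positive feedback sum
```

With a zero net feedback sum the doubling response comes out just above 1 C°, and with a strongly positive sum it lands in the 2-6 C° range the press release attributes to the complex models; the sketch only shows the kind of closed-form calculation being described, not the authors’ published equation.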
By April, climate campaigners had published a paper that aimed to rebut the simple model, saying the skeptical researchers had not checked it against measured changes in temperature over the past century or more.
Now Christopher Monckton of Brenchley, Dr Willie Soon of the Harvard-Smithsonian Center for Astrophysics, Dr David Legates, geography professor at the University of Delaware, and Dr Matt Briggs, Statistician to the Stars, are back with a fresh Science Bulletin paper, Keeping it simple: the value of an irreducibly simple climate model, which explains that the simple model had not been tested against past temperature change because it was designed from scratch using basic physical principles.
Unlike the complex climate models, each of which uses as much power as a small town when it is running, the new, “green” model – which its inventor runs on a solar-powered scientific calculator – had not been repeatedly regressed (i.e., tweaked after the event) till it fitted past data.
Lord Monckton, the inventor of the new model and lead author of the paper, said: “Every time a model is tweaked to force it to fit past data, one departs from true physics. The complex models are fudged till they fit the past – but then they cannot predict the future. They exaggerate.
“We took the more scientific approach of using physics, not curve-fitting. But when the climate campaigners demanded that we should verify our model’s skill by ‘hindcasts’, we ran four tests of our model – one against predictions by the UN’s climate panel in 1990 and three against recent data. All four times, our model accurately hindcast real-world warming.
“On the first of our four test runs of our model (left), the 1990 forecast by the Intergovernmental Panel was a very long way further from reality than our simple model’s spot-on central estimate.”
Figure 1. Four tests of the simple model’s hindcasts (solid-edged boxes: left) against observed warming. Departures from the green bar (the correct value) are in C°. Test 1: from 1990-2015 against IPCC’s 1990-2015 predictions (dashed boxes: top left) based on 1.0 [0.7, 1.5] C° straight-line warming to 2025. Tests 2-4: based on IPCC’s current estimates of all manmade forcings from 1750 to (2) 1950; (3) 1980; and (4) 2012. The simple model’s hindcasts (1-4, left) always match the real-world warming (the green bar) measured by the HadCRUT4 terrestrial dataset (Test 1) or the RSS satellite dataset (2-4), but IPCC’s predictions (top left) have proven wildly above the true position.
Dr Willie Soon was subjected to a well-funded and centrally-coordinated campaign of libels to the effect that he had not disclosed that a utility company had paid him to contribute to the skeptical researchers’ January paper. Inferentially, the aim was to divert attention from the paper’s findings that climate alarm was based on a series of elementary mistakes at the heart of the complex models. In fact, all four co-authors had written the January paper and the new paper on their own time and on their own dime.
Dr Soon said: “What matters to campaigners is the campaign, but what matters to scientists is the science. In 85 years’ time our little model’s prediction of just 0.9 C° global warming between now and 2100 will probably be a lot closer to observed reality than the campaigners’ prediction of 4 C° warming.”
Dr Matt Briggs said: “The climate campaigners’ attempted rebuttal of our original paper was littered with commonplace scientific errors. Here are just a few:
Ø “The campaigners cherry-picked one scenario instead of many, to try to show the large models were better than our simple one. Even then, the complex models were barely better than ours.
Ø “They implied we should tweak our model till it fitted past data. We used physics instead.
Ø “They said we should check our model against real-world warming. We have. It works.
Ø “They criticized our simple model but should have criticized the far less reliable complex models.
Ø “They complained that our simple model had left out ‘many physical processes’. Of course it did: it was simple. Its skill lies in rejecting the unnecessary, retaining only the essential processes.
Ø “They assumed that future warming rates can be reliably deduced from past warming rates. Yet there are grave measurement, coverage and bias uncertainties, particularly in pre-1979 data.
Ø “They assumed that natural and manmade climate influences can be distinguished. They cannot.
Ø “They said we should not have used a single pulse of manmade forcing. But most models do that.
Ø “They said our model had not been ‘validated’ when their own test showed it worked well.
Ø “They said they disagreed with our model when they merely disagreed with our parameters.
Ø “They said we should not project past temperature trends forward. We did no such thing.
Ø “They used root-mean-squared-error statistics, but RMSE statistics are a poor validation tool.
Ø “They incorrectly referred to the closed-loop feedback gain as the ‘system gain’, but in feedback-driven systems it is the open-loop gain that is the system gain (see the sketch after this list).
Ø “They inaccurately described our grounds for finding temperature feedbacks net-negative.
Ø “They assumed that 810,000 years was a period much the same as 55 million years. It is not.
Ø “They said we had misrepresented a paper we had cited, but their quotation from that paper omitted a vital phrase that confirmed our interpretation of the paper’s results.
Ø “They said net-negative feedbacks would not have allowed ice ages to end. Yet the paper they themselves cited described two non-feedback causes of sudden major global temperature change.
Ø “They said temperature buoys had found a ‘net heating’ of half a Watt per square meter in the oceans: but Watts per square meter do not measure ‘heating’: they measure heat flow.
Ø “They implied the ‘heating’ of the oceans was significant, but over the entire 11-year run of reliable ARGO sea-temperature data the warming rate is equivalent to only 1 C° every 430 years.
Ø “They said the complex models had correctly predicted warming since 1998, but since January 1997 there has been no global warming at all. Not one of the complex models had predicted that.
Ø “They praised the complex models, but did not state that the models’ central warming prediction in 1990 has proved to be almost three times the observed warming in the 25 years since then.
Ø “They failed to explain how a substantial reduction in temperature feedbacks in response to an unchanged forcing might lead, as they implied it did, to unchanged, high climate sensitivity.”
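For readers unfamiliar with the gain terminology disputed in the feedback point above, here is a minimal sketch of the standard feedback-amplifier relations; the numerical values are assumptions for illustration, not figures taken from either paper.

```python
# Standard feedback-amplifier relations (illustrative values only).
mu   = 0.3125    # forward (open-loop) gain: K of direct response per W/m^2 (assumed)
beta = 2.0       # feedback fraction: W/m^2 returned per K of warming (assumed)

loop_gain   = mu * beta              # dimensionless product round the loop: 0.625
closed_loop = mu / (1 - loop_gain)   # overall response with feedbacks: ~0.83 K per W/m^2

print(loop_gain, round(closed_loop, 3))
```

The disagreement in the bullet point is over which of these quantities deserves the name ‘system gain’; the sketch only shows how they are related.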
Professor David Legates said: “As we say in our new paper, the complex general-circulation models now face a crisis of credibility. It is perplexing that, as those models’ predictions prove ever more exaggerated, their creators express ever greater confidence in them. It is time for a rethink. Our model shows there is no manmade climate problem. So far, it is proving to be correct, which is more than can be said for the billion-dollar brains operated by the profiteers of doom.”
The new paper is open-access at http://link.springer.com/article/10.1007/s11434-015-0856-2
Keeping it simple: the value of an irreducibly simple climate model
Christopher Monckton of Brenchley, Willie W.-H. Soon, David R. Legates, William M. Briggs
Abstract
Richardson et al. (Sci Bull, 2015. doi:10.1007/s11434-015-0806-z) suggest that the irreducibly simple climate model described in Monckton of Brenchley et al. (Sci Bull 60:122–135, 2015. doi:10.1007/s11434-014-0699-2) was not validated against observations, relying instead on synthetic test data based on underestimated global warming, illogical parameter choice and near-instantaneous response at odds with ocean warming and other observations. However, the simple model, informed by its authors’ choice of parameters, usually hindcasts observed temperature change more closely than the general-circulation models, and finds high climate sensitivity implausible. With IPCC’s choice of parameters, the model is further validated in that it duly replicates IPCC’s sensitivity interval. Also, fast climate system response is consistent with near-zero or net-negative temperature feedback. Given the large uncertainties in the initial conditions and evolutionary processes determinative of climate sensitivity, subject to obvious caveats a simple sensitivity-focused model need not, and the present model does not, exhibit significantly less predictive skill than the general-circulation models.
People will rightly point out that this “simple model” isn’t perfect. It is actually over-simple, obviously. It can clearly be improved upon.
It doesn’t pretend to be perfect. Nobody ever claimed that it was.
Maybe it should be chucked out immediately and replaced by a superior approach.
BUT – isn’t that the point?
The fact that it performs BETTER at matching current trends than multi-million dollar computer models reveals the seriousness of the almost immediate failure of almost every single one of these models.
Some people will say that this simple model is amateur crap.
Then that is an even greater indictment of the computer models that it out-performs.
Then the conclusion is that amateur crap produces a better correspondence with reality than multimillion dollar computer models.
Isn’t that the point?
That’s also maybe why it has caused so much irritation to the “consensus” obsessed.
Ridiculing it only makes its superior performance seem even more remarkable.
Upthread you were decrying those of us that say that models must be compared with data. Now you are saying this one performs better, but how can you judge it to be better without comparing the model results to data?
Some confusion here. I have discussed two very different periods of data collection.
Upthread, I was critical of the comparison with historical data of questionable accuracy. I used the example of early-twentieth-century bucket measurements of SSTs.
Here I am saying that this model shows better correspondence to the satellite era trends.
Specifically the RSS/UAH/radiosonde trends that have nothing to do with engine intakes and UHI problems.
So, I am not contradicting myself at all. I do not decry comparison with data, since that is the essence of science. But the data must be known to be good data. We should not be manufacturing our hypothesis to correspond with poor data that is contaminated by a vast array of known and unresolved biases of unknown sign or significance.
Surely, the question that Monckton et al. set out to discuss is why models run hot in the current era of modelling – not why they fail to predict sea surface temperatures as measured by buckets in 1910.
My own view is that any fool can predict the past…
It’s the future that seems to pose the greater problem.
And so far, computer models seem to struggle to predict the present!!
The only real data that matters to a forecast model is the skill at forecasting out of sample data in the real world.
Given that the entire concept of “climate” is a generalization of weather over some span of time, and forecasting weather is irreducibly complex, the wonder is not “how well the bear dances, but that it dances at all.”
It’s not precise enough or expensive enough or complicated enough! All three of these things are de rigueur for any government product!
There is also the statement that simple models usually work in limited (or defined) circumstances. Classic examples are those related to motion i.e. F=ma, v=at. In all cases of normality (what we can see) they work so well that they are the basis of “rocket science”. It is only in the case of the “abnormal” that they fail.
Just because a model is simple does not mean that it is not useful in the bulk of instances, in fact the complex models may hide more than they demonstrate.
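As a trivial example of those motion formulas doing useful work in their ‘normal’ regime (arbitrary illustrative numbers):

```python
# The simple constant-acceleration formulas mentioned above, in their everyday regime.
a = 9.81             # m/s^2, acceleration under gravity
t = 3.0              # s of free fall
v = a * t            # v = a*t       -> 29.4 m/s
s = 0.5 * a * t**2   # s = a*t^2 / 2 -> 44.1 m
print(round(v, 1), round(s, 1))
```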
When it comes to complex, chaotic systems, top-down beats bottom-to-top 9.5 times out of ten. Like Climate. Or a war. Any hardbitten wargame designer knows that much. Only the newbies fall into that trap. We are dab hands at hindcasting and all too aware of the perils of forecasting.
I was always taught to – KISS = Keep It Simple Stupid,
that way you can find any errors.
Yup.
“Dr Soon said: ‘What matters to campaigners is the campaign, but what matters to scientists is the science.'”
The challenge is to get the press and the public to seek and discern the campaigners from the scientists! Campaigners are sensational and scientists are boring in the eyes of the masses. So many drink the kool-aid because the sensation brings drama to their otherwise boring lives. The media ratings go down when “all is well” and politicians lose the “fear factor” to push their agendas.
The media has brainwashed the public to think that conflict is necessary to prove one’s place in society.
RE HP-35
If the clunky battery charger looks like it was designed by a mechanical engineer, it was. My first assignment at HP was to design and tool the charger. Later, I worked on the chem-milled beryllium-copper tactile feedback of the keyboard. I also found ‘Fabradon II’, a print style that, when molded in two-color plastic, made the letters and numbers look very crisp to the eye. I still have two 35s and one still works. I bought the first one that had a ‘low’ on/off switch. Back then I chewed my fingernails to the quick and had trouble working the switch. I brought it to the team and the switch was changed to a ‘higher’ switch. What I didn’t know is that everyone on the development team got a 35, which is why I have two.
Thank you for a fine product.
Yes. I had an HP-45.
All the climate models are bad, this one is just less bad.
So, the simple model papers are available for free downloading, but the rebuttal is behind a paywall?
Maybe I missed it, but did some angel arrange for the simple model texts to be free?
Or are the authors of the rebuttal somewhat less than willing to allow their arguments to be viewed by the great unwashed masses?
The Heartland Institute very generously agreed to fund the open-access fees for both of our papers. But perhaps it is becoming harder to find funding for the true-believing side of the case, now that the discrepancy between prediction and outturn is as wide as it is.
Lest we forget… several years ago, Willis Eschenbach distilled the climate models’ performance/outputs down to a simple black box equation.
Willis Eschenbach has a keen eye and a delight in calculation, just for the sake of finding out what the answer is. That is how a true man of science operates, in an atmosphere of continual wonderment and of growing excitement as the truth is approached. These characteristics shine through in his postings here, which I have often urged him to collect in a best-selling book.
There is also value, though, in getting a simple model and its predictions of not so much global warming peer reviewed and on the record. For the time will come – you heard it here first – when governments will establish enquiries to find out how and by whom they were misled, and who profited at taxpayers’ expense. When that happens, the investigators will go back through the literature and find our paper (whose predictions will turn out to be far closer to observation than those of the billion-dollar brains).
And they will see the various attempts of the usual suspects to lie and cheat and wriggle and sneer in response to our paper. And they will deduce that a small but nasty and vociferous section of the scientific community had engaged in a monstrous freud. Then the prosecution will begin. Then, and only then, will science begin to recover from the damage these wretches have caused at such profit to themselves and at such loss to everyone else.
+1000
I long for that day and I agree it will come – all the sooner thanks to the tireless efforts and dedication of people like yourself and the scientists who refuse to bow to the manipulators and crooks of our society.
“a monstrous freud”
Is this a simple typo, or a brilliant multi-layered pun ?
No prosecutions. Let the facts be known and convey their damnation. Producing “martyrs” would be counterproductive. Look at climategate. The bad actors paid — dearly — and in the coin with which they were most unwilling to part. I wouldn’t change places with any of them, not for all the grants in Vicksburg and all the gold in Acapulco. Hoisted by their own cravats, they are. Just let them twist slowly in the wind.
In the long term, they are their own worst enemies, unless they choose to become our best friends: they are faced with the choice of being shining examples or horrible warnings. Either one suits me. Fiddle peer review? Let ’em! That always comes back to bite them in the ass in the long run. That’s why their papers are always falling flat within a month of publication. When your teacher told you that cheating on exams was only cheating yourself, he wasn’t kidding – that one was for real.
There will never be any prosecutions, since the establishment is in on the game and its members are the worst offenders.
One can understand that politicians may not understand the science. One can even understand that politicians may tend, without question, to accept the science as presented to them by government approved scientists, and one can therefore understand that politicians may truly accept that CO2 is a problem and the global emission of which needs to be curtailed.
However, even if one accepts all of that, even a school child (say of 14) would readily appreciate that the policy response is misconceived and does not result in the meaningful reduction of CO2.
Carbon taxes and carbon credits do not reduce global CO2. They merely export CO2 emissions from one place to another. In fact, they could lead to increased CO2 emissions because of the need to move raw material to the place of production and the finished article to the market of consumption.
The only form of energy production available today that would result in really substantial reductions in CO2 emissions is nuclear. Going nuclear is the only feasible response should one sign up to the claimed evils of CO2, and yet there has been very little progress in that regard since the AGW scare took off in the 1980s.
As soon as one appreciates that the sun does not shine at night, and presently we have no means of large scale energy storage for storing surplus energy when the sun is shining (hydro being a very limited option in some places), one immediately knows that solar cannot provide a significant saving in CO2 emissions since conventional backup is required from fossil fuel powered generators for the time when the sun does not shine, or when the grazing angle is low especially in high latitude countries in the Autumn, Winter and Spring.
As soon as one appreciates that wind is intermittent, and presently we have no means of large scale energy storage for storing surplus energy when the wind is blowing (hydro being a very limited option in some places), one immediately knows that windfarms cannot provide a significant saving in CO2 emissions since conventional backup is required from fossil fuel powered generators for the time when the wind does not blow.
Whilst windfarms may produce about 22 to 24% of their nameplate capacity, leading one at first glance to presume that fossil fuel generation is not required for 22 to 24% of the time and that CO2 emissions therefore fall by some 22 to 24%, this is not in fact the case, because of the manner in which the fossil-fuelled backup generation has to be run in ramp-up/ramp-down mode, which results in no saving in CO2 emissions. We all drive a car, and we all know how urban driving, with its start/stop characteristics, consumes about 50% more fuel than running the car at a steady 60 mph (100 km/h) on a freeway/motorway; the same applies to the backup generation. Having to be used in ramp-up/ramp-down mode, it produces almost the same amount of CO2 as if the backup generator had been left running at designed output all the time.
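A toy version of that arithmetic, with both numbers taken as assumptions from the comment above rather than measurements, shows how sensitive the headline saving is to the cycling penalty:

```python
# Toy arithmetic only; both inputs are assumptions lifted from the comment.
capacity_factor = 0.23    # wind output as a fraction of nameplate (the 22-24% figure)
cycling_penalty = 0.50    # extra fuel per unit of backup output when ramping (the 'urban driving' analogy)

fuel_steady    = 1.0                                             # backup running steadily, no wind
fuel_with_wind = (1.0 - capacity_factor) * (1.0 + cycling_penalty)
net_saving     = fuel_steady - fuel_with_wind                    # negative means a net fuel increase

print(f"net fuel saving: {net_saving:+.0%}")
# The cycling penalty needed merely to cancel the headline saving is
# capacity_factor / (1 - capacity_factor), roughly 30% here, which is the
# comment's point that the nominal saving can largely or wholly disappear.
```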
These are schoolchild errors in the political response to the AGW/CO2 alarmism, and it is politicians who make and force the implementation of policy. The policy response is misconceived since it does not result in the reduction of CO2 emissions and merely puts up the cost of energy. In fact, I would say that mitigation is in itself an obviously misconceived policy, and adaptation is patently to be preferred. Again, that is a political failing.
Politicians cannot afford to see a proper enquiry into this farce because at the end of the day it is their policy response which has been so misguided, no matter what the science may or may not say or truly be.
Lest we forget… Willis Eschenbach distilled the climate models’ performance/outputs down to a simple black box equation that gives the wrong answer!
So here is a different and slightly more complicated model that seems to be pretty good at giving the right answer.
I am embarrassed as a Canadian that our government has been funding one of the worst extant climate models. For a very large saving of cash we could have had a free one that works much better. Why are the modellers not ashamed of their bilking the public out of so much money? Is there some character test you have to fail to become a climate modeller?
A fine response above from Richard Verney.
Thank you for the paper NOT being paywalled! Richardson and Haufs… is on the same springer site!
Energy balance models cannot be invalidated by general circulation models.
The contrary is true.
A way to escape the necessity to have energy in balance (over time) is to believe that energy is accumulating long term in the oceans, the evidence for which is scant; and the mechanisms to release such hypothetical accumulation are not provided.
A similar model to the one discussed here, with a similarly simple approach:
http://climate.mr-int.ch/index.php/en/modelling-uk/two-layers-model-uk
http://climate.mr-int.ch/index.php/en/modelling-uk/primary-forcing-uk
http://climate.mr-int.ch/index.php/en/modelling-uk/feedback-uk
With this model, the equilibrium climate sensitivity is calculated at 0.4 to 0.8 K per doubling of CO2, meaning that only 25-30% of the observed warming can be attributed to CO2.
http://climate.mr-int.ch/images/graphs/ts_forcing_by_co2.png
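A rough cross-check of that attribution figure, using assumed round numbers for the CO2 concentrations and the observed warming (not values taken from the linked pages), brackets the 25-30% quoted:

```python
import math

ecs_low, ecs_high = 0.4, 0.8   # K per CO2 doubling, the commenter's range
c0, c = 280.0, 400.0           # pre-industrial and roughly-2015 CO2 in ppmv (assumed)
observed_warming = 0.9         # K since pre-industrial (assumed round figure)

doublings = math.log(c / c0) / math.log(2.0)   # ~0.51 doublings so far
for ecs in (ecs_low, ecs_high):
    dt = ecs * doublings
    print(f"ECS {ecs} K -> {dt:.2f} K, {dt / observed_warming:.0%} of observed warming")
# With these inputs the CO2 share comes out at roughly 23-46% of the observed
# warming, depending on where in the ECS range one sits.
```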
How much energy has been accumulated in the oceans long term?
After approximately 4.5 billion years of solar irradiance plus DWLWIR, the average temperature of the oceans is only about 3 to 4 deg C. This does not suggest that the oceans accumulate much energy long term.
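On the related point in the press release about the ARGO ‘net heating’ figure, converting a W/m² imbalance into an ocean warming rate only needs the heat capacity of the sampled layer; the flux and depth below are illustrative assumptions:

```python
# Converting a heat flux into a warming rate for the ARGO-sampled layer (assumed values).
flux  = 0.5      # W/m^2, the 'net heating' figure quoted in the press release
depth = 1900.0   # m, roughly the layer sampled by ARGO floats (assumed)
rho   = 1025.0   # kg/m^3, seawater density
cp    = 3990.0   # J/(kg K), seawater specific heat

column_heat_capacity = depth * rho * cp            # J per m^2 per K of warming
seconds_per_year     = 3.156e7
rate = flux * seconds_per_year / column_heat_capacity

print(f"{rate:.4f} K per year, i.e. 1 K in about {1.0 / rate:.0f} years")
# Roughly 1 K in 500 years with these assumptions, the same order as the
# '1 C° every 430 years' figure in the press release.
```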
Thanks for all your wonderful work.
Christopher Monckton of Brenchley, Dr Willie Soon, Dr David Legates, Dr Matt Briggs and Willis Eschenbach just to mention a few.
Good men one and all.
While my interest is in understanding (implementing) the physics reconciling, within 1%, our observed mean temperature with the energy we receive from the sun before attacking the fourth-decimal-place variations we’ve seen over the last century or are likely to see over the next, I’ll stick with my irreducibly simple empirical model: linear extrapolation. And I wouldn’t be at all surprised if it overestimates coming warming.
http://cosy.com/Science/CO2vTkelvin.jpg
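A minimal sketch of that ‘irreducibly simple empirical model’ (linear extrapolation), run here on synthetic stand-in data rather than a real anomaly series:

```python
import numpy as np

# Synthetic stand-in for a satellite-era anomaly series; a real dataset
# (RSS, UAH, HadCRUT4) would be substituted here.
years = np.arange(1979, 2016)
anoms = 0.012 * (years - 1979) + 0.05 * np.random.randn(years.size)

slope, intercept = np.polyfit(years, anoms, 1)   # ordinary least-squares trend
projection_2100 = slope * (2100 - years[-1])     # warming added by 2100 at that trend
print(f"trend {slope * 10:.3f} K/decade, +{projection_2100:.2f} K more by 2100")
```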
Monckton: I find myself in the same position as Mary Brown in not understanding your diagram at all. First, what is the significance of the different orange and turquoise colouring in the columns?
What is it that your models match? Is it, e.g. in column 4, the RSS temperature in 2012 based on the IPCC forcings from 1750 to 2012 when entered into your models, as the text seems to say? But if that is so, the RSS data only runs from 1980 – how does that relate to IPCC forcings from 1750? When you say “match”, I suppose you mean that the output range straddles the green (real) line?
Unless one is completely familiar with your ‘simple’ model procedures, it is hard to extract any clear meaning from your diagram.
Looking at all of history, it is apparent that climate sensitivity can not be significantly different from zero.
If you understand the relation between mathematics and the physical world, you understand that, for a forcing to have an effect, it must exist for a period of time and the effect of the forcing is calculated by its duration. If the forcing varies, (or not) the effect is determined by the time-integral of the forcing (or the time-integral of a function thereof).
The CO2 level has been above about 150 ppmv for at least the entire Phanerozoic eon (the last 542 million or so years). If CO2 was a forcing, its effect on average global temperature (AGT) would be calculated according to its time-integral (or the time-integral of a function thereof) for about 542 million years. Because there is no way for that calculation to consistently result in the current AGT, CO2 cannot be a forcing.
Variations of this proof and identification of what does cause climate change (R^2 > 0.97) are at http://agwunveiled.blogspot.com
“If you understand the relation between mathematics and the physical world, you understand that, for a forcing to have an effect, it must exist for a period of time and the effect of the forcing is calculated by its duration. If the forcing varies, (or not) the effect is determined by the time-integral of the forcing (or the time-integral of a function thereof).”
Absolutely, which as you (and others) have clearly demonstrated rules out CO2 as a forcing and shows the time-integral of solar activity is the forcing, (modulated by ocean oscillations, which are lagged effects of solar forcing).
http://2.bp.blogspot.com/-wLCEUB9Aw28/U-cGLCEaBPI/AAAAAAAAGMQ/NAl4KtFKmog/s1600/sunspot+integral+2.jpg
http://1.bp.blogspot.com/_nOY5jaKJXHM/S1vF1X3GdLI/AAAAAAAAAs0/okk6loUxm_o/s400/Fullscreen+capture+1232010+75750+PM.jpg
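For readers wondering what the ‘time-integral of the forcing’ means mechanically, here is a minimal sketch of the relation the two comments above invoke; the forcing series and the scale factor are placeholders, not values from the linked graphs:

```python
import numpy as np

# Placeholder forcing series; in the linked graphs it is a sunspot-number anomaly.
years   = np.arange(1900, 2016)
forcing = np.sin(2 * np.pi * (years - 1900) / 11.0) + 0.2   # toy 11-year cycle with an offset
k = 0.01                                                    # response per unit forcing-year (assumed)

temperature_response = k * np.cumsum(forcing)   # discrete time-integral, 1-year steps
print(round(temperature_response[-1], 2))
```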
Endosperm you remove from the eyes.
Those were the good old days!
When calculators were simple things that liked simple calculations, many a happy lesson was spent finding someone with a Texas Instruments calculator: a quick flick of `0 inverse tan` would send it into a loop, and the only way to clear it was to unplug the battery. Or to prove they had program errors: `2 squared, square root` only took a couple of runs to show it didn’t get back to 2 again. And the shiny Casios with the stonking coils and capacitors inside that occasionally went into meltdown.
The best trick was an FM radio and a Casio: an everlasting sound of applause as the radio interference kicked in. I gave my ‘build your own’ Sinclair RPN to my son a couple of years ago and showed him the ‘lunar landing’ programme.
Happy days.
Those who are also having problems downloading the paper may try instead directly:
http://link.springer.com/article/10.1007/s11434-015-0856-2
Worked here.
This is Arno Arrak’s comment in readable paragraphs:
Testing of either simple or complex models against history requires that you know what that history was like.
There are several breakpoints in the twentieth-century temperature record where the mechanism controlling production of the temperature curve changes. It is not permissible to use statistical smoothing methods willy-nilly to eliminate such differences.
One obvious break-point, in 1940, corresponds to the sudden introduction of World War II cooling. With it, the warm spell that started in 1915 comes to an end. Records that show warming in the forties are dead wrong. The first half of the forties was a deep and bitter cold that is variously distorted in temperature curves. Even 1947 was still so cold that a blizzard shut down the City of New York for several weeks. The next thirty years were simply recovery from that cold wave, and temperature did not reach 1940 levels again until about 1979.
In the late seventies there was a temperature rise that stalled out about that time. From then on till 1997 there was no temperature rise of any kind and a hiatus existed for 18 years. That one is missing from the official temperature curves because it was eliminated by over-writing it with a phony global warming curve. What it eliminated was the hiatus of the eighties and nineties.
It was covered up by a fake warming called ‘late twentieth century warming’. The official temperature curve that shows this warming must be corrected to show a thirty-year horizontal step in the middle of the smooth temperature rise they feature. I am ignoring the El Ninos also present in this region as oscillatory features of the temperature curve.
In 1999 a short step warming rose to connect it with the next hiatus.
In only three years it raised twenty-first century temperatures up by one third of a degree Celsius and then stopped. This was actually the only real temperature increase during the satellite era that began in 1979.
Hansen quickly claimed that such temperature rise could only be caused by greenhouse warming. He was wrong – physics does not allow you to turn it on and off like that.
The following temperature regions now have to be separately analyzed for any model comparisons. The first one is the warming from 1915 to 1940. Second is the cooling and recovery from 1940 to 1979. Next is the hiatus of the eighties and nineties. The step warming that follows it in 1999 connects the hiatus of the eighties and nineties with today’s hiatus. And the last region for model testing is the present hiatus, from the end of step warming until today.
This division of the global temperature curve must always be observed when analyzing its properties.
Using a single statistically smoothed curve to represent the entire temperature history is impermissible.
It is easy to read in this format, and there are a number of good points made.
The satellite record, like all data sets, has issues, but to the extent that it is valid and (reasonably) reliable, it is the case that there have been TWO ‘pauses’: the first from launch (1979) up to the onset of the Super El Nino of 1998, and the other following that natural event to date, with a single one-off and isolated step change in temperatures coinciding with the 1998 Super El Nino, which step change appears to have been driven by that natural event.
I have been commenting on this for years, and I am surprised that whenever recent temperatures are examined, or the ‘pause’ is looked at, or articles on climate sensitivity are raised, one rarely sees any discussion that there are TWO ‘pauses’, not one, in the satellite temperature data sets.
Much of the warming seen in the land based thermometer data sets as from say the mid 1970s is likely to be an artefact of data adjustment/homogenisation and/or pollution by station drop outs and UHI. This is perhaps why Michael Mann’s (Briffa’s) tree rings showed no warming in the period mid 1970s up to early 1990s and why Michael Mann decided that instead of using tree ring data post the late 1960s/early 1970s in his seminal paper/hockey stick plot, the land based thermometer record should be spliced onto his series to show (rapid) warming when in fact there may well have been no warming at all.
To sometimes reach stupid conclusions is human, to reach ridiculously stupid conclusions you need super computers.
That is a crucial criticism. It is a false ‘a priori’ premise of the ‘climate campaigners’ that change must be manmade and natural changes are background noise.
John
As the real value of CO2 is zero, obviously any model that is closer to that figure than current models will perform better. That doesn’t change the fact that your models would be worthless if you applied them to Venus or Mars. Until you accept that gravity, mass and incoming solar energy are responsible for virtually all of the average temperatures, and that atmospheric composition (especially in regard to trace gases) is almost entirely irrelevant, there is no hope for you!
“They complained that our simple model had left out ‘many physical processes’. Of course it did: it was simple. Its skill lies in rejecting the unnecessary, retaining only the essential processes.”
We call that one “top-down” in the game biz. I have been shouting out for top-down model design for years now. The errors are easy to spot and correct. Alternate scenarios are far easier and more relevant. There is no “crack-the-whip” going on with the data.
We are finally seeing it happen now, with several new models – all of which show lower CO2 TCR and ECS than the IPCC’s CMIP3 or CMIP5 ensembles.
So-called ‘man-made global warming’ is not about science… it’s about money. The global economies are broke (200 trillion in debt and climbing) and the pols don’t have the guts to increase taxes or cut government, so they are using this fraud – ‘the world is going to end’ – as their tool to scare people into paying hundreds of billions of dollars a year in higher energy taxes to fight an imaginary problem. The result will be massive economic turmoil, unemployment and the death of millions (mainly in third-world countries).
‘Our model shows there is no manmade climate problem.’
*****************************
And there’s the whole crux of the issue. There simply ISN’T a man-made problem. But that is a decidedly unwelcome observation, hence the tsunami of climate lies.
Warming, yes. Problem, not so much.
As worthless as any other climate model.
What a farce, a model where most of the science is unknown.
Reblogged this on JunkScience.com and commented:
Predictions are difficult, especially about the future. I suppose we are to be more impressed by the size of the wizard’s machine.