
Guest essay by Eric Worrall
The researchers claim that adding fudge factors derived from historical data to correct the discrepancy between climate models and historical observations – producing a Frankenmodel mix of fudge factors and defective physics – will make climate predictions more reliable.
New approach to global-warming projections could make regional estimates more precise
Computer models found to overestimate warming rate in some regions, underestimate it in others
Date: May 15, 2018
Source: McGill University
Summary:
A new method for projecting how the temperature will respond to human impacts supports the outlook for substantial global warming throughout this century – but also indicates that, in many regions, warming patterns are likely to vary significantly from those estimated by widely used computer models.
…
“By establishing a historical relationship, the new method effectively models the collective atmospheric response to the huge numbers of interacting forces and structures, ranging from clouds to weather systems to ocean currents,” says Shaun Lovejoy, a McGill physics professor and senior author of the study.
“Our approach vindicates the conclusion of the Intergovernmental Panel on Climate Change (IPCC) that drastic reductions in greenhouse gas emissions are needed in order to avoid catastrophic warming,” he adds. “But it also brings some important nuances, and underscores a need to develop historical methods for regional climate projections in order to evaluate climate-change impacts and inform policy.”
…
Read more: https://www.sciencedaily.com/releases/2018/05/180515113555.htm
The abstract of the study:
Regional Climate Sensitivity‐ and Historical‐Based Projections to 2100
Raphaël Hébert, Shaun Lovejoy
First published: 13 March 2018
Abstract
Reliable climate projections at the regional scale are needed in order to evaluate climate change impacts and inform policy. We develop an alternative method for projections based on the transient climate sensitivity (TCS), which relies on a linear relationship between the forced temperature response and the strongly increasing anthropogenic forcing. The TCS is evaluated at the regional scale (5° by 5°), and projections are made accordingly to 2100 using the high and low Representative Concentration Pathways emission scenarios. We find that there are large spatial discrepancies between the regional TCS from 5 historical data sets and 32 global climate model (GCM) historical runs and furthermore that the global mean GCM TCS is about 15% too high. Given that the GCM Representative Concentration Pathway scenario runs are mostly linear with respect to their (inadequate) TCS, we conclude that historical methods of regional projection are better suited given that they are directly calibrated on the real world (historical) climate.
Plain Language Summary?
In this paper, we estimate the transient climate sensitivity, that is, the expected short‐term increase in temperature for a doubling of carbon dioxide concentration in the atmosphere, for historical regional series of temperature. We compare our results with historical simulations made using global climate models and find that there are significant regional discrepancies between the two. We argue that historical methods can be more reliable, especially for the more policy‐relevant short‐term projections, given that the discrepancies of the global climate models directly bias their projections.
Read more (paywalled): https://agupubs.onlinelibrary.wiley.com/doi/abs/10.1002/2017GL076649
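For readers curious about the mechanics: the TCS method described in the abstract amounts to regressing the temperature record against an anthropogenic forcing series and scaling the fitted slope to a CO2 doubling. Here is a minimal sketch of that idea in Python, using hypothetical placeholder arrays rather than the authors' actual gridded data sets:

import numpy as np

# Hypothetical inputs: annual temperature anomalies (degC) and
# anthropogenic forcing (W/m^2) over the same historical period.
# In the paper these would be 5x5-degree gridded observations and
# RCP forcing series; here they are toy placeholders.
years = np.arange(1880, 2018)
forcing = 0.02 * (years - 1880)                               # toy forcing ramp
temp = 0.4 * forcing + np.random.normal(0, 0.1, years.size)   # toy response

# TCS assumption: the forced response is linear in forcing, T ~ lambda * F.
slope, intercept = np.polyfit(forcing, temp, 1)   # degC per W/m^2

F_2XCO2 = 3.7   # canonical forcing for a doubling of CO2, in W/m^2
tcs = slope * F_2XCO2
print(f"Estimated TCS: {tcs:.2f} degC per CO2 doubling")

# A projection to 2100 then just applies the fitted slope to a
# scenario forcing series (e.g., an RCP pathway), per grid cell.

Whether one calls the fitted slope a calibration or a fudge factor is, of course, the argument of this post.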
The researchers hope this mixture of historical fudge factors and defective physics will be more acceptable as the basis of climate policy decisions than just using the defective physics.
Do I understand correctly that the famed Climate Sensitivity is not one global number, but that it is highly localized? What a beauty – now the uncertainty interval means simply that you could always find a locale for any particular value. That’s a great research project.
Before this thread gets derailed…the photo caption says “Fromm” instead of “From”…
“…Our approach vindicates the conclusion of the Intergovernmental Panel on Climate Change (IPCC) that drastic reductions in greenhouse gas emissions are needed in order to avoid catastrophic warming…”
Which part of their “approach” addresses that conclusion? Where was warming demonstrated to be “catastrophic?” And why would someone’s “approach” knowingly “vindicate the conclusion” of an intergovernmental party? Shouldn’t the results of a scientific study potentially do that rather than the approach?
This latest rationalisation of methods from the developers of Climate Models makes Frankenstein look like scientific fact by comparison.
Demand that CMIP6 models all use the same inputs for hindcasts. Clouds, aerosols, etc.
Please be advised that this researcher has already looked at some historical data that climate scientists appear to studiously ignore, such as the time series of UAH satellite lower-troposphere temperature set against ground-based observatory CO2 concentration data.
The results to date show:
a. There is no statistically significant correlation between satellite lower troposphere temperature and atmospheric CO2 concentration, that is, the term CO2:temperature climate sensitivity is meaningless;
b. There is a statistically significant correlation between the temperature and the rate of change of CO2 concentration;
c. Temperature change precedes CO2 change, so the former cannot possibly be caused by the latter;
d. The temperature and the rate of change of CO2 concentration have practically identical autocorrelation coefficients with a prominent maximum at about 42 months;
e. This is confirmed by their practically identical Fourier amplitude spectra, with a glaringly obvious peak at a period of 42 months – a period which does not appear to feature in discussion of any climate models.
The Fourier amplitude spectra display a number of maxima, such as the Moon’s 27.21-day draconic period and its 29.53-day synodic period. This should be evident to anyone who knows that the Moon passes between the Sun and the Earth every month. However, it would appear that climate scientists are incapable of performing such elementary time series analysis.
For more detail see:
https://www.climateauditor.com
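For what it’s worth, the kind of analysis described above – correlating temperature against the rate of change of CO2 and inspecting Fourier amplitude spectra – is easy to sketch. The following Python fragment is only an illustration with synthetic placeholder series, not the commenter’s actual calculation:

import numpy as np

# Placeholder monthly series; in the analysis described above these would
# be UAH lower-troposphere anomalies and Mauna Loa CO2 concentrations.
n = 480                                   # 40 years of monthly data
t = np.arange(n)
temp = np.sin(2 * np.pi * t / 42) + np.random.normal(0, 0.3, n)  # toy anomaly
co2 = 330 + 0.15 * t + np.cumsum(0.05 * temp)                    # toy CO2 (ppm)

dco2 = np.diff(co2)                       # month-to-month rate of change
temp_m = temp[:-1]                        # align lengths

# Correlation of temperature with the CO2 level vs. with its rate of change
r_level = np.corrcoef(temp_m, co2[:-1])[0, 1]
r_rate = np.corrcoef(temp_m, dco2)[0, 1]
print(f"corr(T, CO2) = {r_level:.2f}, corr(T, dCO2/dt) = {r_rate:.2f}")

# Fourier amplitude spectrum of the detrended temperature series
detrended = temp - np.polyval(np.polyfit(t, temp, 1), t)
amp = np.abs(np.fft.rfft(detrended))
freqs = np.fft.rfftfreq(n, d=1.0)         # cycles per month
peak_period = 1.0 / freqs[np.argmax(amp[1:]) + 1]
print(f"Dominant period: {peak_period:.0f} months")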
We heard you the first time. Thanks
“…projections based on the transient climate sensitivity (TCS), which relies on a linear relationship between the forced temperature response and the strongly increasing anthropogenic forcing.”
Where is there proof of this linear relationship between anthropogenic forcing and temperature?
This is a craftily presented dose of pseudo-scientific jargon.
An assumption, built into their models.
In my work, I insist on modellers setting out all their assumptions, so that both they and we know where the results are coming from. Climate modellers hide their assumptions away and then claim that doing arithmetic to assumptions produces non-assumptions.
That is simply false. Any number derived from an assumption is itself an assumption.
“Plain Language Summary?”
Models which incorporate the worst of both worlds were called “semi-empirical” when I was a student.
Science is the art of discovering that you got it wrong. It’s not the art of convincing others that you are right. That’s politics.
That’s correct. Science is never right. The best that we can hope for is that it’s not wrong.
AGW hypotheses are clearly wrong.
Meanwhile, religion is the art of persuading people to believe things which patently don’t make sense.
…Plain Language Summary?
In this paper, we estimate the transient climate sensitivity, that is, the expected short‐term increase in temperature for a doubling of carbon dioxide concentration in the atmosphere, for historical regional series of temperature. We compare our results with historical simulations made using global climate models…
Plainer Language Summary
The models are bust – their predictions don’t match reality.
So our method simply looks at reality, and picks times when the temperature is going up.
Since we are coming out of the LIA, we can easily show a historical rise over the Modern period.
Therefore we’re all going to die!!!!
P.S. Send Money…
This McGill study is warmist bullsh!t. Climate sensitivity to atmospheric CO2 is clearly less than about 1C/(2xCO2) and probably much less – closer to 0.0C than 1.0C.
[excerpt from below]
There is ample Earth-scale evidence that TCS is less than or equal to about 1C/(2xCO2).
There is no credible evidence that it is much higher than that.
At this magnitude of TCS, the global warming crisis does not exist.
Best, Allan
https://wattsupwiththat.com/2018/05/05/the-biggest-deception-in-the-human-caused-global-warming-deception/comment-page-1/#comment-2809025
I suggest that global warming alarmism has never been credibly demonstrated to be correct and has repeatedly been falsified.
The argument is about one parameter – the sensitivity of climate to increasing CO2. Let’s call that TCS.
There is ample Earth-scale evidence that TCS is less than or equal to about 1C/(2xCO2).
There is no credible evidence that it is much higher than that.
At this magnitude of TCS, the global warming crisis does not exist.
Examples:
1. Prehistoric CO2 concentrations in the atmosphere were many times today’s levels, and there was no runaway or catastrophic warming.
2. Fossil fuel combustion and atmospheric CO2 strongly accelerated after about 1940, but Earth cooled significantly from ~1940 to ~1977.
3. Atmospheric CO2 increased after ~1940, but the warming rates that occurred pre-1940 and post-1977 were about equal.
4. Even if you attribute ALL the warming that occurred in the modern era to increasing atmospheric CO2, you only calculate a TCS of about 1C/(2xCO2). [Christy and McNider (1994 & 2017), Lewis and Curry (2018).] (See the worked illustration below.)
There are many more lines of argument that TCS is low – these are a few.
Regards, Allan
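As a rough illustration of the arithmetic behind point 4 above (the result depends entirely on which temperature and forcing estimates one adopts), attributing all observed warming $\Delta T$ to an anthropogenic forcing change $\Delta F$ gives a transient sensitivity of

$$\mathrm{TCS} \approx \Delta T \cdot \frac{F_{2\times \mathrm{CO_2}}}{\Delta F}, \qquad F_{2\times \mathrm{CO_2}} \approx 3.7\ \mathrm{W\,m^{-2}}.$$

With illustrative values of $\Delta T \approx 0.6$°C and $\Delta F \approx 2.3$ W/m², this gives $0.6 \times 3.7 / 2.3 \approx 1.0$°C per doubling; a larger $\Delta T$ or smaller $\Delta F$ pushes the estimate up, which is one reason published values differ.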
Climate communicreator Kathy Hayhoe recommends
“The models need more eye of newt and less toads’ skins when there’s a full moon.”
I’m not quite sure some commenters know the paper; it’s remarkable indeed. I cited some parts of the conclusions over at Judy’s; see https://judithcurry.com/2018/04/30/why-dessler-et-al-s-critique-of-energy-budget-climate-sensitivity-estimation-is-mistaken/#comment-871885
The core finding: “Given these facts, it is questionable how accurate the mean projection of the MME will be given that its spatial pattern of warming evolves little over the 21st century and remains very close to its RTCS over the historical period, which significantly differs over most of the globe from the actual RTCS of historical observations. Therefore, the resulting MME projection will suffer from the same faults as the historical runs; that is, it will be a projection of the faulty GCM climate in historical runs. GCMs are important research tools, but their regional projections are not yet reliable enough to be taken at face value. It is therefore required to develop further historical approaches that will thus project the real world climate rather than the GCM climates.”
(MME: multi-model ensemble; RTCS: regional transient climate sensitivity)
The paper clearly contradicts some approaches that try to raise the observed (relatively low) TCR of about 1.3 according to Lewis/Curry 2018 by assuming that models better simulate the possible “warming patterns”. It shows that those modelled patterns for the future are only extrapolations of wrong patterns from the present. The study bolsters the observed sensitivity values and is well worth reading. Google the DOI of the paper and you’ll find a full copy. It’s always better to discuss a known issue than to write about a guess.
The (climate) science is barely relevant here. The appropriate perspective is basic psychology. You have to expect a modeller to defend his models. The alternative is to expect him to dispassionately recognise their inherent, irremediable defects and seek employment elsewhere. This is contrary to human nature.
Perceived self-worth is as important to modellers as to the rest of us.
Modellers should be honest, and admit that they are ALWAYS just modelling assumptions.
As the doyen of modelling said: “all models are wrong, but some are useful.”
Models can be valuable, but only if they are used correctly.
Fewer cyclones with climate change – but why?
7 April 2011
“We can’t give a lucid answer at this time,” replied Knutson – which, he admitted, was a concern.
One clue, however, came from the modelling.
Knutson said that removing carbon dioxide from the models wiped off around half of the cyclone’s predicted intensity.
https://www.newscientist.com/blogs/shortsharpscience/2011/04/fewer-cylones-with-climate-cha.html
[the link 404’s out . . . mod]
Link still works for me.
Apologies.
I tried ‘oogle’ of the headline: “Fewer cyclones with climate change – but why?”, and it was on the first page;
“Short sharp science: Fewer cyclones with climate change – but why?”
Hope that works for anyone interested.
Here’s the erudite (speculative) answer from the article:
Back to university for you Steve…
Comparing one computer model (even if that’s not what you called it) with another computer model is not science.
If they had one reliable model without any fudge factors, then that would be the only model that they would use. The fact that they have a plethora of models with fudge factors giving different results signifies that a lot of guesswork is involved. Only one model can possibly be right, but they do not know which one, and it may be that all their models are wrong. I believe that they have hard-coded that CO2 causes warming, so that their climate simulations all beg the question and are hence useless. Predictions made by models that are both wrong and useless are themselves wrong and useless. There is also plenty of scientific rationale to support the idea that the climate sensitivity of CO2 is zero.
Eric Worrall May 16, 2018 at 3:52 pm said: The researchers were careful to avoid mentioning equilibrium response; they focussed on transient climate response…
No offense meant to anyone at all, but some of this is getting sillier and sillier. Study after study with no practical solution pops up like weeds in the garden. Alarmism is turning into some weird cult mentality that seems to fill a need for unaddressed spirituality. Maybe these twits should go to church on Sunday. They might chill out a little.
I’m fascinated by silly stuff. Climate sci-fi is entering the Twilight Zone of ‘the sillier and more alarming, the better’, but the weather guessers can’t even provide a mildly accurate weather forecast any more. This Climate Sci-fi stuff is rapidly approaching the category of faddishness, something that will soon pass because it is baloney. But it gets the prognosticators money and attention and soothes their need for approval. It must be a lonely line of work when your only response comes from a computer screen.
However, some of the silly stuff coming out of this is useful for sci-fi and fantasy fiction, so I’m making notes of a lot of it in my Big What If? Notebook, for future use. For example, what if “the researchers” (unknown but closeted, like Irish monks copying stuff in cells in the 10th century) predict that a decades-long heat wave would sweep the entire planet in mid-winter (because they forgot that seasonal changes are hemispherical) and instead, an excessive snow load pounds the northern hemisphere and the southern hemisphere gets excessive rain and flooding. Oh, wait – that’s already happening, isn’t it?
The more I see of these research “results”, the more I think these people should find other jobs, something that connects them to the real world. Gardening on a rooftop in summer is a good start. I think what they’re trying to tell us is that, deep down inside, they just don’t like winter weather and want it to go away permanently, and it is NOT going to happen. I want to see all the alarmists mired in snow up to their hoohahs.
I may grumble a little about keeping my furnace running longer than normal, but I’m in the real world. The Climate Sci-Fi peeps are not.
Cthulhu rules! Thanks for listening!
Rant over. Breakfast!!!
Hi Sara,
Andrew Weaver was Professor of Climate Modelling at the University of Victoria, British Columbia until recently, when he became head of the three-elected-member fringe Green Party of British Columbia.
Pretty sure that was NOT a good thing for British Columbia or Canada.
Warmist climate models typically employ(ed) values of climate sensitivity that are (were) 3 to 10 times too high, and so these models deliberately run hot. Quelle surprise!
These climate models are not remotely scientific – they are political instruments – they just run hot, which is what they are designed to do. Why? To motivate corrupt and ignorant politicians and scare the idiots who vote for them.
Global warming alarmism is a multi-trillion dollar scam – now the most costly scam in human history. Corrupt politicians love a big scam – the bigger the better – because they can skim off the most graft – a little scam is hardly worth the trouble.
In the same post no less!!
Thanks for that!
I’m really pleased to see that this is becoming more and more apparent. Years ago when I would argue about using fudge factors in the models, almost no one, including scientists, seemed to know enough to comment. The interesting thing is – there is not just *a* fudge factor for the various models. Each model has its own fudge factor. This, IMHO, clearly indicates that it is not only the physics of the CO2/temp relationship that is unknown.
I’ve modeled commodity prices for about twenty-five years now. I started with an 80286 processor. The advantage I have had in thinking about these climate models is that, when I ran a model, I would bet on the output. It is very difficult to ignore when the model is not right when you look at your balance sheet. People who model climate do not have this advantage, it seems. (g)
Hi Kermit,
Since ALL the models run hot, and the modelers know it, they will not bet – because they would lose their money.
They make their money in other ways, by contributing to the global warming scam, getting research grants to scare people with fictions about very-scary global warming, travelling to conferences in exotic locales, getting money under-the-table for supporting green energy schemes, etc.
It’s a great life, being a global warming scam alarmist.
Ummmmm. Fudge……
Climate is controlled by natural cycles. Earth is just past the 2003 +/- peak of a millennial cycle, and the current cooling trend will likely continue until the next Little Ice Age minimum at about 2650. See the Energy and Environment paper at http://journals.sagepub.com/doi/full/10.1177/0958305X16686488
and an earlier accessible blog version at http://climatesense-norpag.blogspot.com/2017/02/the-coming-cooling-usefully-accurate_17.html
Here is part of section 1 of the paper which deals with climate sensitivity.
“For the atmosphere as a whole therefore cloud processes, including convection and its interaction with boundary layer and larger-scale circulation, remain major sources of uncertainty, which propagate through the coupled climate system. Various approaches to improve the precision of multi-model projections have been explored, but there is still no agreed strategy for weighting the projections from different models based on their historical performance so that there is no direct means of translating quantitative measures of past performance into confident statements about fidelity of future climate projections. The use of a multi-model ensemble in the IPCC assessment reports is an attempt to characterize the impact of parameterization uncertainty on climate change predictions. The shortcomings in the modeling methods, and in the resulting estimates of confidence levels, make no allowance for these uncertainties in the models. In fact, the average of a multi-model ensemble has no physical correlate in the real world.
The IPCC AR4 SPM report section 8.6 deals with forcing, feedbacks and climate sensitivity. It recognizes the shortcomings of the models. Section 8.6.4 concludes in paragraph 4 (4): “Moreover it is not yet clear which tests are critical for constraining the future projections, consequently a set of model metrics that might be used to narrow the range of plausible climate change feedbacks and climate sensitivity has yet to be developed”
What could be clearer? The IPCC itself said in 2007 that it doesn’t even know what metrics to put into the models to test their reliability. That is, it doesn’t know what future temperatures will be and therefore can’t calculate the climate sensitivity to CO2. This also begs a further question of what erroneous assumptions (e.g., that CO2 is the main climate driver) went into the “plausible” models to be tested any way. The IPCC itself has now recognized this uncertainty in estimating CS – the AR5 SPM says in Footnote 16 page 16 (5): “No best estimate for equilibrium climate sensitivity can now be given because of a lack of agreement on values across assessed lines of evidence and studies.” Paradoxically the claim is still made that the UNFCCC Agenda 21 actions can dial up a desired temperature by controlling CO2 levels. This is cognitive dissonance so extreme as to be irrational. There is no empirical evidence which requires that anthropogenic CO2 has any significant effect on global temperatures.
The climate model forecasts, on which the entire Catastrophic Anthropogenic Global Warming meme rests, are structured with no regard to the natural 60+/- year and, more importantly, 1,000 year periodicities that are so obvious in the temperature record. The modelers approach is simply a scientific disaster and lacks even average commonsense. It is exactly like taking the temperature trend from, say, February to July and projecting it ahead linearly for 20 years beyond an inversion point. The models are generally back-tuned for less than 150 years when the relevant time scale is millennial. The radiative forcings shown in Fig. 1 reflect the past assumptions. The IPCC future temperature projections depend in addition on the Representative Concentration Pathways (RCPs) chosen for analysis. The RCPs depend on highly speculative scenarios, principally population and energy source and price forecasts, dreamt up by sundry sources. The cost/benefit analysis of actions taken to limit CO2 levels depends on the discount rate used and allowances made, if any, for the positive future positive economic effects of CO2 production on agriculture and of fossil fuel based energy production. The structural uncertainties inherent in this phase of the temperature projections are clearly so large, especially when added to the uncertainties of the science already discussed, that the outcomes provide no basis for action or even rational discussion by government policymakers. The IPCC range of ECS estimates reflects merely the predilections of the modelers – a classic case of “Weapons of Math Destruction” (6).
“The structural uncertainties inherent in this phase of the temperature projections are clearly so large, especially when added to the uncertainties of the science already discussed, that the outcomes provide no basis for action or even rational discussion by government policymakers. The IPCC range of ECS estimates reflects merely the predilections of the modelers – a classic case of “Weapons of Math Destruction” (6).”
Wow, well written and pretty damning. If you have to use fudge factors to even go back 150 years, then it is obvious you are not including relevant calculations in your model. Any such model is worthless.
Fudge factor
A fudge factor is an ad hoc quantity introduced into a calculation, formula or model in order to make it fit observations or expectations. Examples include Einstein’s Cosmological Constant, dark energy, dark matter and inflation.
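To make that definition concrete, here is a toy Python sketch of a fudge factor at work: a single scale parameter tuned so that a crude model reproduces observations, with no physics behind the tuned number. All names and values here are hypothetical:

import numpy as np

# A toy "physics" model that systematically runs hot.
def model_temperature(forcing):
    return 0.8 * forcing                  # degC; exaggerated response

forcing = np.linspace(0.0, 2.5, 50)                        # W/m^2
observed = 0.5 * forcing + np.random.normal(0, 0.05, 50)   # what reality did

# The fudge factor: a scalar chosen purely to minimize the mismatch.
raw = model_temperature(forcing)
fudge = np.sum(raw * observed) / np.sum(raw * raw)  # least-squares scale
tuned = fudge * raw

# The tuned hindcast now matches, but the factor (~0.63 here) has no
# physical meaning; a different model would need a different factor.
print(f"Fudge factor: {fudge:.2f}")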
And that’s why it is theory. Or is it? Stay tuned.
To be clear, Catastrophic Man-made Global Warming is not a Theory, it is a Hypothesis, and it is a FAILED Hypothesis because it has been repeatedly disproved.
The argument is about one parameter – the sensitivity of climate to increasing CO2. Let’s call that TCS.
There is ample Earth-scale evidence that TCS is less than or equal to about 1C/(2xCO2).
There is no credible evidence that TCS is 3C to 10C/(doubling of CO2), as warmists have falsely claimed now and/or in the past.
At TCS less than or equal to 1C/(2xCO2), THE GLOBAL WARMING CRISIS DOES NOT EXIST, except in the fevered minds of warmist fanatics.
Examples of evidence of low TCS:
1. Prehistoric CO2 concentrations in the atmosphere were many times today’s levels, and there was no runaway or catastrophic warming.
2. Fossil fuel combustion and atmospheric CO2 strongly accelerated after about 1940, but Earth cooled significantly from ~1940 to ~1977.
3. Atmospheric CO2 strongly increased after ~1940, but the warming rates that occurred pre-1940 and post-1977 were about equal.
4. Even if you attribute ALL the warming that occurred in the modern era to increasing atmospheric CO2, you only calculate a TCS of about 1C/(2xCO2). [Christy and McNider (1994 & 2017), Lewis and Curry (2018).]
There are many more lines of argument that TCS is low – these are just a few.
Scoundrels and imbeciles continue to advocate that increasing atmospheric CO2 is causing dangerous global warming, wilder weather, etc. These warmist claims are not only unproven, they are false. Increasing atmospheric CO2 is beneficial to humanity and the environment – plant and crop growth is enhanced, and any resulting warming will be mild and beneficial.
Regards, Allan
A fudge factor is an ad hoax quantity, more than it is an “ad hoc” quantity.
This Hebert & Lovejoy paper is sensible and well argued. It shows that the incorrect normalized pattern of historical warming in global climate models looks almost the same as the future warming under RCP scenarios. It quantifies the historical and future warming pattern correlation in 32 CMIP5 models, at 0.95 to 0.98 depending on scenario. The CMIP5 ensemble-mean warming pattern does not evolve over the 21st century, even for RCP2.6, remaining virtually identical between 2000-2020 and each successive 20-year period up to 2080-2100. They conclude that this warming pattern is unrealistic and that projected warming is likely to be about 15% too high in the global mean, based on what happened over the historical period in models and in reality.
The study’s finding that the CMIP5 ensemble-mean global warming response in historical simulations is only 15% above that observed reflects its use of the same RCP forcing data for both models (for which it is arguably a reasonable approximation, on average) and observations (for which it is much too low). The 15% excess warming in models becomes a 40% excess when using extended AR5 forcings with revised, updated greenhouse gas forcing formulae and post-1990 ozone & aerosol forcing changes, as in Lewis & Curry 2018.
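For anyone who wants to reproduce the pattern-correlation statistic mentioned above, the calculation is just an area-weighted spatial correlation between two gridded warming fields. A minimal Python sketch, with random placeholder fields standing in for the CMIP5 output:

import numpy as np

def pattern_correlation(field_a, field_b, lats):
    # Area-weighted spatial correlation of two lat-lon fields.
    w = np.cos(np.radians(lats))[:, None] * np.ones_like(field_a)
    w = w / w.sum()
    ma, mb = np.sum(w * field_a), np.sum(w * field_b)
    cov = np.sum(w * (field_a - ma) * (field_b - mb))
    var_a = np.sum(w * (field_a - ma) ** 2)
    var_b = np.sum(w * (field_b - mb) ** 2)
    return cov / np.sqrt(var_a * var_b)

# Placeholder 5x5-degree warming patterns (36 latitudes x 72 longitudes);
# in the paper these are historical and end-of-century ensemble-mean fields.
lats = np.arange(-87.5, 90.0, 5.0)
hist = np.random.rand(36, 72)
future = hist + 0.05 * np.random.rand(36, 72)   # nearly identical pattern

print(f"r = {pattern_correlation(hist, future, lats):.3f}")  # close to 1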
In my opinion your forecasts are still much too high because you continue to ignore the millennial peak and project straight ahead beyond the millennial inflection point at about 2004.
See fig 12 from Energy and Environment paper at http://journals.sagepub.com/doi/full/10.1177/0958305X16686488
and an earlier accessible blog version at
http://climatesense-norpag.blogspot.com/2017/02/the-coming-cooling-usefully-accurate_17.html
Fig. 12. Comparative Temperature Forecasts to 2100.
Fig. 12 compares the IPCC forecast with the Akasofu (31) forecast (red harmonic) and with the simple and most reasonable working hypothesis of this paper (green line): that the “Golden Spike” temperature peak at about 2003 is the most recent peak in the millennial cycle. Akasofu forecasts a further temperature increase to 2100 of 0.5°C ± 0.2°C, rather than the 4.0°C ± 2.0°C predicted by the IPCC, but this interpretation ignores the millennial inflexion point at 2004. Fig. 12 shows that the well-documented 60-year temperature cycle coincidentally also peaks at about 2003. Looking at the shorter 60 +/- year wavelength modulation of the millennial trend, the most straightforward hypothesis is that the cooling trends from 2003 forward will simply be a mirror image of the recent rising trends. This is illustrated by the green curve in Fig. 12, which shows cooling until 2038, slight warming to 2073 and then cooling to the end of the century, by which time almost all of the 20th century warming will have been reversed.
As for TCS etc., see also the 6:31 am comment above.
Best Regards, Norman Page
Not a surprise. I became a skeptic in the early days of the global warming claims when I studiously began looking into the claims. Back then most universities had open websites with easy access to almost all research information. I quickly encountered one of the main proponents making a statement similar to: “Some have questioned my removing contradictory data from my results. I claim this is justified because the theory is so obviously true that contrary results must be incorrect.” Clearly this individual wasn’t doing science. Ever since I have closely examined claims and found that very few of the warming alarmists are doing actual science.