A collection of fudge from The Team, sweet!
ClimateGate FOIA grepper! – Email 636
Solution 1: fudge the issue. Just accept that we are Fast-trackers and can therefore get away with anything.
[Hat tip: M. Hulme]
In any simple global formula, there should be at least two clearly identifiable sources of uncertainty. One is the sensitivity (d(melt)/dT) and the other is the total available ice. In the TAR, the latter never comes into it in their analysis (i.e., the ‘derivation’ of the GSIC formula) — but my point is that it *does* come in by accident due to the quadratic fudge factor. The total volume range is 5-32cm, which is, at the very least, inconsistent with other material in the chapter (see below). 5cm is clearly utterly ridiculous.
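Purely as a hypothetical illustration of that point (this is not the actual TAR GSIC formula): if cumulative melt were modelled as a quadratic in integrated warming, say melt(I) = c1*I - c2*I^2 with I the time-integral of the temperature change, the quadratic term caps melt at c1^2/(4*c2) no matter how warm it gets, so a "total available ice" sneaks in through the back door even though no ice volume was ever specified.

# Hypothetical numbers, chosen only to show the mechanism of an implicit cap.
c1 = 0.16     # assumed linear sensitivity, cm of sea level per (degC * yr)
c2 = 0.0004   # assumed quadratic coefficient, cm per (degC * yr)^2

implied_total_ice = c1**2 / (4.0 * c2)   # maximum of c1*I - c2*I^2 over all I
print(implied_total_ice)                 # 16.0 cm: an implicit total, never stated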
Email 5054, Colin Harpham, UEA, 2007
I will press on with trying to work out why the temperature needs a ‘fudge factor’ along with the poorer modelling for winter.
Email 1461, Milind Kandlikar, 2004
With GCMs the issue is different. Tuning may be a way to fudge the physics. For example, understanding of clouds or aerosols is far from complete – so (ideally) researchers build the “best” model they can within the constraints of physical understanding and computational capacity. Then they tweak parameters to provide a good approximation to observations. It is this context that all the talk about “detuning” is confusing. How does one speak of “detuning” using the same physical models as before? A “detuned” model merely uses a different set of parameters that match observations – it not hard to find multiple combinations of parameters that give the similar model outputs (in complex models with many parameters/degrees of freedom) So how useful is a detuned model that uses old physics? Why is this being seen as some sort of a breakthrough?
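A minimal toy sketch of the "multiple combinations of parameters" point (this is an invented one-box energy-balance model, not anything from the emails or any actual GCM; lam, C and the forcing scale s are hypothetical parameters):

# Two different parameter combinations of a toy one-box model,
# dT/dt = (s*F(t) - lam*T)/C, giving indistinguishable output.
import numpy as np

def run(lam, C, s, F, dt=1.0):
    T = np.zeros(len(F))
    for i in range(1, len(F)):
        T[i] = T[i-1] + dt * (s * F[i-1] - lam * T[i-1]) / C
    return T

years = np.arange(1900, 2001)
F = 0.04 * (years - 1900)                  # made-up forcing ramp, W/m^2

T_a = run(lam=1.0, C=8.0,  s=1.0, F=F)     # lower sensitivity, weaker forcing scaling
T_b = run(lam=1.5, C=12.0, s=1.5, F=F)     # higher lam offset by larger s and C

print(np.max(np.abs(T_a - T_b)))           # ~0: the two tunings match to rounding error

The two tunings agree because scaling lam, C and s together leaves the equation unchanged; real model tuning is far subtler, but the non-uniqueness Kandlikar describes is of this kind.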
We had to remove the reference to “700 years in France” as I am not sure what this is, and it is not in the text anyway. The use of “likely”, “very likely” and my additional fudge word “unusual” are all carefully chosen where used.
Email 723, Elaine Barrow, UEA, 1997
Either the scale needs adjusting, or we need to fudge the figures…
;****** APPLIES A VERY ARTIFICIAL CORRECTION FOR DECLINE*********
;
yrloc=[1400,findgen(19)*5.+1904]
valadj=[0.,0.,0.,0.,0.,-0.1,-0.25,-0.3,0.,-0.1,0.3,0.8,1.2,1.7,2.5,2.6,2.6,$
2.6,2.6,2.6]*0.75 ; fudge factor
if n_elements(yrloc) ne n_elements(valadj) then message,'Oooops!'
h/t to Tom Nelson
Maybe it isn’t fudge, but a social issue. Robert Bradley writes:
Here is my favorite quotation:
“[Model results] could also be sociological: getting the socially acceptable answer.”– Gerald North (Texas A&M) to Rob Bradley (Enron), June 20, 1998.
See “Gerald North on Climate Modeling Revisited (re Climategate 2.0)”: http://www.masterresource.org/2011/11/gerald-north-on-climate-modeling-revisited-re-climategate-2-0/
![fudge-factor-elite[1]](http://wattsupwiththat.files.wordpress.com/2011/11/fudge-factor-elite1.jpg?w=251&resize=251%2C300)
C’mon guys. You’re taking these quotes out of context.
/sarc
Jim
Briffa_sep98 code
;****** APPLIES A VERY ARTIFICIAL CORRECTION FOR DECLINE*********
;
yrloc=[1400,findgen(19)*5.+1904]
valadj=[0.,0.,0.,0.,0.,-0.1,-0.25,-0.3,0.,-0.1,0.3,0.8,1.2,1.7,2.5,2.6,2.6,$
2.6,2.6,2.6]*0.75 ; fudge factor
if n_elements(yrloc) ne n_elements(valadj) then message,'Oooops!'
Did anyone ever compute this fudge factor in Excel? Sorry, I forget. This looks like the correction needed to slow the pre-1950 global temperature rise, turn the 1950-1970 fall into a continuing rise, and give a bit extra on top of the UHI extra for the most recent results.
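For anyone who never did run the numbers: here is a short Python transcription of the two IDL assignments above (just yrloc and valadj; whatever the original program does with them downstream is not shown in this excerpt, and the add-versus-multiply question is taken up near the end of the thread).

# Python transcription of the IDL lines above; IDL's findgen(19) is 0..18.
import numpy as np

yrloc  = np.concatenate(([1400], np.arange(19) * 5.0 + 1904))   # 1400, 1904, 1909, ..., 1994
valadj = np.array([0, 0, 0, 0, 0, -0.1, -0.25, -0.3, 0, -0.1,
                   0.3, 0.8, 1.2, 1.7, 2.5, 2.6, 2.6, 2.6, 2.6, 2.6]) * 0.75

for yr, v in zip(yrloc, valadj):
    print(int(yr), round(v, 3))        # e.g. 1994 gets 2.6 * 0.75 = 1.95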
Measure it with a micrometer.
Mark it with a grease pencil.
Cut it with an axe.
OT not OT
Brilliant breaking forensic work by Steve McI et al:
Hide all the declines
PS: Steve’s “hide the Decline 2.0” find adds fuel to my belief that it’s the Soon and Baliunas story that will get legs and be the downfall of the Team fraudsters.
It may be brown and aromatic, but it ain’t fudge.
From 0850.txt
Tim Barnett says in 2007:
“the actual forcing data is a must. right now we have some famous models that all agree surprisely well with 20th obs, but whose forcing is really different. clearly, some tuning or very good luck involved. I doubt the modeling world will be able to get away with this much longer….so let’s preempt any potential problems.”
Good to see that posters here recognise the difference between serious comment and sarcasm.
If you tune a simplified model to mimic GCMs, what would happen if the GCMs were wrong as well?
Email 1604.txt Jonathan Gregory of the Met Office writes:
“The test of the tool is that it can reliably reproduce the results of several coupled GCMs, both for temperature change and thermal expansion, for different kinds of scenarios (e.g. 1%, historical forcing, stabilisation). This is a stringent requirement. I do not think that a 2D climate model is necessarily preferable to a 1D climate model for this task. In fact, the more elaborate the simple model – the more physics it has got in it – the less amenable it will probably be to being tuned to reproduce GCM results. It is essential that this calibration can be demonstrated to be good, because in view (a) it is the GCMs which are basically providing the results. The tool must not be allowed to go its own way. It is not necessary to interpret the tuning parameters of a simple model in terms of the physics of the real climate system when it is being used for this purpose. It is not necessarily the case that a simple climate model is the best choice anyway.”
And what might happen if you use the hockey stick to tune a GCM?
0166.txt Max Beran of Didcot asks of Keith Briffa:
“>The climate models, bless’em, indicate a temperature increase of the order
>of less than 5 to more than 10 standard deviations by the 2080s. Accepting
>the robustness of the sensitivities implicit in the Hockey stick
>reconstruction (much used to tune and confirm GCM behaviour), that suggests
>to me that we can anticipate a similar order of growth in tree ring width
>and density?”
and
“>So at what point does the tree ring to temperature sensitivity break down?
>And what might its impacts be on the hockey stick and through that the GCM
>tuning? Have there been other periods when your post-1940 reversal occurred
>perhaps due to macroclimatic forces? Could these also account for the
>discrepancy between the hockey stick and what we thought we used to know
>about the climate since 1000 AD?”
Is there any evidence to support this claim that the Hockey Stick was used to tune the models?
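To make Gregory’s “tuned to reproduce GCM results” concrete, here is a toy calibration sketch: a hypothetical one-box energy-balance model whose single free parameter lam is picked by grid search so that its output best matches a made-up “GCM” temperature curve. Nothing below is the Met Office tool, real GCM output, or anything from the emails; it only illustrates what calibrating a simple model against a complex one means.

# Toy calibration of a simple model against a synthetic "GCM" curve (all data invented).
import numpy as np

years = np.arange(1900, 2001)
forcing = 0.04 * (years - 1900)                  # made-up forcing ramp, W/m^2

def one_box(lam, C=8.0, dt=1.0):
    # One-box energy balance model: dT/dt = (F - lam*T)/C
    T = np.zeros(len(years))
    for i in range(1, len(years)):
        T[i] = T[i-1] + dt * (forcing[i-1] - lam * T[i-1]) / C
    return T

# Pretend this came from a GCM run (really the same toy model plus noise).
gcm_curve = one_box(1.3) + np.random.default_rng(0).normal(0.0, 0.02, len(years))

# "Tuning": grid-search the lam that minimises the mismatch with the GCM curve.
lams = np.linspace(0.5, 2.5, 201)
errors = [np.mean((one_box(l) - gcm_curve) ** 2) for l in lams]
print(lams[int(np.argmin(errors))])              # lands close to 1.3

Gregory’s point is that the simpler the emulator, the more readily this kind of calibration works, and, as he says, the tuned parameters need not mean anything physically.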
RE: Bill Yarber
Do you mean WAG, rather than SWAG?
Tweak the parameters. What could go wrong?
http://www.scientificamerican.com/article.cfm?id=finance-why-economic-models-are-always-wrong
Why Economic Models Are Always Wrong [actual study cited relates to geophysical models.]
Financial-risk models got us in trouble before the 2008 crash, and they’re almost sure to get us in trouble again
By David H. Freedman
Wednesday, October 26, 2011
Math: 1+1=2
CAGW Fudge Math 1+1=8 (+/- 6)
Even if it’s right, it can be terribly misleading.
Has ANYONE thought to go ask Dr Richard Muller about these emails?
I wonder if he knows yet he was working with fudged data…
@LucyS
Thanks for the link to CA. That slackens the jaw…
Are there any nuts in the fudge?
I find the terminology distressing. As it can lead to talk of fudge packing. Then will come Nick or Joel or the Common Sewer Rat stopping by to say “Not that there’s anything wrong with that.” And I don’t particularly like Seinfeld.
“a detuned model that uses old physics” … Ah, yes, the “old physics”. Proper physics, which was actually based on detailed observation, measurement and interpretation of factual data about the world. You can see why they wouldn’t want anything to do with that.
@Another Gareth November 30, 2011 at 3:28 pm:
That email #0166 is a butt kicker.
I will press on with trying to work out why the temperature needs a ‘fudge factor’ along with the poorer modelling for winter.
Email 1461, Milind Kandlikar, 2004
With GCMs the issue is different. Tuning may be a way to fudge the physics. For example, understanding of clouds or aerosols is far from complete – so (ideally) researchers build the “best” model they can within the constraints of physical understanding and computational capacity. Then they tweak parameters to provide a good approximation to observations. It is this context that all the talk about “detuning” is confusing. How does one speak of “detuning” using the same physical models as before? A “detuned” model merely uses a different set of parameters that match observations – it not hard to find multiple combinations of parameters that give the similar model outputs (in complex models with many parameters/degrees of freedom) So how useful is a detuned model that uses old physics? Why is this being seen as some sort of a breakthrough?
This email on fudging the physics and ‘de-tuning’ the models, and wondering why bringing back real physics (“old physics”) is seen as some sort of breakthrough, is from 2004. The piece from Glassman below helps put it into the historical context of their modelling, the GCMs.
==============================================
CO2 ACQUITTAL
Rocket Scientist’s Journal
by Jeffrey A. Glassman, PhD
Revised 11/16/09.
“C. GREENHOUSE CATASTROPHE MODELS (GCMs)
Since the industrial revolution, man has been dumping CO2 into the atmosphere at an accelerating rate. However the measured increase in the atmosphere amounts to only about half of that manmade CO2. This is what National Geographic called, “The Case of the Missing Carbon”. Appenzeller [2004].
Climatologists claim that the increases in CO2 are manmade, notwithstanding the accounting problems. Relying on their greenhouse gas theory, they convinced themselves, and the vulnerable public, that the CO2 causes global warming. What they did next was revise their own embryonic global climate models, previously called GCMs, converting them into greenhouse gas, catastrophe models. The revised GCMs were less able to replicate global climate, but by manual adjustments could show manmade CO2 causing global warming within a few degrees and a fraction!
The history of this commandeering is documented in scores of peer-reviewed journal articles and numerous press releases by the sanctified authors. Three documents are sufficient for the observations here, though reading them is rocket science. (An extensive bibliography on climate, complete with downloadable documents, covering the peer-reviewed literature and companion articles by peer-published authors is available on line from NASA at http://pubs.giss.nasa.gov/.) The three are Hansen, et al., [1997], Hansen, et al., [2002], and Hansen, et al., [2005]. Among Hansen’s many co-authors is NASA’s Gavin Schmidt, above. He is a frequent contributor to the peer–reviewed literature, and he is responsible for a readable and revealing blog unabashedly promoting AGW. http://www.realclimate.org/.
The three peer-reviewed articles show that the Global Climate Models weren’t able to predict climate in 1997. They show that in the next five years, the operators decoupled their models from the ocean and the sun, and converted them into models to support the greenhouse gas catastrophe. They have since restored some solar and ocean effects, but it is a token and a concession to their critics. The GCMs still can’t account for even the little ice age, much less the interglacial warming.
All by themselves, the titles of the documents are revealing. The domain of the models has been changed from the climate in general to the “interannual and decadal climate”. In this way Hansen et al. placed the little ice age anomaly outside the domain of their GCMs. Thus the little ice age anomaly was no longer a counterexample, a disproof. The word “forcing” appears in each document title. This is a reference to an external condition Hansen et al. impose on the GCMs, and to which the GCMs must respond. The key forcing is a steadily growing and historically unprecedented increase in atmospheric CO2. “Efficacy” is a word coined by the authors to indicate how well the GCMs reproduce the greenhouse effect they want.
In the articles, Hansen et al. show the recent name change from Global Climate Models to Global Circulation Models, a revision appropriate to their abandonment of the goal to predict global climate. The climatologists are still engaged in the daunting and heroic task of making the GCMs replicate just one reasonable, static climate condition, a condition they can then perturb with a load of manmade CO2. The accuracy and sensitivity of their models is no longer how well the models fit earth’s climate, but how well the dozens of GCM versions track one another to reproduce a certain, preconceived level of Anthropogenic Global Warming. This suggests that the models may still be called GCMs, but now standing for Greenhouse Catastrophe Models.
In these GCMs, the CO2 concentration is not just a forcing, a boundary condition to which the GCM reacts, but exclusively so. In the GCMs, no part of the CO2 concentration is a “feedback”, a consequence of other variables. The GCMs appear to have no provision for the respiration of CO2 by the oceans. They neither account for the uptake of CO2 in the cold waters, nor the exhaust of CO2 from the warmed and CO2–saturated waters, nor the circulation by which the oceans scrub CO2 from the air. Because the GCMs have been split into loosely–coupled atmospheric models and primitive ocean models, they have no mechanism by which to reproduce the temperature dependency of CO2 on water temperature evident in the Vostok data.[*]
GCMs have a long history. They contain solid, well-developed sub-models from physics. These are the bricks in the GCM structure. Unfortunately, the mortar won’t set. The operators have adjusted and tuned many of the physical relationships to reproduce a preconceived, desired climate scenario. There is no mechanism left in the models by which to change CO2 from a forcing to a feedback.
Just as the presence of measurable global warming does not prove anthropogenic global warming, the inclusion of some good physics does not validate the GCMs. They are no better than the underlying conjecture, and may not be used responsibly to demonstrate runaway greenhouse effects. Science and ethics demand validation before prediction. That criterion was not met before the climatologists used their models to influence public opinion and public policy.
The conversion of the climate models into greenhouse catastrophe models was exceptionally poor science. It is also evidence of the failure of the vaunted peer review process to protect the scientific process.
=====================
[*]”Because the GCMs have been split into loosely–coupled atmospheric models and primitive ocean models, they have no mechanism by which to reproduce the temperature dependency of CO2 on water temperature evident in the Vostok data.”
This refers back to the earlier section on Vostok in the article.
Oh blast. Forgot to put in the link: http://www.rocketscientistsjournal.com/2006/10/co2_acquittal.html
To be fair to Hulme, he ends with:
“So this is the situation as seen by me right now. I guess Solution 1 would not pass decent Nature reviewers and Solution 3 may never materialise. If people want Solution 2 then I can ask Grubler for what he can give us.
Mike”
The epistemic structure for GCM modeler’s thought processes:
a) without scientifically validated observations in nature, modelers start by positing (a priori) that there is significant/alarming/concernist AGW from fossil-fuel CO2
b) make models that show your ‘a priori’ premise
c) fudge models to compensate for lack of conformance to reality
d) when iterative fudging doesn’t bring your models into conformance with observations of nature, attack the observations and do gatekeeping to block MSM interest in the observations that invalidate GCMs.
e) take the above a) through d) as perfect justification for a lot more funding for GCMs
f) go to bank to deposit earnings
John
I believe 99 percent of people now disbelieve AGW
Well, if you fudge the figures a little.
;****** APPLIES A VERY ARTIFICIAL CORRECTION FOR DECLINE*********
yrloc=[1400,findgen(19)*5.+1904]
valadj=[0.,0.,0.,0.,0.,-0.1,-0.25,-0.3,0.,-0.1,0.3,0.8,1.2,1.7,2.5,2.6,2.6,$
2.6,2.6,2.6]*0.75 ; fudge factor
So basically they just multiplied the last few data points by a linearly increasing multiplier. Bravo, fellas.
Simeon, from how I read that, those values are not multiplied. Multiplication by zero gives what?
Exactly.
Those are values ADDED.
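For completeness, the arithmetic distinction in Python (the series below is synthetic, since the data the adjustment was applied to are not part of this excerpt; this only shows why “multiplied” does not square with the leading zeros):

# If valadj were a multiplier, the leading zeros would wipe out the early data;
# applied additively, zeros leave the series untouched and only the later
# values get pushed up.
import numpy as np

valadj = np.array([0, 0, 0, 0, 0, -0.1, -0.25, -0.3, 0, -0.1,
                   0.3, 0.8, 1.2, 1.7, 2.5, 2.6, 2.6, 2.6, 2.6, 2.6]) * 0.75
series = np.full(20, 1.0)          # stand-in data, constant 1.0

print(series * valadj)             # first five values become 0.0
print(series + valadj)             # first five values stay 1.0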