Claim: Adding Fudge Factors Makes Climate Models Scarier

Page 6, Propagation of Error and the Reliability of Global Air Temperature Projections by Pat Frank

Guest essay by Eric Worrall

h/t Willie Soon – Climate models do a poor job of reproducing observed climate. But climate scientists seem to think they can produce more accurate projections by adding fudge factors to their models, to force better agreement between models and observations.

The most accurate climate change models predict the most alarming consequences, study finds

By Chris Mooney December 6 at 1:00 PM

The climate change simulations that best capture current planetary conditions are also the ones that predict the most dire levels of human-driven warming, according to a statistical study released in the journal Nature Wednesday.

The study, by Patrick Brown and Ken Caldeira of the Carnegie Institution for Science in Stanford, Calif., examined the high-powered climate change simulations, or “models,” that researchers use to project the future of the planet based on the physical equations that govern the behavior of the atmosphere and oceans.

The researchers then looked at what the models that best captured current conditions high in the atmosphere predicted was coming. Those models generally predicted a higher level of warming than models that did not capture these conditions as well.

Read more:

The abstract of the study:

Greater future global warming inferred from Earth’s recent energy budget

Patrick T. Brown & Ken Caldeira

Nature 552, 45–50 (07 December 2017)


Climate models provide the principal means of projecting global warming over the remainder of the twenty-first century but modelled estimates of warming vary by a factor of approximately two even under the same radiative forcing scenarios. Across-model relationships between currently observable attributes of the climate system and the simulated magnitude of future warming have the potential to inform projections. Here we show that robust across-model relationships exist between the global spatial patterns of several fundamental attributes of Earth’s top-of-atmosphere energy budget and the magnitude of projected global warming. When we constrain the model projections with observations, we obtain greater means and narrower ranges of future global warming across the major radiative forcing scenarios, in general. In particular, we find that the observationally informed warming projection for the end of the twenty-first century for the steepest radiative forcing scenario is about 15 per cent warmer (+0.5 degrees Celsius) with a reduction of about a third in the two-standard-deviation spread (−1.2 degrees Celsius) relative to the raw model projections reported by the Intergovernmental Panel on Climate Change. Our results suggest that achieving any given global temperature stabilization target will require steeper greenhouse gas emissions reductions than previously calculated.

Read more (Paywalled):

Force fitting a model to observations is potentially a worthwhile exercise, to help explore the impact of model errors. What I’m concerned about is the apparent attempt to draw premature real world conclusions from this arbitrary force fitting exercise.

Consider the diagram at the top of the page, from Pat Frank’s paper “Propagation of Error and the Reliability of Global Air Temperature Projections”. Cloud forcing is a major component of the climate system, which climate models clearly get very wrong. Producing the expected result with hindcasting despite major errors is not evidence that scientists are correctly modelling the Earth’s climate system.

Scientists occasionally get lucky, but the odds of improving models with a few arbitrary corrections, without any real understanding of why models get the climate so wrong, is like the odds of winning a lottery. Announcing the real world implications of an arbitrary force fitting exercise is like telling everyone you have the winning lottery ticket before the draw – not impossible, but very unlikely.

Correction (EW): h/t Nick Stokes – The description I wrote implied Caldeira and Brown added the fudge factors themselves, which is incorrect. What they did was preferentially weight models built with other people’s fudge factors, models which appear to do a better job of hindcasting TOA energy imbalance, which by inference means they get clouds less wrong than other models.

… How clouds might change is quite complex, however, and as the models are unable to fully capture this behavior due to the small scale on which it occurs, the programs instead tend to include statistically based assumptions about the behavior of clouds. This is called “parameterization.”

But researchers aren’t very confident that the parameterizations are right. “So what you’re looking at is, the behavior of what I would say is the weak link in the model,” Winton said.

This is where the Brown and Caldeira study comes in, basically identifying models that, by virtue of this programming or other factors, seem to do a better job of representing the current behavior of clouds. However, Winton and two other scientists consulted by The Post all said that they respected the study’s attempt, but weren’t fully convinced. …

Read more: Washington Post (Same link as above)

December 6, 2017 7:21 pm

The more adjustments are required to make an erroneous model match the past, the less certain its projections of the future will be.

Nick Stokes
Reply to  co2isnotevil
December 6, 2017 8:11 pm

They aren’t adjusting models. They use existing CMIP5 results. They effectively weight the models according to their match to TOA balance, climatology and seasonal cycle. Not to the observed GMST sequence. It’s a sophisticated version of the proposal, often mooted here, to make a selection of the best performers.

Richard M
Reply to  Nick Stokes
December 6, 2017 8:34 pm

Nick, in this case it is really the worst performers.

We know from Christy/McNider 2017 that models are running hot. We also know models have a positive cloud feedback assumption. So they went and looked at clouds and found the ones that got the clouds closest were the ones that ran the hottest (which means even more wrong). This is, in fact, more evidence they got cloud feedback wrong.

Reply to  Nick Stokes
December 6, 2017 8:37 pm

The ‘weights’ are adjustments to the model. Besides, instantaneous TOA balance is rarely ever true; half the time up_flux > down_flux and the other half of the time down_flux < up_flux. This is true across both diurnal and seasonal periodic forcing.

In June, the N hemisphere is receiving about 80 W/m^2 more than it's emitting and in December, it's emitting about 80 W/m^2 more than it's receiving. In the S hemisphere, the delta is larger at about 90 W/m^2 and opposite in phase. The delta is larger in the S because of a larger fraction of ocean and a longer time constant. Because of this asymmetry, the 2 hemispheres never exactly cancel.
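The commenter's arithmetic is easy to check numerically. This is only a sketch with idealized sinusoids standing in for the seasonal cycles; the 80 and 90 W/m^2 amplitudes are the figures quoted above, not real CERES data:

```python
import numpy as np

# Two out-of-phase hemispheric TOA imbalances with unequal amplitudes
# (~80 W/m^2 north, ~90 W/m^2 south), idealized as pure sinusoids.
t = np.linspace(0.0, 1.0, 365, endpoint=False)   # one year, daily steps

north = 80.0 * np.sin(2 * np.pi * t)             # peaks mid-year
south = 90.0 * np.sin(2 * np.pi * t + np.pi)     # opposite phase, larger amplitude

net = north + south                              # instantaneous global imbalance

# The annual mean is ~0, but instantaneously a ~10 W/m^2 residual
# remains: the hemispheres never exactly cancel.
annual_mean = net.mean()
peak_residual = np.abs(net).max()
print(annual_mean, peak_residual)
```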

Nick Stokes
Reply to  Nick Stokes
December 6, 2017 9:03 pm

“The ‘weights’ are adjustments to the model.”
No models are adjusted. Weighting is an adjustment to the way they are averaged. On TOA average, it’s a bit more complicated than that, as I explain below. They compare three different aspects of three different CERES-observable TOA variables, regressing against ΔT to get the weights. The seasonal variation that you refer to is one of the attributes they use.

Reply to  Nick Stokes
December 6, 2017 9:25 pm

There are far more adjustments than you seem to think when matching to past behavior, especially when attempting to match past data prior to the satellite era, when the precision and coverage of what’s being matched to is far lower. If the models are running intrinsically hot, as their results certainly indicate, when matched to a cooler history than the model would otherwise produce, the future must get even warmer to compensate and achieve ultimate balance at TOA.

Leo Smith
Reply to  Nick Stokes
December 6, 2017 10:01 pm

Like picking the dice that have come up sixes more often than any other, then, Nick?

Reply to  Nick Stokes
December 6, 2017 10:29 pm

They should also factor in the tidal gauge readings from Hong Kong and Oslo.
Also the TV ratings from St Petersburg.
They might be relevant.
What do you think, Oh wise one?

Reply to  Nick Stokes
December 6, 2017 10:50 pm

I don’t know if you’ve actually looked at any model code, but the code for ModelE used for AR4 is available online. It’s a jumble of poorly organized Fortran ‘spaghetti’ code with so many random constants it’s absurd. Pay special attention to SOLARM, where you can see the many empirical constants used to determine GHG absorption.

Reply to  Nick Stokes
December 7, 2017 2:25 am

Eric Worrall
December 6, 2017 at 9:07 pm

“Thanks Nick, I’ve added a correction to the post.”

Thanks to Nick Stokes for the input and thanks to Eric for taking account of it. This is the way things should work.

Bill Illis
Reply to  Nick Stokes
December 7, 2017 3:00 am

They checked to see which of the models got the “cloud feedback” most accurate.

Well, since we have absolutely no idea what the real “cloud feedback” is, how can they make any claim with respect to that? Obviously, they just pretended there was something they were looking at.

The Earth energy imbalance is currently about 0.5 W/m2. NONE of the climate models have a figure that low, so this whole paper is just a pile of made-up fluff.

Reply to  Nick Stokes
December 7, 2017 7:28 am

“cloud feedback” ..

Global warming “theory” says nights get warmer……sounds like they are saying clouds reflect heat down at night….and reflect heat up during the day

….and that leaves so much wiggle room they can make up anything

Reply to  Nick Stokes
December 7, 2017 10:00 am

“cloud feedback”

Since Bode’s quantification of feedback has absolutely no correspondence to anything related to how the climate operates, optimizing model weights to match an irrelevant concept is a classic example of garbage in, garbage out. They might as well be optimizing weights so that the temperature tracks Dow Jones.

Reply to  Nick Stokes
December 8, 2017 12:25 pm

“to make a selection of the best performers.”

By “best”, you mean “most alarming”, not “closest to real world climate data”, presumably…

george e. smith
Reply to  co2isnotevil
December 7, 2017 1:59 pm

Throwing a dart is a statistical study, and about as reliable.


What good is a statistical study of events that are only going to happen once?

December 6, 2017 7:24 pm

How does fitting the model to the observations differ from playing Texas Marksman?

David L. Hagen
Reply to  Tom Halla
December 6, 2017 8:37 pm

The Texas Marksman is more accurate!

Reply to  Tom Halla
December 7, 2017 5:22 am

my thoughts as well.

December 6, 2017 7:26 pm

How did ‘models’ ever become truth? This is just crazy.

Reply to  markl
December 6, 2017 7:49 pm

How was a hypothesis (e.g .model) ever presented as evidence?

They have to believe that they have completely or sufficiently characterized the system and that the observed processes and systems can be accurately reproduced inside a deterministic machine with limited memory and processing capacity but at sufficient scale to close the envelope and produce a coherent solution.

Leo Smith
Reply to  markl
December 6, 2017 10:07 pm

How did ‘models’ ever become truth? This is just crazy.

Everything you think you know about anything is a model. Knowledge itself is a model. The truth is ultimately inaccessible except through imperfect models. Most of the ‘problems of metaphysics’ are simply exposures of model imperfection.

The great thing about science is that it selects models that do not contradict experience, that’s all. But ‘Climate Science’ selects models that not only are not congruent with experience, they are profitable and politically useful.

Go figure.

Reply to  Leo Smith
December 6, 2017 11:58 pm

“Everything you think you know about anything is a model.”

I disagree. Everything I know about women comes from the women in my life, none of whom have ever been models.

Reply to  Leo Smith
December 7, 2017 8:21 am


Actually, everything you know about women comes from the model you built in your mind from the evidence provided by the women in your life. All “knowledge” is just a piece of the internal world model your brain uses to process your sensory experiences. That is why paradigm shifts are so hard – they require you to rewire your internal world model to include the new ideas or suffer from the cognitive dissonance that results from the data not matching your internal model.

Pop Piasa
December 6, 2017 7:41 pm

Here we go again. If the range of models covers every possible degree of warming, they can parade about with the one which comes out closest and say that they knew it all along. If there is no warming, they will claim that observations conflict with the models and are not to be trusted.

Andy Pattullo
December 6, 2017 7:42 pm

Maybe it’s more like picking the winning numbers for the lottery AFTER the draw. Now that’s the sort of skill set the alarmists might be able to muster. And then of course they would feel fully entitled to the prize.

Hocus Locus
December 6, 2017 7:57 pm

“We’re doing God’s work.”
Uttered by countless people over the millennia, even godless peoples.
Go figure.

If you are performing good deeds under a benevolent God,
please disregard this message.

December 6, 2017 8:08 pm

The absurd modeling infatuation continues.

Joel O’Bryan
December 6, 2017 8:09 pm

The climate GCMs are now steadily coming under a withering assault of their exposed sophistry. Mainstream non-climate scientists are beginning to notice the tuning methods they have employed with their “secret sauces” of input parameter junk to achieve desired CO2 sensitivity outputs.

The Climateers are bravely circling the wagons. Through the past few decades, they have carefully ensured placement of Editorial enablers at Nature and Science and many other journals. That ensures Pal reviews for those who comply, Editorial rejection for those who resist.

They claim science is an institution that must now be defended. Defended from what? Science is a method. It is not a thing.

But defend junk models and their tuned outputs they will. Without the models and their dire future projections, they have nothing. Nada. Trillions of future dollars to redistribute and siphon off a piece of the action are at stake for the climateers.

The howling and caterwauling will only increase in the coming 3 years. The climateers have been living on borrowed time for 5 years now. The effects of a weak El Nino in 2014 and a strong El Nino in 2015 are now waning quickly. La Nina is here. A weakening solar cycle is here, possibly even a Dalton-like minimum at hand. Cold hard winters are about to smash their junk. (Take my double entendre of “junk” as you like)

The climateers’ time is up. They had a good run for 30 years. They figured they’d have it in the bag by now. Nature and politics didn’t cooperate.

But they will fight on. They have no other choice. Their reputations and salaries are at stake.

Joel O’Bryan
Reply to  Eric Worrall
December 6, 2017 8:38 pm

Mt Agung has yet to give them the stratospheric SO2 injection they need. It still might.
They need a VEI-6/Pinatubo-level event from a volcano in the next 6-12 months. It could happen. I’m sure they are praying to their volcano god now. A VEI-7 Tambora-level event would be their wettest, wildest dream come true to save them from reputation disaster.

Reply to  Eric Worrall
December 7, 2017 2:32 am

If and when Mt Agung does another VEI5 or 6 event we will see a 3-4 year peak in TLS and similar lower tropo cooling. As the aerosols disperse we will see a persistent drop in TLS and surface warming.

This is exactly what happened after El Chichon and Mt Pinatubo.

With the more complete monitoring we now have, the effect will be both predictable and verifiable.

4 Eyes
Reply to  Eric Worrall
December 7, 2017 3:42 am

Let’s commit it to long term memory now – ENSO was pointing towards a La Nina BEFORE this latest volcano started erupting.

Gary Pearse.
Reply to  Joel O’Bryan
December 9, 2017 1:20 am

Joel, they don’t even get it that the 30 yrs (or so) is half a visible multidecadal natural cycle, and now they are battling against the dip of the next 30 yr cycle. A number of skeptics predicted this levelling off and the future dip that’s now upon us. There is lots we don’t know about climate, but it is ignorance or deliberate pigheadedness to miss out on this more obvious short term reality. I think they thought they could homogenize themselves out of this dilemma. Of course you’re going to be out 300% if you treat the rising half of a cycle as a linear trend you can extrapolate.

The models are simple minded solutions to high school climate with fudge factor curve fitting. Climate is complex but it’s yet to attract much in the way of talent to the task. I never see a discussion of the idea that a totally erroneous model could follow observations excellently for a time but perforce is certain to diverge off uselessly at some point. I’ve worried that eventually someone would accidentally nail it for half a lifetime with a totally useless bundle of parameters. But there now seems little chance of that. If they hadn’t been in such a hurry to have disaster upon us while they are still alive, they could have at least got the downturn right and maybe had more weight to their progs. The best model so far is mine. Here’s the code: (100 model mean)*0.25 Check it out. We won’t exceed 0.8C this century.

Michael Jankowski
December 6, 2017 8:09 pm

“…high-powered climate change simulations…”


Nick Stokes
December 6, 2017 8:17 pm

“Force fitting a model to observations is potentially a worthwhile exercise, to help explore the impact of model errors.”
They aren’t fitting a model to observations. They use concordance of model output with certain observable properties to make a weighted sum, by regression. The properties are not the sequence of observed temperatures, but climatology, seasonal cycle and TOA balance. They basically say that models that get that right to date should get more weight in the average. They don’t change models to better fit the past; they weight models in the average that do that, and then use those weights for the predictions.

Nick Stokes
Reply to  Nick Stokes
December 6, 2017 8:34 pm

“climatology, seasonal cycle and TOA balance”
That’s not quite right. They look at three attributes (mean climatology, the magnitude of the seasonal cycle, and the magnitude of monthly variability) of three TOA properties (OSR, OLR and ↓N), nine combinations. They regress existing ΔT against those, and derive the weights from that.
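For readers who want the mechanics, here is a toy version of the across-model regression Stokes describes. It substitutes ordinary least squares for the paper's partial least squares, and every number is synthetic, so it illustrates only the shape of the method, not its results:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a CMIP5-style ensemble: 30 models, each with
# 9 TOA "predictor" attributes (3 attributes x 3 TOA variables) and a
# simulated end-of-century warming dT.
n_models, n_attrs = 30, 9
X = rng.normal(size=(n_models, n_attrs))                   # modelled TOA attributes
true_coef = rng.normal(size=n_attrs)
dT = X @ true_coef + rng.normal(scale=0.1, size=n_models)  # modelled warming

# Regress warming on TOA attributes across models (OLS here; the
# paper uses partial least squares).
A = np.c_[np.ones(n_models), X]
coef, *_ = np.linalg.lstsq(A, dT, rcond=None)

# "Observationally informed" projection: evaluate the fitted relation
# at the observed TOA attributes (also synthetic here).
x_obs = rng.normal(size=n_attrs)
constrained_dT = coef[0] + x_obs @ coef[1:]
print(constrained_dT)
```

No model is altered anywhere in this procedure; only the mapping from across-model attributes to warming is fitted, which is the distinction Stokes is drawing.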

Joel O’Bryan
Reply to  Nick Stokes
December 6, 2017 8:54 pm

An Inconvenient observation:
Models that most closely get TOA balance “right” (as in close the budget) are typically the hottest running models.

So, Crank up the klaxon volume to level 11, please.

Somewhere those models suffer a fundamental flaw. Probably a common fatal flaw.

Reply to  Nick Stokes
December 6, 2017 9:33 pm

Nick Stokes;
They regress existing Δ T against those, and derive the weights from that.

To paraphrase Einstein, that’s not even absurd.

The notion that you can take a bunch of models that are known to be inaccurate, and average them in the hopes that their inaccuracies would somehow cancel each other out was always absurd. The notion that you can perform some statistical magical analysis on a bunch of models known to be inaccurate as an improvement over simply averaging them is… not even absurd.

If a model is inaccurate, and you don’t know why, then you don’t know which parts are wrong and which parts are right (if any). The only way to get a better answer is to figure out what the model is getting wrong and fixing it. Mushing multiple wrong models together by coming up with some way to weight them so that their combined result gives a different answer that you think is better even though you still don’t know what is wrong and what your weighting has done to the things that are wrong is… not even absurd.
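The averaging objection above can be illustrated numerically: averaging shrinks each model's independent noise, but any error the models share passes through the average untouched. A minimal sketch with made-up numbers:

```python
import numpy as np

rng = np.random.default_rng(1)

truth = 2.0          # hypothetical "true" warming (made up)
shared_bias = 1.0    # a systematic error common to every model
n_models = 40

# Each model = truth + the shared bias + its own independent noise.
models = truth + shared_bias + rng.normal(scale=0.5, size=n_models)

ensemble_mean = models.mean()

# The independent noise shrinks like 0.5/sqrt(40) ~ 0.08, but the
# shared 1.0 bias survives the average in full.
print(ensemble_mean - truth)
```

Whether real models share a common bias (in cloud feedback, say) is exactly what is in dispute in this thread; the sketch only shows that averaging cannot settle it.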

Nick Stokes
Reply to  Nick Stokes
December 6, 2017 9:51 pm

“The notion that you can take a bunch of models that are known to be inaccurate”
No, you don’t know that. You’re basing it on a discrepancy between model temperature and earth temperature over a decade or so. But GCMs don’t claim to predict weather over such periods, so they aren’t wrong. And it’s no use trying to select models on the basis that some got closer. If they did, it is just by chance that their ENSO variations lined up in time with what happened on Earth. No expectation they will continue doing that.

AFAICS, the logic of this paper works thus. There is a longer period of observation that allows us to associate TOA properties with surface T. And it is more reasonable to expect models to get those TOA properties right, so they are a valid basis for selection, or weighting. So they weight the models that do that best. If you are going to claim that models are known to be inaccurate, you have to measure that against something they should be expected to get right.

Reply to  Nick Stokes
December 6, 2017 10:24 pm

Nick Stokes;
No, you don’t know that. You’re basing it on a discrepancy between model temperature and earth temperature

Since you didn’t ask what I was basing my assertion on, and I didn’t say, you cannot say what I was basing it on.

And it is more reasonable to expect models to get those TOA properties right, so they are a valid basis for selection, or weighting.

1. Why weight them at all? Why not take the models that don’t get TOA properties right and simply throw them out altogether? By keeping them at ANY weight, you’re by default saying they should have some influence on the result even though by your own metric, they’re wrong!

2. In fact, it is NOT reasonable to expect that models that get TOA properties right deserve more weighting. Unless you know what parts of them are wrong, you’re still coming up with a fancy way of mushing results known to be wrong together in the insane idea that this can result in a more accurate answer. Nonsense. If you want an accurate model, you have to build an accurate model! Choosing ONE model property and assuming that it is a predictor of overall model accuracy is absurd. The models could have gotten close to reality by getting a bunch of things right, or a bunch of things wrong, or some combination thereof. Mushing them together in ANY way is… not even absurd.

Nick Stokes
Reply to  Nick Stokes
December 6, 2017 10:28 pm

” and I didn’t say”
Well, you could. What was it?

Reply to  Nick Stokes
December 6, 2017 10:31 pm

I have a problem.
I am trying to think of a more appropriate word than “ludicrous”.

Reply to  Nick Stokes
December 6, 2017 10:49 pm

Nick Stokes December 6, 2017 at 10:28 pm
” and I didn’t say”
Well, you could. What was it?

There are many reasons to declare the models inaccurate, but for the purposes of this discussion, let us examine just one. The models do not agree with one another. They use a wide range of values for things like aerosols and clouds. At best, only ONE of them can be right. At worst, ALL of them are wrong. Given that, AT BEST, only one of them can be right, mushing them together to arrive at a “better result” is… not even absurd. Choosing one metric to determine which of them deserve the most weighting is equally… not even absurd.

If you created a RANGE of metrics, and chose models that got the broadest range of metrics accurate as the ones to focus on for future improvement, that would make marginally more sense to me. But even then, mushing them together by some weighting system would be absurd, and keeping the worst ones at ANY weighting at all even more absurd. Pardon me. Not even absurd.

Reply to  Nick Stokes
December 6, 2017 10:53 pm

And Nick… you didn’t respond to my points 1. and 2. above. Don’t think for a moment that no one noticed.

Nick Stokes
Reply to  Nick Stokes
December 6, 2017 11:53 pm

1. “Why not take the models that don’t get TOA properties right and simply throw them out altogether?”
That is a form of weighting. Proportional weighting reflects the fact that “right” isn’t absolute.

2. “In fact, it is NOT reasonable to expect that models that get TOA properties right deserve more weighting”
That rather undercuts 1). But why not? In fact, they cross-validate, testing that if one model is left out of the (PLS) regression, the regression relation between the 9 TOA variables and ΔT still holds. They also describe a number of other statistical tests.
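The leave-one-out test Stokes mentions can be sketched as follows. The data are synthetic, and this checks only that an across-model relation generalizes within the ensemble, not that the ensemble matches the real climate:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic ensemble: 30 models, 9 TOA attributes, simulated warming dT.
n_models, n_attrs = 30, 9
X = rng.normal(size=(n_models, n_attrs))
beta = rng.normal(size=n_attrs)
dT = X @ beta + rng.normal(scale=0.1, size=n_models)

# Leave one model out, fit on the rest, predict the held-out model.
errors = []
for i in range(n_models):
    keep = np.arange(n_models) != i
    A = np.c_[np.ones(keep.sum()), X[keep]]
    coef, *_ = np.linalg.lstsq(A, dT[keep], rcond=None)
    pred = coef[0] + X[i] @ coef[1:]
    errors.append(abs(pred - dT[i]))

# A small mean error means the across-model relation is robust within
# the ensemble -- which is all this test can establish.
print(np.mean(errors))
```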

Reply to  Nick Stokes
December 7, 2017 4:06 am

Davidmhoffer “1. Why weight them at all? Why not take the models that don’t get TOA properties right and simply throw them out altogether? By keeping them at ANY weight, you’re by default saying they should have some influence on the result even though by your own metric, they’re wrong!”

I would think the answer to this is pretty obvious, and I’m sure you too know why they don’t throw out the more obvious duds of the current ensemble of duds. If you maintain a suite of models which between them output pretty much all conceivable behaviours in a warming world, and then arrange for some algorithmic procedure to generate a combined output whilst weighting those which are currently running closest to the observations, then you can in effect never be wrong. Unless it becomes a cooling world of course. It is nothing more than the usual trivially idiotic modelling charade which everyone is now thoroughly accustomed to. This time it attempts to place the modelling effort on the same forever unfalsifiable footing as the ‘human caused climate change’ hypothesis itself. Stokes is in his usual role of attempting to put statistical lipstick on this abject nonsense of a climate model pig. He believes it’s the ‘sophisticated’ thing to do, apparently.

Reply to  Nick Stokes
December 7, 2017 7:55 am

Nick Stokes;
That is a form of weighting. Proportional weighting reflects the fact that “right” isn’t absolute.

If there is no absolute “right”, per your statement above, then what you are left with is a bunch of models which are, to a varying degree, wrong by your own assertion. Averaging them, or combining them via some weighting mechanism, provides nothing more than a means to combine things that we know are wrong in some insane belief that mushing them together produces some meaningful result. Not even absurd.

In fact, they cross-validate, testing that if one model is left out

And there you have it. The insane belief that models are somehow data and so can be used to validate one another. They are models. If they use similar approaches and values, they will produce similar results. That doesn’t mean they are right. If they use different approaches and values, and still produce similar results, then at best, one of them is right, and the others simply got a similar answer via multiple errors that cancel out. That’s at best. At worst, they ALL got the similar answers because of multiple errors that cancel out. The ONLY way to validate a model is against DATA.

Coming up with methods of averaging or otherwise combining model results via some weighting mechanism is a pathetic attempt to justify one’s existence and continued paycheck. It does not, and cannot, produce anything of value.

Reply to  Nick Stokes
December 7, 2017 8:10 am

Nick, you need to go and learn some physics; this goes back to your trying to apply Newton’s law of cooling to something you specifically aren’t allowed to. You are dealing with a radiative transfer, and some of what you are suggesting is just as silly as the slayers. The thing you are modelling isn’t classical, and it works in a way you clearly don’t understand, so let’s give you the same problem in a controlled environment.

In another link I was discussing a QM meta material which I hadn’t realized had been written up in the mainstream media but Willis had picked it up so lets show you it.

That is a greenhouse effect in a thin piece of plastic, only we reversed it so it cools rather than heats. It would do the slayers’ heads in, and now try your climate models on it 🙂

Now even if you regress your models back to a value the material is showing (that would be some negative scale value + offset on your models), the problem is the meta-material isn’t linear: as the plastic changes temperature, pressure, stretching, doping, or incoming spectrum, it changes slightly. You would end up having to do the worst multi-axis regression you can imagine, and actually, from experience, it would be faster to write the correct QM formula and solve it.

Now let’s give you a fairly simple test you and the modellers could write up in a nice paper. Get a piece of the meta-material and model the greenhouse effect in it, showing control conditions (temperature, pressure, etc.) and that your models match the output correctly at all values. You want the models believed, then show they work on an easy lab test of a controlled QM greenhouse effect.

Reply to  Nick Stokes
December 8, 2017 12:40 pm

“But GCM’s don’t claim to predict weather over such periods, so they aren’t wrong.”

I think you mean “not EVEN wrong”, actually.

In order to be right or wrong, a thing has to bear at least some minimal relationship to something in the real world, and computer games climate models don’t even come close to that criterion.

Even the IPCC admit that, FFS!

I’m just thankful “climate scientists” such as yourself are never trusted with applying your “scientific knowledge” to some mission-critical task in the real world, such as designing a supermarket trolley, for example.

Reply to  Nick Stokes
December 7, 2017 6:55 am

No matter how you look at it, the study is fitting a curve to data. That only works outside the data set if the equation(s) for the curve actually are accurate representations of the processes in the climate. Effectively, climate models are giant polynomials adjusted to fit the data for a system that is only partially understood. Using the IPCC terminology, it is highly unlikely that any of the current models (except one) can or have produced a result that closely matches future data.
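The curve-fitting objection above is easy to demonstrate: a polynomial tuned to match data on one interval can fit it closely, then diverge wildly just outside it. A minimal sketch (the sine "truth" and the degree 9 are arbitrary choices for illustration):

```python
import numpy as np

# "True" process, known only through samples on the training interval.
x_train = np.linspace(0.0, 1.0, 20)
y_train = np.sin(2 * np.pi * x_train)

# A high-degree polynomial matches the training data closely...
coeffs = np.polyfit(x_train, y_train, deg=9)
fit_err = np.max(np.abs(np.polyval(coeffs, x_train) - y_train))

# ...yet diverges badly outside the data it was tuned to.
x_future = 2.0
extrap_err = abs(np.polyval(coeffs, x_future) - np.sin(2 * np.pi * x_future))

print(fit_err, extrap_err)
```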

December 6, 2017 8:25 pm

Why look at the latest observations from the top of the atmosphere? My guess is that it is the only layer of the atmosphere that gives them the answer that they wanted. We know that the models are too hot at the surface. They are likely too cold in the upper stratosphere as well. They are basically wrong from bottom to top, but as the error changes sign from too hot to too cold, there is a layer where it looks like the models have some skill at modelling the climate. That is not true. A virtually generated curve may intersect a curve generated by observation, but the virtual curve has no ability to predict the future of the observational curve just because they once had a point in common.

December 6, 2017 8:33 pm

In the US, when brokerage houses advertise, to keep from deceiving the rubes they are required by law to include a very true statement of warning, usually like this:

“Past Performance Is No Guarantee Of Future Results”

Seems like we need to apply that law to climate modelers …


Nick Stokes
Reply to  Willis Eschenbach
December 6, 2017 8:36 pm

True of all investment. But people still invest, and it is still generally considered wise to look at past performance.

David L. Hagen
Reply to  Nick Stokes
December 6, 2017 8:46 pm

Nick Stokes – The major harm is when climate alarmists then use unvalidated climate models to coerce politicians into burying trillions of taxpayers’ dollars in “climate control” with negligible benefit.
Compare the very poor returns of climate control with those of all other worthwhile humanitarian global projects. See articles by Bjorn Lomborg and reviews by the Copenhagen Consensus.
For a validated climate model see:

Joel O’Bryan
Reply to  Nick Stokes
December 6, 2017 9:29 pm


For at least half of all politicians, no coercion is needed. That half are like heroin-fentanyl junkies demanding more smack from their dealers while using taxpayer money to fund their addiction to power.

Reply to  Nick Stokes
December 7, 2017 3:03 am


Economists are still debating the “cause” of the 1937-38 recession. They generally know all the factors that are prime candidates for the “cause”. The question among them is the “weighting” of each factor. If we can’t even settle the “why” of the past, how is it we have so much hubris to claim to know the why of the future?

george e. smith
Reply to  Nick Stokes
December 7, 2017 2:09 pm

People even buy books containing the numbers that have come up in lotteries and won big, so they bet those numbers that have succeeded in the past. The big winner is the guy selling those books.


David L. Hagen
December 6, 2017 8:36 pm

Fitting an elephant’s trunk

The renowned physicist Freeman Dyson tells the story (1) of a transformative meeting in 1953 with Enrico Fermi, in which the young Dyson presented a new theoretical treatment that he felt could explain Fermi’s experimental findings. Fermi was, to say the least, not convinced. “In desperation I asked Fermi whether he was not impressed by the agreement between our calculated numbers and his measured numbers. He replied, ‘How many arbitrary parameters did you use for your calculations?’ I thought for a moment about our cut-off procedures and said, ‘Four.’ He said, ‘I remember my friend Johnny von Neumann used to say, with four parameters I can fit an elephant, and with five I can make him wiggle his trunk.’ With that, the conversation was over. I thanked Fermi for his time and trouble, and sadly took the next bus back to Ithaca to tell the bad news to the students.”

quoted by Jonathon A. Ditlev et al. in “There is More Than One Way to Model an Elephant. Experiment-Driven Modeling of the Actin Cytoskeleton” Biophys J. 2013 Feb 5; 104(3): 520–532. doi: 10.1016/j.bpj.2012.12.044

Performed by Jürgen Mayer et al. in “Drawing an Elephant with Four Complex Parameters”
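For the curious, the construction is short enough to reproduce. The sketch below follows a widely circulated Python rendering of that paper's parametrization; the coefficient packing is taken from that rendering, and only the four published complex parameters are used (the fifth, trunk-wiggling parameter is omitted).

```python
import numpy as np

# The four complex parameters published by Mayer, Khairy & Howard (2010).
p1, p2, p3, p4 = 50 - 30j, 18 + 8j, 12 - 10j, -14 - 60j

def fourier(t, coeffs):
    """Truncated Fourier series: real parts weight cosines, imaginary parts sines."""
    return sum(c.real * np.cos(k * t) + c.imag * np.sin(k * t)
               for k, c in enumerate(coeffs))

def elephant(t):
    # Unpack the eight real numbers hidden in p1..p4 into x- and y-series
    # coefficients, following the paper's encoding.
    cx = np.zeros(6, dtype=complex)
    cy = np.zeros(6, dtype=complex)
    cx[1], cx[2], cx[3], cx[5] = p1.real * 1j, p2.real * 1j, p3.real, p4.real
    cy[1], cy[2], cy[3] = p4.imag + p1.imag * 1j, p2.imag * 1j, p3.imag * 1j
    return fourier(t, cx), fourier(t, cy)

t = np.linspace(0.0, 2.0 * np.pi, 400)
x, y = elephant(t)
# Plotting (y, -x) traces a closed curve that is unmistakably an elephant.
```

Eight tuned real numbers suffice to draw an elephant; climate models carry orders of magnitude more.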

Judith Curry reviews in: Climate Models for Lawyers

Key summary points:
▪ GCMs have not been subject to the rigorous verification and validation procedures that are the norm for engineering and regulatory science.
▪ There are valid concerns about a fundamental lack of predictability in the complex nonlinear climate system.
▪ There are numerous arguments supporting the conclusion that climate models are not fit for the purpose of identifying, with high confidence, the relative contributions of natural versus human causes to 20th-century warming.
▪ There is growing evidence that climate models predict too much warming from increased atmospheric carbon dioxide.
▪ The climate model simulation results for the 21st century reported by the IPCC do not include key elements of climate variability, and hence are not useful as projections of how the 21st-century climate will actually evolve. . . .

Uncertainties in GCMs arise from uncertainty in model structure, model parameters and
parameterizations, and initial conditions. Calibration – ad hoc adjustments, or tuning – is
necessary to address parameters that are unknown or inapplicable at the model resolution, and
also in the linking of submodels. Continual ad hoc adjustments of a model can mask underlying
deficiencies in model structural form.
Concerns about evaluating climate models have been raised in the context of model
calibration/tuning practices. A remarkable article was recently published in Science: “Climate
scientists open up their black boxes to scrutiny”

“Indeed, whether climate scientists like to admit it or not, nearly every model has been
calibrated precisely to the 20th century climate records—otherwise it would have ended
up in the trash. “It’s fair to say all models have tuned it,” says Isaac Held, a scientist at
the Geophysical Fluid Dynamics Laboratory, another prominent modeling center, in
Princeton, New Jersey.”

Global Climate Model Reality? See:

The following parameters are varied in the experiment:

- Ice fall speed through clouds: important for the development of clouds and for determining the type (rain, sleet, hail, snow) and amount of precipitation.
- The rate at which cloud droplets convert to rain.
- ‘Critical relative humidity’: relates the grid-box-scale atmospheric humidity to the amount of cloud in that grid box.
- The amount of water in a cloud when it starts raining, which depends on the condensation nuclei concentration: the more condensation nuclei there are (bits of dust, salt etc. in the atmosphere on which raindrops can form), the smaller the raindrops.
- The rapidity with which a convective cloud (imagine a plume rising over a power station, or a big thundercloud) mixes in clear air from around it.
- Empirically adjusted cloud fraction: calculates how much cloud cover there will be when the air is saturated.
- The initial state of the atmosphere: what it looks like when the model starts in 1810.
- The effective radius of ice crystals in clouds, i.e. what radius they would have if they were perfectly spherical. It is important in the radiation scheme for calculating how much incoming or outgoing radiation is reflected.
- Parameters that allow for non-spherical ice particles in the radiation scheme.
- The rate at which air mixes by turbulence in the boundary layer (the layer of the atmosphere closest to the Earth). This reflects the fact that the ability of turbulence to mix air varies with how stable the air is: the more stable the air, the less turbulent mixing you get.
- Transfer of momentum and energy between tropical oceans and the air (wind) above them.
- Transfer of momentum and energy between seas and the air (wind) above them.
- The number and size of plant roots in the soil, and consequently how water is taken up from the soil and into the atmosphere by plant transpiration.
- The diffusion of heat from the slab ocean to ice, where sea ice is present in the model.
- Gravity wave drag. Gravity waves are waves in the atmosphere for which gravity is the restoring force: think of air passing over a mountain. It is forced upwards over the mountain, and then gravity pulls it back down, resulting in an oscillation (you often see clouds form downstream of mountains as a result). The air particles oscillating in these waves lose energy to friction (drag), and that energy manifests as heat. This parameter determines the lowest model level at which gravity wave drag is applied.
- The way gravity waves are formed as air interacts with surface features such as mountains.
- How the albedo (reflectivity) of sea ice varies with temperature.
- Horizontal diffusion coefficients and exponents, which govern how quickly something spreads through the material it is in. For example, if you put a drop of purple-dyed oil into a beaker of un-dyed oil, they determine how rapidly the dye mixes with the oil around it until the whole beaker is the same colour. Diffusion refers to mixing due to the random motion of particles, rather than turbulent mixing, which happens when actual vortices mix things (as when you stir the beaker with a spoon). In the atmosphere, the horizontal diffusion coefficient and exponent determine the rate of diffusion of heat from a warm air mass to a cold one.
- The rate at which water vapour diffuses from a very humid air mass to a relatively dry one.
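The experiment described by that list is a perturbed-physics ensemble, and its mechanics can be sketched in a few lines. Everything below is a toy stand-in: the parameter names echo the list, but the ranges and the "model" are invented purely to show how varying the knobs produces a spread of outcomes.

```python
import random

# Hypothetical perturbed-physics ensemble. Ranges and model are illustrative only.
PARAM_RANGES = {
    "ice_fall_speed_m_s":  (0.5, 2.0),
    "rain_conversion_1_s": (1e-4, 4e-4),
    "critical_rel_hum":    (0.6, 0.9),
    "entrainment_rate":    (0.5, 3.0),
}

def sample_member(rng):
    """Draw one ensemble member: a random point in parameter space."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in PARAM_RANGES.items()}

def toy_model(p):
    """Stand-in for a GCM run; returns a fake 'climate sensitivity' in K.
    A real experiment would launch a full model integration here."""
    return 1.5 + 0.4 * p["entrainment_rate"] + (0.9 - p["critical_rel_hum"])

rng = random.Random(42)
ensemble = [toy_model(sample_member(rng)) for _ in range(1000)]
spread = max(ensemble) - min(ensemble)
# The spread measures how much the "answer" depends on the tuning knobs alone.
```

Even this four-knob toy produces a wide range of "sensitivities"; the real experiments vary dozens of such parameters at once.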

Robert from oz
December 6, 2017 8:57 pm

The best example, I think, is “the oceans are becoming more acidic”!

Joel O’Bryan
Reply to  Robert from oz
December 6, 2017 9:13 pm

Really? (can’t tell if you are being sarcastic)

Oceans might have dropped from average pH of 8.22 to pH 8.15 in the last century. Maybe, if you count all the stars in heavens just so-so, and the wind blowing up your arse is just right.

That is still basic. It will never get acidic. The acid/base buffering capacity available to the oceans is immense. It is so immense it is beyond comprehension. Literally. The basalts on the ocean floors are basic salt minerals, with lots of magnesium. They’ve been building for billions of years. The calcium carbonate (Tums) reserves of the continental shelves are likewise immense and beyond estimation. They too have been building for billions of years.

The ocean pH may have decreased a slight bit, just like in a Chem 101 lab acid-buffering experiment, where pH initially nudges downward with the first few drops of strong acid. But then pH goes nowhere for a lot of acid addition to the solution. pH remains stuck for a long time, until the buffer runs out. For our oceans, not gonna happen, ever, even in a billion years.
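Whatever one makes of the buffering argument, the arithmetic of the quoted figures is straightforward, since pH is a base-10 logarithmic scale:

```python
# pH is the negative base-10 logarithm of hydrogen-ion concentration, so the
# quoted change from pH 8.22 to pH 8.15 translates into concentrations directly.
pH_then, pH_now = 8.22, 8.15
h_then = 10.0 ** -pH_then
h_now = 10.0 ** -pH_now
increase = h_now / h_then - 1.0   # fractional rise in [H+]
# increase is about 0.17: a roughly 17% rise in [H+], while the water
# stays well on the alkaline side of pH 7.
```

The same 0.07-unit step always means the same ~17% concentration change, wherever it falls on the scale.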

Robert from oz
Reply to  Joel O’Bryan
December 6, 2017 9:18 pm

Should have added /sarc, but it’s one scary story they like to milk for all it’s worth. “Less caustic” or “becoming neutral” is just not scary.

tony mcleod
Reply to  Joel O’Bryan
December 6, 2017 9:47 pm

And yet the pH drops.

Nick Stokes
Reply to  Joel O’Bryan
December 6, 2017 10:04 pm

“Chem 101 lab acid-buffering experiment, where pH initially nudges downward with the first few drops of strong acid”
You’re thinking of the wrong experiment. That is what happens when you add acid to neutral water. If it is buffered, the first few drops make little change. That is what buffering means.

The carbonates on the ocean floor are indeed an immense potential buffer. But they are far away. If the real buffer, which is dissolved carbonate/bicarbonate, is shifted, as it is even when pH goes from 8.22 to 8.15, then carbonates will be dissolved from more available locations. And that includes the local fauna.

Joel O’Bryan
Reply to  Joel O’Bryan
December 6, 2017 10:12 pm

With a buffering experiment, the first few drops of [H+] do drop the pH as the excess [OH-] not yet buffered is neutralized.
This quibble is just a matter of how quickly the experimenter notices the initial pH drop in a buffered-solution experiment. Blink and you may miss it. But it does drop; I’ve seen it. It may slowly come back up, or it may not, depending on the reagents in use and their strengths. And then the pH remains stuck until the buffering capacity is depleted.

But the pH remains fixed until the buffering is gone; I agree with you on that point. Which is why OA is a red herring in the climate debate. The science community has slowly come around to that conclusion, and that is why OA is no longer talked about by anyone of serious stature.

Joel O’Bryan
Reply to  Joel O’Bryan
December 6, 2017 10:30 pm

And by local fauna, I assume you are referring to the shell builders in the zooplankton foraminifera group, the coral polyps, and the other macro-shell builders like clams and oysters (bivalves) and other univalves. They not only endured ancient seas where the pCO2 was much higher, they flourished, by the fossil record. The biosphere was clearly much more productive under ancient higher CO2 levels than today. We are simply still in a CO2-starved state between glaciations compared to the past, when those shell builders evolved and flourished.

The biosphere and the shell builders will do just fine (or better) with even much higher pCO2 than today. The OA scaremongers are simply cheap carnival barkers calling people to come see their phantasmagorical scare story.

Reply to  Joel O’Bryan
December 7, 2017 12:23 am

Marine organisms use dissolved CO2 to live and grow. As a result, the heavily populated uppermost few hundred metres of the oceans are generally depleted in total dissolved CO2 compared with deeper levels. Adding small amounts of CO2 to the ocean surface may actually benefit these organisms, in the same way that adding CO2 to the atmosphere benefits plants and animals.

Patrick MJD
Reply to  Joel O’Bryan
December 7, 2017 2:04 am

“tony mcleod December 6, 2017 at 9:47 pm

And yet the pH drops.”

Those are ESTIMATES, as no one actually knows much about OCEAN pH levels, because pH changes dynamically. Rain is typically pH 5.5, past neutral into the acid range. Ocean pH is ALKALINE.

Reply to  Joel O’Bryan
December 7, 2017 2:40 pm

Ocean biota USE CO2 for their structures.

They can actually decrease the alkalinity in their surroundings to make it easier to extract the CO2.

Gary Pearse.
Reply to  Joel O’Bryan
December 9, 2017 1:43 am

Nick and Joel, the variability in shallower waters along coasts is the main source of the Chicken Little stuff on OA. Fresh water run-off, ice melt, and heavy rains, as in the wifty-poofty Pacific NW, push pH closer to 7 with no buffering effect to help.

Fauna have lived with this since the beginning of their journey. Heck, pelecypods live in lakes, some of which are below pH 7. You see why linear thinking is not serving the science well. Biota can alter the inorganic chemistry to suit; they aren’t passive agents at the mercy of these things.

Reply to  Joel O’Bryan
December 9, 2017 2:01 am

Basically EVERY river that has flowed into the world’s oceans over millions and millions of years has been on the acidic side of neutral..

Yet the oceans remain stubbornly around pH 8.2 +/- a bit

Anyone that thinks a small, but highly beneficial, enhancement of atmospheric CO2 is going to have the slightest effect, has got rocks or green sludge for brains.

tony mcleod
December 6, 2017 9:46 pm

Eric Worrall:
“What I’m concerned about is the apparent attempt to draw premature real world conclusions…”

Your concern is noted.

Joel O’Bryan
Reply to  tony mcleod
December 6, 2017 11:01 pm


The models that seem to close the energy balance (in vs out) at the TOA are also those that tend to run the hottest in CMIP5. But with this current effort to weight those “much hotter than observation” models more heavily, it also means the climate alarmism levels in PR media releases will go up.

It also simply means that, due to the weighting, the next CMIP ensemble average (CMIP6) is quite likely to run even hotter relative to observation than CMIP5 does.

Why would they do this?

A: The climate modelers know they are running on borrowed time compared to observation. They are making the best use of that borrowed time in a last-ditch effort to get to a Montreal Protocol-like climate CO2 fix before the coming natural cooling cycle kicks in. To get a Paris-plus on steroids. They are now desperate.

Somewhere those hottest-running models have a fundamental flaw. I don’t presume to be smart enough to know what that is. But a layman looking at an obviously very sick man can know he needs prompt medical attention without the specialist MD training needed to diagnose exactly what is wrong with him. Similarly, anyone of at least average intelligence who bothers to look can see something is very wrong with climate modelling. What exactly that is, I am not qualified to say. But I know a science sickness when I see it, and today’s climate modelling is very sick.

Joel O’Bryan, PhD

Tari Péter
Reply to  Joel O’Bryan
December 7, 2017 9:36 am

Joel O’Bryan:

“Somewhere those hottest running models have a fundamental flaw.”

No, the models are OK; the problem is with the underlying physics. Out there in nature a force is acting that is so far unknown to climate scientists, and to anybody else for that matter. It is a natural phenomenon with a thermal effect sometimes comparable to that of atmospheric CO2. That was the case at the end of the last century.

As this natural phenomenon is not included in the models, all of its heating is attributed to CO2, practically doubling the supposed greenhouse effect of CO2. This is the reason behind the spooky results of climate models.

Recently this natural phenomenon has calmed down; its heating is only 0.16 W/m2, next to nothing.

Regards to all

Tari Péter

Reply to  Joel O’Bryan
December 7, 2017 10:11 am

But Joel,
They also want to influence NOAA to go data adjusting the past again so they can match the new, even worse, model results. After all, wasn’t that what Karlization was all about?

Science or Fiction
December 6, 2017 10:11 pm

It is interesting to see the enormous range of energy fluxes in the CMIP5 models (CMIP5 = Coupled Model Intercomparison Project Phase 5) used by the IPCC in their Assessment Report 5.
The energy balance over land and oceans: an assessment based on direct observations and CMIP5 climate models – Wild et al 2014

Here are some examples of the range of energy fluxes spanned by the models (see Table 2: simulated energy balance components averaged over land, oceans and the entire globe from 43 CMIP5/IPCC AR5 models at the TOA, in the atmosphere, and at the surface).

Surface (All units: W/m2):
Solar down: 18.6
Solar up: 10.5
Solar net: 17.2
Thermal down: 18.5
Thermal up: 11.8
Thermal net: 15.7
Net radiation: 17.2
Latent heat: 13.9
Sensible heat: 13.1
(Averages are taken over the period 2000–2004)

Taking into account that the current energy accumulation on earth is estimated from observation of ocean warming to be 0.6 W/m2 (ref.: “considering a global heat storage of 0.6 W m–2”, IPCC AR5 WGI, page 181, 2.3.1 Global Mean Radiation Budget), I think it is fair to assume that the models would have been all over the place if not constrained by heavy tuning to fit various observations.

That the climate science community continues its tuning efforts should be no surprise. Expect these tuning efforts to continue in an ad-hoc manner.
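A quick back-of-envelope comparison of those inter-model ranges against the 0.6 W/m2 target quantity (the numbers are transcribed from the list above):

```python
# Inter-model ranges (W/m2) from the Wild et al. (2014) figures quoted above,
# compared against the ~0.6 W/m2 observed global heat storage.
ranges_wm2 = {
    "solar down": 18.6, "solar up": 10.5, "solar net": 17.2,
    "thermal down": 18.5, "thermal up": 11.8, "thermal net": 15.7,
    "net radiation": 17.2, "latent heat": 13.9, "sensible heat": 13.1,
}
signal = 0.6  # W/m2, global heat storage per IPCC AR5 WGI p. 181

ratios = {name: spread / signal for name, spread in ranges_wm2.items()}
for name, r in sorted(ratios.items(), key=lambda kv: -kv[1]):
    print(f"{name}: model range is {r:.0f}x the diagnosed imbalance")
# Every component's inter-model range is roughly 17 to 31 times the 0.6 W/m2
# quantity the models are being asked to resolve.
```

In every component the disagreement between models dwarfs the imbalance being diagnosed, which is the point about tuning made above.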

Leo Smith
December 6, 2017 10:13 pm

The carbonates on the ocean floor are indeed an immense potential buffer. But they are far away.

I guess that’s where all the climate change excess heat is hiding too, Nick, so it’s good that the excess heat and the carbonates are so far away that we can safely ignore both of them…

…How much do they pay you to astroturf, by the way? Couldn’t you get a proper job?

Steve Richards
December 7, 2017 12:09 am

My last company’s payroll system was so unreliable that they needed ten payroll programs and averaged the outputs! They could not get any one program to be accurate because the financial rules were just too complex. They keep trying to make the programs better. Some new moms have variable maternity pay, but the managers say that the weighted average output of the programs is good enough. Do I need a /sarc?

December 7, 2017 1:10 am

There is nothing wrong with these models produced by the US Global Change Research Program’s report, which shows that all of nature’s thermometers indicate a rapid rise in global temperatures, proving that climate change is happening now and will worsen in the future (as the models predict):

Reply to  Eric Worrall
December 7, 2017 3:53 am

Look at the trends, Eric. Do you honestly think these are going to go down? The models being used may not be perfect, but they are uniform in predicting worsening outcomes from climate change. How much and to what degree is where the models may differ.

Reply to  Eric Worrall
December 7, 2017 4:33 am

The trend has been up since the 1690s.
The rate of change of the trend is NOT accelerating.
If anything, the peak may be in sight (within roughly the next 200 years).
Sleep well my friend.

Reply to  ivankinsman
December 7, 2017 7:15 am

I can confidently say, in IPCC lingo, that we are nearing the end of an interglacial period and that temperatures over much of the globe will continue to go down until they hit -9C, despite the current 100 or so year warming of 1-2C. The outcome is HIGHLY likely, only the time period is in question.

Reply to  ivankinsman
December 7, 2017 8:41 am

@ ivankinsman
Modeling was my first job, and there are many things obviously wrong with these models:
1) Overfitting. Way too many parameters (hundreds!), so overfitting (aka “fitting the elephant and making his trunk wiggle”) is bound to occur.
2) Ignorance. Way too few data, in time range, geographic sampling, and precision.
3) Linearity. Lip service is paid to the chaotic system, but in practice the modeling is linear (“anomaly” and “forcing” are things of non-chaotic systems).
4) Climate denial. Denial that climate change never stopped happening (the Romans already complained about it, and the Chinese too, and the Neanderthals before them, you can bet), and so an implicit assumption that climate didn’t change on its own in the last decades.
5) Hubris. A theory that is not able to explain the past (not just fit past values, but really explain all of it: the LIA, the MWP, the switches between glacial and interglacial, etc.) is just as unable to predict anything, even when turned into models. NO current model can do that.
6) Self-contradiction. In a proper theory, all models properly built according to the theory produce the same result. They don’t.
7) Political fitting. Just throw the dice until they give you a six: commission reports and models, and pay for them when they fit the storyline agreed upon beforehand. There is absolutely no way an honest modeling job could give the result of the IPCC model ensemble. There should be models that gave anywhere from -2 to +3 of warming, or even -4 to +2. The simple fact that none did is proof of a rigged game. Billions of dollars have been thrown at “climate change”. If climate change really were an issue, the money would have been used the way money is used to cope with real problems like earthquakes, floods, volcanoes, tsunamis, etc.: serious measurement of what is happening where (at local scale, not global scale), warning systems, norms on how and where to build, and so on. Instead, the crisis was used to promote a global socialist government (redistributing wealth from poor people in rich nations to rich people in poor countries), to pay modelers who produced the desired results (all others are banned), and to pay for non-solutions (bird choppers, expensive electric toys, etc. that make absolutely no difference to GW) that friends of governments make huge money from (and pay back).

Reply to  paqyfelyc
December 7, 2017 10:35 am

Try thousands of arbitrary parameters. A frozen snapshot of the ModelE code prominently featured in AR4 is available here:

This snippet is in MODELM and is where the effects of GHG absorption are handled. Just this little snippet of code representing a small fraction of 1% of the code base has over 100 ’empirical’ constants quantifying GHG absorption by the atmosphere, none of which are sufficiently documented as to how they arrived at them. Note that they also don’t bother to model CH4. Note that K selects one of several bands of wavelengths, which are slightly different per gas.

C Select parameterized k-distribution gas absorption by H2O, O2, CO2
C ——————————————————————

GO TO (101,102,103,104,105,106,107,108,109,110,111,112,113,114),K
C——–K=6——-H2O DS0=.01
IF(TAU1.GT.0.02343) TAU1=0.02343
GO TO 120
C——–K=5——-H2O DS0=.03
+ *(1.+.02964*PLN)
IF(TAU1.GT.0.00520) TAU1=0.00520
GO TO 120
C——–K=4——-H2O DS0=.04
IF(TAU1.GT.0.00150) TAU1=0.0015
GO TO 120
C——–K=3——-H2O DS0=.04
GO TO 120
C——–K=2——-H2O DS0=.04
+ *(1.+.001517*PLN)
GO TO 120
C——–K=4——-O2 DS0=.002
GO TO 120
C——–K=3——-O2 DS0=.004
GO TO 120
C——–K=2——-O2 DS0=.013
GO TO 120
C——–K=4——-CO2 DS0=.002
IF(PLN.LT.175.0) TAU=(.00018*PLN+0.00001)*ULN
GO TO 120
C——–K=3——-CO2 DS0=.003
GO TO 120
C——–K=2——-CO2 DS0=.003
GO TO 120
GO TO 120
GO TO 120

Reply to  paqyfelyc
December 7, 2017 11:05 am

Man started out with the wheel and eventually landed a man on the moon. Climate change modelling is improving year-on-year. Of course predicting the changes taking place across the planet is an incredibly complex proposition. So was landing on the moon but the resources were put in and the end result achieved.

Reply to  ivankinsman
December 7, 2017 11:23 am

Ivan, the models still (insert your favored dismissive epithet). The theory used in the models, except for the Russian model (INM-CM4), consistently gives wrong results. As the one fairly accurate model used different assumptions, perhaps the assumptions from theory are just wrong?

Reply to  paqyfelyc
December 8, 2017 3:11 am

@ ivankinsman
Climate change modelling is so close to zero that it has no difficulty improving year-on-year, but it still remains as close to zero as ever. Just as in weather prediction, it takes exponentially more data and processing for even incremental progress from day-2 accuracy to day-5. And we don’t have the data, and we don’t even have the science to make sense of the data. We have zero understanding of climate and can explain none of it, not even the most massive events like the switch between glacial and interglacial, let alone the subtler ones like the MWP or LIA. To claim we are advancing in our prediction of the future when we have no clue about past events is just mind-blowing. You have to be really naive to believe that.

It is just plainly wrong to compare climate modeling to the moon landing. What was difficult in that feat was the engineering, not the science. The science was fairly simple, known and tried for centuries, and no computer was even needed.
And the resources put into climate science are incredibly low for such a massive claimed problem. Resources were put, as I remarked, into lining the pockets of the makers of bird choppers and electric toys, and into strengthening the power of the worldwide dictatorship clique. Not into coping the way we actually do with real issues (tsunamis, earthquakes, floods, droughts, etc.).
Just compare for yourself the resources (your time, your money, etc.) you invest or are willing to invest in the many causes (social, health, or whatever) that are NOT supposed to end the world. It should blow your mind that you are not actually acting the way you would if you believed a single word you write; in that respect you are no different from even the high priests of the CAGW church.

December 7, 2017 1:17 am

I love it when CAGW advocates push their “worse-than-we-thought” fear-mongering BS.

The CAGW hypothesis is on the cusp of official disconfirmation with 23 years of anemic global warming, and a disparity between CAGW hypothetical projections vs reality exceeding 2 standard deviations.

Very soon, with a combination of the 30-year PDO cool cycle in full swing, a 30-year AMO cool cycle starting in the early 2020s, the weakest solar cycle since 1790 starting in 2021, and perhaps a VEI 6+ volcanic eruption added for good measure, a definite global cooling trend will appear in the record from mid-1996, which will cause the disparity between CAGW projections and reality to exceed 4 standard deviations.

Once that occurs, the CAGW hypothesis will have to be tossed in the trash along with the flat-earth hypothesis, and the reputation of Leftist political hacks and grant-hounds will be severely damaged.

December 7, 2017 5:56 am

Model schmodel.

December 7, 2017 6:16 am

Adjusting a model by tuning a parameter derived from experimental data can be very effective in improving modeled output IF the adjusted parameter is the one and only cause of the modeled error. It can only increase the error if the adjusted parameter is not the cause of the error. I’ll use this example:

Many have noted an approximately 60-year ‘oscillation’ in some global temperature data. This may be related to ENSO, the PDO, the AMO, and more. An oscillation may be modeled with an amplitude, frequency, and phase. It may be quite enlightening to include such an oscillation in the global climate models, and then see what effect independently changing the amplitude, frequency, and phase has on the accuracy of the model output. The data indicating this ‘oscillation’ have only been measured for parts of repeated cycles, making true values for these three parameters problematic. I believe it would be acceptable to use parameters for amplitude, frequency, and phase that best match the observed overall historical record even if the parameters used in the model don’t perfectly agree with measured data.

Conversely, if a monotonically increasing parameter for, say, “cloud feedback” is adjusted to match just a portion of the 60-year cycle, say the ’80s and ’90s, and the ‘oscillation’ is ignored, the adjustment will only make the output dramatically worse.
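A minimal sketch of the suggested exercise, using NumPy only and a synthetic series invented purely for illustration: for any fixed trial period, the amplitude, phase, and trend enter the model linearly (via sine and cosine components), so they can be solved by ordinary least squares while the period is found by a simple scan.

```python
import numpy as np

# Model the record as y(t) = A*sin(2*pi*t/P + phi) + trend*t + const.
# The synthetic "temperature" series below is invented to exercise the fit.
rng = np.random.default_rng(0)
t = np.arange(138.0)                       # years since 1880, say
truth = 0.15 * np.sin(2 * np.pi * t / 62 + 1.0) + 0.007 * t
data = truth + rng.normal(0.0, 0.03, t.size)

def fit_at_period(period):
    """For a fixed period, amplitude/phase/trend enter linearly via
    a*sin + b*cos + c*t + d, so ordinary least squares solves them."""
    w = 2 * np.pi / period
    X = np.column_stack([np.sin(w * t), np.cos(w * t), t, np.ones_like(t)])
    coef, *_ = np.linalg.lstsq(X, data, rcond=None)
    sse = np.sum((data - X @ coef) ** 2)
    return coef, sse

periods = np.arange(40.0, 90.0, 0.5)
best_period = min(periods, key=lambda p: fit_at_period(p)[1])
coef, _ = fit_at_period(best_period)
amplitude = np.hypot(coef[0], coef[1])     # recovers roughly 0.15
# best_period lands near the true 62-year cycle.
```

With only about two full cycles in a 138-year record, the recovered period is usable but not sharp, which is exactly the "parts of repeated cycles" caveat above.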

December 7, 2017 7:40 am

So they added still more epicycles to connect the already epicycle-packed models, and, lo and behold: the climate still revolves around CO2, and even more strongly than previously thought.
Post-modern science looks so close to old-fashioned non-science that Ptolemy must be RIHGLHAO (rolling in his grave, laughing his ass off), showing the middle finger to Copernicus.

December 7, 2017 7:40 am

In college, we used to call this Finagle’s constant.

December 7, 2017 8:11 am

Well, I’ve adjusted the models down for the rate of natural warming, and now they all agree there’s nothing to worry about. This here climastrology stuff is a real snap once you get your head around it.

Randy Bork
December 7, 2017 3:16 pm

Does anyone know, when a model run is checked for hind-cast accuracy, which temperature data set/revision it is compared to?

December 7, 2017 4:04 pm

As an ex-aerospace engineer, I’m used to using SeerSim to predict schedules and costs for producing software. Also used to having it used against us by the Government. It is a really cool tool. You can tweak about a hundred factors that gauge a company’s historical performance, software skills, methodologies used, software complexity, re-use software, etc. All on sliding scales. Since we had a target, management would send us back to make more adjustments until they got the results they wanted.
Engineers had a hard time arguing the Government’s factors, since they refused to share them with us. We didn’t share ours, either.
Our results never matched the Government’s estimates. Neither of us was ever in the ballpark. We were always too low, although most of that was due to weekly/monthly changes in the Government’s requirements (after the contract was signed). The latter made it impossible to say whether Government or industry was closer to the truth, because all the variables changed.
Whenever someone mentions climate models, I think of this widely accepted model, and how both track reality so well – or in both cases, not at all. Befuddling SeerSim, we had constantly changing requirements. Befuddling climate models, we have a lack of understanding of how the Sun/orbit, or volcanic action – to mention the tip of the tip of the iceberg – throw us into short-term hot and cold sessions, that somehow are all ‘predicted’ by the models, but leave observers going WTF?

December 8, 2017 4:47 pm

Even Einstein used “fudge factors.” Of course Einstein also thought it wasn’t science until he understood them, where they came from, and what they meant.

December 11, 2017 2:14 am

‘With four parameters I can fit an elephant, and with five I can make him wiggle his trunk.’ – John von Neumann
