From the “whoopsie, that’s not what I meant” department

Guest essay by Thomas Wiita
A recent poster here wrote that they had stopped looking at the Real Climate web site, and good for them. It has become a sad, inwardly focused group. It’s hard to see anyone in the Trump Administration thinking they’re getting value for money from their support of that site.
I still check in there occasionally and just now I found something too good not to share with the readers at WUWT.
Gavin has a post up in which he rebuts Judith Curry’s response to comments about her testimony at the Committee hearing. Let me step aside – here’s Gavin:
“Following on from the ‘interesting’ House Science Committee hearing two weeks ago, there was an excellent rebuttal curated by ClimateFeedback of the unsupported and often-times misleading claims from the majority witnesses. In response, Judy Curry has (yet again) declared herself unconvinced by the evidence for a dominant role for human forcing of recent climate changes. And as before she fails to give any quantitative argument to support her contention that human drivers are not the dominant cause of recent trends.
Her reasoning consists of a small number of plausible sounding, but ultimately unconvincing issues that are nonetheless worth diving into. She summarizes her claims in the following comment:
… They use models that are tuned to the period of interest, which should disqualify them from be used in attribution study for the same period (circular reasoning, and all that). The attribution studies fail to account for the large multi-decadal (and longer) oscillations in the ocean, which have been estimated to account for 20% to 40% to 50% to 100% of the recent warming. The models fail to account for solar indirect effects that have been hypothesized to be important. And finally, the CMIP5 climate models used values of aerosol forcing that are now thought to be far too large.
These claims are either wrong or simply don’t have the implications she claims. Let’s go through them one more time.
1) Models are NOT tuned [for the late 20th C/21st C warming] and using them for attribution is NOT circular reasoning.
Curry’s claim is wrong on at least two levels. The “models used” (otherwise known as the CMIP5 ensemble) were *not* tuned for consistency for the period of interest (the 1950-2010 trend is what was highlighted in the IPCC reports, about 0.8ºC warming) and the evidence is obvious from the fact that the trends in the individual model simulations over this period go from 0.35 to 1.29ºC! (or 0.84±0.45ºC (95% envelope)).”
The figure was copied straight from RC. There is one wonderful thing about Gavin’s argument, and one even more wonderful thing.
The wonderful thing is that he is arguing that Dr. Curry is wrong about the models being tuned to the actual data during the period because the models are so wrong (!).
The models were not tuned to consistency with the period of interest as shown by the fact that – the models are not consistent with the period of interest. Gavin points out that the models range all over the map, when you look at the 5% – 95% range of trends. He’s right, the models do not cluster tightly around the observations, and they should, if they were modeling the climate well.
Here’s the even more wonderful thing. If you read the relevant portions of the IPCC reports, looking for the comparison of observations to model projections, each is a masterpiece of obfuscation on this same point. You never see a clean, clear, understandable presentation of the models-to-actuals comparison. But look at those histograms above, direct from the hand of Gavin. It’s the clearest presentation I’ve ever run across that the models run hot. Thank you, Gavin.
I compare the trend-weighted area of the three right-hand bars to the two left-hand bars, which center around the tall bar at the mode of the projections. There is far more area under the three bars to the right, an easy way to see that the models run hot.
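What that comparison amounts to is simple bookkeeping. Here is a minimal sketch of it in Python, with made-up numbers standing in for the actual CMIP5 trends (the values, and the 0.65 °C "observed" figure, are illustrative assumptions only):

```python
# Minimal sketch, illustrative numbers only (not the actual CMIP5 trends):
# given an ensemble of modelled 1951-2010 warming values and an observed value,
# count how much of the ensemble lies above and below the observation.

import numpy as np

model_warming = np.array([0.35, 0.55, 0.65, 0.75, 0.80, 0.85,
                          0.95, 1.05, 1.10, 1.20, 1.29])   # degC, hypothetical
observed_warming = 0.65                                     # degC, hypothetical

mean = model_warming.mean()
envelope = 1.96 * model_warming.std(ddof=1)   # rough 95% envelope, assuming normality

hotter = int((model_warming > observed_warming).sum())
cooler = int((model_warming <= observed_warming).sum())

print(f"ensemble mean warming: {mean:.2f} +/- {envelope:.2f} degC")
print(f"models warming more than observed: {hotter} of {model_warming.size}")
print(f"models warming no more than observed: {cooler} of {model_warming.size}")
```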
If you have your own favorite example showing that the models run hot, share it with the rest of us, and I hope you enjoyed this one. And of course I submitted a one-sentence comment at RC to the effect that the figure above shows that the models run hot, but RC still remembers how to squelch any thought that doesn’t hew to the party line, so it didn’t appear. Some things never change.
We don’t understand the earth’s atmosphere well enough to model it.
That’s it. There is nothing more.
‘only simulation ensembles made across systematically designed model families allow an estimate of the level of relevant irreducible imprecision.’
Analysis of junk results in junk.
“I do not believe in the collective wisdom of individual ignorance.” – Thomas Carlyle
“He’s right, the models do not cluster tightly around the observations, and they should, if they were modeling the climate well.”
Err no. There will always be structural uncertainty especially in models of large complex systems.
Next, the models run a little hot, that makes them PERFECT for establishing policy with a safety zone or buffer.
Long ago I built a model of how far a plane could fly with the fuel remaining. LOTS of unknowns, lots of structural uncertainty. The model would always underestimate the distance the plane could fly. This was a great feature. You never ran out of gas and crashed.
In general the models get the temperature right within 10-15%, and they get trends right to about the same degree. If you are going to miss a prediction (well, predictions are ALWAYS wrong to some degree) it’s good that the IPCC models miss on the high side and predict too much warming. It’s a good upper bound.
Brilliant! Models that overestimate are good because with any luck we can force governments into wasting even more taxpayers’ money than reality justifies. Unless you have another interpretation up your sleeve!
How about: observations show that climate, and its main component weather, is doing nothing out of the ordinary except in the models so we have decided to stop pretending there is some existential problem and go off and get useful jobs!
Nah!
Mosher,
Surface temperature groups use models too and they tune temperatures.
What BS . . .
“Structural uncertainty” – wow.
“the models run a little hot, that makes them PERFECT for establishing policy” – yes, if you *assume* that temperatures will continue to get hotter! It’s (nearly) always our assumptions that get us in trouble.
“models get temperature right within 10-15%” – what does this mean? What values for temperature are you talking about?
KJ… I think he’s referring to the modern Percentigrade scale.
Given that real-world examples of ‘PERFECT for establishing policy’ already include diesel-powered cars in England (increasing real pollutants relative to gasoline) and idle desalination plants in Australia (badly allocated capital), I hope that you (SM) are able to see the weakness of your argument.
Science isn’t about wobbly overestimation rationalized by some self-proclaimed precautionary principle.
A similar principle seems to apply to the T databases. According to Tony Heller, his study shows that the adjustments to the T correlate with increased CO2 with an R² = 0.98. Has anyone ever observed real statistical data to show such a perfect measure!?
Seems this field has a lot of ostriches posing as scientists.
Geesh, go tell the astronauts going to Mars that they will have plenty of fuel; pity if they overshoot the planet somewhat on their way to heaven.
This CO2 scam is a dead duck once Trump puts the boot in. China, India, Russia (whose models seem to track the best) could not care less about CO2 unless there is money coming their way. The UK seems likely to throw off the CO2 yoke as well. Here’s to more sanity! Finally!
Well, the correlation between min temps and dew points is in the upper 97% range (and cross-correlation has dew points leading min temps by a month or so).
micro6500
Thanks for your graphs. I wonder if there is any localized similar correlation; would prove very useful to growers.
It was really left unsaid that the T adjustments tracking so perfectly with CO2 changes implies the T data are “tuned” to that relationship. The discrepancy between satellite and “ground” T measures is becoming larger, yet there are no reports on lapse rate changes to my knowledge. In any case it further puts a nail in the CO2 coffin, in that this runs counter to the hypothesized hotspot warming in the upper tropical troposphere.
SM,
You are basically arguing the Precautionary Principle. If there were no costs involved in taking a conservative approach, then it would be acceptable. However, the world is being asked to turn its energy policy on its head without proper concern for the costs.
Let’s take a look at your plane simulation. While there is merit to being sure a plane never runs out of gas, it is important to be sure that one doesn’t err too much on the side of caution, because in the real world a plane that lands with too much fuel might be a safety hazard. It also means that a larger-than-necessary inventory of fuel needs to be kept on hand if the planes are refueled as recommended by your simulation.
I question your claim that the GCMs get the temperature “right within 10-15%,” when they appear to be running at about three times the observed temperatures.
I agree with Mosher, such can be a safety factor. I disagree, in this case, that it has been “Great”.
Setting a safety factor is a judgement call. Not knowing the safety factor is a tragedy waiting to happen. Whether it is a budget, or a pilot ejecting from a plane, safety factors and knowledge of capability are a prerequisite to a good decision. It costs to overpay or abandon a plane unnecessarily.
The scientists are not claiming it as a safety factor. The politicians are claiming: it is science. So, in this respect, Mosher is pointing out both are wrong.
Worse, advocates are using this to shut down disagreement with policy, while vilifying, “based on the science”, persons who disagree with proposed timelines, threat, or harm.
The reason that vilification should be thoroughly condemned is that you and I should have input to this. That is why the current tactics are not only harming people but also deceiving them as to what the real arguments are and whether or not the proposed actions should be taken.
Wanting to do something other than wasting money in a futile gesture of virtue signalling should not cause persons to be vilified.
The model would always underestimate the distance the plane could fly.
===========
What a surprise! A model that always gives the wrong answer, coupled with an excuse as to why that is better than a model that gets the right answer.
Why not build a model that gets the right answer and then add in a known safety factor? Because then it would be called engineering, not climate science.
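Here is a minimal sketch of that alternative, with purely hypothetical numbers (the fuel figure, burn rate and 15% margin are assumptions for illustration, not anyone’s actual procedure): report the honest best estimate, and apply the safety margin explicitly afterwards rather than hiding it inside the model.

```python
# Minimal sketch: unbiased estimate plus an explicit, documented safety factor.
# All numbers are hypothetical.

def best_estimate_range_km(fuel_kg, burn_kg_per_km):
    """Unbiased best estimate of remaining range."""
    return fuel_kg / burn_kg_per_km

def planning_range_km(fuel_kg, burn_kg_per_km, safety_factor=0.85):
    """Range used for planning: the best estimate times a known, stated margin."""
    return safety_factor * best_estimate_range_km(fuel_kg, burn_kg_per_km)

fuel_kg = 4000         # hypothetical fuel remaining
burn_kg_per_km = 3.2   # hypothetical consumption

print(f"best estimate        : {best_estimate_range_km(fuel_kg, burn_kg_per_km):.0f} km")
print(f"plan with 15% margin : {planning_range_km(fuel_kg, burn_kg_per_km):.0f} km")
```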
Steve,
Models that consistently “cry wolf” are not good for *sound* policy decisions.
Clear and consistent exaggerations of the risk are the best evidence against the need for urgent action to mitigate that risk.
No, that destroys credibility. If you are consistently wrong, no one relies on you. And they are consistently wrong. Keep betting on the 13 coming up on the roll of the dice.
Steve –
Err, no. Carrying excess fuel on any sort of aircraft is a very bad idea for many, many reasons. My guess is your simulation wasn’t actually used by any commercial carrier if it consistently overestimated provisioning by any significant factor? It’s not just about economics, Steve; it’s also about safety.
Well, this blog post’s conclusion, “It’s the clearest presentation I’ve ever run across that the models run hot”, is simply wrong.
You can’t do statistics with individual model runs, because some models have 10 runs and others only one.
Doing the stats properly, by use of the KNMI Climate Explorer, shows that the average SAT trend 1951-2010 of all 39 CMIP5 rcp8.5 models is 0.138 C/decade.
The corresponding trend of Gistemp loti is 0.136 C/decade.
However, Gistemp loti is blended SAT/SST, not global SAT like the models. If we blend the models likewise, the average trend decreases to about 0.12 C/decade.
Hence, models have on average a slightly lower trend than Gistemp loti in 1951-2010.
The average trend of five global observational datasets is spot on that of the models, 0.12 C/decade.
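For anyone who wants to check this kind of number, the calculation is just an ordinary least-squares trend over 1951-2010. A minimal sketch, with a synthetic series standing in for the real KNMI/Gistemp data (the slope and noise level below are assumptions for illustration):

```python
# Minimal sketch of the trend calculation described above, using a synthetic
# annual-anomaly series in place of the real data.

import numpy as np

years = np.arange(1951, 2011)   # 1951-2010 inclusive
rng = np.random.default_rng(0)
anomaly = 0.013 * (years - 1951) + rng.normal(0.0, 0.1, years.size)  # degC, synthetic

slope_per_year = np.polyfit(years, anomaly, 1)[0]
print(f"least-squares trend: {10 * slope_per_year:.3f} degC/decade")
```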
O R,
And are the 8 and 6 significant figures or just displayed because the trends were arbitrarily rounded to 3 significant figures as a habit from slide rule days?
You have to watch the pea under the shell very carefully. Gavin quotes Dr. Curry as saying that the models are “tuned to the period of interest”.
However, Gavin changes this. He says they are not tuned to “the period of interest”, but he defines the period of interest as “(the 1950-2010 trend is what was highlighted in the IPCC reports, about 0.8ºC warming)”.
I know of nobody who claims that the models are tuned to that short 60 year period starting in 1950 and ending in 2010. The “period of interest” to which they are tuned is generally the period 1850-2000. And while they do perform poorly during the period 1950-2010, in part that’s because the 21st century is out-of-sample, and in part because they have trouble replicating the temperature drop from 1945 to 1965 or so. Since these two periods are at the beginning and the end of the 1950-2010 interval, this leads to trends all over the map.
NONE OF THIS, however, negates Dr. Judith’s point. Gavin is falsely claiming that the models are not tuned to the historical record. This is errant nonsense that can only be maintained by ruthlessly censoring opposition viewpoints. There is no question that the models are tuned; there have been entire scientific seminars and journal articles on the subject. See Dr. Judith’s excellent post on the subject.
Best to all,
w.
“The “period of interest” to which they are tuned is generally the period 1850-2000.”
Do you have a reference for that? It seems unlikely to me. Generally for tuning you need a short period with an unambiguous result. That’s partly because full runs are expensive and tuning requires trial and error. Mauritsen et al, quoted above, say:
“The MPI-ESM was not tuned to better fit the 20th century. In fact, we only had the capability to run the full 20th Century simulation according to the CMIP5-protocol after the point in time when the model was frozen”
and later
“To us, a global mean temperature in close absolute agreement with observations is of highest priority because it sets the stage for temperature-dependent processes to act. For this, we target the 1850–1880 observed global mean temperature of about 13.7°C [Brohan et al., 2006].”
Nick writes
Mauritsen et al is a description of running a model, not developing one. Once the model is “complete” there is scope to tune parameters to get better looking results in one area but that’d probably be at the expense of another. Mauritsen et al was an exercise in tweaking all the tunable model parameters to get the best result overall.
And when models are developed, each component must be compared to what is known. Gavin is simply wrong about this or doing Mannian spin.
“You should understand what it is that they’re doing! Mauritsen aren’t developing the model, they’re running it.”
I understand it very well. Unlike people here, I have actually done tuning, for CFD programs. I’ve tried to explain upthread when it is called for. You don’t do it with full program runs; you have to be able to do trial and error. I wrote out the three tuning steps they use for TOA balance. That is a development matter; you start with a very brief run to see what you can get out of one association, then with that knowledge try for a longer sequence, probably having most variables following a prescribed rather than a solved trajectory.
Nick:
“Mauritsen et al, quoted above, say: ‘The MPI-ESM was not tuned to better fit the 20th century. In fact, we only had the capability to run the full 20th Century simulation according to the CMIP5-protocol after the point in time when the model was frozen.’”
Note that the passage you quote goes on to say that they already knew that the model would match well with the 20th century when they were developing it, i.e. “Yet, we were in the fortunate situation that the MPI-ESM-LR performed acceptably in this respect, and we did have good reasons to believe this would be the case in advance because the predecessor was capable of doing so.”
The paper also concedes that “[e]valuating models based on their ability to represent the TOA radiation balance usually reflects how closely the models were tuned to that particular target, rather than the models (sic) intrinsic qualities.” If models not producing the 20th century warming are winnowed from publication and use by the IPCC, which the paper also states as happening, does it not stand to reason that this article you quote actually backs up Curry’s point that these models cannot be used to attribute that 20th century warming to any particular cause?
Kurt,
“Note that the passage you quote goes on to say that they already knew that the model would match well with the 20th century”
So? We are talking about tuning here. They have “admitted” that they knew the model matched fairly well in the past. So what are they supposed to do? Throw it out?
As to winnowing, that is a different issue. It isn’t tuning. But what the whole discussion lacks is any evidence of what actually happens. Is it really the case that models were winnowed? How many?
Nick states : “As to winnowing, that is a different issue. It isn’t tuning. But what the whole discussion lacks is any evidence of what actually happens. Is it really the case that models were winnowed? How many?”
Great question. But what it does tell us is that your justification of the average as being acceptable has been invalidated, and worse, you cannot tell how badly. Great own goal, Nick.
Willis:
When Curry says that the models are tuned to the period of interest, I think she is referring to modelers’ admissions that they discard models that do not replicate the abrupt warming shown in the 20th century. She’s said this in both her congressional testimony and the blog post that Schmidt ostensibly replies to. In other words, she’s not referring to the selection of values for parameters in any individual model, but instead to the selection that goes on when models that don’t match that upswing are just never published or used by the IPCC. Because of this selection bias, she argues (correctly in my view) that using those models to attribute the recent warming to CO2 is circular reasoning, since the selection process weeded out any model that didn’t show the uptick.
Here’s the quote she uses from an article describing the tuning of models:
“Climate models ability to simulate the 20th century temperature increase with fidelity has become something of a show-stopper as a model unable to reproduce the 20th century would probably not see publication.” Note the specification of the 20th century temperature “increase” as the characteristic used to selectively weed out models.
Gavin therefore does indeed respond to a straw man by shifting the “period of interest” to begin in 1950, but I think the real “period of interest” is from 1980-2000 since that’s when the instrumental record really starts to take off. If a model shows that “hockey stick” all you need to do is line up the model with the record by selecting the base period to measure your anomalies.
The other problem I have with Gavin’s purported rebuttal is his assumption that you can show that models were not tuned to reproduce an empirical trend by showing the variance in the trends of individual model runs. If you look at Gavin’s graph, the center (average) of all the model runs is right around the GISS values. How precisely does this graph refute a premise that the models used to generate these individual runs were tuned so that the average trend centered around the observed trend? The average of the model runs is, after all, what the IPCC points to as validating the models.
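A minimal sketch of that point, with made-up numbers: an ensemble whose underlying trend has been matched to the observed trend can still show a wide spread of individual-run trends once run-to-run variability is added, so a wide spread by itself does not rule out tuning of the ensemble mean. The 0.8 °C trend and 0.25 °C run-to-run spread below are illustrative assumptions only.

```python
# Minimal sketch: a "tuned" ensemble mean can coexist with a wide spread of runs.

import numpy as np

rng = np.random.default_rng(1)
observed_trend = 0.8                # degC over the period, hypothetical
tuned_mean_trend = observed_trend   # suppose the ensemble mean was matched to the obs
run_to_run_spread = 0.25            # degC, internal variability per run, hypothetical

runs = tuned_mean_trend + rng.normal(0.0, run_to_run_spread, size=40)

print(f"ensemble mean trend      : {runs.mean():.2f} degC (observed: {observed_trend:.2f})")
print(f"spread of individual runs: {runs.min():.2f} .. {runs.max():.2f} degC")
```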
Kurt,
There is a lot of goal-post shifting here. Most people like Willis are saying he chose too short a period. You are saying too long.
But in fact the bit you quote does say 20th century.
In fact, the primary test of models is whether they give a reasonable solution in the no forcing case. If they do that, then it is almost inevitable that they will produce a long term rise with GHGs. Not tuning, just physics. There are weather fluctuations which can mask this for a while. That is not a fault in the model.
“In fact, the primary test of models is whether they give a reasonable solution in the no forcing case. If they do that, then it is almost inevitable that they will produce a long term rise with GHGs. Not tuning, just physics.”
That’s a really clear explanation of why modeled output is of no practical value. The forced response is a characteristic programmed into the model, and the test used to supposedly validate the model is not the accuracy of the future forced model output against the future forced climate response, but instead just whether the theoretical unforced response shown by the model can be judged as “reasonable.”
And whether the period chosen by Schmidt was too long, or too short is not germane. The point is that Curry’s description of how models were tuned mentioned nothing that could be interpreted as being narrowly directed to the linear trend from 1950 to 2010. Schmidt arbitrarily chose this interval and then bizarrely thought that it refuted Curry’s argument.
Tuning/winnowing is a distinction without a difference, and it seems clear to me that the argument raised by Judith Curry, and disputed by Gavin Schmidt, was that climate modelers discard models that don’t show the rise in temperatures at the end of the 20th century. She cites direct quotes from the modelers themselves that say that any model not having this feature won’t see the light of day, and accordingly argues that the models cannot logically be used to attribute the 20th century rise in temperatures to anything because that feature was baked into the models by the procedure used to select them.
Choosing to adopt a picayune interpretation of the word “tuning” avoids her argument; it doesn’t refute it at all. And when the modelers themselves admit to the selection process that forms the factual basis for her reasoning, I think it’s unreasonable to demand that she, or I, or anyone else come up with the data on whether or how often it happens.
Kurt writes: “the models cannot logically be used to attribute the 20th century rise in temperatures to anything because that feature was baked into the models by the procedure used to select them.”
Exactly.
Let’s assume I know nothing of the intentions of our model builders or the relationships assumed by them. I use various inputs (low CO2, high, etc) to measure the model’s response. I conclude via legitimate statistical procedures that CO2 has an effect on climate, which is of course exactly what the modeler intended. It’s basic “black box” testing.
What have I proven? Only that the modeler believes CO2 has an effect on climate. There’s no attribution based on empirical evidence, but that’s exactly what’s being claimed.
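A minimal sketch of that black-box test, with a toy stand-in for a climate model (the logarithmic CO2 relationship and the 3 °C-per-doubling figure are assumptions written into the toy, not results from anywhere):

```python
# Minimal sketch of "black box" testing: the regression recovers exactly the
# sensitivity the modeller built in, so it attributes nothing about the real climate.

import numpy as np

def toy_model(co2_ppm, sensitivity_per_doubling=3.0):
    """Hypothetical black box: warming assumed proportional to log2(CO2/280)."""
    return sensitivity_per_doubling * np.log2(co2_ppm / 280.0)

co2_levels = np.array([280.0, 340.0, 400.0, 460.0, 560.0])   # ppm, test inputs
response = toy_model(co2_levels)

inferred = np.polyfit(np.log2(co2_levels / 280.0), response, 1)[0]
print(f"inferred sensitivity: {inferred:.2f} degC per doubling")  # reproduces the assumed 3.0
```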
Forrest: “Remarkably, he says in the comments that ‘Everyone understands that tuning in general is a necessary component of using complex models. … But the tunings that actually happens – to balance the radiation at the TOA, or an adjustment to gravity waves to improve temperatures in the polar vortex etc. have no obvious implication for attribution studies.’” This contradicts his claims when arguing with Dr. Browning about the exponential increase of potential errors from the coarse grid and time steps WRT N-S and atmospheric physics. In that conversation the implication was that balancing things such as the TOA and the poles and getting the correct profile proved the attribution when runs without added CO2 gave flatter profiles. So there is a conflict here.
For some reason as I read Nick Stokes’ comments, I am reminded of a man who has walked into quicksand…he started off just up to his ankles, but as he struggles and wriggles to get out, he just sinks deeper and deeper!
His next comment will be the equivalent of a bubble…and the one after that will be like a grasping hand disappearing below the surface.
Shame, I was really quite enjoying it.
Charles,
When it comes to scientific matters, you never seem to dip your toe in.
I think Nick is ready to finally give up in the face of the evidence.
Andrew
BA, that would be a first.
Got to come to Nick’s defense on this one – Charles is using movie theater science instead of real science. You won’t ever become totally submerged in quicksand since the density of the sand-water liquid is higher than that of the human body.
Sad that there is no science in the consensus “Climate science”. Probably why all those marchers had no clue why they were marching. Just derelicts picked up from under the Mayo bridge.
https://www.youtube.com/watch?v=tWr39Q9vBgo
Each year, Earth Day is accompanied by predictions of doom. Let’s take a look at past predictions to determine just how much confidence we can have in today’s environmentalists’ predictions.
https://www.lewrockwell.com/2017/04/walter-e-williams/environmentalists-dead-wrong/
Wackoism didn’t end with Carson’s death. Dr. Paul Ehrlich, Stanford University biologist, in his 1968 best-selling book, The Population Bomb, predicted major food shortages in the United States and that “in the 1970s … hundreds of millions of people are going to starve to death.” Ehrlich saw England in more desperate straits, saying, “If I were a gambler, I would take even money that England will not exist in the year 2000.” On the first Earth Day, in 1970, Ehrlich warned: “In ten years all important animal life in the sea will be extinct. Large areas of coastline will have to be evacuated because of the stench of dead fish.” Ehrlich continues to be a media and academic favorite.
https://www.lewrockwell.com/2013/05/walter-e-williams/bring-back-ddt/
“Wackoism”
We should probably use this description more. It is descriptive of a lot of what is going on in our world today.
ned,
I started teaching in 1971 and I accepted the claims by Ehrlich and others as being true, and I passed it onto my students as gospel. I’m now doing penance for the damage I did.
Once the Federal spigot is turned off for this nonsense, it will just die a natural death. No need to debate them. Turn off their air supply.
Climate models are tuned.
Trying to assert otherwise is either dishonest or ignorant.
“Climate models are tuned.”
Yes, but the claim was:
“They use models that are tuned to the period of interest, which should disqualify them from be used in attribution study for the same period “
Well DUH!
Are you asserting that the models can be tuned against a certain period and then can claim accuracy based on that same period? That’s the Texas Sharpshooter Fallacy.
The period of interest they are predominantly tuned to is the only period they match: 1976-2000.
NS,
At the beginning of this thread, mothcatcher claimed that the GCMs are tuned. You challenged his claim. [Nick Stokes April 26, 2017 at 2:53 am] Now you are arguing that the essence of the dispute is about the period of time to which they are tuned. Can you say, “Sophistry?”
“They use models that are tuned to the period of interest, which should disqualify them from be used in attribution study for the same period.”
Nick, are you trying to say:
1. The models are not tuned,
2. The models are tuned to a period “not of interest”, or
3. The models are not disqualified even though they are tuned to the period of interest?
Clyde,
“At the beginning of this thread, mothcatcher claimed that the GCMs are tuned. You challenged his claim.”
I said in that initial comment:
“It is true that there are parameters that are not well established a priori, and are pinned down by tuning to some particular observation. “
No-one disputes that tuning is done and is necessary for various specific issues. Many have quoted the paper by Mauritsen et al., and there have been others. Judith made a specific claim that they tuned to the period of interest (GHG-induced warming) and then used that for attribution. Gavin said that isn’t true. Neither he nor I said that no-one ever tunes anything.
What he was pointing to was a common disconnect in sceptic argument:
“We know they must have rigged it to get that agreement, and besides, it doesn’t agree.”
According to Gavin, what she actually said was:
That statement, as it stands, is absolutely true. Unless he can produce others of Judith’s statements which elaborate on the above statement, I am led to conclude that Gavin has concocted a straw man.
You also said:
Indeed. I linked to a paper on which Mauritsen was second author. He’s widely cited. I’d say he knows what he’s talking about.
Not everyone is sanguine about tuning. Edward Lorenz, arguably the most influential meteorologist in history, and certainly a pioneer climate modeller, said the following:
That appears to set the bar pretty high.
commieBob quotes
Yes, there is an immediate and very obvious failure to achieve this when the TOA imbalance is tuned.
You really stunk up the thread this time with your attempt at spin! Curry is correct. Gavin tried the spin and got nailed for it, and you are trying to spin the spin!
The first rule of laundry is that you do not go into a spin cycle until AFTER the wash cycle! You forgot that.
But isn’t it the author’s point that Gavin’s argument – that the models are so varied they cannot have been tuned – is, by Gavin’s own words, the reason we should not pay any attention to them in the first place?
Sounds to me like Gavin is trying to rebut the argument that the models are invalid in a specific case by saying “no, they’re wrong in general”.
This is called induction.
Gavin is in a desperate position. He is likely to lose his job as both NASA and NOAA are reorganized to work more efficiently toward their primary missions. Is he making these kinds of statements in hopes that he will be forced to retire and become a well-paid activist like his former boss?
So when is this going to happen? I’ve seen no evidence that Trump is working to reorganize either one.
It has started with the political appointees that occupy the top levels of each organization. These individuals are there to assure administration policies are followed. Congress controls their budgets. I did research at EPA for over 20 years and we reorganized about every 3 to 5 years. Sometimes these reorganizations were used to move some individuals out of positions that could have an effect on policy.
Of course in Gavin-world, larger model error margins clip the observation error margins, therefore proving that the models are OK; in short, worse = better. That it is unjustifiable anyway to use frequentist stats unless the model inputs were randomly selected and all output runs retained is just another nitpicky detail to him.
“the models are basically weather forecasting programs”
Just a moment, Mr. Stokes. What about climate?
Andrew
Exactly, Thomas, my reaction was the same. I was telling myself: that’s quite a weird strategy – arguing that the models predict numbers mostly very far from each other, and from reality, and then using this observation as evidence that the climate alarmists are doing something right.
I think that Gavin addressed this own goal to those skeptics or undecided folks who are highly allergic to any “tuning” and who think it’s right for models and scientists not to pay attention to the empirical evidence. Well, I don’t think that sensible people hate “tuning” this much because this opposition is equivalent to throwing the empirical character of science out of the window.
Science should choose theories and models that do a very good job in explaining and predicting natural phenomena and what Gavin showed is another piece of evidence that the climate alarmists aren’t doing anything like that at all.
Wonderful post by Nick Stokes (April 26, 2017 at 4:48 am) quoting Mauritsen et al. So much in it contradicts his assertions, such as:
“details such as the years of cooling following the volcanic eruptions, e.g., Krakatau (1883) and Pinatubo (1991), are found in both the observed record and most of the model realizations.”
Even better than Gavin.
This comment shows that some limited tuning has been built into most models when they back cast.
Because there is no way a model can know when to predict a volcano in the past or the future.
Hence a historical framework has been incorporated into most models. Right, Nick?
For a mathematician, Gavin’s certainly not very good with any mathematical analyses.
Good grief, you’re right, Schmidt isn’t a scientist; now I have a mental image of Mann treating Schmidt like Sheldon Cooper treats Wolowitz!
Same article
“models are generally able to reproduce the observed 20th century warming of about 0.7 K,”
is completely at odds with
The “models used” (otherwise known as the CMIP5 ensemble) were *not* tuned for consistency for the period of interest (the 1950-2010 trend is what was highlighted in the IPCC reports, about 0.8ºC warming) and the evidence is obvious from the fact that the trends in the individual model simulations over this period go from 0.35 to 1.29ºC! (or 0.84±0.45ºC (95% envelope)).”
and
“Yet, the span between the coldest and the warmest model is almost 3 K, distributed equally far above and below the best observational estimates, while the majority of models are cold-biased [for the observed 20th century warming of about 0.7 K only]”
So even though the models have a fitted temperature increase range and known volcanic eruptions “fitted in” [impossible, Nick, for an untuned model, by the way], they are still all over the shop, as Gavin and E Smith say.
Amazing.
They have in the past used aerosols to tune the runs. Do all the model runs use the same or nearly identical aerosol input files?
From what I have read, not even close.
micro6500:
You ask
No, they use values that differ by a factor of 2.4.
Please read my post below for data, references and explanation.
Richard
Thanks, Richard. That answered two questions at once. They do tune the runs with aerosols, but because the models are different, or tuned differently, the aerosol correction factors are changed to get the correct prior temperature trends.
Thanks.
If two models use aerosol numbers that differ by a factor of 2.4, then that completely blows out of the water the claim that the models are making predictions from first principles.
MarkW:
Of course the climate models are not derived from first principles. A model becomes a curve fitting exercise when it uses any parametrisation.
Of more importance is the invalidity of climate model projections which I relate in my above anecdote.
Richard
Even Hansen, in some lapses of scientific honesty, admitted publicly that aerosol data are “out of the hat”:
“Even if we accept the IPCC aerosol estimate, which was pretty much pulled out of a hat, it leaves the net forcing almost anywhere between zero and 3 watts”
source : http://www.columbia.edu/~jeh1/2009/Copenhagen_20090311.pdf
It was a few years later that someone came up with aerosol data that was hard to discount, and that upset the apple cart. I’m wondering if that was a little before some of the newer generations of models were introduced.
Can we run these models with current Mars’ parameters and see how well they can predict a more stable and less complex atmosphere?
I expect that many of the parameters would be negligible, except for the 95% CO2 content of the atmosphere. That one would be much more exaggerated and we’d readily see that the models don’t reflect reality in terms of “Greenhouse Gases”.
Heck.
I’d settle for any model that could run the Moon’s measured surface rock temperatures correctly.
Then Mercury’s and Pluto’s assumed “surface temperatures”, based on their “total albedo” and (lack of gases) simple rock surfaces.
Then the simpler, no-water, no-ice, no-oceans, no seasonal (plants) albedo changes, high-CO2 atmosphere of Mars.
Nick Stokes,
as always you’re nothing but a retarding element.
Happy to receive attention by holding a blog hostage.
What’s your contribution?
Think.
Wouldn’t be much of a discussion here if Nick hadn’t been around, would it? We ought to thank him for his contribution.
But defending Gavin Schmidt on this must surely tax even his considerable ingenuity….
It is correct that there is not a master tuning knob for matching the data, but it is not true that the models are not tuned. There is leeway in choice of forcing data–Kiehl showed a tradeoff that implies a tuning. Those models using more aerosol forcing had higher GHG forcing (to balance out).
Kiehl J (2007) Twentieth century climate model response and climate sensitivity. Geophys Res Lett 34:L22710.
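A minimal sketch of the trade-off Kiehl describes, using a toy energy-balance relation (warming ≈ sensitivity × net forcing) and made-up values rather than the figures from the paper: a more sensitive model with stronger aerosol cooling and a less sensitive model with weaker aerosol cooling can produce nearly the same 20th-century warming, so matching that warming pins down neither quantity.

```python
# Minimal sketch of the Kiehl (2007) trade-off; all numbers are illustrative only.

def toy_warming(sensitivity_degC_per_Wm2, f_ghg_Wm2, f_aerosol_Wm2):
    """Toy equilibrium response: sensitivity times net forcing."""
    return sensitivity_degC_per_Wm2 * (f_ghg_Wm2 + f_aerosol_Wm2)

model_a = toy_warming(0.8, f_ghg_Wm2=2.6, f_aerosol_Wm2=-1.5)  # sensitive, strong aerosol cooling
model_b = toy_warming(0.5, f_ghg_Wm2=2.6, f_aerosol_Wm2=-0.8)  # less sensitive, weak aerosol cooling

print(f"model A 20th-century warming: {model_a:.2f} degC")
print(f"model B 20th-century warming: {model_b:.2f} degC")
```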
The tuning of clouds and albedo and convection and all the rest is not done in isolation–there is always an eye on how it makes the model perform.
Gavin himself has admitted that different models incorporate different (or competing as he says) physics. If different physics in the models still matches the data (sort of) then something somewhere has been tuned and one cannot infer that the models are right because they are based on physics. Clouds are a key factor that even the IPCC admits cannot be modeled at present.
Schmidt GA, Sherwood S (2015) A practical philosophy of complex climate modelling. Eur J Philos Sci 5:149-169.
Nick says: “If they were tuning the models to the data, they would agree. If they were tuning the data to the models, they would agree. But in fact, for individual runs, they do not agree. So neither tuning is being done.” This does not follow. If the overall framework is wrong, the N-S equations can’t be solved correctly, and some things are just unknown (clouds), you can tune all day and not get good agreement. This is particularly so because it would take thousands of runs to tune all variables at once, and this is computationally impossible.
Whoever responded to my contribution, be assured I take your opinion seriously.
The only problem is that I stumble through an unmanageable WordPress.com jungle.
Best regards – Hans