Dr. Patrick Michaels, former Virginia State Climatologist, has some strong comments about climate models during an interview with Mark Levin:
“It is nowhere near as warm as it’s ‘supposed’ to be,” says climatologist Dr. Patrick Michaels. “The computer models are making systematic, dramatic errors.”
There are 32 different computer models used to predict the climate, all of them run by government entities. And all of those models, except for the Russian model, are predicting far, far too much warming. The Russian model pretty much matches reality.
Because they are “parameterized” (fudged), says Michaels. “We put in code steps that give us what we think it ‘should’ be.” The models were ‘tuned.’ “We forced the computer models to say, aha! human influence, CO2 and other stuff.”
The models “tell us what we wanted to see,” says Michaels. The models have been tuned “to give an anticipated, acceptable range of results.”
Phony models?
In order to clarify what he is hearing, interviewer Mark Levin paraphrases Michaels: “so you’re telling us that we have a massive bit of public policy that has an enormous effect on society that is built on phony models.”
Michaels nods his head ‘yes.’ “It’s built on a house of cards,” says Michaels.
One of my favorite links to adjusting the empirical data to match the hypothesis:
Correcting Ocean Cooling
Indeed, an excellent example of the junk-science mantra:
“If the data do not fit the models, fudge the data.”
After reading the article, and their throwing away bad ARGO floats etc., I came to think about the water’s max density at 4°C. How much of the ocean is warmer than 4°C and how much is colder, and how does that influence the thermal expansion?
Thermal expansion is one of those issues that seems to be accepted at face value when it’s probably a whole lot less than what it’s claimed to be. For example, if the Gulf of Mexico heats up, does that affect sea level in San Francisco? Well no, it wouldn’t*, but you can bet that local areas are thrown into the calculations for average ocean temperature when you’re being told that the estimated rate of thermal expansion, or thermosteric sea level rise is so many millimeters per year.
* Some people probably think it would.
The catch here is that the 4°C density maximum for water is only fully true for the nearly pure water in a chemistry lab. Actual fresh water is close enough. For salt water, it isn’t nearly as true. The thermal expansion curve is also different for 3% saline water versus laboratory ‘pure’ water. These kinds of errors never seem to be addressed, nor properly propagated.
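To put rough numbers on that point, here is a minimal sketch (my own crude toy fit, not the TEOS-10 or EOS-80 oceanographic equation of state) contrasting fresh water, whose density maximum sits near 4°C, with ~35 psu seawater, whose density maximum lies below the freezing point, so over its liquid range density simply falls as temperature rises:

```python
# Toy density model (a rough illustrative fit, NOT a real equation of state):
# fresh water is densest near 3.98 C; adding salt raises density
# (~0.78 kg/m^3 per psu) and pushes the temperature of maximum density
# down (~0.22 C per psu), so at ~35 psu the maximum is below freezing.
def density(temp_c, salinity_psu=0.0):
    t_max = 3.98 - 0.22 * salinity_psu          # temperature of maximum density
    return 1000.0 + 0.78 * salinity_psu - 0.0068 * (temp_c - t_max) ** 2

# Fresh water: densest near 4 C, so both colder AND warmer water is less dense.
print(density(0), density(4), density(8))

# ~35 psu seawater: t_max is about -3.7 C, below the ~-1.9 C freezing point,
# so over the entire liquid range, warming always means expansion.
for t in (-1.9, 0.0, 4.0, 10.0):
    print(t, round(density(t, 35.0), 3))
```

The point the toy makes is qualitative: for open-ocean salinity the 4°C maximum-density argument does not apply, so the sign of thermal expansion is the same everywhere above freezing, even though its magnitude still varies with temperature and salinity.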
See my previous guest posts, ‘The trouble with models’ and ‘Why models run hot’ for details.
Patrick Michaels is an employee of the CATO Institute, which is not mentioned in the blurb. As such, he is most likely biased, given CATO’s majority funding from oil and coal companies.
and the idea that models are uncertain? well, duh! they are _models_! (sorry, I’m a systems engineer by training. this is obvious to anyone with training and experience in modeling – be it climate, computer systems, military operations, whatever).
Typical, can’t refute the evidence, so you attack the messenger.
Thank you for admitting that you have already lost.
chris
You said Michaels “is most likely biased.” I think that you are playing fast and loose with logic. He MAY be biased, and the probability of bias is likely to be higher than for someone with no financial involvement. But there is no certainty, and, therefore, it should just be cause for a higher level of scrutiny. In any event, his argument(s) should stand on its/their own merits, and not be dismissed by what is essentially an ad hominem attack. After all, academics have an incentive to publish results that agree with the consensus view, and to write grant proposals with the right ‘buzz words,’ but you don’t attack them for having a bias. You are displaying your particular bias.
It also hasn’t been shown that CATO gets most of its funding from oil companies.
That’s just the standard claim that the acolytes roll out when they can’t actually refute anything an opponent of theirs says.
Based on your “systems engineer” training, what do you call it when 31 of 32 models are consistently way off in the wrong direction?
I call it, “most likely biased.”
When the climate models fail to track reality I call it a falsification of the AGW meme, because the excessive projections are all that AGW meme has to recommend itself upon in the first place.
Natural variability, UHI, and concerted, deliberate, systematic corruption of the data record account for the rest of the global WX ‘change’ fairly adequately, without too much bother over a non-condensing GHG trace-gas effect, which is truly, inconveniently, too rarefied to be effectual.
Alarmists are funded big time by big oil. Most sceptics hardly at all. But that’s okay because “progressives” are on the proper side of the issue.
https://theclimatescepticsparty.blogspot.com/2013/07/the-big-lie-sceptics-funded-by-big-oil.html?m=1
Chris, I don’t ever get an answer to this question, but silence is the most eloquent affirmative answer one can get!
“Are you as alarmed about anthropogenic catastrophic global warming today as you were a decade or two ago, before the ‘Dreaded Pause’, before Climategate, before IPCC model predictions turned out to be running 300% too hot compared to observations? Or not so much?”
“Criteria for Selecting Climate Scenarios” IPCC 16 May 2011, Sourced from:
http://www.ipcc-data.org/ddc_scen_selection.html
This now gets a 404 Not Found – why are we not surprised. It might be retrievable from the Wayback Machine or similar. This is what it said:
“Criterion 1: Consistency with global projections. They should be consistent with a broad range of global warming projections based on increased concentrations of greenhouse gases. This range is variously cited as 1.4°C to 5.8°C by 2100, or 1.5°C to 4.5°C for a doubling of atmospheric CO2 concentration (otherwise known as the “equilibrium climate sensitivity”).”
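The two ranges quoted there are linked by the standard logarithmic forcing relation ΔT ≈ S · log₂(C/C₀), where S is the equilibrium climate sensitivity per doubling. A quick sketch of the arithmetic (the relation is the textbook definition; the concentration values are merely illustrative):

```python
import math

def warming(sensitivity_c, co2_ppm, co2_baseline_ppm=280.0):
    """Equilibrium warming for a CO2 change, given sensitivity S per doubling:
    dT = S * log2(C / C0)."""
    return sensitivity_c * math.log2(co2_ppm / co2_baseline_ppm)

# A doubling (280 -> 560 ppm) by definition yields exactly S:
print(warming(1.5, 560))   # 1.5
print(warming(4.5, 560))   # 4.5

# For a rise from 280 to ~410 ppm (roughly the 2019 level), the same
# 1.5-4.5 C sensitivity range implies about 0.8-2.5 C at equilibrium:
print(round(warming(1.5, 410), 2), round(warming(4.5, 410), 2))
```

The factor-of-three spread in the quoted sensitivity range carries straight through to the projections, which is why the model envelope is so wide.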
The sensitivity nonsense was mandatory. It was hard-wired into the models, and all but one complied. Presumably the IPCC were unable to intimidate the Russians, but couldn’t leave their model out, so its actual results are shown in pale grey, down at the bottom where maybe no one will notice …
Chris sez:
Patrick Michaels is an employee of CATO Institute, which is not mentioned in the blurb.
Chris — typical Alinsky tactic. Use the other rules too?
https://bolenreport.com/saul-alinskys-12-rules-radicals/
“chris July 2, 2019 at 2:41 pm
(sorry, I’m a systems engineer by training.)”
Yeah! I have met a lot of people like you over the years. Lots of training, lots of paper qualifications, little real world experience, mostly useless.
Should have added this: “parametrized” is not “fudged.” If I need to explain that, then the reader isn’t qualified to understand the difference.
sheesh
Speaking of not understanding, parametrized is not the logical equivalent of “fudged”; however, it is how the fudging is most easily introduced.
If they’re calibrated to a systematically, ‘miraculously’, always-cooler past than was actually observed (plus guided by the zeitgeist of UHI-effect hysteria), they’re fudged.
“Who you gonna believe, me or your own eyes?” – Chico Marx
If “the models” (as if they are mysterious manifestations of alien culture) are so wrong, why is it so damn hot here in the National Capital Area?
Looking forward to M1 tanks sinking into melting tarmac in two days. … 🙂
We have got to raise some money so we can buy us a better class of troll.
One hot area is proof of global warming? Really?
…If “the models” (as if they are mysterious manifestations of alien culture) are so wrong, why is it so damn hot here in the National Capital Area?…
If I need to explain that, then you aren’t qualified to understand why.
And to think, he actually claims to be an engineer.
Ever heard about summer?
chris,
You are confusing weather with climate. The climate has only seen an average increase in global temperature of about 1 °C in the last century, with the majority of that in winter and at night. That isn’t going to melt the tarmac!
More importantly, the models are wrong because they don’t agree with the measured temperatures! How do you not understand that?
has Loydo or Griff got a new sockpuppet?
“If “the models” (as if they are mysterious manifestations of alien culture) are so wrong, why is it so damn hot here in the National Capital Area?”
There is a high-pressure weather system southeast of you that is pumping warm air into your area and into about half of the rest of the United States right now. No CO2 required.
“Looking forward to M1 tanks sinking into melting tarmac in two days.”
Trump wants to put two APCs and two Abrams tanks on display for the Fourth of July celebrations and the anti-American Democrats throw a fit. They are especially disturbed about the military display because the anti-American Democrats are also anti-military. They hate Trump and they hate the U.S. military, and Trump’s promotion of the military as a force for good in the world infuriates them.
Anti-American Democrats. That’s what the Democrat Party has become. Don’t go by their words, because they will deny they are anti-American, but go by their deeds. That will tell you the truth.
When the Democrats display their anti-Americanism, they should be publicly called on it. They should be called anti-American, because that’s what they are.
Chris sez:
If “the models” (as if they are mysterious manifestations of alien culture) are so wrong, why is it so damn hot here in the National Capital Area?
Because you’re in an enormous sea of concrete, asphalt, cars, trucks & buildings, Einstein.
You forgot about the waste heat from all of our money being spent.
‘There are 32 different computer models used to predict the climate’
Sorry. This is absurd.
The earth has many climate regions. IT DOES NOT HAVE A CLIMATE. What the heck is ‘predict the climate’ supposed to mean?
Climate science is the only field of endeavor where you can be wrong everywhere, but still be right on average.
PS: Even on average, they are wrong, but that’s the claim of the trolls.
MarkW July 2, 2019 at 3:24 pm
Beware of averages. The average person has one breast and one testicle. – Dixie Lee Ray
The shrill shills for climate catastrophism have even swung away from the IPCC because of the post-Pachauri moderation of its views, so you can even use IPCC opinions on the lack of changes in weather extremes, the absence of a connection with the CO2 rise, and the comedown in the warming-threshold worry to 1.5°C above 1850 by 2100, instead of 2°C above 1950!
Even if it is still exaggerated, it’s useful against the new world-government apparat, and good to assure ordinary folk that nothing bad is really going to happen.
Essentially the worry is another rise of ~0.6°C by 2100.
Therein lies the reason why climate model owners never seek to have their models officially tested, verified and certified.
1-for-32 is an improvement. A few years ago they were 1-for-42.
https://rclutz.wordpress.com/2015/03/24/temperatures-according-to-climate-models/
Duane July 2: he is right about the word ‘model.’
It’s the updated version of Joseph Stalin’s famous quote: “It does not matter how many people vote; what does matter is who counts the votes.”
If a model is created to give the result the person making it wants, then it’s a false model, but that does not mean that all models give false results. The old saying, “The proof of the pudding is in the eating,” comes to mind.
MJE VK5EDLL
I am always surprised by the ease with which people claim the models are wrong for reasons X, Y and Z. Given that the models are open source, if you think a model is wrong for a particular reason then you can easily change the code and see if it makes a difference. Patrick Michaels would appear to have had 30 years to do just that while employed as a climatologist, and he also seems to know precisely where the “fudge factors” are that make the climate models inaccurate. Hence he would be doing the world a huge favour by correcting the models and showing how his improved model correctly predicts the global temperature.
Izaak,
You make it sound trivial to wade into a million lines of poorly documented, Fortran spaghetti-code, written by a team of programmers, and then run it on your desktop computer.
I have written short programs, which in the absence of detailed documentation, (because I didn’t think it necessary at the time), have proven so intractable a couple of years later that it was easier to start over with all new code. That isn’t practical with something like a million lines of code!
Everyone acquainted with the modeling knows just where the problem areas are. One of the most significant is the inability to handle the energy exchanges in clouds, using the same spatial resolution as the rest of the measurements and solutions of the differential equations. That means, the clouds have to be handled by parameterization. That means, someone has to make some subjective decisions about how to simplify the energy exchanges. Part of the subjectivity involves trial and error: “That doesn’t look right! Let’s see what happens when I change this constant. There, that’s what I think it should be doing!” It becomes a self-fulfilling prophecy where the output of the models looks like what the modeler(s) think it should look like. It isn’t just all physics.
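The knob-turning described above can be illustrated with a deliberately crude zero-dimensional sketch (my own toy, not an excerpt from any actual GCM): a one-equation energy-balance model in which a single cloud-albedo parameter is ‘tuned’ until the equilibrium temperature ‘looks right’:

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
SOLAR = 1361.0     # solar constant, W m^-2

def equilibrium_temp_k(cloud_albedo, emissivity=0.612):
    """Zero-dimensional energy balance: S(1 - a)/4 = e * sigma * T^4.
    cloud_albedo is the tunable 'parameterization' knob; the effective
    emissivity of ~0.612 stands in for the greenhouse effect."""
    absorbed = SOLAR * (1.0 - cloud_albedo) / 4.0
    return (absorbed / (emissivity * SIGMA)) ** 0.25

# 'Tuning': nudge the albedo knob until the output 'looks right' (~288 K).
for albedo in (0.28, 0.30, 0.32):
    print(albedo, round(equilibrium_temp_k(albedo), 1))
```

A ±0.02 change in the albedo knob moves the equilibrium by roughly ±2 K, which is the point: with enough such free parameters, a modeler can steer the output toward whatever ‘looks right’ without touching the underlying physics at all.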
Clyde,
it is not trivial, but the fact remains that the statement that the climate models are wrong because of X can be tested by inspecting the source code, locating the error, correcting it, and demonstrating that the improved code gives better results.
Suppose, for example, I make the counter-claim that the models give inaccurate results because they use incorrect historical forcings. How would you judge between the two claims? The recent paper entitled “A limited role for unforced internal variability in 20th century warming” used improved historical forcings and gets the average temperatures almost spot on. So it would seem that much of the difference between the models and observations is due to the forcings and not the parametrisation.
It would be nice ‘n helpful if the 31 models that are clearly wrong were tossed out and not seen or used again in publication.
Rubbish must go in the bin, or else it accumulates, then invites pests and stinkers to cohabit.
Earth’s material palaeo-record shows that genuine planetary climate-change detection takes centuries to resolve.
Furthermore, if a Russian model predicts cooling for a few decades I’m sorry but that’s just long-term weather cycle prediction and not an example of a working global climate model.
Any way it’s cut, if 31 models are not consistent with reality, then what we SCIENTIFICALLY DO KNOW is that it will take centuries to detect a clear-cut global change of actual climate trend (and that presumes any substantial change occurs at all within the period, as there may be effectively little to none over the coming interval).
Izaak
If your proposal is other than trivial, then the barrier to implementation by an individual who is not a professional Fortran programmer, is not acquainted with using super-computers, and lacks the funding to pay for a computational run, precludes doing it. Put another way, it is a theoretical answer that is impractical to implement. It is akin to saying that if a climatologist should want to know the temperature on the surface of the moon, all (s)he has to do is go to the moon and stick a thermometer into the lunar regolith. Simple solution!
As to your alternative claim, has the single claim been replicated?
Clyde,
If the barriers are so immense, then why do you believe Patrick Michaels when he says he knows why the climate models are all wrong? How does he know, or is he just guessing?
He makes a definite statement about why climate models are wrong, and you appear to be saying that he is right but that it is almost impossible to prove. And if Patrick Michaels hasn’t done the work, then surely he is just guessing and his claim has zero evidence.
Izaak Walton,
You asked, “How does he know or is he just guessing?” That is an excellent riposte. I suppose one would have to ask him.
However, the prima facie evidence is that the 31 models do a poor job of predicting temperatures, and an even worse job of predicting precipitation. Therefore, it is obvious that there is SOMETHING wrong with the models. While the alarmists often claim that the models are based on physics, the reality is that while there is physics in the models, there is also subjective parameterization that can override the physics. As I pointed out above, it is well-known that the energy exchanges involving clouds cannot be handled from first principles and have to be parameterized. Therefore, they are the most suspect. You are right that there isn’t any hard proof as to what the problem(s) is/are. [One would think that the modelers themselves would have explored this!] At one point in time I tried to find information on the Russian model and couldn’t come up with any. A comparison between the Russian model and the other 31 models is something that could be done without access to a super-computer and would be instructive.
Clyde,
Again, look at the most recent paper, “A limited role for unforced internal variability in 20th century warming” by Haustein et al. They clearly show that using the best available forcings they can correctly simulate all of the 20th-century changes in average temperature. There is a discussion of it at RealClimate.org (http://www.realclimate.org/index.php/archives/2019/06/unforced-variations-vs-forced-responses/) which shows their key results. There is no spurious warming in the climate model. So the claim that “31 models are wrong” is outdated, and even if correct, Haustein’s work shows clearly that a major source of error is the use of incorrect forcings.
Izaak
I gave your link to the article on forcings a cursory read. I don’t think it is worth my time to try to digest it completely. They remark, “… In contrast to those earlier studies, we were able to reproduce effectively all the observed multidecadal temperature evolution, including the Early Warming and the Mid-Century cooling, using known external forcing factors (solar activity, volcanic eruptions, greenhouse gases, pollution aerosol particles).”
Prior to the satellite era, the solar activity (i.e. TOA insolation) was not well known. Indeed, it used to be called the “solar constant.” Similarly, volcanic eruptions were poorly known prior to the satellite era; even today we sometimes observe evidence of an eruption, but don’t know where or how large. Carbon dioxide wasn’t monitored routinely until 1959, and other greenhouse gases weren’t well characterized, again, until fairly recently. In short, actual measurements of the forcings didn’t occur until after about 1980! Therefore, between 1840 and 1980, they must be relying on estimates or proxies. So, any claim that they have improved our understanding of temperature changes by using better forcing data is not supported by the historical facts.
The issue of the veracity of the climate models is best demonstrated by a graph that David Middleton has used numerous times showing the CMIP-5 results compared to the historical temperatures. I have similarly demonstrated that the model used by Hansen in 1989 to predict temperatures doesn’t agree with his own historical temperature data.
Clyde,
You state that the forcings were not measured until after 1980. In which case, how do you know that inaccurate forcings don’t account for the difference between models and measured temperatures? The paper I mention shows clearly that using better forcings gives more accurate results, as one would expect. And again, the CMIP-5 models are out of date, since Haustein’s results show that excellent agreement is possible if you use the correct forcings. You do not appear to want to discuss Haustein’s results, since they clearly show that models are accurate and thus can be used to make predictions.
Izaak
You stated, “The paper I mention shows clearly that using better forcings gives more accurate results as one would expect.” The point of contention is whether the forcings ARE better. For the period of 1840 to the present day, only those measured since ~1980 are actual values. Everything before that is a subjective estimate. That hasn’t changed and therefore there is no improvement in the majority of the forcings. And, how much better are the post-1980 forcings used by Haustein et al. compared to what others have used?
You are right! I do not want to discuss something that obviously is indefensible.
Izaak,
P.S. What you are suggesting that Michaels should have done, is what apparently the Russian modelers have done.
Izaak, being able to recognize that fudge factors are being introduce can be straightforward for someone knowledgeable of the issue. Knowing what the true factors are is quite a different matter.
I could produce a model showing that you evolved from an anteater. You would likely be able to shred the model completely, but would you be able to model accurately just how you did evolve?
They put in values that produced what they wanted the models to produce. No one has been able to produce a valid model because no one completely knows all the natural contributions to climate, much less the weighting factors of those contributions, or how they interact with each other.
It is looking more and more that Miskolczi is correct. It is impossible for additional CO2 to add to the greenhouse effect. The atmosphere maintains a constant optical depth. Miskolczi has derived that constant both through theoretical and empirical means. Measured decreases in atmospheric humidity offset any increase in CO2 to maintain the constant. This is clearly visible in measurements of atmospheric humidity over more than 60 years!
Ask yourself, why does the temperature always return to the norm after an El Nino or La Nina?
Unless someone can prove otherwise, my understanding of why nearly all computer models forecast the temperature too high is that they have a bad assumption built in to their programming. The bad assumption is that CO2 causes most warming and will continue to do so. In addition, they may also have too much emphasis on positive feedbacks in the programming. Neither positive nor negative feedbacks have been measured with any accuracy. However you will not get a “climate scientist” to admit this.
I believe the problem is much more simple. The atmosphere is far too complex to model at a level where you use precise physics. That means they are not really modeling the physics. They model how they “believe” the average of billions of physical interactions will work out. The word “believe” is the one that gets them the results they expect.
The Russian model is probably a lot simpler and, if I had to guess, probably doesn’t get into feedback. That is why their results are far cooler. Still wrong, just not as wrong.
Richard,
“… but some models are useful.” It would appear that only the Russian model has enough skill to actually be useful.
If the IPCC really knew what they were doing, they would by now have only one model, without any parameterization, and that model would reasonably have predicted today’s global temperatures. Such is not the case, so the IPCC does not really know what they are doing. Funding of the IPCC should stop.
According to the IPCC itself, there is no such model that can predict climate evolution :
“The climate system is a coupled non-linear chaotic system, and therefore the long-term prediction of future exact climate states is not possible.”
https://www.ipcc.ch/working-group/wg1/?idp=36
Strangely, the IPCC has removed direct access to the chapter in which they mention the above statement, but Google still has it in its memory.
The only bunch of models that gave some realistic predictions are related to the C scenario of James Hansen :
– a scenario of “draconian emission cuts” in which humanity holds CO2 emissions constant from 2000 onward.
https://wattsupwiththat.com/2018/06/30/analysis-of-james-hansens-1988-prediction-of-global-temperatures-for-the-last-30-years/
The fact is that CO2 emissions since 2000 correspond to the “business as usual” scenario (scenario A).
So much for the accuracy of these pizza spaghetti models salza peperoni.
still going to tax CO2 even after you posted your conspiracy theory
Strangely, the IPCC has removed direct access to the chapter in which they mention the above statement, but Google still has it in its memory.
TAR 14 is available here:
http://www.thestupidithurts.org/wp-content/uploads/2018/12/TAR-14.pdf
If there are 32 different models, all of which run hot, I would say that either some really bad guesses are being made (very unlikely), or it’s left-wing ideology at work.
MJE VK5ELL
Patrick Michaels is not a climatologist
https://www.desmogblog.com/patrick-michaels
Please give us the qualifications for a climatologist. DeSmogBlog would say Al Gore is a climatologist.
Leo
Which of the high-visibility ‘climatologists’ actually have a degree in climatology?
still going to tax CO2 even after you posted your conspiracy theory
“It is nowhere near as warm as it’s ‘supposed’ to be,” says climatologist Dr. Patrick Michaels. “The computer models are making systematic, dramatic errors.”
It’s actually warmer
Clearly a false statement. No proof provided.
Leo
“It’s actually warmer.”
-1
I doubt he gets served at the Red Hen.
When are deliberate deceit and lies going to be punished? Time after time, scientists who know and have proof of these misdemeanours (I’m being polite) should activate honest and respected folk within the scientific community, and the perpetrators should be brought to book, shamed, and fined heavily, to deter them and those in higher places from further abuse.
Trump was and is right about the swamp.
This is germane to this fine site, and sadly politics rears its head in a bad way.