Climate science needs a critical review by skeptical experts

Guest essay By Myron Ebell

WASHINGTON — Is global warming a looming catastrophe? President Donald Trump has often said he doesn’t think so even while his administration continues to release official reports warning that it is.

The president will soon find out who is right by convening a high-level commission to do a critical review of the fourth National Climate Assessment issued last November and other government reports.

Surprisingly, most of the climate science funded by the federal government has never been subjected to the kind of rigorous and exhaustive review that is common practice for other important scientific issues and major engineering projects.

For example, when NASA was putting men on the moon, every piece of equipment and every calculation was scrutinized from every possible angle, simply because if anything went wrong the mission would fail.

Serious problems and shortcomings with official climate science have been raised repeatedly in the past by highly qualified scientists such as Princeton’s brilliant physics professor William Happer, only to be ignored or dismissed by the federal agencies in charge of producing the reports.

Yet the conclusions and predictions made in these official climate science reports are the basis for proposed energy policies that could cost trillions of dollars in less than a decade and tens of trillions of dollars over several decades.

Given the magnitude of the potential costs involved, taking on trust the bureaucratic processes that have led to official consensus is simply foolish. Thus the review to be undertaken by the proposed President’s Commission on Climate Security is long overdue.

To mention only three major issues among many that need to be scrutinized:

First, the computer models used have predicted far more warming than has occurred over the past 40 years. Why such models have failed, and why they are still being used, are important questions.

Second, predictions of the various negative impacts of warming, such as sea level rise, are derived from highly unrealistic scenarios; and positive impacts, such as less ferocious winter storms, are minimized or ignored. What would a more honest accounting of all the possible impacts of climate change look like?

Third, surface temperature data sets appear to have been manipulated to show more warming in the past century than has occurred. The new commission should insist that the debate be based on scrupulously reliable data.

Since news of the proposed review leaked out in February, a furious campaign to stop it has been mounted by the federal climate bureaucracy and its allies in the climate industrial complex.

On the surface, this seems puzzling. If the alarmists are confident that the science contained in the official reports is spot on, they should welcome a review that would finally put to rest the doubts that have been raised.

On the other hand, their opposition suggests that the science behind the climate consensus is highly suspect and cannot withstand critical review. In other words, they’ve been peddling junk and are about to be found out.

A press release from one alarmist pressure group was headlined: “58 Senior Military and National Security Leaders Denounce NSC Climate Panel.”

Denouncing an expert review seems a most inappropriate response, especially to one that is designed to be open and subject to further review by other experts, such as the National Academy of Sciences.

I wonder if environmental pressure groups have ever denounced an additional environmental review of a project — for example, the Keystone XL pipeline — that they are trying to stop.

Two prominent promoters of global warming alarmism recently published an op-ed in which they accused the Trump administration of using “Stalinist tactics” to try to discredit the climate science consensus.

Let’s hope that they’re not as ignorant about science as they are about history. It’s the enforcers of climate orthodoxy and opponents of open debate who are using Stalinist tactics.

———

ABOUT THE WRITER

Myron Ebell is director of the Center for Energy and Environment at the Competitive Enterprise Institute. Readers may write him at CEI, 1310 L St. NW, Washington, D.C. 20005.

This essay is available to Tribune News Service subscribers. TNS did not subsidize the writing of this column; the opinions are those of the writer and do not necessarily represent the views of TNS or its editors.


Submitted to WUWT by Mr. Ebell

©2019 Tribune Content Agency, LLC

Distributed by Tribune Content Agency, LLC.

https://www.latimes.com/sns-tns-bc-climatechange-panel-pro-20190328-story.html

steven mosher
March 28, 2019 6:13 pm

“Third, surface temperature data sets appear to have been manipulated to show more warming in the past century than has occurred. The new commission should insist that the debate be based on scrupulously reliable data.”

1. Wrong. Overall, the warming trends of adjusted data are lower.
2. The GWPF tried this and quit.
3. Bob Mercer, Trump’s biggest donor, already funded Judith Curry to do this. The record was confirmed.

Mike
Reply to  steven mosher
March 28, 2019 9:30 pm

“1. Wrong. Overall, the warming trends of adjusted data are lower.”

All the adjusted data I have seen has the past being adjusted DOWN

Patrick MJD
Reply to  Mike
March 28, 2019 10:38 pm

Famously, the BoM ACORN data sets: in V2, the latest, the past is adjusted down, AGAIN, from V1.

Anthony Banton
Reply to  Mike
March 29, 2019 12:44 am

“All the adjusted data I have seen has the past being adjusted DOWN”

Well then I’d suggest you follow what the likes of Stephen and Nick say here on the subject. That is, if this place is your only source of climate science.

https://cdn-images-1.medium.com/max/1600/0*0oH-bP8FYQwLRWbn.

Spot the difference APART from the adjustment UP prior to the 1950’s.
(And that graph is taken from a Heartland

Dr Deanster
Reply to  Anthony Banton
March 29, 2019 5:22 am

Anthony …. I went and read your link, and I find a lot of cherry-picked, subjective claptrap. In fact, your link is a perfectly good reason why a panel of interested and disinterested parties should convene and iron out the truth. ……. Funny, the folks in your link oppose such. Wonder why?

Reply to  Anthony Banton
March 29, 2019 7:49 am

Falsehoods and false citations, Banton!

That is not from a Heartland piece.
The absurd arguments against Dr. Spencer and Heartland article attributions are all based on lies and falsehoods.

Unsurprising.

lee
Reply to  Anthony Banton
March 29, 2019 3:05 am
Reply to  lee
March 29, 2019 7:11 pm

The direct link to http://www.waclimate.net/acorn2/ provides a more comprehensive analysis of the difference between Australia’s ACORN 2, ACORN 1 and RAW temperature history.

It’s worth being aware that a few days ago the BoM updated its ACORN-SAT station catalogue (http://www.bom.gov.au/climate/data/acorn-sat/stations/#/23090). It’s actually a bit more informative than the catalogue used since 2011 as it now includes adjustment summaries for all 112 stations with dates, temperature adjustments, neighbouring stations used for area averaging, etc. Most of this info has always been accessible from other BoM documents but it’s handy having it all on the same page.

The ACORN dataset itself is no longer a web page with thousands of daily temps for whichever station you choose, but an FTP link via which you can download csv files for ACORN 1 and ACORN 2 dailies in min and max. Probably more convenient for researchers with an FTP app and software to decompress the files, but I’m not sure it’ll be appreciated by people with basic computer and software skills.

A bit of good news is the Kalumburu station max file for ACORN 2 now actually contains Kalumburu temps instead of Halls Creek temps, a blunder that took three months to fix after the A2 files were first made available by the BoM in December.

It’s currently difficult to figure out whether the bureau’s website portrays Australia’s climate change temperature record since 1910 with the ACORN 1 or the ACORN 2 dataset, the latter having increased the ACORN 1 per-decade temperature trend by +23%. The Climate Trend section of the site still presents daily, monthly, annual and seasonal averages since 1910 with ACORN 1, but the dataset platform itself lets you download either or both of the A1 and A2 datasets, so you can make up your own mind, I suppose.

It remains to be seen whether the BoM will fully propagate the A2 dataset to all its webpages before the federal election in May, accompanied surely by a media release explaining to the public what’s happened to the site data and why Australia has warmed 23% more than previously calculated. The left wing still hasn’t realised that the p1 story in The Australian a few weeks ago, which revealed the BoM’s 23% warming rewrite of history, was and is a great opportunity to spook the voters.

Whatever, the bureau’s correction of the A2 Kalumburu blunder means the site linked above now analyses all 112 ACORN stations, rather than 111, with a comparison of A1, A2 and RAW datasets. If of interest, the different dataset estimates for Kalumburu since 1910 were:

Max, average change per decade: ACORN 1 0.12 C / ACORN 2 0.18 C / RAW 0.09 C
Min, average change per decade: ACORN 1 0.23 C / ACORN 2 0.23 C / RAW 0.03 C

Derg
Reply to  steven mosher
March 29, 2019 4:13 am

We sure have lots of record temps from the 30s
🤔

Tom Abbott
Reply to  Derg
March 29, 2019 5:04 am

Yes, if we go by the unmodified surface temperature charts from around the world, we see that the 1930’s were just as warm as today, yet the bogus Hockey Stick charts don’t show this warmth. To claim that the temperature adjustments haven’t cooled the past is ridiculous.

Here’s an unmodified surface temperature chart from Finland which shows the 1930’s as being as warm as today (just like the US chart shows).

[chart image]

And there are plenty more examples from around the world and in both hemispheres.

The Climategate data manipulators cooled the past with their computer manipulations in order to make it look like things have been getting hotter and hotter for decades, in their efforts to support the CAGW fraud.

The actual temperature record shows it warms up for a few decades, then it cools down for a few decades, and then it warms up again for a few decades, and the highpoint of the last three warming periods, 1934, 1998, and 2016 are all within a few tenths of a degree of each other.

The cooling fraud is obvious when you compare unmodified surface temperature charts to the bastardized versions of same.

It is no warmer today than in the 1930’s, which blows up the CAGW fraud, and is the reason the Climategate fraudsters decided to change the temperature record so it conformed with the CAGW hypothesis. Being as warm today as in the 1930’s means we are NOT experiencing unprecedented warming today caused by CO2; instead we are experiencing warming very similar to the 1930’s, which was not caused by CO2, so maybe today’s warmth is not caused by CO2, either.

Maybe this Climate Change/Global Warming fraud is now on its last legs. The temperature data manipulations can’t stand the light of day and they now just might be exposed to the daylight if/when we get an official review.

Erast Van Doren
Reply to  Derg
March 29, 2019 7:28 am

Yep, there is a Wikipedia page on US state temperature records. I plotted it:

Interesting how few heat records are from the recent past vs 1930-1950s. The cold records are spaced much more evenly! ))

Gary Ashe
March 28, 2019 6:16 pm

Tick tock tick tock.

chaamjamal
March 28, 2019 6:23 pm

Hopefully the critical review will look into the role of activism in climate science.

https://tambonthongchai.com/2019/02/03/hidden-hand/

Loydo
Reply to  chaamjamal
March 28, 2019 11:17 pm

To have any credibility with the wider public the “reviewers” must clearly have no links to the fossil fuel industry.

Orson Olson
Reply to  Loydo
March 29, 2019 2:20 am

This oft-believed canard is simply wrong: the vast majority of the world’s oil and gas industry is state-owned, i.e., “socialized.” Ergo, it isn’t a private, profit-driven industry. (E.g., Mexico, Saudi Arabia, Venezuela, Norway.)

Dylan
Reply to  Orson Olson
March 29, 2019 6:21 am

If state run energy isn’t profit-driven that would explain a lot.

What would be the result of a strategy by which a predator expends more energy in the chase than it gets from the prey?

In any economy of living things, you profit from your endeavors, or you die.

KaliforniaKook
Reply to  Dylan
April 2, 2019 3:01 pm

You didn’t address the main thrust of Orson’s comment, Dylan. He said it wasn’t private; that is, it is state-owned. All of the countries he named have a stake in propagating the CAGW meme, because it would thwart fossil fuel development here, allowing them to control prices more effectively all over the world. They know as well as anyone that fossil fuels are, for some time into the future, the only way for transportation, and with the success of demonizing nuclear, they are the only way to assure reliable electricity generation.
Bottom line: Oil-producing states see it as in their best interest to push CAGW.

Pop Piasa
March 28, 2019 6:25 pm

I hope Roy Spencer is picked to testify, as well as Curry and Christy. Freeman Dyson would be a good panel member too.

Reply to  Pop Piasa
March 28, 2019 6:59 pm

Pop Piasa … at 6:25 pm
I hope Roy Spencer is picked to testify, as well as Curry and Christy. Freeman Dyson would be a good panel member too.

He’s on the top of my wish list.

philincalifornia
Reply to  Pop Piasa
March 28, 2019 7:01 pm

I hope Mann, Trenberth and Trofim Karl, among others are picked to testify. Cameo appearance from Hansen too, if he hasn’t been boiled alive in some ocean somewhere.

tomwys
Reply to  philincalifornia
March 28, 2019 7:41 pm

Pop, Steve, & Phil: So-called “Climate Science” has already spoken in the NCA, & not with much accuracy either. But this is about TESTING what they produced, and COST/BENEFITING the result and any new proposals floated on a daily basis. So you need a cross-section of disciplines, from US Armed Forces, Economists, Industrialists, Business, and Actuarial Science – all selected on their ability to cut through to the basics. Yes, deviation analysis and problem solving are mandatory, along with a penchant for growing a National Economy – the underpinnings for Making America Great Again!

steve case
Reply to  tomwys
March 28, 2019 8:14 pm

NCA = National Climate Assessment

Besides that, you’re right, the members of any audit need to be trusted. How about:
https://en.wikipedia.org/wiki/Big_Four_accounting_firms

Reply to  steve case
March 28, 2019 8:50 pm

Yeah, I’ve often thought that, with respect to an independent audit of the climate data. Given the $Billions being thrown around, a double-blind audit by two of the big four should be worth the expenditure.

Max
Reply to  tomwys
March 29, 2019 2:48 am

I would settle for my fifth-grade teacher explaining simple math. 400 ppm carbon dioxide is 4/100 of 1% of the atmosphere. That means every CO2 molecule is responsible for heating 2,500 air molecules… To heat that much air just 1° for less than a second, it would have to be a minimum of 2,500° in temperature.
To warm 30° by noon every day, CO2 must be eight times hotter than the surface of the sun to accomplish the feat.
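Tracing the arithmetic (on the comment’s own simplified assumption that each CO2 molecule must directly supply the heat for its 2,500 air molecules, which says nothing about how radiative transfer actually works), and taking the 30° as Fahrenheit with the solar surface near 9,900 °F:

\[
\frac{10^{6}}{400} = 2500, \qquad 2500 \times 30^{\circ}\mathrm{F} = 75{,}000^{\circ}\mathrm{F} \approx 7.5 \times 9{,}900^{\circ}\mathrm{F},
\]

which is roughly where the “eight times hotter than the surface of the sun” figure comes from.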

Steven Mosher
Reply to  Pop Piasa
March 28, 2019 7:12 pm

Funny.
Curry already did a review of temperature series.

Reply to  Steven Mosher
March 29, 2019 2:50 am

Steven,

But as you know, new versions are produced from time to time. When I have seen a study done in the past 2 years, it does not fit your earlier observation that adjustments were neutral or mildly warmed the past.
If you have time, open up the latest post from my friend Ken Stewart to see some really extreme changes to former official Australian records and links to some Australian State summaries that show decided cooling of the past.
https://kenskingdom.wordpress.com/2019/03/28/acorn-sat-2-0-south-australia-science-fiction/

Is this important? Hell, yes. Australia provides a large portion of the global data, being the main data source for the Southern Hemisphere. The little-publicized ACORN-SAT Version 2 appeared late last year, maybe timed to be written into the next IPCC report, and we know what hysteria people like Ocasio-Cortez can wring from the IPCC.
Geoff

1sky1
Reply to  Steven Mosher
March 30, 2019 12:57 pm

Curry is an academic, scarcely more qualified to judge the validity of “temperature series” and their ad hoc manipulations than Mosher, the polemicist.

F.LEGHORN
Reply to  1sky1
March 31, 2019 5:02 am

Dr. Curry chaired the School of Earth and Atmospheric Sciences at Georgia Tech. She’s totally qualified.

Jim Butts
Reply to  Pop Piasa
March 28, 2019 7:58 pm

I agree, Freeman Dyson should be on the panel.

rotor
Reply to  Pop Piasa
March 28, 2019 8:06 pm

I’ll see your Spencer and raise you a Lindzen.

Pat Frank
March 28, 2019 6:34 pm

Climate models fail because they have no predictive value. I’ve been trying to publish that ferschlunginer paper for six years, but journal editors run away screaming in fear.

The reason the models are still in use is because climate modelers are not scientists, they don’t understand that observations mortally test theory, and they have no idea that the models are predictive garbage. Also, the APS fails to call them out and the NAS is in cahoots. So, the train chugs merrily along.

The air temperature record is badly contaminated with ignored systematic measurement error, and the workers in the field don’t even know enough to include the resolution limits of the historical instruments. The incompetence among these people is incredible.

There will never be reliable air temperature data outside of the USCRN and the like, if the like exists anywhere else in the world.

The entire field of climate alarmism lives on false precision. The whole thing is a scientific crock.

Roger
Reply to  Pat Frank
March 28, 2019 10:16 pm

Here is your ferschlunginer hypothesis showing that climate change predictions are not scientific.

In other words not even worth a potrzebie. 🙂

https://rogerfromnewzealand.wordpress.com/2018/05/09/ever-been-told-that-the-science-is-settled-with-global-warming-well-read-this-and-decide-for-yourself/

Cheers

Roger

Reply to  Roger
March 29, 2019 9:00 am

You’re clearly a well-read guy, Roger. 🙂

Russ Wood
Reply to  Roger
March 31, 2019 4:49 am

I too used to get Mad magazine in the late 50’s! I STILL tend to use ‘furshlugginer’ when in extreme frustration!

Zig Zag Wanderer
March 28, 2019 6:46 pm

The debate is over.

It never actually happened, but it is over. We won the debate that never happened. You lost the debate that never happened.

Get over it!

Did I get that right?

Pop Piasa
Reply to  Zig Zag Wanderer
March 28, 2019 6:56 pm

Almost, just replace “debate” with “election” and fire it back at ’em.

Zig Zag Wanderer
Reply to  Pop Piasa
March 28, 2019 7:27 pm

But the election did happen (despite all sorts of denial)

March 28, 2019 6:47 pm

Reviews of scientific work are always beneficial to science, as they continue the role of critical thought necessary to the advancement of science. The CAGW computer models are clearly non-functional and are constantly predicting much more warming than what has actually occurred since 1980. Faulty models are useless models. Actual disasters like a catastrophic rise in sea levels have not occurred. Storms are fewer and hurricanes reduced. Despite constant attention to any unusual weather event to imply future disaster, the MSM keeps well hidden the benefits of recent climate trends. The benefits of extra CO2 and benefits of warming are steadfastly ignored. Connections between extra CO2 and climate trends are not shown or proven to the public. They are merely constantly repeated as propaganda.
Worst of all, the surface temperature data sets have been altered and corrupted to show cooler-than-actual temperatures in the past, to make the present climate appear warmer than it is. The integrity of the historical record should be restored.
A review of the climate science situation is well overdue.

Karabar
March 28, 2019 6:53 pm

Some thoughts from Alberta that might serve as thoughts to pursue from the commission:
https://blog.friendsofscience.org/2019/03/27/climate-science-and-economics/

Monster
March 28, 2019 7:08 pm

Yeah, you can call a meeting and state some conclusions. It won’t change minds. This isn’t a rational discourse anymore. This has as much chance of bringing everyone to a common solution as the First Council of Nicaea did of getting rid of the heretics.

Zig Zag Wanderer
Reply to  Monster
March 28, 2019 7:32 pm

I beg to differ.

There are a great many ordinary people out there who don’t know the alternative viewpoint to the alarmists’, given the collusion by the MSM. It will help those people, who we may find to be in the majority, to get a different perspective on the matter.

Tom Abbott
Reply to  Zig Zag Wanderer
March 29, 2019 5:23 am

“There are a great many ordinary people out there who don’t know the alternative viewpoint to the alarmists’, given the collusion by the MSM. It will help those people,”

I agree. A lot of these people don’t even know there is another side to the story on CAGW. Besides, the Democrats are going to be pushing some version of their Green New Deal hard, so skeptics better get ready to counter their arguments.

Rob_Dawg
March 28, 2019 7:27 pm

Not expert review. Experts in “climate science” are hopelessly tainted. Leading scientists and engineers. That’s the ticket. Oh, and no lawyers.

David Chappell
March 28, 2019 7:56 pm

“Let’s hope that they’re not as ignorant about science as they are about history.”

Faint hope, it’s worse than you thought.

Rod Smith
March 28, 2019 7:57 pm

Myron, well said. I seem to recall that the wise ancient Greek philosophers took the view that correct ideas are strengthened by debate.

So their disdain for such a high-level debate does betray a fear that their claims will be exposed as weak.

They claim AGW is “settled science.” Not. I’m a registered electrical engineer and I’ll tell you what is settled science: electrical theory. Provided we can handle the very difficult mathematics (Maxwell’s and quantum), we can calculate darn near everything electrical. Even Special Relativity is contained in Maxwell’s theory: Einstein’s big contribution was only in correctly interpreting it. I have been through the math, and if you care to check Dr. Markus Zahn’s excellent text “Electromagnetic Field Theory” (Wiley), he derives the mathematical basis of Special Relativity, the Lorentz transformations, using only Maxwell’s equations. Settled science it is. I dare say you will not find even one EE world-wide who will disagree with Maxwell’s electrodynamic theory, save someone mentally deficient. If I were to question another EE’s work and such questioning was contrary to Maxwell’s theory, I would likely be disciplined and maybe even lose my license. So the very existence of a Dr. Judith Curry proves that climate science is not settled.
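For reference, the Lorentz transformations referred to are, for relative motion at speed v along the x-axis:

\[
x' = \gamma\,(x - vt), \qquad t' = \gamma\left(t - \frac{vx}{c^{2}}\right), \qquad \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}.
\]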

Izak Walton
Reply to  Rod Smith
March 28, 2019 10:35 pm

Rod,
Firstly, Maxwell’s equations are still being tested by physicists. I know people who are looking for tiny deviations from Lorentz invariance, or to measure the diameter or magnetic dipole moment of an electron. Such measurements constantly push the limits of experimental physics and so far have not yielded any non-zero results. But they are done by what everyone would consider “serious physicists.” In contrast, my spam folder is full of emails from people claiming to have disproven Maxwell’s equations. These people claim to be physicists, but almost everyone else considers them cranks. If you talk to mathematicians, all of them can tell you about receiving letters claiming to have squared the circle. Again, they get automatically dismissed as cranks.

The existence of people disputing a theory is not evidence that the theory is not settled science. At worst it shows that cranks exist in every field. And at best it shows the way towards new knowledge.

Dylan
Reply to  Izak Walton
March 29, 2019 6:32 am

Likewise, a vast consensus of scientists who back a theory does not magically become data to feed back into said theory to validate it.

Joel O'Bryan
March 28, 2019 8:00 pm

“Let’s hope that they’re not as ignorant about science as they are about history. “

Suggested Correction for Myron: “Let’s hope they’re not as ignorant dishonest about science as they are about history” (with “ignorant” struck out).

But considering Michael Mann is one of these individuals, Mikey’s mental affliction as a pathological liar shouldn’t give anyone hope about his “science.”

Speaking of politicized science, I keep getting the following BS from AAAS/Science mag wanting me to re-join. Here’s the email text they sent me:

“The results are in.

Humans are causing—and have already caused—climate change.
We are running out of time to limit some of the worst effects.
Achieving these limits will require “rapid, far-reaching, and unprecedented changes in all aspects of society,” according to the Intergovernmental Panel on Climate Change.

So, what do we do now?

Whether it’s climate change, species loss, public health, or new discoveries, AAAS is there asking the hard questions, sharing accurate, unbiased information with the public, and working with policymakers to chart a smart course for the future.

Our members make this all possible. And we need your voice to continue our vital work.

Speak up for science. Join AAAS for as little as $50 and receive a year’s subscription (50 issues!) to Science magazine.”

And the email has this graphic with it:
[email graphic]

The bottom tag line is a damning portrayal of what has happened to AAAS with their indoctrination into the Climate Hustle:
“Get In On The Movement”???
And they actually think that such activist messaging and the claim that they are giving “unbiased information to the public” are compatible? Really?? The two things couldn’t be more incompatible.

Science + activism = junk science

And asking scientists to join a movement????
Serious question: Do any adults still work at and control AAAS? Or is it just all political agents now in charge at Science mag and AAAS? Are there any actual scientists still working at AAAS? Or just Climate Change Communication majors? Really!!

Activism in one’s scientific field is such a no-brainer no-no for any real scientist with integrity.
The people running AAAS now are so blinded by partisan messaging and a socialist climate propaganda message that I can’t figure out which is stronger in them: their ignorance or their arrogance???

But whatever it is, it is clearly a case of Noble Cause Corruption. They’ve thrown unbiased objectivity and uncertainty to the wind and embraced junk science and activism as their business model.
AAAS is a pathetic organization now. Politics all the way down.

Tom Abbott
Reply to  Joel O'Bryan
March 29, 2019 5:43 am

“The results are in. Humans are causing—and have already caused—climate change.”

I thought the AAAS had taken this blatant lie out of their official position statement.

I guess they saved it for their emails.

The AAAS is flat-out lying to people with this statement. Someone ought to ask them to provide the evidence backing up those claims. We all know they don’t have any such evidence. They should be called on it.

Because of the CAGW fraud, we are watching the destruction of the Scientific Method right in front of our eyes. Politics and delusion have turned it into a travesty of itself.

Chris Hanley
March 28, 2019 8:12 pm

“… this seems puzzling. If the alarmists are confident that the science contained in the official reports is spot on, they should welcome a review that would finally put to rest the doubts that have been raised …”.
================================================================
It’s puzzling in the same way that the Left’s response to the exhaustive Mueller Report was puzzling i.e. instead of welcoming the news that their president was not a Russian spy after all, there was disappointment and disbelief, even outrage.
Good summary.

Tom Abbott
Reply to  Chris Hanley
March 29, 2019 5:56 am

“It’s puzzling in the same way that the Left’s response to the exhaustive Mueller Report was puzzling”

I think the CAGW fraud and the Mueller report are examples of the same type of delusion. The Democrats already have their minds made up about the subject and nobody is going to change them. They believe what they want to believe, despite evidence to the contrary in the Mueller case, or, in the case of the CAGW fraud, despite no evidence at all to support it.

Frank
March 28, 2019 8:15 pm

Question 1: First, the computer models used have predicted far more warming than has occurred over the past 40 years. Why such models have failed, and why they are still being used, are important questions.

Answer: Models haven’t predicted “far more” warming than has been observed over the last 40 years. The transient response hindcast by climate models is sensitive to choices about the strength of the aerosol indirect effect on clouds (which may be negligible) and to the rate of heat uptake by the ocean (which reduces current transient warming). Given our best estimates of forcing and the warming observed over the last 40 years, our planet responds to forcing with about 30% less warming than models project.

From a scientific perspective, climate models haven’t “failed”. Models are hypotheses. To prove them wrong, one needs to show – with high statistical confidence – that observations are inconsistent with the predictions of the model. However, the predictions from the IPCC’s models – the multi-model mean – as interpreted by the IPCC’s experts are extremely vague: It is likely that it will be X to Y degC warmer in 2100. Likely implies this is one standard deviation. Scientific invalidation requires two standard deviations (including the uncertainty in the observations).

As Box famously said: “All models are wrong, some models are useful.” All of the government’s economic models are wrong, but most policymakers think they have some utility in formulating policy. That is why both parties treat the projections of the CBO with respect. The IPCC’s projections don’t deserve to be treated with respect because their models have climate sensitivities much higher than the central estimate from observations.

Question 2: “Second, predictions of the various negative impacts of warming, such as sea level rise, are derived from highly unrealistic scenarios; and positive impacts, such as less ferocious winter storms, are minimized or ignored. What would a more honest accounting of all the possible impacts of climate change look like?”

The IPCC makes projections for a range of scenarios. RCP8.5 might be unreasonably pessimistic, but RCP6.0 is perfectly sensible. There is no way to predict how much adaptation and technological progress will reduce the projected damage from future warming – that is simply something that policymakers must compensate for when making decisions.

Question 3: “Third, surface temperature data sets appear to have been manipulated to show more warming in the past century than has occurred. The new commission should insist that the debate be based on scrupulously reliable data.”

A waste of time. The adjustments represent only a small fraction of past change, and some of them are scientifically justified. Arguing about whether there has been 0.9 degC or 0.8 degC of warming since 1970 isn’t going to change anything important. If UAH is right, then negative lapse rate feedback is non-existent and climate sensitivity may be higher than expected.

Arguing about an appropriate discount rate for future damages would make a lot of sense. Left-wing academics fear that we have destroyed and depleted our environment and fear that their descendants will be poorer. They are willing to pay a lot to make things better for their descendants. The developing world and much of the developed world assume that economic growth is going to make their descendants far richer than they are today. These people aren’t willing to pay much to avoid future damages that will interfere with the life-style of their far-richer descendants. This rationale is an intuitive explanation of the Ramsey equation, which says that the optimal discount rate depends on the future economic growth rate. Global warming requires a global response, and it is idiotic to expect the bulk of the world to view this problem from the same perspective as our academic elite.
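For reference, the Ramsey equation mentioned here is usually written

\[
r = \rho + \eta g,
\]

where r is the discount rate, ρ the rate of pure time preference, η the elasticity of the marginal utility of consumption, and g the expected growth rate of per-capita consumption. Higher expected growth g implies a higher discount rate and therefore a smaller present value attached to far-future damages.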

Hopefully, someone a little more knowledgeable than Mr. Ebell will be shaping the direction of this review.

Joel O'Bryan
Reply to  Frank
March 28, 2019 8:35 pm

You seem to greatly confuse the strong CO2 forcing climate change hypothesis with the null hypothesis.
The huge uncertainty of the model ensemble mean (if that can actually mean anything) and the clear failure of observations to track within it do not support the climate change hypothesis.

And “given the warming of the last 40 years,” we have no way of separating the natural part from the anthropogenic part. The hand-waving of “most or all” as human-caused is simply a hopeful guess. But given the clear fact that the similar 1910-1940 warming was most assuredly all natural, how can the 1975-2005 warming be anything but mostly natural? To conclude the opposite is simply bad science. (And that goes back to the “get rid of the blip” email from Phil Jones; they needed to disappear that inconvenient warming.)

As for your final argument, you get the Thomas Malthus Award. A richer society is one that will always be able to better adapt, cope, and innovate. Making yourself poorer today only ensures your children will likely also be poorer and less able to adapt to whatever nature throws at them.

Frank
Reply to  Joel O'Bryan
March 29, 2019 3:20 pm

Joel: You are completely correct in saying that it is impossible to distinguish between anthropogenically-forced warming and unforced or internal variability. (We have reliable information about changes in natural forcing during the last half-century and know that it didn’t play a significant role.) So I look at the proxy record for the last 70 centuries and ask how often that record shows 0.9 K of nearly global warming in a half-century. Remember, those events can be caused by natural forcing (the Maunder Minimum, a cluster of volcanos, etc.) as well as unforced/internal variability (chaos). Most people think the LIA was less than 1 degK colder than 1850-1950. You can find events like the LIA, MWP, RWP and Minoan WP in Greenland ice cores, but there is lots of evidence that the dynamic range of climate swings in Greenland is much greater than for the planet as a whole. You won’t find any of the above warm periods in Antarctic ice cores, nor a pattern of global warming in ocean sediment cores. So, when I ask myself what are the chances that unforced variability warmed the planet 0.9 K over the last half-century – exactly when radiative forcing increased at 0.4 W/m2/decade – I conclude that the probability that unforced variability was completely responsible is too small to be worth considering.

AFAIK, about half of the 1910-1940 warming can be accounted for by a change in forcing and about half is assumed to be due to unforced variability. That is excellent evidence that a couple of tenths of a degree of unforced warming over a couple of decades can be common. The 1950-1970 period could represent unforced cooling of the same magnitude obscuring the warming from rising GHGs. The IPCC used to attribute this to rising aerosols, but that is looking less tenable. So I think the 0.9 K of warming observed over the past half-century is almost certainly due to anthropogenic forcing, but unforced warming OR cooling certainly could have added OR subtracted a few tenths of a degC. Given that 1.5 centuries of natural warming preceded the last 50 years of warming, one might have guessed that modest unforced cooling was more likely than modest unforced warming.

So I agree with the IPCC that the best estimate of forced warming for the last century is roughly the same as observed warming, but their estimate for unforced change of +/-0.1 K appears too small for me.

An appropriate discount rate for future damage depends on expectations of future economic growth. My and your personal expectations for future economic growth are fairly irrelevant. No leader in the developing world can afford politically to set expectations for economic growth much lower than attempting to follow in China’s recent footsteps. Nor can they afford to postpone growth for the benefit of future generations – when it is clear their descendants can be much richer. This is why the individual nationally determined contributions to emissions reductions at Paris from developing countries amounted to business-as-usual growth in emissions. IMO, it is inconceivable to have a global effort to limit emissions growth based on a universal discount rate to be used by all countries.

Pat Frank
Reply to  Frank
March 29, 2019 9:09 am

Frank, “To prove [the climate models] wrong, one needs to show – with high statistical confidence – that observations are inconsistent with the predictions of the model.

Or you can show that their predictions are physically meaningless.

That is, climate models do not make predictions in any scientific sense. They’re not even wrong. They’re meaningless.

No physically valid models, no physically credible notion of CO2-induced warming.

Frank
Reply to  Pat Frank
March 29, 2019 3:47 pm

Pat: In your paper, you show that AOGCMs disagree with each other and observations – by +/-10%. That doesn’t mean they get the planet’s energy balance incorrect by such a large amount. All models are tuned so that incoming and outgoing radiation agree with what we observe from space (and lately with the imbalance reported by ARGO). So they must have offsetting errors elsewhere in their programs (the absorptivity and reflectivity of various types of clouds) which correct the problem in cloud fraction.

https://www.skeptic.com/wordpress/wp-content/uploads/v14n01resources/climate_of_belief.pdf

Models can be tuned many different ways to produce a reasonable representation of today’s climate. The amount of warming they hindcast can be adjusted with aerosols and ocean heat uptake. The model developers aren’t idiots. However, many equally good (or bad, if you prefer) sets of parameters produce different amounts of feedback, and comparisons with observations don’t allow them to identify an optimal set of parameters. This may be due to limitations on grid cell size and other computational limits. IMO, this is the biggest problem with AOGCMs.

Pat Frank
Reply to  Frank
March 29, 2019 4:35 pm

Frank, “Pat: In your paper, you show that AOGCMs disagree with each other and observations – by +/-10%.

Rather, in the Skeptic paper I show that ±2.8 W/m^2 annual average cloud forcing error means that the average climate model can’t resolve the impact of the 0.035 W/m^2 annual average perturbation of CO2 emissions.

That ±2.8 W/m^2 was estimated from fractional cloud error and total global cloud forcing.

The link I provided used an updated CMIP5 annual average long-wave cloud forcing error of ±4 W/m^2.

The model error in long wave cloud forcing is systematic and produces an uncertainty in every single step of a projection. That uncertainty propagates forward leading to ±15 C uncertainty after 100 years.
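To spell out the propagation arithmetic (an illustration only; the per-step value below is back-calculated from the ±15 C endpoint rather than taken from the paper): if each of n annual projection steps contributes an uncertainty u_i, the compound uncertainty is the root-sum-square

\[
u_c(T_n) = \pm\sqrt{\sum_{i=1}^{n} u_i^{2}},
\]

so a constant per-step uncertainty of about ±1.5 C compounds to ±√(100 × 1.5²) = ±15 C after 100 years.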

The question models are trying to resolve is the internal climate energy state. The true energy state includes the unique partitioning of available energy among all the climate substates (atmosphere, oceans, ice-fields, etc.).

Any given global energy balance can have a very large number of different internal energy states. Getting the energy balance correct tells one nothing about how that energy is partitioned internally.

Climate models are required to tell us which internal energy state our climate currently occupies, among all the possible states.

They are presently hopelessly wrong about the physically correct internal energy state of the climate. They are useless in predicting air temperature.

Frank
Reply to  Pat Frank
March 29, 2019 8:41 pm

Pat: Model developers might have large errors in cloud fraction, but they have tuned their models so that incoming post-albedo SWR and outgoing OLR are nearly in balance (and today agree with the imbalance reported by ARGO). So a model with too many clouds has been tuned to correct for too much incoming SWR, perhaps by having clouds that reflect too little SWR. An offsetting error. When there are compensating errors, the uncertainty doesn’t add in quadrature. That applies to random errors.

More likely, the model’s clouds weren’t reflective enough, so tuning to create near balance at the TOA would produce too many clouds. Models contain a parameter that controls at what relative humidity clouds begin to form in a grid cell. 100% relative humidity is too high. Small variations in temperature and relative humidity inside a grid cell will produce a partially cloudy grid cell when the relative humidity is slightly less than 100%. Tuning the parameter that controls the RH at which clouds begin to form produces large changes in albedo, because rising air parcels are often near saturation.

IMO, you are looking at errors in one observable (say, cloud fraction) without recognizing that the model has been tuned to compensate for that error. These errors aren’t critical unless they influence how the planet’s radiative balance changes with rising Ts – the planet’s climate feedback parameter, reported in W/m2/K. Calculating the correct amount of warming needed to negate a forcing is critically dependent on models producing the correct climate feedback parameter. The reciprocal of the climate feedback parameter (in K/(W/m2)) gives ECS once W/m2 are converted to doublings of CO2.
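In symbols, using the standard ~3.7 W/m2 of forcing per CO2 doubling:

\[
\mathrm{ECS} \approx \frac{F_{2\times}}{\lambda} \approx \frac{3.7\ \mathrm{W\,m^{-2}}}{\lambda},
\]

so a climate feedback parameter of, for example, λ = 1.2 W/m2/K corresponds to an ECS of roughly 3 K per doubling.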

Reply to  Pat Frank
March 30, 2019 2:53 pm

Frank, climate models can’t be tuned to compensate for error. The error, and its supposed compensation, do not produce a correct underlying physics.

The physical energy state of the climate is misrepresented, even when the model has been adjusted by hand to reproduce some observable.

There is no assurance whatever that the tuned parameter compensates the error in all subsequent predicted states.

That is the meaning of the uncertainty bars.

Frank
Reply to  Pat Frank
March 31, 2019 1:52 pm

Pat wrote: “Frank, climate models can’t be tuned to compensate for error. The error, and its supposed compensation, do not produce a correct underlying physics.
The physical energy state of the climate is misrepresented, even when the model has been adjusted by hand to reproduce some observable.
There is no assurance whatever that the tuned parameter compensates the error in all subsequent predicted states.”

As the saying goes, all models are WRONG, but some are USEFUL. Weather prediction models are clearly useful despite being wrong, until initialization uncertainty and chaos destroy their utility. Today we initialize under slightly different conditions to assess how sensitively the projections for the weather a week in the future, or the path of a hurricane, depend on initialization uncertainty. That makes weather prediction models more USEFUL, despite being WRONG.

Different weather prediction software makes different forecasts. Both show skill in predicting the future, but the European model apparently is more skillful in some areas. Being MORE WRONG doesn’t mean “not useful”.

AOGCMs normally aren’t initialized to match the conditions at any particular time. They are “spun up” for several centuries to eliminate all trace of artifacts from initialization. Then temperature in different locations is tweaked slightly, by 10^-6 K if I understand correctly. Efforts to hindcast decadal climate change after initializing to the conditions on a particular date were a complete failure (predictably, since we can’t forecast El Ninos a year in the future).

However, over a century the predictions of AOGCMs under a given forcing scenario no longer diverge rapidly. Is the mean and spread of their output after different initializations a useful forecast (even if it isn’t a deterministic one)? Plenty of room for responsible scientific skepticism here!

Frank
Reply to  Pat Frank
March 29, 2019 8:19 pm

Pat Frank: “[The predictions of climate models] are physically meaningless.”

The deterministic view of physics died many decades ago. Quantum mechanics told us that the current state of a particle doesn’t allow us to predict what will happen to it. Schroedinger’s cat is simultaneously alive and dead, which is physically meaningless. However, QM makes exceptionally useful and precise predictions about the results of real experiments. Many people were unable to adapt to the replacement of deterministic theories with those that only produce probabilities. Feynman notoriously said: If you don’t like it, tough! Go to another universe – where the rules are simpler – more appealing.

https://en.wikipedia.org/wiki/Schrödinger%27s_cat

The expectation that we can come up with “physically meaningful” solutions (predictions about the future) to macroscopic physical problems ended with Lorenz and the understanding of deterministic chaos.

“When the present determines the future, but the approximate present does not approximately determine the future.”

Since we can’t know the exact present, we can’t come up with “physically meaningful” solutions to anything having to do with fluid flow (and numerous other physical situations, including coupled pendulums and three bodies moving in their mutual gravitational field). The output of climate (and weather) models is limited by initialization uncertainty. Even a change of 10^-6 degC during initialization produces a different future. So the only answer physics provides is a pdf describing the final state. Though Pat may disagree, pdfs are physically meaningful answers and can be tested experimentally. Saying there is a 50% probability of rain between 3 and 4 pm tomorrow, or providing a pdf for rainfall two days in the future, is a testable scientific prediction. The null hypothesis is derived from historic climate records, and the predictions of weather prediction models are vastly more accurate than historic averages.

Climate models are different from weather forecast models because their utility is based on the hypothesis that the predictions for the exact weather on any day in the 2090s certainly lack skill, but the difference between today’s climate and the climate of the 2090s (the average of 3650 daily predictions) is meaningful. This hypothesis is difficult to test in practice. There are a lot of debates about the accuracy of the predictions Hansen made in the 1980’s, because some of Hansen’s assumptions about future forcing were wrong. In reality, Hansen wasn’t practicing the scientific method and arranging for the most stringent possible test of his hypothesis – and a climate model is a hypothesis. For climate scientists, models aren’t hypotheses to be tested experimentally; they are unvalidated tools for influencing policymakers. Demonstrating with a high degree of statistical confidence that the predictions of a climate model are inconsistent with observations turns out to be nearly impossible, because those predictions are highly uncertain and climate is highly variable. The IPCC says that climate sensitivity has a 70% likelihood of lying between 1.5 and 4.5 degC. This is analogous to aiming at a barn from 10 feet away – the target is so big you can’t miss. Therefore climate scientists do make physically meaningful predictions, but it isn’t practical to carry out the scientific method of attempting to INVALIDATE a hypothesis/climate model.

Climate models are also plagued by parameterization uncertainty, which is the subject of the Rowland paper discussed in the WUWT article Pat linked. There are a large number of sets of parameters that can be used in an AOGCM to describe physics that occurs on a subgrid scale. If the average relative humidity inside a grid cell is 99%, what fraction of the grid cell will contain clouds? The answer isn’t zero, because some fraction of the grid cell will contain slightly more water vapor or be slightly colder. This is an important parameter, because the relative humidity of most rising air masses is near 100%. Changes in this parameter have a massive impact on albedo. This parameter is adjusted to make the model’s albedo agree with observations from space. However, errors in this parameter can be offset by changes in other parameters, including the reflectivity of clouds. Pat wrongly believes that the errors in albedo and cloud reflectivity must be added to each other, but model developers offset errors in one parameter with errors in the other during tuning. Model developers aren’t stupid; they make sure the planet’s overall energy balance is correct (today using ARGO heat uptake). Unfortunately, no matter how many “observables” (temperature, precipitation, cloud fraction) model developers use to tune their models, they can’t find a single optimum set of parameters. The IPCC attempts to deal with parameter uncertainty by relying on a multi-model mean derived from many models with different sets of parameters. Unfortunately, they don’t systematically explore parameter space, so the multi-model mean and spread have no rigorous statistical meaning.

Dr. Strangelove
Reply to  Frank
March 30, 2019 6:40 am

Schrodinger’s cat is not simultaneously alive and dead. It’s not an atom. A cat is too big to be in quantum superposition. It will decohere by itself. Schrodinger invented the zombie cat to make fun of Bohr et al. 80 years later people are still falling for it. He’s laughing in his grave. It’s not the uncertainty principle in quantum mechanics that makes future climate states unpredictable. It’s chaos in classical fluid dynamics.

Climate models are the fake physics of Alice in Wonderland. Take it from Prof. Chris Essex, mathematical physicist and former climate modeler.

Frank
Reply to  Dr. Strangelove
March 30, 2019 11:26 am

Strange: As I understood it, the fate of Schrodinger’s cat was supposed to be determined by the decay of a single nucleus. However, you are right; I should have limited my discussion to chaos.

I’m dissatisfied with Essex’s presentation. I understand vaguely that there are significant problems with computational fluid dynamics. Nevertheless, as I understand it, engineers today design airplanes using computational fluid dynamics. They no longer go to wind tunnels with a scale model. Some CFD calculations apparently are reliable, but I’d never know it from listening to Essex.

Climate models may have been Alice-in-Wonderland when Essex started, but they stopped needing flux adjustments to deal with computational instabilities almost two decades ago. (It’s disgraceful that two IPCC reports were written without clearly raising the issue of flux adjustments.) What are the other signs that models can’t do what we are asking them to do? Initialization uncertainty can be addressed by multiple runs; we have 100 runs from one model today. Parameter uncertainty is currently unsolvable, but ensembles with perturbed parameters suggest that it is difficult to find a model with ECS less than 2. Reducing grid cell size allowed the QBO to be properly modeled in the stratosphere and may someday make an MJO appear. We have cloud-resolving models that can be compared with AOGCMs.

Can you suggest anything more informative/compelling than Essex’s propaganda?

Reply to  Dr. Strangelove
March 30, 2019 9:34 pm

CFD calculations are reliable because they are tested in the wind tunnel. As long as they use the same set of parameters, they don’t have to test again in the wind tunnel. If they change the parameters, they have to test in the wind tunnel to make sure the equations still work. They are using the Reynolds-averaged Navier-Stokes equations. They don’t have an analytical solution to the actual NS equations. They have numerical solutions. Basically trial and error using a fast computer. No matter how fast it is, they have to check with experiments whether the guesses are correct.

Where is the wind tunnel where climate modelers test Earth’s climate? They don’t have empirical parameters derived from experiments. They just plug in the parameters and call the results, scenarios. Alice in Wonderland is more catchy.

Frank
Reply to  Dr. Strangelove
March 31, 2019 11:12 am

Dr. Strangelove replied: “Where is the wind tunnel where climate modelers test Earth’s climate? They don’t have empirical parameters derived from experiments. They just plug in the parameters and call the results, scenarios.”

Thoughtful answer. The ability to reproduce today’s climate might be considered the “wind tunnel” of AOGCMs, but experiments with ensembles of perturbed parameters and a panel of observed properties of today’s climate have failed to narrow the plausible range for any particular parameter. A model with N parameters requires finding a global optimum in an N-dimensional space with many local optima. With ensembles of simpler models, a global optimum can’t be found by trial and error. Such ensembles show a wide range of ECS, but rarely below 2. (In one case I know of, this may be because low-climate-sensitivity models are unstable towards cooling.) With sophisticated, computationally-intensive models, tuning parameters one-by-one is likely to end up at a local optimum.

Hindcasting certainly doesn’t prove anything when you can’t be sure what fraction of recent warming is unforced. And when models validated by hindcasting are used to make attribution statements about past warming, that is circular reasoning.

Pat Frank
Reply to  Frank
March 30, 2019 2:40 pm

Dr. Strangelove anticipated my reply about Schrodinger’s cat. Thank-you. 🙂 The cat is not part of the quantum phase state of the radio-nucleus.

However, Frank, you wrote that, “The deterministic view of physics died many decades ago. Quantum mechanics told us that the current state of a particle doesn’t allow us to predict what will happen to it.

This is not correct. Quantum Mechanics is an entirely deterministic theory. It produces a fully deterministic description of the evolution of the quantum state. It also completely predicts the scattering profile of a set of states.

QM can’t predict the scattering position of a single state, because such predictions violate the Heisenberg uncertainty that governs particle position and momentum. Heisenberg uncertainty in no way removes the determinism of the theory.

Apart from that, Quantum Mechanics has no bearing on the physical meaninglessness of climate model projections.

Your duplex set of paragraphs about climate models, if taken to their proper end, would require a conclusion that climate models can make no predictions. But you decided, in a thorough non sequitur, that “Therefore climate scientists do make physically meaningful predictions,…”

Parameter uncertainty means the underlying physics is not known. The climate energy state is misrepresented. The projections are an expression of ignorance, no more.

Climate modelers aren’t stupid, it’s true. My experience with them, though, is that they know nothing whatever of physical error analysis. They don’t understand a calibration experiment. They are not competent to evaluate their own models.

Argo floats, by the way, exhibit about ±0.6 C of systematic uncertainty, when calibrated in the field (not back in the lab). They cannot give anywhere near the accuracy necessary to measure the tiny change in ocean heat content required to resolve any thermal effect of CO2 emissions.

Frank
Reply to  Pat Frank
March 31, 2019 1:20 pm

Pat: You are correct that I shouldn’t have mentioned the non-deterministic nature of quantum mechanics in a discussion of the macroscopic world. However, you totally ignored my comments on how deterministic chaos changes the nature of the predictions that can be made about phenomena such as weather, climate, coupled pendulums, the solar system on long time scales, etc.

Pat says: “Parameter uncertainty means the underlying physics is not known. The climate energy state is misrepresented. The projections are an expression of ignorance, no more.”

This is incorrect. We know the underlying physics of turbulence, cloud microphysics, and other phenomena. We can’t apply the physics we know to a large grid cell. Clouds start appearing in a grid cell before the average relative humidity for the grid cell reaches 100%. IIRC, accurate calculations of the transport of heat and water vapor perpendicular to the surface of a stationary ocean require working with grid cells of 1 mm^3. Nevertheless, an equation for bulk transfer by turbulent mixing with an empirical constant provides reasonably accurate fluxes: the rate of evaporation is proportional to wind speed and undersaturation (both assessed at a specified distance, usually 2 meters, above the surface). The validity of such empirical constants has been tested under carefully controlled conditions in the laboratory (where the surface is no longer flat), with fine-scale CFD computations, and in the field (where definitive experiments are challenging). Similar approximations are used in aeronautical engineering (where CFD is practical): lift and drag are proportional to the wing area, the density of air, and velocity squared. An empirical coefficient of lift and drag for a given wing shape is used. Careful experiments show these coefficients are not constant and applicable to all situations, but they provide useful approximations. They are NOT, as you claim, expressions of IGNORANCE. They are practical approximations used in the real world.

We may be ignorant of the limits beyond which these approximations work. What happens to the above equations for lift and drag when the flow over the wing is no longer laminar? We do experiments to find out. These equations fail when the angle of attack of a wing is too steep and the flow detaches from the wing. What happens when the wind over the ocean is strong enough to produce a fine mist of water droplets? We’ve done those experiments too. However, it is my understanding that the output from CFD and other simulations varies in unpredictable ways. In the laboratory, the number of cloud condensation nuclei determines the size of the water droplets that initially make up a cloud, and that determines its reflectivity (the Twomey effect). However, small water droplets evaporate and re-form as more stable large droplets. How long does that take? Apparently this happens faster in the real atmosphere than in the laboratory, because AR5 now says the Twomey effect is usually insignificant in the field. Modeling groups that accept this conclusion are re-parameterizing cloud reflectivity.

AOGCMs provide reasonable representations of current climate, in part because key parameters have been tuned to do so. The question is whether they provide a useful estimate of what our world with 2X or 3X more CO2 will be like. There is plenty of room for skepticism here! IMO, skeptics would be far better off discussing this problem rather than making dubious and inaccurate assertions that are easy for the consensus to reject. Nevertheless, many prefer the Stephen Schneider approach: telling scary stories, making simplified dramatic statements, and hiding doubts. I personally prefer his definition of ethical science.

Pat Frank mistakenly claims: “Argo floats, by the way, exhibit about ±0.6 C of systematic uncertainty, when calibrated in the field (not back in the lab). They cannot give anywhere near the accuracy necessary to measure the tiny change in ocean heat content required to resolve any thermal effect of CO2 emissions.”

This is random error, not systematic uncertainty. Thus, the authors of the paper you are referring to show their error going down over longer periods. When averaged over a decade or two – rather than a few months – and over the entire planet – not just a problematic piece of the Atlantic where the spatial sampling by the Argo buoys is poor – random errors average out. The authors give no sign of believing they have proven ARGO isn’t meeting its design objectives, and they certainly make no such claim. And the developers of ARGO weren’t inexperienced idiots either. They understood the challenges from previous experience with inadequate equipment and sampling, and designed a system that would achieve their goals.

https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2006JC003825

“Results for the main North Atlantic region show a reduction in RMS difference from 29 ± 3 Wm−2 for monthly heat storage to 12 ± 2 Wm−2 for seasonal (i.e., 3 monthly) heat storage and 4 ± 1 Wm−2 at biannual scales.”

Any error that is going down with time is clearly not a systematic error in the eyes of these authors. I see no reason to assume that the current value for ocean heat uptake (0.7? W/m2), averaged over the planet over more than a decade, is unreliable.

Furthermore, a CONSTANT systematic error in a single-point calibration (for example, the error in b for y = mx + b) doesn’t interfere with our ability to measure a CHANGE in y accurately. dy depends only on the accuracy of m and dx. A VARIABLE systematic error is a whole different matter. However, we debated this point several years ago.
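
A one-line check of the constant-offset case: if the calibration error is a constant e, then y1 = m*x1 + (b + e) and y2 = m*x2 + (b + e), so dy = y2 − y1 = m*(x2 − x1). The constant e cancels exactly; a time-varying e(t) would not.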

To the extent that poorly-sited temperature stations have a CONSTANT systematic error, one might expect them to show the same trend as well-sited stations. This is what Andy found. And if urban stations have a CONSTANT bias compared to non-urban stations (a more dubious hypothesis), then you might expect them to have similar trends in temperature. This is what BEST found. If you created model data sets with constant and variable systematic errors, this is what you would find too.
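
A toy illustration of that last sentence, in Python with invented numbers: a constant offset leaves the fitted trend untouched, while a drifting (variable) systematic error corrupts it.

import numpy as np

rng = np.random.default_rng(0)
t = np.arange(480)                               # 40 years of monthly data
true = 15.0 + 0.001 * t                          # small underlying trend

well_sited = true + 0.1 * rng.standard_normal(480)
constant_bias = well_sited + 0.8                 # fixed siting offset
drifting_bias = well_sited + np.cumsum(0.02 * rng.standard_normal(480))

slope = lambda y: np.polyfit(t, y, 1)[0]
print(slope(well_sited), slope(constant_bias))   # identical trends; the offset cancels
print(slope(drifting_bias))                      # trend corrupted by the drift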

Reply to  Pat Frank
April 1, 2019 10:08 am

Frank, you’re right, I ignored your point about deterministic chaos. It didn’t have anything to do with the physical error that is the focus of my analysis of climate models.

In any case, you didn’t originally write that “deterministic chaos changes the nature of the predictions that can be made“; you wrote that “The deterministic view of physics died many decades ago” and that “expectation that we can come up with “physically meaningful” solutions (predictions about the future) to macroscopic physical problems ended with Lorenz.” That is, your initial position rejected the possibility of discrete predictions entirely; your updated position allows them. Your second line is a back-pedal from your first.

Deterministic chaos does not end our ability to make meaningful predictions about the future. Physical theory allows us to predict the appearance of chaotic behavior itself, for example. You clearly understand that the solar system is chaotic, but that the positions of the planets can be accurately predicted at least hundreds of millions of years into the future.

This admission on your part undercuts your own claim that predictions are not possible. If it were true that we cannot come up with physically meaningful predictions, then no engineering model would have any bounded predictive power. The same goes for physics-based models. Predictability is ubiquitous in science and engineering. A debate about this is not worth the time.

You wrote, “We know the underlying physics of turbulence, cloud microphysics, and other phenomena.” No you don’t. Cloud physics remains an open question.

Your entire discussion of parameters, as used to simulate air movement around wing surfaces, only reinforces my point. Your explanation is embedded in an engineering model approach, where the parameters allow accurate modeling only within the experimental bounds of explicitly tested conditions. The same parameters are useless outside those bounds. Those parameters are indeed an expression of ignorance of the physics. Were the physics known, empirical parameters would be unnecessary.

The same limit is true for parameterized climate models. Their parameters are adjusted to produce a fit to bounded observables. There is no reason whatever to suppose they are applicable to conditions outside those bounds. Climate model projections fail within the limits of your own argument.

You wrote, “AOGCMs provide reasonable representations of current climate, in part because key parameters have been tuned to do so.”

No, they do not. AOGCMs provide a reasonable reproduction of target observables; the observables that the AOGCM was tuned to reproduce. That is not at all the same as achieving the correct climate energy state and deriving the observables from theory.

I think you know very well that GCMs tuned to reproduce air temperature fail to reproduce precipitation. And vice versa. These complementary failures show that the GCM is not reproducing the climate. It is reproducing chosen observables, and then only within the chosen bounds.

You wrote that Argo error “is random error, not systematic uncertainty.” No it’s not, and yes it is.

The paper you linked, under “4.1. Accuracy of Argo‐Derived Temperature Section,” reports that ARGO temperatures are no better than ±0.5 C and sometimes much worse. And that uncertainty was obtained after the comparative data set was smoothed.

Nowhere do they claim or demonstrate that the ARGO measurement error is random. Because it’s not random. It’s systematic.

The reduction in error you quoted referred to the optimal interpolation (OI) scheme they used, not to the ARGO temperatures. From the paper, just above your quote: “We have also investigated the dependence of the OI scheme on the selected correlation length scales and timescales by running a number of experiments and briefly summarize the results here. For each experiment, one of the selected correlation parameters was altered, while the others remained constant.” That’s the error they’re discussing in your quote; model parameter sensitivity, not ARGO measurement error.

Systematic temperature measurement error is not constant. It is driven by uncontrolled environmental variables. Your supposition that all systematic error is merely a constant offset is a standard, and seemingly universal, error in thinking among climate modelers and their acolyte supporters. They’re so naive about physical error they’re not even wrong. It’s as though they are entirely uneducated in physical error analysis.

You can upper-case “constant” all you like. It won’t establish your wrong argument. The BEST folks don’t even know to include the resolution limits of the historical thermometers. They’re not the people anyone should look to for direction on proper data analysis.

Frank
Reply to  Pat Frank
April 1, 2019 11:18 pm

Pat Frank wrote: “The same limit is true for parameterized climate models. Their parameters are adjusted to produce a fit to bounded observables. There is no reason whatever to suppose they are applicable to conditions outside those bounds. Climate model projections fail within the limits of your own argument.”

Frank wrote: “AOGCMs provide reasonable representations of current climate, in part because key parameters have been tuned to do so.”

Pat Frank replied: “No, they do not. AOGCMs provide a reasonable reproduction of target observables; the observables that the AOGCM was tuned to reproduce. That is not at all the same as achieving the correct climate energy state and deriving the observables from theory.”

Frank now adds: This blog post from Isaac Held shows that an AOGCM is able to reproduce features of our climate that were not used to tune the model. In this case, the AOGCM is predicting the location and strength of the jet stream and how it changes between the seasons. The detail is awesome. The jet stream is a result of the large scale motion of the Hadley, Ferrel, and Polar cells being acted on by the Coriolis effect. The model is clearly doing something right!

https://www.gfdl.noaa.gov/blog_held/60-the-quality-of-the-large-scale-flow-simulated-in-gcms/

It is worth noting what the model does wrong. The model is run at higher resolution than most CMIP5 models. And this happens to be the output of an AMIP run, where SSTs are specified and not controlled by the model. (When the CMIP5 models were forced with historic SSTs, their climate feedback parameters were appropriate for an ECS of 1.5-2.0 K. One might speculate that normal model runs with historic forcing were directing heat to locations on the planet where fewer W/m2 escape to space per degK of average surface warming.) I’d also like to ask how the model’s ability to properly reproduce the jet stream varies with how the model is parameterized.

My favorite climate paper is Tsushima and Manabe (2013), because it compares observed and modeled changes in OLR and reflected SWR during the seasonal cycle. GMST (not an anomaly) rises and falls 3.5 K every year, so the seasonal changes observed by CERES are huge. And reproduced every year. The models all do a great job with the changes in OLR emitted through clear skies, where only water vapor and lapse rate feedbacks operate. The models do a poor and mutually inconsistent job of reproducing the changes in OLR from cloudy skies and SWR from both clear and cloudy skies. The changes in LWR are highly linear and look like responses that are complete within one month, but the changes in SWR are not and appear to have some lagged components (a phenomenon that has been reported by others). IMO, this is proof that we can’t trust the feedbacks (and therefore the climate sensitivity) of AOGCMs.

IMO, an honest and candid assessment of AOGCMs is far better than mischaracterizing the importance of getting the cloud fraction wrong (at least according to 1980’s cloud data). It’s too easy to dismiss all skeptics if we take such arguments seriously. As I mentioned before, models are tuned to have no radiative imbalance with a pre-industrial atmosphere. All models are spun up and run for a couple of centuries to prove that the model can produce a stable temperature (with about 0.1 K of unforced internal variability) before a forcing is applied. The existence of disagreements between various models, and the disagreements with observations of the type you have noted, can be explained by offsetting errors in cloud albedo or emissivity, or surface albedo, among other factors. This must be the case, because a model wouldn’t produce a stable temperature with a large imbalance at the TOA. Or it would equilibrate to a radically different temperature. Before 2000, models required “flux adjustments” because they couldn’t be tuned to afford a stable temperature near 287 K. That isn’t the case today.

Frank
Reply to  Pat Frank
April 2, 2019 12:36 am

Pat Frank wrote: “Argo floats, by the way, exhibit about ±0.6 C of systematic uncertainty, when calibrated in the field (not back in the lab). They cannot give anywhere near the accuracy necessary to measure the tiny change in ocean heat content required to resolve any thermal effect of CO2 emissions.”

We were discussing the ability of the ARGO buoys to measure the current radiative imbalance at the TOA. That means the buoys need to assess heat storage. Section 4.2 of the paper citing a 0.6 K discrepancy between ARGO and other observations in the North Atlantic is titled “Use of Argo Data to Estimate Heat Storage”. ARGO shows that heat storage in the ocean averaged 0.7 W/m2 during the existence of the system. The paper clearly says:

“Results for the main North Atlantic region show a reduction in RMS difference from 29 ± 3 Wm−2 for monthly heat storage to 12 ± 2 Wm−2 for seasonal (i.e., 3 monthly) heat storage and 4 ± 1 Wm−2 at biannual scales.”

How can this be? Due to the Gulf Stream, the temperature in the North Atlantic is spatially heterogeneous and can vary significantly over short distances. That is why – at any given time – an error of 0.6 K can be observed. At any instant, there may be more or fewer buoys than usual in the warmest spots in the Gulf Stream or the colder spots on its flanks. There simply aren’t enough buoys to fully define the temperature field at any given time. However, as the buoys and the currents and eddies drift over time, these problems average out. The authors report a large RMS difference for monthly changes in heat content (and therefore heat uptake) and a much smaller discrepancy for heat uptake over 6 months. The authors obviously believe that the error caused by limited sampling is reduced by measuring over longer time periods. We now have ARGO data for about 360 6-month periods, potentially reducing the error by another factor of 19. It appears the authors of this paper believe the limitations they have characterized for seasonal heat uptake aren’t significant for decadal heat uptake.
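
A numerical sketch of the square-root-of-N averaging being invoked here, in Python with illustrative numbers, assuming the monthly errors really are independent (which is precisely the premise in dispute):

import numpy as np

rng = np.random.default_rng(1)
# 100,000 trials of 12 independent monthly errors with 29 W/m^2 RMS
monthly = 29.0 * rng.standard_normal((100_000, 12))

print(np.sqrt(np.mean(monthly[:, 0] ** 2)))         # one month: ~29
print(np.sqrt(np.mean(monthly.mean(axis=1) ** 2)))  # annual mean: ~29/sqrt(12) = 8.4

On the same logic, averaging 360 independent periods would reduce the error by sqrt(360), or about 19, the factor cited above.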

Now it is possible that there could be systematic errors in the ARGO system. The thermometers (accurate to 0.001 degC) removed from decommissioned buoys show no deterioration with time. If the system’s reported depth has a bias that increases with time, then temperature will appear to increase with time. A systematic error. However, buoys are being replaced about every 5 years, and someone would likely notice newer buoys reading colder than older ones.

Reply to  Pat Frank
April 2, 2019 10:32 am

Frank, your Isaac Held evidence of model/observational conformity is model/model conformity. Held is using reanalysis as his observations.

Reanalysis uses modeled extrapolation of observational data and calls that extrapolation “observations.” Data that have gone through model extrapolations suffer from the same sort of errors that the model projections themselves exhibit. It’s no surprise that models can emulate models.

Secondly, Held himself admits that time-wise projections are unreliable. He says, “Importantly, the model can propagate information from data rich regions into data poor regions if this propagation of information is fast enough compared to the time scale at which errors grow. (my bold)”

The upper-limit time scale over which errors grow to 100x the perturbation from CO2 emissions is a year. A better estimate of the time scale over which a model projection becomes unreliable is a week.

You wrote, “The existence of disagreements between various models and disagreements with observations of the type you have noted can be explained by offsetting errors in cloud albedo or emissivity, or surface albedo, among other factors.”

You offer an exercise in handwaving: “…can be explained by…” A complete physical theory would allow one to say, ‘Is known to be caused by…’

If you don’t know that offsetting model errors do not remove uncertainty from an expectation value, then you don’t know anything about physical error analysis. Every single climate modeler I’ve encountered is in that group.

Your entire second paragraph is an admission on your part that models fail, after which you suppose they are trustworthy.

The ±4 W/m^2 cloud forcing error is not a mischaracterization. It’s a systematic model error that appears in every single step of a projection.
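
A minimal numeric sketch of the propagation being described here, assuming (as this argument does, and as Frank disputes) that the ±4 W/m^2 enters each annual projection step independently and compounds in root-sum-square:

import math

step_sigma = 4.0  # W/m^2: the cloud-forcing error cited above
for years in (1, 10, 100):
    # root-sum-square growth if the error re-enters each annual step
    print(years, math.sqrt(years) * step_sigma)  # 4.0, ~12.6, 40.0 W/m^2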

Your arguments are all over the place, Frank, just as they were when you tried to contest my post about the pseudo-science that is paleo-temperature reconstruction. You lose a sally, and just go on to construct another one. It’s a waste of time.

Reply to  Pat Frank
April 2, 2019 10:37 am

Regarding ARGO buoys, I stand by my comments at April 1, 2019 at 10:08 am.

Nothing you wrote in your April 2, 2019 at 12:36 am follow-up post gainsays the systematic measurement error due to uncontrolled environmental variables that appears in ARGO measurements, Frank. Such systematic error does not appear in laboratory calibrations.

Frank
Reply to  Pat Frank
April 4, 2019 12:36 am

Pat wrote: “Your Isaac Held evidence of model/observational conformity, is model/model conformity. Held is using reanalysis as his observations. Reanalysis uses modeled extrapolation of observational data and calls that extrapolation “observations.” Data that have gone through models extrapolations suffer from the same sort of errors that the model projections themselves exhibit. It’s no surprise that models can emulate models.”

Of course, one always needs to ask whether a feature found in a reanalysis is present in the re-analyzed data or introduced by the biases of the model used to assemble climate data from a variety of sources. In fact, I tried to ask this question when the post first appeared and was misunderstood. The original post appears to have been revised to address this question.

However, you need to remember that most AOGCMs don’t reproduce the jet stream nearly as well as Held’s AOGCM, which had higher resolution than normal and was run in AMIP mode (i.e., with historic SSTs). The seasonal change in the jet stream shown by Held is not a common pattern produced by all climate models. The degree of agreement between reanalyzed observations of winds and the winds predicted by AOGCMs has improved over time as models have become more sophisticated. So this agreement shouldn’t be dismissed as a models-vs-models comparison.

Pat Frank mischaracterizes some of Held’s caveats: “Secondly, Held himself admits that time-wise projections are unreliable. He says, “Importantly, the model can propagate information from data rich regions into data poor regions if this propagation of information is fast enough compared to the time scale at which errors grow. (my bold)”

Frank notes that Held continued: “For climatological circulation fields such as the ones that I have shown here reanalyses provide our best estimates of the state of the atmosphere. For the northern hemisphere outside of the tropics these estimates are very good — I suspect that they provide THE MOST ACCURATE DESCRIPTION OF ANY TURBULENT FLOW IN ALL OF SCIENCE. [My emphasis.] For the tropics and for the southern hemisphere the differences between reanalyses can be large enough that estimating model biases requires more care”

Pat continues: “The upper limit time scale over which errors grow 100x larger than the perturbation of CO2 emissions is a year. A better estimate of the time over which a model projection becomes unreliable is a week.”

Weather projections become less skillful within a week because chaos amplifies minuscule errors in initialization. Climate models are run for long enough periods that chaotic fluctuations in temperature average out in the long run. Negative feedback prevents climate from drifting too far from a stable equilibrium temperature. For most climate models, the standard deviation of the temperature variation in a century-long pre-industrial run is about 0.1 degK, and this stable temperature is within 2 degK of our best estimate of pre-industrial, often much closer. In systems with negative feedback, errors do NOT grow exponentially.

For example, we know that a random walk on average moves an increasing distance from its starting position, a distance that grows with the square root of the number of steps. Now modify your random walk with a negative feedback: heads takes you one step to the right, minus 0.1 steps times the number of steps you already are to the right of the origin. Tails does the opposite. Now your random walk is constrained by the negative feedback to remain near the origin. Negative feedback is inherent in the Stefan-Boltzmann law and in the fact that our planet’s temperature hasn’t drifted far over the last 70 centuries of the Holocene.
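
A runnable Python sketch of this modified random walk (the step size and the 0.1 feedback factor are taken from the description above):

import numpy as np

rng = np.random.default_rng(2)
coin = rng.choice([-1.0, 1.0], size=10_000)  # heads/tails steps

x_free, x_damped = 0.0, 0.0
for step in coin:
    x_free += step                       # plain random walk
    x_damped += step - 0.1 * x_damped    # same coin, with the restoring term

print(abs(x_free))    # typically on the order of sqrt(10_000) = 100
print(abs(x_damped))  # stays near the origin (stationary stddev ~ 2.3)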

Frank earlier wrote, “The existence of disagreements between various models and disagreements with observations of the type you have noted can be explained by offsetting errors in cloud albedo or emissivity, or surface albedo, among other factors.”

Pat notes: “You offer an exercise in handwaving: “…can be explained by…” A complete physical theory would allow one to say, ‘Is known to be caused by…’ If you don’t know that offsetting model errors do not remove uncertainty from an expectation value, then you don’t know anything about physical error analysis. Every single climate modeler I’ve encountered is in that group. The ±4 W/m^2 cloud forcing error is not a mischaracterization. It’s a systematic model error that appears in every single step of a projection.

Frank revises: Model developers are no longer developing models with a significant radiative imbalance at the TOA that must be corrected by flux adjustments. When a model has a significant bias in cloud cover, there MUST BE a compensating bias somewhere else. OTHERWISE these models wouldn’t exhibit a reasonable and stable pre-industrial temperature for a century or two before forcing begins. And despite these offsetting errors, such models all reproduce current climate reasonably well. They get temperature about right, put deserts and rain forests in the proper locations, have ocean currents and winds blowing in the right directions, etc. These models clearly have some utility in reproducing what we observe. But that is not the important question. The important question is: do the models properly represent the increased amount of OLR emitted through the atmosphere and SWR reflected per degK of surface warming? If not, the models aren’t useful for predicting climate sensitivity.

Was error propagation an important part of your work on the determination of molecular structures?

Rod Smith
March 28, 2019 8:27 pm

Yes, and when I read Mann and Ward’s article, I thought the same thing: YOU are the Stalinists. What nerve. Can we say chutzpah?

Greg F
March 28, 2019 8:29 pm

I would want a statistician or two included. One familiar with the published literature. Steve McIntyre and Nic Lewis come to mind.

Reply to  Greg F
March 28, 2019 8:36 pm

Unlikely to include non-US citizens.

Greg F
Reply to  Joel O'Bryan
March 28, 2019 8:58 pm

Unlikely to include non-US citizens.

Did Gavin Schmidt become a US citizen?

Steve O
March 28, 2019 8:37 pm

“when NASA was putting men on the moon, every piece of equipment and every calculation were scrutinized from every possible angle simply because if anything went wrong the mission would fail.”

— When you’re spending a little bit of money, a little bit of due diligence is appropriate. When you’re spending a LOT of money…

“If the alarmists are confident that the science contained in the official reports is spot on, they should welcome a review that would finally put to rest the doubts that have been raised.”

–Do they believe scientific findings can be politicized? Before they accuse the reviewers of holding to foregone conclusions based on biases, I’d like to hear them explicitly state that scientists might hold to preconceived notions and foregone conclusions based on biases. Oh, only those who hold contrarian positions are at such risk. Got it.

Convincing the skeptics who created political opposition that we must do more is supposedly the key to saving the planet. But they don’t mind skipping that step. I suppose skipping it works as long as they can ram policy down our throats. Just don’t accuse them of having an authoritarian bent.

Steve O
March 28, 2019 8:43 pm

“…a furious campaign to stop it has been mounted by the federal climate bureaucracy….”

In what field is it not true that scientists play king-of-the-mountain? The status and prestige of the leaders in any field depend on their not having been wrong about everything. Thus, nobody likes a revolutionary. Even if you believe you are correct, why would anyone want an open debate? Right or wrong, you might lose. Then you tumble down.

SAMURAI
March 28, 2019 9:50 pm

Based on completely invalidated climate model projections, which were made using obviously exaggerated assumptions of CO2 forcing and a complete failure to properly account for other natural climatic variables, the US government is insanely proposing to waste $100+ TRILLION to keep warming below 1.5C by 2100.

The economic and social consequences of the Green New Deal would bankrupt our country and cause a worldwide economic collapse.

Based on observations, other natural climate variables, and more realistic CO2 forcing, if we don’t spend a DIME on CO2/CH4 sequestration, and continue using cheap and abundant fossil fuels at current growth rates for the next 80 years, ECS will likely be in a range of 0.6C~1.7C.

I think not spending a dime makes more sense.

Moreover, if the Svensmark Effect is validated, then during the upcoming 50-year Grand Solar Minimum event, global temps may actually FALL for the next 50 years due to increased cloud cover/albedo.

This audit must also carefully evaluate the HUGE amount of heat added to raw-temperature data and confirm whether these adjustments were scientifically justified, especially the KARL 2015 adjustments.

NONE of CAGW’s 30-year projections of catastrophic sea level rise, severe weather incidence/intensity trends, ocean acidification, falling crop yields, Greenland and Antarctic land ice loss, Arctic ice extents, etc., comes close to reflecting objective reality, which is further evidence that CAGW is already a disconfirmed hypothesis.

Just as Leftists’ insane Russiagate hoax crashed and burned, the CAGW hypothesis should already have been tossed on the trash heap of disconfirmed hypotheses.

It’s high time a detailed audit be conducted to show CAGW is already a disconfirmed hypothesis.

March 28, 2019 11:54 pm

Climategate exposed temperature data manipulation on a planetary scale, but what about TSI data?

http://notrickszone.com/2019/03/25/satellite-evidence-affirms-solar-activity-drove-a-significant-percentage-of-recent-warming/

Scafetta and Willson, 2019


The PMOD is based on proxy modeled predictions, “questionable” modifications, and degraded, “misinterpreted” and “erroneously corrected” results

The PMOD TSI composite “flawed” results were an “unwarranted manipulation” of data intended to support AGW, but are “contraindicated”

I hope the Commission on Climate Security will investigate all of this scientific misbehavior in depth and draw all the consequences needed to stop this scam.

old construction worker
March 29, 2019 4:47 am

Are climate models any good at “what if”? Is “what if” a prediction or a forecast of what may happen? It doesn’t make any difference what you call the outcome. Climate models are based on assumptions.
https://www.masterresource.org/forecasting/simple-model-leaves-expensive-climate-models-cold/
“In an earlier paper, we found that the IPCC’s approach to forecasting climate violated 72 principles of forecasting. To put this in context, would you put your children on a trans-Atlantic flight if you knew that the plane had failed engineering checks for 72 out of 127 relevant items on the checklist?”

aleks
March 29, 2019 7:45 am

Computer models are described quite rightly in the link posted by Chaamjamal (March 28, 6:23 p.m.): “Climate models are pre-programmed with a well connected causation sequence from CO2 emissions to rising atmospheric CO2 concentration to warming driven by way of climate sensitivity.” The problem is that there is in fact a parallelism between the concentration of CO2 and temperature, but no proven causal relationship. More precisely, an increase in temperature leads to an increase in CO2 concentration due to a decrease in its solubility in water, but there is no physical theory explaining an increase in air temperature with increasing CO2.
Instead of a physical theory, the IPCC 1990 report contains a picture showing how solar rays reflected from the Earth are absorbed by “greenhouse gases”. The model of the “greenhouse effect” not only lacks a strictly physical justification, it also contradicts the laws of thermodynamics; see, for example, https://arxiv.org/PS_cache/arxiv/pdf/0707/0707.1161v4.pdf
Therefore, a detailed analysis of the physical validity of the “greenhouse effect” theory is a very urgent task.

tom0mason
March 29, 2019 8:28 am

‘Climate Science’ seemed more up to the task back in 1905 …

For those interested in Climate (or ‘Climate Change™’), here is a link to an old book on the state of climate research in 1905. Called Climatic Changes: Their Nature and Causes, by Huntington and Visher, it is freely available at http://www.gutenberg.org/ebooks/37855 . And yes, CO2, terrestrial variations, solar variations, and stellar effects are all discussed. Remarkably modern considering its age.

D Cage
March 29, 2019 10:04 am

The problem is that there is in fact a parallelism between the concentration of CO2 and temperature,…
I have been trying, without success, to find even a hint of it when matching areas of maximum temperature change to areas of fossil fuel use.
Non-engineers seem to have difficulty with the idea that averages are fine if the distribution is even, but totally meaningless if it is not. When the name changed from global warming to climate change, the required basis of the science changed totally, but the science itself did not change one bit.
The maximum differential is at the Arctic, where fossil fuel use is near zero. The high heat-anomaly region is entirely bounded by lower-anomaly areas. Either they need to show that fossil fuel use in the Arctic sea areas is not, as it subjectively appears, near zero, or they need to show the heat transfer mechanism that explains the warming there, given the clear-cut failure of emissions to trap heat in areas receiving near-zero solar energy.