Climate Modeling Dominates Climate Science
By Patrick J. Michaels and David E. Wojick

What we did
We found two pairs of surprising statistics. To do this we first searched the entire scientific literature of the last ten years, using Google Scholar, looking for modeling. There are roughly 900,000 peer-reviewed journal articles that use at least one of the words model, modeled, or modeling. This shows that there is indeed widespread use of models in science. No surprise there.
However, when we filter these results to include only items that also use the term climate change, something strange happens: the number of articles is reduced only to roughly 55% of the total.
In other words, it looks like climate change science accounts for fully 55% of the modeling done in all of science. This is a tremendous concentration, because climate change science is just a tiny fraction of the whole of science. In the U.S. federal research budget, climate science is just 4% of the whole, and not all climate science is about climate change.
In short, it looks like less than 4% of science, the climate change part, is doing about 55% of the modeling in the whole of science. Again, this is a tremendous concentration, unlike anything else in science.
We next found that when we search on the term climate change alone, there are only slightly more articles than before. In fact, the number of climate change articles that include one of the three modeling terms is 97% of those that merely include climate change. This is further evidence that modeling completely dominates climate change research.
To summarize, it looks like something like 55% of the modeling done in all of science is done in climate change science, even though that field is a tiny fraction of the whole of science. Moreover, within climate change science almost all the research (97%) refers to modeling in some way.
This simple analysis could be greatly refined, but given the hugely lopsided magnitude of the results, it is unlikely that the conclusions would change much.
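For concreteness, here is the arithmetic behind those percentages as a minimal Python sketch. Only the first count is a figure reported above; the other two are back-calculated from the stated 55% and 97% shares, so treat them as hypothetical placeholders rather than fresh query results.

```python
# Minimal sketch of the arithmetic described above. Only modeling_hits is a
# figure reported in the text; the other counts are back-calculated from the
# stated 55% and 97% shares and are hypothetical placeholders.
modeling_hits = 900_000          # articles using "model", "modeled", or "modeling"
climate_and_modeling = 495_000   # of those, also mentioning "climate change" (~55%)
climate_change_hits = 510_000    # implied total for "climate change" alone (so 495k is ~97%)

share_of_all_modeling = climate_and_modeling / modeling_hits
modeling_within_climate = climate_and_modeling / climate_change_hits

print(f"Climate change share of all modeling articles: {share_of_all_modeling:.0%}")
print(f"Modeling share within climate change articles: {modeling_within_climate:.0%}")
```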
What it means
Climate science appears to be obsessively focused on modeling. Modeling can be a useful tool, a way of playing with hypotheses to explore their implications or test them against observations. That is how modeling is used in most sciences.
But in climate change science, modeling appears to have become an end in itself. In fact it seems to have become virtually the sole point of the research. The modelers’ oft-stated goal is to do climate forecasting, along the lines of weather forecasting, at local and regional scales.
Here the problem is that the scientific understanding of climate processes is far from adequate to support any kind of meaningful forecasting. Climate change research should be focused on improving our understanding, not on modeling from ignorance. This is especially true when it comes to recent long-term natural variability, the attribution problem, which the modelers generally ignore. It seems that the modeling cart has gotten far ahead of the scientific horse.
Climate modeling is not climate science. Moreover, the climate science research that is done appears to be largely focused on improving the models. In doing this it assumes that the models are basically correct, that the basic science is settled. This is far from true.
The models basically assume the hypothesis of human-caused climate change. Natural variability comes in only as a short-term influence that is negligible in the long run. But there is abundant evidence that long-term natural variability plays a major role in climate change. We seem to recall that we emerged from the latest Pleistocene glaciation only very recently, around 11,000 years ago.
Billions of research dollars are being spent on this single-minded process. In the meantime the central scientific question – the proper attribution of climate change to natural versus human factors – is largely being ignored.
An end to a means. In this case an agenda. Empirical data does not matter to these people.
When it comes to models I always felt that Josh’s take on them back in 2015 was spot on:
Once again, this documentary is way ahead of its time and is proving very prophetic. This clip highlights the graphic by Josh, and Dr. Christy has a great quote on model-based science. Unfortunately this documentary isn’t shown in schools.
https://youtu.be/QowL2BiGK7o?t=15m35s
On a physics blog I read, “… experimental physicists and theoretical physicists must work together. Their symbiotic relationship – with theorists telling experimentalists where to look, and experimentalists asking theorists for explanations of unusual findings – is necessary, if we are to keep making discoveries.” There was a time when the distinction between the two sorts was not so stark. It seems that climate science suffers from the lack of such a distinction. Or rather, from the relative difficulty, on the experimentalist side, of obtaining direct measurements of so much highly useful data. Would the money spent on Cray computers for modeling be better spent on [for example] a high-resolution real-time satellite at L2 monitoring outgoing nighttime LWIR? That sort of data would provide plenty of useful information for falsification of, or limits on, lots of model parameters, it would seem.
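Purely to illustrate that falsification idea: the sketch below invents a toy one-parameter “model” of nightly mean OLR and synthetic “observations”, then rejects parameter values inconsistent with the observed mean. No real instrument or GCM is involved; every number is made up.

```python
# Toy falsification sketch: synthetic OLR "observations" constrain one free
# model parameter. All values are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

def toy_olr(feedback_param, n=365):
    """Toy nightly-mean OLR series (W/m^2) with a hypothetical linear
    dependence on a single free parameter, plus noise."""
    base = 240.0 - 10.0 * feedback_param
    return base + rng.normal(0.0, 2.0, n)

observed = toy_olr(feedback_param=0.5)  # stand-in for the satellite record

# Reject candidate parameter values whose predicted mean OLR differs from
# the observed mean by more than ~2 standard errors.
se = observed.std(ddof=1) / np.sqrt(observed.size)
for candidate in [0.0, 0.25, 0.5, 0.75, 1.0]:
    predicted_mean = 240.0 - 10.0 * candidate
    verdict = "consistent" if abs(observed.mean() - predicted_mean) < 2 * se else "rejected"
    print(f"feedback={candidate:4.2f}: {verdict}")
```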
Randy Bork wrote: “Would the money spent on Cray computers for modeling be better spent on [for example] a high-resolution real-time satellite at L2 monitoring outgoing nighttime LWIR? That sort of data would provide plenty of useful information for falsification of, or limits on, lots of model parameters, it would seem.”
What a good suggestion! Real data instead of speculation. Are you listening, NASA? I bet Trump will listen. Then NASA will listen for sure.
And what do computers do? What are they, if not a reflection of whatever mind or thought process programmed the thing?
One of two possibilities is that The Computer is the ultimate appeal to authority. No one is going to pick a fight with a computer. But computers do what computers do; you wouldn’t argue with a 300 hp tractor in a ploughing contest, would you? So it is with computers.
They are the modern-day perfect oracles. Digital = right or wrong. On/off. Black/white. No compromise.
And how do you know if its output is right or wrong?
Simple. It is what you expect it to be.
If it produces something unexpected, then it’s obviously wrong and must be re-programmed.
The second possibility follows on from the first. Computers produce what is expected. They are then safe, predictable, not a threat. A return to the womb, if you like.
Just how much damage can be done before this (insanity?) stops?
Now *there’s* a thing to worry about.
Peta in Cumbria wrote: “Just how much damage can be done before this (insanity?) stops?
Now *there’s* a thing to worry about.”
Quite a bit of damage has already been done. Just look at how Europe is crippling its electricity production over this CAGW scam. Electricity prices in Europe are skyrocketing, and electricity availability gets more uncertain the farther down this road they go.
All because of the unproven CAGW theory and the fear its advocates have instilled in people.
CAGW is not only wasting enormous amounts of money, it is also putting people’s lives in real jeopardy by causing the costs of electricity to soar, and crippling the electrical grid. People die when they can’t get electricity.
They’ll riot if they can’t get internet!
As a European I just don’t see what you are talking about. Electricity availability is rock steady and has been for as long as I’ve been around, and prices are somewhat higher than in the US (+30%?) but have been going down for the last couple of years. In my country at least. Now gasoline, that is 4x more expensive in my country than in the US. But that is mostly due to taxes, of course.
And anyway, electricity is a negligible part of overall household expenditure, so it’s a pretty moot point.
I speak as someone who used computer model runs of weather out to T+120 professionally.
They are merely a *possible* outcome, with a probability of success that ranges from 0 to 100%.
The further forward in time we project, the more uncertain we become of the outcome.
We do more runs and end up with an ensemble (as happens in GCMs), each time altering the starting conditions a little.
What does that tell us?
It tells us the sensitivity of the atmosphere to initial conditions and narrows the range of possible error at some future time.
IOW: Model runs can never be truly deterministic.
We learn *stuff* from them.
End of.
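To see that ensemble idea in miniature, here is a toy sketch using the Lorenz (1963) system as a stand-in for the atmosphere: twenty runs from slightly perturbed initial conditions, with the spread growing as the forecast lead time increases. The perturbation size, step count, and integration scheme are arbitrary choices for illustration.

```python
# Toy ensemble: Lorenz-63 runs from slightly perturbed initial conditions.
# Spread across members grows with lead time, illustrating sensitivity to
# initial conditions. Simple Euler stepping; purely illustrative.
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return state + dt * np.array([sigma * (y - x),
                                  x * (rho - z) - y,
                                  x * y - beta * z])

rng = np.random.default_rng(42)
base = np.array([1.0, 1.0, 1.0])
members = [base + rng.normal(0.0, 1e-4, 3) for _ in range(20)]  # tiny perturbations

for step in range(1, 2001):
    members = [lorenz_step(m) for m in members]
    if step % 500 == 0:  # report spread at a few lead times
        spread = np.std([m[0] for m in members])
        print(f"t = {step * 0.01:5.1f}: ensemble spread in x = {spread:.4f}")
```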
There is a parallel in post-modern philosophy, where the intellectual effort is fixated on creating a plausible alternative to the reality we must live in. Based on how all their proposed solutions seem to converge on socialism it appears that most climate scientists are also post-modernists. So for them their work is a “two-fer”.
Maybe they use models because reality is too discomforting to their (and their superiors’) beliefs?
I think real models should be hired to present the climate models to the public.
And not the really skinny runway models .. and they should demonstrate clothing appropriate for global warming, such as a bikini.
If the taxpayers money is going to be wasted, shouldn’t we get something of value?
LoL . Thanks .
Too many models are never enough .
Contribute to Trump’s campaign; he is the candidate most likely to do it.
I know what you are saying, but strictly speaking all science is models. A scientific theory is a model!
Whatever the field, the problem is that a computer’s output is taken as the conclusion rather than a piece of the puzzle used to arrive at a conclusion.
Punch a bunch of numbers into a “super pocket calculator” and some will believe that they only have $2 in their wallet rather than the $20 they can count. It doesn’t occur to them that maybe “the puncher” got the decimal place wrong or their “super pocket calculator” is defective.
“Isn’t science wonderful, ladies and gentlemen? You get such a wholesale return of conjecture from such a trifling investment in facts.” — Mark Twain
This is not hard to understand. There is a lot of government grant money to be had, but acquiring facts in the climate change business is expensive and difficult. Who wants to freeze in Antarctica drilling holes in the ice, when one can sit comfortably at a keyboard sipping a latte and still publish? Models are wonderful sources of revenue.
Kudos to the real scientists who still insist on gathering real data. Fie on the ones who rearrange said precious data to fit their models, instead of the other way around.
“Get your facts first, then you can distort them as much as you please.” Mark Twain
The internal validity and accuracy of the climate models serve as the basis for the talking point / story line for the entire policy development, and that basis is so full of holes it leaks like a sieve. The climate models are deterministic models based on assumptions (as Pat Michaels points out); they do not account for natural causes and they misrepresent the attribution of cause – the models are fit to man-made greenhouse gases because that is what the establishment wants the attribution to be.
I think there is a need to focus on the logical fallacy in this approach: climate models built with a predetermined stack of independent variables give an output that reflects the establishment’s expectation of finding cause in those same predetermined variables. It is circular reasoning, and it is misleading, bordering on deception, especially when global policies are being set based on the results. The policy push would be severely weakened if this were properly challenged.
In this regard, I think there is a great need to study climate modeling within a framework of probability and causal reasoning / statistical causality, along the lines of the work by Judea Pearl. Pearl won the ACM A.M. Turing Award in 2011 for fundamental contributions to artificial intelligence through the development of a calculus for probabilistic and causal reasoning (http://amturing.acm.org/award_winners/pearl_2658896.cfm). He created the representational and computational foundation for the processing of information under uncertainty, pioneering Bayesian networks and a mathematical framework for causal inference.
The paper by Judea Pearl, “Causal Inference in Statistics” (http://ftp.cs.ucla.edu/pub/stat_ser/r350.pdf), discusses a framework based on the following (a toy sketch follows the list):
1. Counterfactual analysis
2. Nonparametric structural equations
3. Graphical models
4. Symbiosis of counterfactual and graphical methods.
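The toy sketch below gives a flavor of what that machinery buys you for attribution questions: when a confounder drives both X and Y, the raw observational association overstates the causal effect, and adjusting for the confounder recovers it. The structural model and its coefficients are invented for illustration, not taken from Pearl’s paper.

```python
# Toy structural causal model: confounder Z drives both X and Y, so the
# naive regression of Y on X is biased; adjusting for Z recovers the true
# causal effect. All structure and coefficients are invented.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

z = rng.normal(0.0, 1.0, n)                       # confounder
x = 2.0 * z + rng.normal(0.0, 1.0, n)             # X partly caused by Z
y = 1.0 * x + 3.0 * z + rng.normal(0.0, 1.0, n)   # true effect of X on Y is 1.0

naive = np.polyfit(x, y, 1)[0]  # confounded estimate (~2.2)

# Backdoor adjustment: include Z in the regression, read off the X coefficient.
design = np.column_stack([x, z, np.ones(n)])
adjusted = np.linalg.lstsq(design, y, rcond=None)[0][0]

print(f"naive slope (confounded): {naive:.2f}")
print(f"adjusted slope (causal):  {adjusted:.2f}")
```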
It’s worse than this: it’s not just GCMs; the same hypothesis is used to infill and homogenize the surface series.
BEST does out-of-band testing, but their process is to construct a field that represents climate. Mosh says they can get this field with only latitude, altitude, and whether a station is near a large body of water; the difference between the field and a measurement is weather.
So, two points. First, the field has to have the hypothesis baked into its temperatures. Second, there is no global average temperature measurement for any of the out-of-band stations; in fact they might only record a min and a max temperature, so the measurements have to be processed before they can be compared to the field.
Didn’t da Vinci say that, when looking at a block of stone, you had to imagine the statue inside the block and remove what wasn’t statue?
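Here is a rough toy version of that field-vs-weather decomposition, using synthetic stations: fit a climatological “field” from station metadata alone (latitude, altitude, coastal flag) and treat the residual as weather. The real BEST procedure is far more elaborate, and every coefficient below is made up.

```python
# Toy field-vs-weather decomposition: a "field" regressed from station
# metadata, with the residual treated as weather. Synthetic data throughout.
import numpy as np

rng = np.random.default_rng(7)
n = 1_000

lat = rng.uniform(-60, 60, n)                  # degrees latitude
alt = rng.uniform(0, 3000, n)                  # metres above sea level
coastal = rng.integers(0, 2, n).astype(float)  # 1 if near a large body of water

# Synthetic "true" annual mean temperatures with weather noise on top.
temp = (30.0 - 0.4 * np.abs(lat) - 0.0065 * alt + 1.5 * coastal
        + rng.normal(0.0, 1.0, n))

# Fit the climate field from metadata alone.
design = np.column_stack([np.abs(lat), alt, coastal, np.ones(n)])
coef, *_ = np.linalg.lstsq(design, temp, rcond=None)
field = design @ coef

weather = temp - field  # residual = "weather"
print(f"residual ('weather') std dev: {weather.std():.2f} degC")
```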
Judea Pearl and others published an article in the Bulletin of the American Meteorological Society (Jan 2016) titled:
“Causal Counterfactual Theory for the Attribution of Weather and Climate-Related Events” (http://journals.ametsoc.org/doi/abs/10.1175/BAMS-D-14-00034.1)
You say the science of climate processes is far from adequate, i.e. in this article you are arguing for more modelling. You say we need more understanding of natural vs. human factors, i.e. you want more modelling.
To make an argument, you have to compare climate science with other types of science, to see how important modeling is.
For instance, in studies of the heat resistance of nickel superalloys (important for jet engines), we can do real-world, relevant experiments.
In studies of how changes in the DMD gene (responsible for Duchenne’s dystrophy) affect patients, we can do real-world studies.
In studies of black hole radiation, we can’t do many studies; it is all modeling, although the word modeling may not be used in that field.
In other words, this post is precisely the sort of sloppy, easy-to-do, not-a-lot-of-thought piece that you accuse the warmers of. Irony, thy name is legion.
Even back in the 1960’s the National Air Pollution Control Administration (now the EPA) was fixated on mathematical modeling, so the current fixation on modeling is no great surprise. It’s become tradition. A number of corporations like Battelle Memorial Institute were always submitting modeling proposals for review and (hopefully) government funding. Government Funding being the operative words.
WOULD IT FIT?
Look at the Cray picture carefully. Would it fit into your basement?
That is the only question that matters.
In Australia, as they fire the climate fellows, they are also looking for a home for their computer models. Presumably with the supercomputer and a hefty power bill.
You could try to cover that big (fossil-based) power bill by mining bitcoins.
Of course you could also try to sell the computer model predictions, but that market is shrinking fast.
Seriously speaking, a lot of people wrote above about computationally intensive models in other fields.
In climate, the difference is that some of the main data, and even some fundamental, dominating processes, are missing, especially concerning cloud cover and cloud seeding. So even a tenfold or hundredfold increase in computing power would not make much of a difference to the long-term predictions.
The most interesting questions are the causes and mechanisms of the millennial oscillations, such as MWP-LIA-now, and of the multidecadal oscillations.
Remember that when a journal tried to address the latter, they terminated it, like fascist thugs.
http://bigcitylib.blogspot.com/2014/01/copernicus-publishing-temrinates.html