Climate science appears to be obsessively focused on modeling – Billions of research dollars are being spent in this single-minded process

Climate Modeling Dominates Climate Science

By PATRICK J. MICHAELS and David E. Wojick

The Cray Ecoplex NOAA GAEA supercomputer used for modeling at Oak Ridge Lab. Gaea was funded by a $73 million American Reinvestment and Recovery Act of 2009 investment through a collaborative partnership between NOAA and the Department of Energy.

What we did

We found two pairs of surprising statistics. To do this we first searched the entire literature of science for the last ten years, using Google Scholar, looking for modeling. There are roughly 900,000 peer-reviewed journal articles that use at least one of the words model, modeled, or modeling. This shows that there is indeed widespread use of models in science. No surprise there.

However, when we filter these results to only include items that also use the term climate change, something strange happens. The number of articles is only reduced to roughly 55% of the total.

In other words, it looks like climate change science accounts for fully 55% of the modeling done in all of science. This is a tremendous concentration, because climate change science is just a tiny fraction of the whole of science. In the U.S. federal research budget, climate science is just 4% of the whole, and not all climate science is about climate change.

In short, it looks like less than 4% of science, the climate change part, is doing about 55% of the modeling done in the whole of science. Again, this is a tremendous concentration, unlike anything else in science.

We next find that when we search just on the term climate change, we get only slightly more articles than before. In fact, the number of climate change articles that include one of the three modeling terms is 97% of those that merely include climate change. This is further evidence that modeling completely dominates climate change research.

To summarize, it looks like roughly 55% of the modeling done in all of science is done in climate change science, even though that field is a tiny fraction of the whole of science. Moreover, within climate change science almost all the research (97%) refers to modeling in some way.

This simple analysis could be greatly refined, but given the hugely lopsided magnitude of the results it is unlikely that they would change much.
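The arithmetic behind the two statistics is simple enough to sketch. The counts below use the rounded figures quoted above; the standalone climate change total is an illustrative assumption consistent with those figures, not a reported search result:

```python
# Back-of-envelope version of the two ratios discussed above.
# modeling_total and the 55%/97% figures come from the text; cc_total
# is an assumed placeholder consistent with those figures.

modeling_total = 900_000    # articles using "model", "modeled", or "modeling"
modeling_and_cc = 495_000   # of those, articles also mentioning "climate change"
cc_total = 510_000          # assumed: all articles mentioning "climate change"

share_of_modeling = modeling_and_cc / modeling_total      # ~55%
share_of_cc_with_models = modeling_and_cc / cc_total      # ~97%

print(f"climate change share of modeling papers: {share_of_modeling:.0%}")
print(f"climate change papers mentioning models: {share_of_cc_with_models:.0%}")
```
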

What it means

Climate science appears to be obsessively focused on modeling. Modeling can be a useful tool, a way of playing with hypotheses to explore their implications or test them against observations. That is how modeling is used in most sciences.

But in climate change science modeling appears to have become an end in itself. In fact it seems to have become virtually the sole point of the research. The modelers’ oft-stated goal is to do climate forecasting, along the lines of weather forecasting, at local and regional scales.

Here the problem is that the scientific understanding of climate processes is far from adequate to support any kind of meaningful forecasting. Climate change research should be focused on improving our understanding, not modeling from ignorance. This is especially true when it comes to recent long-term natural variability, the attribution problem, which the modelers generally ignore. It seems that the modeling cart has gotten far ahead of the scientific horse.

Climate modeling is not climate science. Moreover, the climate science research that is done appears to be largely focused on improving the models. In doing this it assumes that the models are basically correct, that the basic science is settled. This is far from true.

The models basically assume the hypothesis of human-caused climate change. Natural variability only comes in as a short-term influence that is negligible in the long run. But there is abundant evidence that long-term natural variability plays a major role in climate change. We might recall that we only very recently emerged from the latest Pleistocene glaciation, around 11,000 years ago.

Billions of research dollars are being spent in this single-minded process. In the meantime the central scientific question – the proper attribution of climate change to natural versus human factors – is largely being ignored.

201 thoughts on “Climate science appears to be obsessively focused on modeling – Billions of research dollars are being spent in this single-minded process”

  1. This is really what Carl Sagan was referring to. Billions of dollars and billions of models, more models than there are grains of sand on the beaches of the earth.

    • It seems to me that climate models never seem to compute the expected climate at ANY place on earth where actual climate related measurements are made.
      A model (of anything) should first of all behave just like the real anything, which means you should be comparing the real and the modeled (in the same places).

      • Why would I want to do that ??
        I’ve seen no climate change that seems worthy of note. Well I’ve never been to the Antarctic highlands so there are climate extremes I haven’t experienced; but I also haven’t seen the plots of ANY climate models or groups of climate models that would even hint that they even related to this planet, let alone anything real. If I ever get to where I start to think climate doesn’t change, I’ll let everybody know. I have no idea where the notion that ANYBODY doesn’t believe climate changes ever came from. Certainly wasn’t my idea.
        Yes I do believe CO2 captures 15 micron LWIR photons, and I do believe CO2 in the air has gone from 315 ppm to circa 400 since IGY 1957/58.
        So what; doesn’t seem to have done anything much that I can see.

      • Whilst I’m highly sceptical of the utility of current GCMs with respect to policy and, frankly, appalled by the amount of speculative papers based on model output, I think this misses the point. It’s also easy strawman material for those who give much more credence to model output than I think they should. GCMs, we are told, are not meant to simulate or compute the existing earth climate; they are meant to model the response of the climate system to various forcings. It’s the trend, response and sensitivity that are important. I believe it’s a stronger argument to focus on the fact that GCMs, in nearly all cases, clearly get these wrong too, and that the ‘model ensemble’ is statistical garbage whose only apparent purpose is to broaden the spread through which observations run. If each individual model were run many times to understand its precision with respect to input parameters, then compared against observations, I reckon we’d be able to cull at least 97% of them.
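That culling procedure can be sketched in a few lines. Everything here is invented for illustration: the observed trend, the model names, and each model’s ensemble of trend estimates. The test is simply whether a model’s own spread of runs contains the observation:

```python
# Hypothetical cull: keep a model only if the observed trend falls inside
# the range spanned by that model's own ensemble of runs. All data invented.

observed_trend = 0.11  # degrees C per decade, illustrative

model_runs = {
    "model_A": [0.08, 0.10, 0.13],   # spread straddles the observation -> keep
    "model_B": [0.25, 0.28, 0.31],   # runs too warm -> cull
    "model_C": [0.02, 0.04, 0.05],   # runs too cool -> cull
}

kept = [name for name, runs in model_runs.items()
        if min(runs) <= observed_trend <= max(runs)]

print("models retained:", kept)   # -> ['model_A']
```

A real comparison would of course use full time series and proper statistics; the point is only the shape of the test.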

      • You can’t “model” what you don’t know. To model the climate would require actually knowing all the ins and outs and all the factors that affect it. You can write “simulations” of things you don’t know, like Sim City and Sim Farm or Sim Climate. You don’t need to understand the subject you are simulating all that well to approximate it. Climate “models” do not exist at this time, since we don’t have enough understanding of climate to model it. And that is why it is so frustrating to hear them say that they are worthwhile. In another 50 years, when due diligence in scientific work brings enough understanding of climate, they can write better simulations, but they won’t be able to model climate until there is enough actual raw, unaltered data to base their model on in the first place. The same applies to other fields in science as well.

  2. Billions of research dollars are being spent in this single-minded process. Billions of dollars that could have gone into repairing America’s crumbling infrastructure instead of into the pockets of green scammers.

  3. We have hundreds of millions of years of climate history, and we are postulating on a natural process that has gone on for billions of years.

  4. Moreover, within climate change science almost all the research (97%) refers to modeling in some way.
    There’s that magic 97% number again. ;->

      • Whoa! At the time I viewed this page there were 272,977,970 WUWT views…. 97 and 97 again! That’s it … tinfoil hat time!

      • Ha haa I remember Cook’s Constant from school. The bovine scatology number you used to get the answer you wanted

    • In marketing it’s recommended to price your product at $97 rather than $100 or $95; it seems more believable to the consumer.
      So it’s a marketing ploy, not real science.

  5. I think I’d be hard-pressed to find any paper that I’ve been a co-author on that included the term “climate change” that didn’t also include some form of the term “model.” If we were using a statistical (empirical) model ourselves, we most certainly were comparing our result to those based on some sort of “model.” The latter examples were not envisioned as explicit “model” improvement projects, but rather either formal or informal model testing.

    • Indeed so. This paper seems to identify occurrences of “model” with GCMs, as suggested by the Cray picture. But as it also says, models are basic in science, and climate papers will use many kinds – statistical models, radiative models, etc. Not all cost billions.

      • I agree, which is why Wojick and I have a pretty vigorous debate about the utility of semantic analysis. Nonetheless, it seems very odd that “model” would appear so much less in the other disciplines that presumably may use the same statistical tools.
        I’m not so sure about this avenue of research, or, perhaps, whether it yields meaningful results. More than anything else, I put this up to see what kind of comments it would generate, and I think Chip and Nick are right in their observations.

      • So, are any of you suggesting climate models are not central to the CAGW case? I find that extremely hard to doubt, and if true, then the “science” done using the models’ outputs/results as though they were credible indicators of real world eventualities is, in effect, an extension of the money spent on those models. Don’t see a lot of “What effect would global cooling have on critter/environment X” . . do you?

    • Yes. The article only sees the word “model”, it doesn’t see how it is used. There aren’t too many climate hypotheses that can be tested without some kind of model. So the distinction needs to be made, somehow, between the models whose use is regarded as research and research which is tested using a model. May I be so bold as to classify these as “bad” and “good” model uses respectively. (Well, good if done properly).
      Having said that, though, the reality is that the vast majority of all climate research has “bad” model use. Or certainly, it seems, where public funding is used. To compound the problem, the “bad” use models are unfit for purpose, because their structure is upside-down. As I explained here the models are simply low-resolution weather models which necessarily become hopelessly inaccurate in a few days. When they then aggregate their weather results into a climate result, the whole thing necessarily remains a work of fiction. For a model to be useful for this purpose, it would surely have to be structured as a climate model.

      • It is not surprising that “there aren’t too many climate hypotheses that can be tested without some kind of model”, since at root the logical difference between “hypothesis” and “model” is a minor one.
        Hypotheses are probably commonly understood to refer more to a binary sort of model, with an answer like “yes” or “no”. Likewise, models may typically tend to predict more complicated curves, but certainly the word “model” could be used to refer to a binary sort of question.
        Perhaps the prevalence of “models” as a term of art seen in climate science is nothing more than a historically based convention.

      • I’m not seeing how either of those two uses of a model are legitimate. A computer model produces only what it is programmed to produce, so I can’t see it as a legitimate source of “research.” Nor do I see how the output of a computer model is in any way useful in testing research, for the same reason. I can see a model used as a source of a hypothesis to be tested by actual research.
        As for why climate science accounts for so much of the modeling done in science, the reason seems obvious; it’s the only tool in a climate scientist’s arsenal to quantify any postulated climate phenomenon. You can’t conduct any type of controlled experiment on the planet’s climate system, and seeing as there is only one Earth, you can’t perform any kind of statistical study like an epidemiologist might do to determine the effect of, say, butter consumption on cholesterol levels. In other words, honest-to-goodness scientific procedures aren’t available to the modern climate scientist who might want to explore how much X you get for a given change in Y. But instead of being honest and saying “we don’t know and we will never know,” the modern climate scientist fabricates their evidence by programming a computer to simulate the way they think (wish) the climate behaves, and then uses the output of their creation as “data” to support the assumptions they used to program the computer.
        When the computer output doesn’t predict actual measurements taken later (and only a fool would think it within the realm of possibility that a mathematical model could do that, given the pitifully small amount of time we’ve had to study the Earth’s climate and the overwhelming handicap we have when collecting data and experimenting on the Earth’s climate system), the modern climate scientist cheats using a variation on the Texas Sharpshooter fallacy. Since past data is known at the time the model is programmed, the model can be tuned to generally show the historical up and down trends of past climate, and then all you have to do is later graph the prediction and future measurements using anomalies, while cherry picking the base or reference period for the anomalies. In this manner, the subsequent measurements can be vertically lined up to fall within the very wide predictive portion of the range bands of the model, thereby “painting the target” within the model’s predictions, and while the past measured temperature anomalies may drift in and out of the models’ ranges, they generally track the inflection points of the past climate, since those were known and could be pre-programmed into the model. All in all, it looks nice to the average journalist and politician, but it’s still just an illusion.
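The baseline trick described above is easy to demonstrate with invented numbers: re-basing anomalies shifts the whole curve vertically without changing its shape, so the choice of reference period changes how a comparison looks. This sketch illustrates only the mechanism, not any real dataset:

```python
# Illustration of how the choice of anomaly baseline shifts a curve
# vertically without changing its shape. All temperatures are invented.

temps = [14.0, 14.1, 14.3, 14.2, 14.5, 14.6]   # "measured" annual means, deg C

def anomalies(series, base_start, base_end):
    """Anomalies relative to the mean of series[base_start:base_end]."""
    base = sum(series[base_start:base_end]) / (base_end - base_start)
    return [round(t - base, 2) for t in series]

early_base = anomalies(temps, 0, 2)   # baseline = cool early years
late_base = anomalies(temps, 4, 6)    # baseline = warm late years

# Same shape, different vertical offset: the later baseline makes every
# anomaly look smaller, the earlier one makes every anomaly look larger.
print(early_base)
print(late_base)
```
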

        • In most other fields people don’t write articles about their everyday use of models.
          In fields like algorithmic high-frequency trading, where powerful languages like those are used, they keep their models secret and secure if they work, and bury them silently if they don’t.

    • I do think that with the advent of GCMs, “models” came to dominate the climate science literature–not through GCM-development papers alone (although there certainly are plenty of those), but also through papers using GCM output either to drive impact analyses (these are a dime a dozen) or papers that compare observations with model-derived predictions/projections (if even in an off-hand fashion).
      I think Pat and David’s analysis shows this general result, but probably requires refinements in order to be able to drill down to specifics.

    • whew! – it’s a relief that some question the validity of the authors’ method – it includes but doesn’t exclude – and does neither wisely

    • Chip and Nick, can you explain why the rest of science does not talk this way? I think the dominance of modeling in climate science is so entrenched that we cannot imagine it being otherwise, that is we cannot imagine a true science of climate change. Would that we could.

      • I think a lot of it is that in other fields of physical science you work up from fundamental classical equations — which, along with the necessary math, are in the first textbooks bought and the first things taught to undergraduates.
        There is no such foundation for “climate science”. You cannot find the essential quantitative sequence of equations explaining how optical phenomena “trap” greater heat at the bottoms of atmospheres than at their tops. It should be trivial to point to the page or so of non-optional equations in an intro text and say “plug in these values for the parameters and you see it get X much hotter on the side away from the source”. But you can’t.
        Thus 70+ never-rejected “models” and decades of stagnation.

  6. climate forecasting:economic forecasting::climate modeling:economic modeling
    a.k.a. grossly inaccurate.

      • MP, my eyes are not what they used to be. Hint: hold down the control key and scroll your mouse wheel.

    • Ideas have Consequences: Post-modern philosophy has led to post-modern science. Everyone is entitled to their own facts.

    • For all of the people pictured in that article, you need to find a picture with them holding a hat. Preferably in front of them, open end up.

  7. …and the federal observation network lay in ruins. Just think what could be done to improve climate measurement with a small fraction of the modeling money. At some point, there may not be any credible observations left to calibrate climate models, and then we have a self-fulfilling prophecy – modeled “observed” data and modeled forecasts. Who needs to verify anything? Gore triumphs!

  8. The models basically assume the hypothesis of human-caused climate change

    I spent 15 years as an electronics simulation expert, and this was what got me looking at GCMs. It’s great if it’s right, but they use modeling to confirm attribution of observations, which are then used to validate the models.

    • Are you saying that “data” are used to set up the models, and then the models are validated by producing the “data” that were used to set them up?

      • Right here, you have the long and the short of it. This is what many people consider to be the fundamental failing of the models. The modelers would have you look at the devilishly complex calculations embodied in how the models do what they do. Others take a more global view to see what the models are doing overall. The output of the models simply reaffirms the assumptions built into them; the rest is smokescreen.

    • Do you mean SPICE modeling or the like ??
      I did a lot of SPICE like modeling of CMOS Analog/Digital circuits, which included the real Semiconductor Physics of the specific structures I was laying out.
      We never ever made any masks for any ICs that did not already perform properly in the SPICE environment, and that included what the hell the circuit would do during the power turn-on transient. Quite often, a circuit that would function properly with the correct power supply voltages applied would simply never get through the transient turn-on states to ever reach that steady state operating condition.
      I designed some beautifully elegant Analog circuits, that were damnably clever, until the SPICE discovered it wouldn’t turn on.
      You couldn’t afford enough prototype engineering runs in wafer fab to find out such glitches.

      • Modelling doesn’t necessarily mean you need a Terra-Computer.
        Years ago, you could buy a three bit mechanical flip flop “logic” gizmo from one of those science hobby stores. It was just a bunch of levers and such that you could assemble into say three JK flip flops, with various inputs and feedbacks, and you could actually make a three bit counter out of it.
        You slid a lever back and forth, and that was the clocking signal, and the damn thing would cycle through the states.
        Well of course, such a three bit circuit must have eight possible states.
        I had designed a very high speed counter (for that time). It had a total count of 200,000 and it was an up down counter that could run at a clock speed of 200MHz. The very fastest then available logic flip flops, were Motorola MECL3 D-flipflops, that could clock at 300 MHz. Nobody made a JK flip flop that would toggle at 300 MHz.
        Well my counter needed to be parallel loaded with a number for the terminal count, and then counted down to zero, and reloaded. But it had to be able to divide the input clock frequency by ANY integer from 1 to 200,000, at 200 MHz.
        So you had to be able to detect the zero state (or terminal count); enable the parallel load input gates, load the count number, and be ready to start counting five nanoseconds later, whether the count was 1, or 200,000 or anything in between.
        Normally the cost of making such a thing would require a lot of flip flops that were a damn side faster than 200 MHz and a totally parallel counting system.
        MY counter started with a divide by four, made out of those MECL3 D-flip flops; three of them plus some special transistor gates made out of fast transistors to get around the D-limitation (I needed JK).
        That was followed by a divide by five quinary counter that was made out of 50MHz high speed TTL JK flip flops (Sylvania). The remaining counter of 10,000 count was made of low power 10 MHz TTL Fairchild MSI ICs , namely four decade counters, and it was all basically ripple counters, rather than parallel. (Very clever architecture).
        But I digress, the thing was my quinary was made out of three JK flip flops with appropriate feedbacks to count by five.
        So hell, I built a model of that quinary out of that Edmund Scientific mechanical contraption to see if it would work. It was a simple matter to connect all the levers to provide the proper feedbacks, and in minutes I was merrily sliding the clock strip back and forth, and watching the thing step through the five states.
        So I hightailed it over to my boss to show him what a genius I was, and he played with it, not quite understanding the correct clock slide stroke required, so he rough housed it a bit, and then told me, he couldn’t even move the clock strip.
        Well I had had no problem, so I grabbed it off him, and sure enough the damn thing was locked up solid, and no way to move the clock, without busting something.
        Well my quinary was supposed to count like a shift register (for speed), so the sequence of states was: 000, 001, 011, 110, 100, … 000 …
        Well of course three bits gives you eight states, not five, so there were three states, my counter never got into.
        They were 010, 101, and 111.
        So I looked at what my boss had done, and sure enough, my fancy counter was in the 111 state, and in that state, if you clocked it, it sure enough was supposed to go from 111 to 111. In the mechanical gizmo, that meant no go on the clock input.
        So that left the 010 and 101 states. So I forcibly flipped the middle flip flop back to zero to get the 101 state, and lo and behold, the clock line was again free, and it now toggled back and forth between 010 and 101, which the logic diagram confirmed it was supposed to do.
        So I had to add additional logic gates to my quinary to force a return to 000 if it ever got into 010, or 101, or 111.
        Well I cheated, and simply recognized the illegal 1X1 state and it clocked from there to 000.
        That Edmund Scientific mechanical contraption saved me a ton of grief.
        The counter was for a completely digital timing unit for a pulse generator for semiconductor testing, so it could count a 200 MHz crystal controlled oscillator to make accurate 5 nanosec increments. It also included logic switched PC board strip lines , giving ten delay increments of 100 psec, and four delay increments of 1 nsec, so I could make pulse widths, and periods and anything else in 100 psec steps, all with crystal controlled accuracy. That was in 1968 as near as I can remember.
        It never went into production. They one day laid off the entire advanced circuits group, and decided to outsource the testing to a big company already in that business, instead of trying to build their own.
        That ultra fast divide by N counting architecture was never published and to my knowledge it is still the fastest way to do a divide by N counter. But today, the same architecture could do GHz speeds, with today’s hardware.
        Yes some modeling can be invaluable when it is done properly.
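The counter’s behaviour can be captured as a small state machine. The next-state table below is transcribed straight from the story: the five valid shift-register states, the stuck 111 state, the orphan 010/101 two-cycle, and the added rule that any illegal 1X1 state clocks to 000:

```python
# State-machine sketch of the quinary counter described above, taken
# straight from the narrative; this is an illustration, not the real logic.

NEXT = {                       # next-state map of the original design
    "000": "001", "001": "011", "011": "110",
    "110": "100", "100": "000",               # the intended 5-state cycle
    "111": "111",                             # clock locks up here
    "010": "101", "101": "010",               # orphan two-cycle
}

def clock(state, fixed=False):
    """One clock tick; with the fix, any illegal 1X1 state resets to 000."""
    if fixed and state[0] == "1" and state[2] == "1":
        return "000"
    return NEXT[state]

# Original design: 111 never escapes, and 010/101 just swap forever.
assert clock("111") == "111"
assert clock("010") == "101" and clock("101") == "010"

# With the 1X1 fix, 111 and 101 recover into the main cycle, and 010
# recovers one tick later via 101.
assert clock("111", fixed=True) == "000"
assert clock("101", fixed=True) == "000"
```
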

      • Do you mean SPICE modeling or the like ??

        SPICE, digital, digital mated with real VLSI chips, timing, transmission line, pretty much all of it.
        The interesting thing is your circuit might have been fine, but the numerical engine might not have stabilized within the time steps allowed. That was a lot of what I did too: explaining results when it worked and when it didn’t.
        You could model the climate in spice, same kind of differential equation solver, but adjusting the model would be time consuming.

      • Modelling doesn’t necessarily mean you need a Terra-Computer.

        I designed a Fairchild I^2L gate array for NASA on a 68010 Multibus system and a VT100 that worst case clocked at about 270 MHz, and best at 450 MHz; it counted the number of bits or anti-bits in a 32 bit word, forwards or backwards.
        But nothing as clever as your Babbage Machine 🙂

      • george e. smith says: May 19, 2016 at 3:27 pm
        Modelling doesn’t necessarily mean you need a Terra-Computer.

        Amen. If you apply a little engineering understanding, you can often get the job done with greatly reduced resources. On the other hand, if you have the resources, the brute force method often gets the job done quicker, and cheaper, in terms of engineering man-hours.
        If you’re going to build a million of something, you can spend a lot of time figuring out the clever efficient solution. If you’re going to build one, you’re better to brute force it because your major expense is engineering. (… unless you’re building one thing for a satellite.)
        On the other hand, there’s nothing like the satisfaction of doing something on an 8 bit computer that somebody else couldn’t do because it crashed his mainframe. 🙂

  9. The story photo shows a beautiful, pristine and unmarred landscape painted on the fronts of the cabinets. Shouldn’t it show acres of land pockmarked with windmills and solar panels to power said computers? Maybe the UPS and diesel back-up generator too. I guess whatever makes them sleep at night.

  10. In short it looks like less than 4% of the science, the climate change part, is doing about 55% of the modeling done in the whole of science. Again, this is a tremendous concentration, unlike anything else in science

    You cannot infer the amount of modeling done by climate scientists from the number of times three words are used in scientific papers. Did it ever occur to you that one model could be mentioned thousands of times?

    • The total number of models does not matter.
      It’s how many papers are using modeling instead of real world data that matters.

      • It’s how many papers are using modeling instead of real world data that matters

        Which you cannot determine using this post’s spurious correlation. Did you know there is a 95.23% correlation between the number of math doctorates awarded and the amount of uranium stored at US nuclear power plants?
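The point is easy to reproduce: any two series that merely share an upward drift correlate almost perfectly. Both series below are invented for illustration, not the actual doctorate or uranium figures:

```python
# Two series with no causal link but a shared upward drift correlate
# almost perfectly. Both series are invented for illustration.

from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

math_phds = [1050, 1100, 1200, 1300, 1400, 1500]    # invented counts
uranium_t = [52, 54, 58, 62, 67, 71]                # invented tonnage

r = pearson(math_phds, uranium_t)
print(f"r = {r:.3f}")    # close to 1 despite no causal connection
```
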

      • The only correlation claimed is between the use of the word modeling and doing modeling. So how is that spurious? Words refer to things.

  11. That’s hardly surprising. The models are the only place that CAGW has ever been found.

  12. “It seems that the modeling cart has gotten far ahead of the scientific horse.”
    Not only is the cart ahead of the horse, but it’s so far ahead that the horse can’t even see it any more.

    • MarkW, knowing a bit about horses, by that time the horse ( smart animal that he is) has realized that the barn is the best place to be and is back there already.

  13. Do we need to distinguish between the normal process of scientific modelling which tries to visualize the interaction of the various components of a system and craaaaaazy, whacked out modelling which plugs data into holes where it doesn’t fit, which glosses over huge gaps in the understanding of processes and which can never be falsified no matter how poorly it fits reality, all in support of predetermined beliefs about how the system works?

  14. I continue to contend that a competitive planetary model can be written in at most a few hundred succinct APL definitions — no more expressions than required in a textbook working through the quantitative physics. It will execute efficiently on anything from a smart phone to a supercomputer,
    and be understandable to those who can understand mathematical physics in any field.
    As an example of just how succinct and expressive an APL can be, the dot product can be defined in Arthur Whitney’s K as:
    dot : ( +/ * )
    and it will calculate the dot product not just between a pair of vectors but between arbitrary arrays of pairs of vectors, for instance pairs for every voxel in a 3D map over the globe.
    Anybody interested in cooperating on such a model, contact me on my website.
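For readers who don’t follow K notation, here is a rough pure-Python analogue of that array-valued dot product. It is only a sketch of the idea, not the author’s code:

```python
# Pure-Python sketch of K's (+/*) dot product generalized over nested
# arrays of vector pairs, in the spirit of the K definition above.

def dot(a, b):
    """Dot product of two vectors, or elementwise over matching nests."""
    if isinstance(a[0], (int, float)):          # base case: flat vectors
        return sum(x * y for x, y in zip(a, b))
    return [dot(x, y) for x, y in zip(a, b)]    # recurse over the nesting

assert dot([1, 2, 3], [4, 5, 6]) == 32

# A 2x2 "map" of vector pairs, e.g. one 3-vector per grid cell:
grid_a = [[[1, 0, 0], [0, 1, 0]], [[1, 1, 0], [2, 2, 2]]]
grid_b = [[[5, 5, 5], [5, 5, 5]], [[1, 2, 3], [1, 1, 1]]]
assert dot(grid_a, grid_b) == [[5, 5], [3, 6]]
```
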

    • Well so that would get you presumably a correct result for each of the physics processes actually included.
      So what about all of the real world physics processes that are not included in your iPhone model?

      • I’m not sure what you mean?
        200 equations, when, for instance, the Planck function can be expressed in one line and be applied to whole sets of frequencies and sets of temperatures like dot above, can express a hell of a lot of physics.
        And it’s a lot easier to understand what’s being done. And if it’s wrong, correct it. Here’s the Planck function in K in terms of the 3 ways to measure distance and time: temporal and spatial frequency (wave number) and wavelength.
        h : 6.62606957e-34 / Planck constant / Joule * second
        c : 299792458.0 / speed of light / meters % second
        boltz : 1.3806488e-23 / Boltzmann constant / Joule % Kelvin
        Planckf : {[ f ; T ] ( 2 * h * ( f ^ 3 ) % c ^ 2 ) % ( _exp ( h * f ) % boltz * T ) - 1 }
        Planckf..h : " in terms of temporal frequency / W % sr * m ^ 2 from cycles % second "
        Planckn : {[ n ; T ] ( 2 * h * ( c ^ 2 ) * n ^ 3 ) % ( _exp ( h * c * n ) % boltz * T ) - 1 }
        Planckn..h : " in terms of spatial frequency / W % sr * m ^ 2 from cycles % meter "
        Planckl : {[ l ; T ] ( ( 2 * h * c ^ 2 ) % l ^ 5 ) % ( _exp ( h * c ) % l * boltz * T ) - 1 }
        Planckl..h : " in terms of wave length / W % sr * m ^ 2 from meters % cycle "

        I implement the computations of the temperature of a gray ball in our orbit from the temperature, radius and distance from the sun in a small handful of expressions in my Heartland presentation, and add one more for the computation in terms of arbitrary spectra. A couple more would apply a cosine map over the sphere and any spectral map over the surface.
        What’s the downside? It’s hard to compare what these notations accomplish versus the hundreds or thousands of lines of traditional scalar, I’ll call them Algol-family, languages. It is this power and flexibility which causes them to find their market in money center financial applications. Because, if you’re really good at number crunching, the numbers with the greatest value are ones that have currency symbols attached. And generally, in particular, they run on some of the biggest, fastest systems anywhere. But nowadays these APLs also run on Raspberry Pi.
        So, I’m not sure what you mean in your comment.
        My own priority these days is getting other heads to join me in fleshing out my own 4th.CoSy, abstracting all I’ve learned from years in K and traditional APLs in a chip-level Forth so it will run on any current or future processors. But modeling planetary physics I see as almost a virgin field, given the ungainly archaic languages such models are currently quite inscrutably implemented in.
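For comparison, here is the temporal-frequency form of that Planck function in Python, with the physical constants written out, plus a coarse sanity check against Wien’s displacement law (the scan step here is arbitrary):

```python
# Temporal-frequency Planck function, mirroring the Planckf definition above.
from math import exp

h = 6.62607015e-34    # Planck constant, J s
c = 2.99792458e8      # speed of light, m / s
k = 1.380649e-23      # Boltzmann constant, J / K

def planck_f(f, T):
    """Spectral radiance B(f, T) in W / (sr m^2 Hz)."""
    return (2.0 * h * f**3 / c**2) / (exp(h * f / (k * T)) - 1.0)

# Wien's displacement law puts the peak for T = 300 K near
# 5.879e10 * 300 ~ 1.76e13 Hz; a coarse scan should land close to that.
T = 300.0
freqs = [i * 1e11 for i in range(1, 1000)]
peak = max(freqs, key=lambda f: planck_f(f, T))
print(f"peak near {peak:.2e} Hz")
```
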

  15. As your friendly neighborhood environmental scientist I have to say: no, just no.
    First, science in general is very focused on modelling. If anything, climate modelling is quite down to earth because its parameters are so easily measured. Virtually all of economics, theoretical physics, astronomy and a very substantive part of experimental physics, chemistry, etc. are based on computer modelling of some sort. The climate research field is WAY too small to produce 55% of all papers related to modelling.
    Second, that’s just not how Google Scholar works. It always gives you a couple of million results if you use a general search term. Try Web of Science or something similar for a more suitable search engine.
    Clearly this article does not pass the first ‘but does this statistic have any bearing on reality?’ test.
    Well, back to doing some system dynamics modelling of my own.

    • benben says: “First, science in general is very focused on modeling…”
      That was clearly stated when discussing the 900,000 papers found. How’d you miss that?
      55% of these model-related papers seem to be related to climate change, which suggests modeling is lopsidedly high in climate science. How’d you miss that?
      benben says: “…If anything, climate modelling is quite down to earth because its parameters are so easily measured…”
      Lol, good one. A handful of model parameters as “so easily measured.” There are plenty of unknown parameters, parameters that are not modeled, uncertainty in the parameters involved in feedbacks, etc. And even if you had those nailed down, using the hydrostatic approximation would screw you over.
      How about this, from your buddies at Nature: “…Simplified formulae known as ‘parameterizations’ are used to approximate the average effects of convective clouds or other small-scale processes within a cell. These approximations are the main source of errors and uncertainties in climate simulations. As such, many of the parameters used in these formulae are impossible to determine precisely from observations of the real world…”
      Is “parameters used in the formulae are impossible to determine precisely from observations” the same thing as “parameters are so easily measured?” Sounds like complete opposites.
      benben says: “…Well, back to doing some system dynamics modelling of my own…”
      Good luck to that poor system.

    • I don’t see how you manage to misinterpret what I wrote so badly, but bravo!
      Climate science is such a tiny field compared to equally modelling-heavy fields such as theoretical physics and economics that it is completely impossible that 55% of ALL modelling-related papers across all sciences come from the climate change field, even if every single paper ever written by a climate scientist was 100% modelling-based.
      Second, Nature is correct, but nevertheless climate is a lot easier to measure than 11th-dimensional constructs (theoretical physics) or completely nonexistent constructs (homo economicus, in economics). Climate science does not exist in a vacuum, which somehow people here seem to think it does, and just because it uses modelling doesn’t make it automatically wrong, which people here definitely seem to think is true.
      Well, at least you got that last one on the ball. My system is feeling particularly queasy today!

      • @ DB, Benben, Betty (and whomever can help me): the term “climate scientist” keeps showing up everywhere, but can any of you tell me what that is, and at what uni I can get a “climate scientist” degree? To me that description did not exist until, like, 4 years ago. But all of a sudden everybody has a degree in it. I am just wondering. Whenever I look at the credentials of these “climate scientists” I get confused. The descriptions seem to vary from psychologist to astronomer (or is that astrologist?) to meteorologist (that one I get). But nowhere does there seem to be a description that makes it clear to me. Thanks for any answer.

        • The word ” climate scientist” keeps showing up everywhere but can any of you tell what that is and at what uni I can get a “climate scientist” degree?
          Well, it sure doesn’t seem to require any quantitative courses beyond some sort of “physics for poets” pretend-quant courses.
          I got sucked into this slough because of the grossly amateurish level of math and physics I saw pervade these blog battles. Clearly not up to even that required in just about any field of undergraduate engineering.
          And to wield the power of the national or global government on these matters , better to have a degree in poli sci or sociology .

      • I think ‘climate scientist’ is mostly a bogeyman invented on the far-right spectrum of American politics. I know quite a few people working on climate change and none of them would call themselves a ‘climate scientist’. It’s just easier to create an alternative narrative if you have only a limited number of players. That’s why you keep seeing the same few ‘bad people’ pop up on this site (Michael Strong, the Indian guy from the IPCC, Gore). And because people don’t want to deal with the fact that virtually all of the sciences work on climate change sometimes and are happy to do so, they invent the generic sinister person ‘the climate scientist’.
        It’s quite impressive writing from that point of view, what Anthony et al. project on this site!
        Anyway, more to the point: my flatmate who works on the climate models so disliked by the commenters here refers to his field as atmospheric physics.

      • Funny that benben declares that the name that most climate scientists use to refer to themselves is nothing more than an invention of the far right.
        BTW, has anyone else noticed the constant habit of liberals to declare that everyone who is more conservative than they are, is far right, extreme right, or something similar.
        It goes back to their desperate need to de-legitimize anyone who disagrees with them.

      • MarkW, I think that if you take the world-average political preference and plot the average WUWT commenter on it, they’d end up pretty far on the right end of the spectrum. I’m obviously not saying extreme right, which we use (in Europe at least) to signify a whole different type of social movement.
        So not a dismissal, just an observation. An accurate one I think.

      • It really is sad when leftists assume that they are the measure against which all things should be measured.
        As always, the communists call the socialists conservative.

      • “It really is sad when leftists assume that they are the measure against which all things should be measured.”
        The left know they’re super-smart and everyone else is wrong.
        The right know about the Dunning-Kruger Effect.

    • benbenben says:
      If anything, climate modelling is quite down to earth because its parameters are so easily measured.
      OK then, measure this parameter: quantify AGW with a measurement.
      Ready? OK. On your mark…
      …Get set…
      Hey! What happened? You didn’t ‘Go!‘. Why not?
      Could it be that the AGW parameter isn’t so easy to measure?
      In fact, AGW is central to the entire ‘dangerous man-made global warming’ debate. You know, the scientific debate that your alarmist contingent has decisively lost.
      If you can’t even measure the most basic parameter, then all you’re doing is blowing hot air… Mr. “Environmental Scientist”. (Heavy emphasis on ‘environmental.‘ But on ‘scientist’… not so much.)

        • Not only that, ‘Greenhouse’ was a theory when it was proposed and it still is a theory today.

      • DB’s whole point is that you can’t quantitatively measure AGW.
        As for the assertion that modeled parameters are easily measured, that assertion is demonstrably false unless your definition of “easily” is completely divorced from “accurately.” We don’t even measure the average surface temperature of the Earth, but instead use an assumed proxy – the mean of the daily min/max at locations mostly selected around population centers. No person could ever think that a min/max mean is accurately representative of heat flux from a surface, which varies proportionally to the fourth power of instantaneous temperature. (What if the daily max temp at a location doesn’t change over time but stays at that point for longer as time passes? What if the min temperature goes up but the max stays the same while the time spent at the max shortens over time?) Nor were those measurement locations selected for the purpose of global climate measurement, and even if they were, they are mostly corrupted by biases that have never been shown to be reliably corrected for.
        We have no reliable temperature record of the oceans, or of the atmosphere except at a very thin layer. We have no reliable measurement of amounts of precipitation over the oceans (70% of the Earth’s surface). We can’t model clouds. We have no time machine to go back and verify the accuracy of proxy temperature reconstructions used to tune the models.
        We simply do not have the data to even begin to develop a mathematical model that one could reasonably believe accurately simulates the behavior of a system as complex as the Earth’s climate system. Nor will we ever have that data since collection of it would require procedures that are simply not available. Ask yourself a simple question – if computer models are good enough for climate scientists to tell us how much of a temperature increase we get for a doubling of CO2, and then to proceed to tell us how much more flooding, and droughts, and hurricanes we get for that temperature increase, why are other scientists in other disciplines bothering with that pesky process of actual experimentation? Why don’t the drug companies and the epidemiologists stop all those time-consuming, expensive double-blind studies and long-term tracking of hundreds of patients? Hell, they should just get with the program, develop a “mathematical model” of the human body and simulate its response to whatever drugs or things we eat, submit it to the FDA for approval, and save a ton of time and money. Why is it only climate scientists who insist on driving public policy based solely on the unverified quantitative output of theoretical computer models?
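        The fourth-power point is easy to illustrate with a toy example in Python (invented diurnal profiles, not real data): two days with identical min, max, and min/max mean, but different average emitted flux.

```python
import math

SIGMA = 5.6704e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def minmax_mean(temps):
    # The proxy used by surface records: average of daily min and max
    return (min(temps) + max(temps)) / 2

def mean_flux(temps):
    # Actual average emitted flux goes as the 4th power of instantaneous T
    return sum(SIGMA * t**4 for t in temps) / len(temps)

hours = range(24)
# Day A: smooth sinusoid between 280 K and 300 K
day_a = [290 + 10 * math.sin(2 * math.pi * hr / 24) for hr in hours]
# Day B: same min and max, but the day sits near the 300 K maximum
day_b = [280 if hr < 3 else 300 for hr in hours]
```

        Both days report the same min/max mean of 290 K, yet day B radiates measurably more energy: the min/max mean is blind to how long the temperature sits near its extremes.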

      • Betty, betty, betty, benben claimed that it was easy. All DB did was prove that benben is once again lying.
        [Note: “Betty” is a fake. “Her” comments have been deleted. -mod]

      • Well, I would say that some of the parameters are inferred and others are directly observed. And we have a HUGE amount of empirical data that is being fed directly into these models (precipitation, moisture, albedo, particles, etc.). So it’s kind of weird to ask for a single AGW parameter and then rail against the fact that it does not exist.
        Once again I invite all skeptics to just follow a MOOC or two, make their own model, and then play around with some of the published CMIP5 models. It’s not that difficult, and it’ll certainly take less time than spending years arguing in the comment section of this blog.
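        In the spirit of that invitation, the smallest possible “climate model” really is only a few lines. A zero-dimensional energy-balance sketch in Python (textbook values assumed; the one-layer emissivity is a tuned free parameter, and this is nothing remotely like a CMIP GCM):

```python
S0 = 1361.0        # solar constant, W/m^2 (assumed)
ALBEDO = 0.30      # planetary albedo (assumed)
SIGMA = 5.6704e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)
EPSILON = 0.78     # effective one-layer emissivity -- a tuned free parameter

# Effective emission temperature: absorbed solar = emitted infrared
T_eff = (S0 * (1 - ALBEDO) / (4 * SIGMA)) ** 0.25  # ~255 K

# One-layer gray atmosphere warms the surface by 1/(1 - eps/2)^(1/4)
T_surf = T_eff / (1 - EPSILON / 2) ** 0.25         # ~288 K
```

        Note that even this toy only lands near the observed ~288 K because EPSILON is tuned, which is exactly the parameterization issue the Nature quote earlier in the thread describes.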

      • We do have gigabytes worth of data. The problem is that terabytes would still be insufficient for the problem at hand.

      • MarkW, sure, that may or may not be true, but to say that there is no data at all (like DB likes to do) is not true. I’m glad we agree on that at least 😉

      • The difference between having less that 0.1% the data needed and having no data at all is hardly worth mentioning, unless your goal is deception.

    • “Virtually all of economics, theoretical physics, astronomy and a very substantive part of experimental physics and chemistry etc. etc. are based on computer modelling of some sorts.”:
      That’s nonsense. Aside from your “theoretical” physics example, none of the fields in that list are “based on” modeling. Each may USE computer models, but those models are developed only after physical data was used to postulate relationships, and further physical data was used to confirm and quantify the accuracy of those postulated relationships. In none of those fields do scientists rely exclusively on the output of a computer model to quantify a physical phenomenon, even so far as to question the accuracy of physical measurements that conflict with the model, as do many “climate scientists.” You’re confusing instances where past scientific practices have utilized models independently verified to produce reliable results (astronomy, physics, chemistry), with reliance on theoretical models as a substitute for the scientific inability to experimentally measure the behavior of a system in response to a change in an input.

    • Benben, unlike Google, Google Scholar does not give you millions of hits. GS is a precise bibliometric instrument. People use the term modeling when they talk about modeling. The use of this term in climate science is incredibly high. I believe this is diagnostic, as stated. Modeling has taken over climate science.

    • benben, if the inputs are so easily measured, what’s the exact amount of aerosols released into the atmosphere for each of the last 50 years, broken down by chemical composition?
      Your belief in things already disproven is admirable.

  16. Climate “science’ likes models because they are easier to manipulate than measurements to obtain a politically defined result.
    (They adjust the data also, but it is easier to spot those machinations than those hidden deep in the assumptions of a complex computer program.)

  17. To be correct, 55% of *published articles* are about climate modeling, not 55% of research done. If you are running a GCM it is incredibly easy to run the model, publish a paper, tweak the model, run it again, publish. This way you rack up an impressive number of publications with very little effort. Compare that with old school climatology where you did field work, tramping around collecting ice cores, tree rings, etc to try and tease out past climates. That’s labor intensive and hence far fewer papers published.
    And are you saying 4% of the entire Federal science research budget, or 4% of NOAA’s budget? Huge difference. 4% of the overall Federal budget would not be “a tiny fraction.”
    You may be on to something here, but I think you need to refine your methodology and tighten your definitions.

    • This is so true. Just looking at number of publications is… just not very relevant. Nothing wrong with writing a couple of modelling papers. Good exercise in scientific writing and dealing with reviewers and the such. But it’s kind ridiculous to write these kind of blog posts based on that.

      • The articles report the research, so they are a reasonable proxy for it. Most of the research is based on modeling in one way or another. The point is very simple.

    • I was going to say something very similar about that 55% number. It is an over-reach to say #papers ~ research done.
      That said, you have to understand the motivation to cram the peer-reviewed literature with quantity: it precludes a careful quality assessment when the bulk of papers in the GCM field is so large. One person or group could never carefully and exhaustively review such a bulk. To any naysaying, a counterparty can always reply, “but what about these papers.”
      And that is precisely the point. Advocates of climate change, and the model outputs their case entirely depends upon, are still attempting to position it as the null hypothesis by flooding the journals with circular-logic papers.

    • Joelobryan, I don’t think that’s true. If you’re in the field you know exactly what the value of these and other papers is. And the interface between science and policy is always messy, and that hardly has anything to do with modelling itself. See for instance medical research!
      As a layman you can still get a feel for how important a work is. Is it published in a top-ranking journal? It’s probably good. Is it published in some random Chinese journal? Not so much. Can you only find it via Google Scholar, or is it a conference proceeding? Ha.
      Anyway, the truth of the matter is that climate research is pretty well done nowadays. Websites like WUWT create this alternative narrative of a bunch of crazy power-hungry bureaucrats abusing incredibly simplistic models to undermine democracy. It’s entertaining to read, but it’s just really far removed from the boring daily grind of science, of which climate research is very much a part.
      Hey, you’re always free to follow a couple of MOOCs on the topic and see for yourself how things are done, instead of relying on stuff like what’s posted here!

      • Ben,
        No, it’s like the CMIP3/5 ensembles. With such a spaghetti splay of supposedly plausible in silico climate responses to rising pCO2, one can post hoc cherry-pick the one or few to claim agreement with observation. That’s the very definition of pseudoscience.

      • a bunch of crazy power hungry bureaucrats
        Don’t need that; inertia and momentum are enough.

      • Joel,
        My flatmate makes climate models for a living, so I know how they work and have played around a bit with them. It’s just pretty standard science. The models all point in roughly the same direction because that’s just the way it is, not because of some massive conspiracy.
        You want proof? Very easy: just go on Coursera, follow a MOOC or two and make your own model. Shouldn’t take you more than a couple of weeks, depending on your math skills. And then play around with any of the hundreds of publicly published models.
        Please also note that the comment section here is never frequented by someone who says ‘I made a climate model myself and it proves that these CMIP models are bullshit’. The reason being that anyone who has the skills to make a climate model understands that the CMIP models are doing quite a good job.
        Contrast that with the post on which we are commenting here. The guys at WUWT can’t even put together a decent Google Scholar search.
        Now, I understand that most of the stuff on this site is actually a super-conservative American flamewar against liberal Americans, but it’s just weird to see science caught up in it.

      • Yes, Tone, giggle like a little girl.. suits you.
        We all know that Giss, Had, Noaa are all massively adjusted to try to get somewhere near the erroneous models.
        Your point is?

      • yep, dbstealey
        The models basically construct the side of a barn.. and still missed it by a proverbial mile. !!

      • dbstealey: That graph you posted was manipulated by selecting a base period from 1980-1999 to measure the anomaly. All this does is force the four measurement lines into the model range. The surface temperature data starts more than a century ago – why do you suppose that cute little picture of yours not only focused on the tail end of it, but re-centered the measurements to the model projections around an arbitrary slice within that window?

      • Kurt, I think you mean Toneb not dbstealey. Toneb posted the manipulated graph.
        It’s also possible that the ongoing ‘adjustments’ to the series in Toneb’s graph are designed to keep the series in the model range.

      • Billy,
        Kurt was obviously referring to Toneb’s chart. He referred to four models, not to 70+. The one I linked to has been posted numerous times without any criticism or questioning. And similar charts have also been posted showing the same thing.
        Bob Armstrong,
        Correct, there are more than 70 climate models. And Toneb’s are more in error than most of them. For those who may want to know why, just do a search for ‘cowtan & way’. They’ve been thoroughly debunked, and now only True Believers like toneb pay them any attention.

      • Bob Armstrong, I don’t know exactly what discussion you’re referring to, but it’s pretty hyperbolic to state that just because you can’t agree on how to model something you have ‘left the domain of analytical physics’. And anyway, atmospheric physics is a different domain to start with, so that isn’t such a big problem 😉
        And also: “Which largely explains Cork Hayden’s observation that if it were really science there would be 1 model , not , what is it now ? 70+”
        So the assumption here is that something is only science if it has one answer? Or that it can only be modeled in one way? That just… isn’t how science works. If only because different models focus on different things. I think the main thing CMIP5 shows is that you can model the climate in a very wide variety of ways and yet get roughly the same outcome (more CO2 = higher temperatures). That is exactly what gives the rest of the world confidence in the AGW hypothesis. It’s not just one model. It’s, like you say, 70+ independently constructed models that show the same behaviour.

        • BenBen, watching this discussion I have concluded you are an intellectually dishonest troll. You should realize that anything other than an accurate understanding of reality is not optimal for your own welfare, or the welfare of your family, or anybody other than the crapitalist class which feeds off the force required to subjugate the rational to their willful delusions.
          Here’s the model, in K, for the thermal power spectrum as a function of wavelength:
          Planckl : {[ l ; T ] ( ( 2 * h * c ^ 2 ) % l ^ 5 ) % ( _exp ( h * c ) % l * boltz * T ) - 1 }
          Planckl..h : " in terms of wave length / W % sr * m ^ 2 from meters % cycle "

          Here’s the model for its integral in 4th.CoSy:
          : T>Psb ( T -- P ) 4. _f ^f 5.6704e-8 _f *f ;
          Those are the only models, winnowed through many decades of experiment and observation, for those processes.
          That’s physics. There is only one model for calculating the temperature of a gray ball in our orbit, and that gives a temperature of about 278.5 ± 2.3 K, which explains 97% (’tis a magic number) of our estimated surface temperature.
          The model for a ball asymmetrically irradiated with a uniform ( isotropic ) arbitrary spectrum is
          ( ( dot[ ae ; source ] % dot[ ae ; sink ] ) ^ 1%4 ) * TgrayBody
          where ae is the absorption=emission spectrum ( color ) of the object .
          Either that is correct, and is the next expression in constructing a quantitative audit trail from the output of the Sun to our estimated surface temperature, or we need to stop right there and get it right.
          Any other procedure is not physics .
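          The 278.5 K figure quoted here is easy to check independently: for a gray (flat-spectrum) ball, radiative balance gives T = T_sun · sqrt(R_sun / 2d). A Python sketch with standard solar values (assumed, not taken from the comment):

```python
T_SUN = 5772.0   # solar effective temperature, K (assumed)
R_SUN = 6.957e8  # solar radius, m (assumed)
D = 1.496e11     # mean Earth-Sun distance, m (assumed)

# Balance: absorbed  pi*r^2 * sigma*T_sun^4 * (R_sun/D)^2
#        = emitted  4*pi*r^2 * sigma*T_gray^4
T_gray = T_SUN * (R_SUN / (2 * D)) ** 0.5  # ~278 K
```

          This lands within the 278.5 ± 2.3 K band quoted above; the spread reflects uncertainty in the inputs.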

      • “70+ independently constructed models that show the same behaviour.”
        And not a single one of them matches what the real world has done.
        Funny thing that.

        • That the models show a certain amount of warming, and that observations also show warming.

          But that doesn’t show any sort of relation; you might as well compare the models to population.
          And in fact, there has been no measurable loss of night-time cooling, which would be required if CO2 were the cause of the warming.

      • benben is starting to sound a lot like Nye.
        CO2 is up, temperatures are up, therefore it’s settled.
        The models have gotten the amount of warming off by huge amounts.
        The models have gotten the timing of the warming off by huge amounts.
        The models have gotten the distribution of the warming, both geographically and in terms of altitude, off by huge amounts.
        If the best you can do is state that models predicted warming and it warmed, then you are in pathetically bad shape.

      • I’m not trying to prove that the models work by saying that. I’m just wondering if I can get you to agree with something that is blazingly obvious. The temp trend in the observations is up. The models point up. You don’t have to agree with anything else, not causality or anything. Just that the trend for both is up.
        You guys are so amazingly combative, I’m just trying to get the barest glimmer of an actual conversation going here 🙂

      • Bob Armstrong,
        “BenBen , watching this discussion I have concluded you are an intellectually dishonest troll.”
        I’ve watched him for months, and conclude he’s psychopathic, and appeals to consider “the welfare of your family or anybody other than the crapitalist class” are falling on deaf ears, so to speak.

  18. From IPCC working group I – executive summary:
    “The climate system is a coupled non-linear chaotic system, and therefore the long-term prediction of future climate states is not possible.”
    So why, exactly, is all this modeling being done?

    • Jock
      Straight to the heart of the matter – my hero
      Climate science is a shambles. There are very few scientists, just data collectors and modellers. What a sad state of affairs. Nearly every paper I read states “can’t come to a conclusion, need more data,” and yet they are swimming in data. The AGW crowd has most of them tied up trying to find the holy grail instead of putting their heads out the window and looking at reality. There are only a very few individuals that can truly call themselves climate scientists. Looking from the outside, it’s a disgrace.

  19. The local TV station frequently provides the predictions of three different models when giving their prognostications of the next day’s rainfall or snowfall. They even give a map of these predictions. Rarely are the results ever the same, and rarer still is even one of them correct in predicting the right amount. Three times over the last two years they predicted heavy snowfall over the entire county. The county had the snow plows at the ready, but no snow fell. Once they predicted “less than an inch, just a heavy dusting.” We got 18 inches. The snow came so fast that by the time they called out the drivers, none could make it to the maintenance yard. It took days before people could go to work. And the CAGW cult loves these models.

  20. Science has progressed to become an institution of prophets, profits, and white collar jobs. It was inevitable.

  21. I don’t have a problem with modelling, per se, provided (a) the boundary conditions are precise, (b) the appropriate science is reasonably described by the mathematics and digital methods chosen to investigate it, (c) the assumptions are properly based, defined, explained, and justified, and (d) the outcome is not predicated on a pre-conceived notion. Modellers need to fully appreciate that Garbage In = Garbage Out, not Gospel Out!

  22. I suspect that this is nonsense, or at least a fault with the search engine or search terms. Running the same search on Scopus gives the following:
    "model" or "modelling" or "modelled" = 5,609,206 documents
    ("model" or "modelling" or "modelled") AND "climate change" = 131,459 results
    So on Scopus, at least, only about 2.3% of modelling papers are related to climate change.
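    For anyone checking the arithmetic (counts exactly as reported in this comment):

```python
modelling_total = 5_609_206  # Scopus: "model" OR "modelling" OR "modelled"
with_climate = 131_459       # ...AND "climate change"

share = with_climate / modelling_total  # about 0.023, i.e. roughly 2.3%
```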

    • Just did a quick comparison. Think you are probably wrong. Scopus is abstracts only, but includes conference proceedings and patent-filing abstracts, so naturally there would be many more modelling hits. Google Scholar is only papers and books, but full text, not just abstracts, so naturally there will be many more hits for models in a full climate context.

      • Hi,
        In general, SCOPUS, which searches just abstracts, is far more restrictive than Google Scholar: it includes far fewer sources. So for example, searching for “climate change” on SCOPUS for the last 10 years gives 140,007 results, while the same search on Google Scholar gives “about” 1,130,000, i.e. almost 10 times as many.
        Incidentally, on SCOPUS over 93% of papers on climate change also include the word model. So while most modelling papers are not about climate change, most climate change papers do mention models.

      • Indeed, Geronimo, most climate change papers either start with modeling or hope to feed into modeling. Abstracts are relatively useless in this regard. Modeling dominates the field. That is the point of my research. Does anyone contest this?

  23. “Modeling” is the favorite past time of many climate ‘scientists’. Their second favorite past time is “adjusting” the observational data to match the models.

  24. My grad degree is in modeling. IMO the climate scientists are way too enamored with modeling, especially with projections, rather than interpolations.

  25. Very nice analysis.
    There is a fundamental reason the models cannot be right, illustrated in detail in a previous guest post here. In short, there is a 7-order-of-magnitude (E+7) difference between what that $73 million Cray supercomputer can do and what current weather models need to do to adequately resolve convection cells (e.g. thunderstorms) for precipitation and tornado warnings. So such crucial GCM processes are parameterized, tuned to produce reasonable hindcasts (for CMIP5, the mandatory tuned hindcast was YE1975 to YE2005, 30 years, with initialization either avg Dec 2005 or Jan 1, 2006). The hindcast tuning period contains a significant warming event statistically indistinguishable from ~1920-1945. Even IPCC AR4 said the earlier one was not anthropogenic: not enough change in CO2. It was mostly natural variation. This gives rise to the attribution problem. The 1975-2000 rise is attributed in GCMs to anthropogenic forcings, mainly CO2 as the ‘control knob’. The models’ subsequent multiple failures (the pause, the missing tropical troposphere hotspot, ~twice the observational EBM TCR and ECS) simply highlight the obvious attribution error.
    All the many gloom and doom CAGW papers are based on faulty models. Because temperatures haven’t risen this century except for the now fading El Nino, or unless Karlized. Neither El Nino nor Karlization has anything to do with CO2 emissions. SLR has not accelerated. The planet is greening. Corals adapt symbionts by bleaching. Polar bears do not rely on late summer sea ice and are thriving. And so on.
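    One way to arrive at a figure of that order, under assumed scalings rather than any formal estimate: refining the horizontal grid from ~100 km to the ~1 km needed to resolve convection multiplies each horizontal dimension by 100, forces a roughly 100× smaller timestep (CFL condition), and plausibly demands ~10× more vertical levels:

```python
horizontal_refine = 100  # 100 km grid -> 1 km, per horizontal dimension
vertical_refine = 10     # assumed extra vertical resolution
timestep_refine = 100    # CFL: timestep must shrink with the grid spacing

cost_multiplier = horizontal_refine**2 * vertical_refine * timestep_refine
# 100 * 100 * 10 * 100 = 1e7, the ~E+7 gap the comment describes
```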

  26. Aha!
    So I was right, long ago, to interpret CAGW as ‘computer-aided global warming’

    • I suspect that the heat being generated by all those Cray computers has warmed the planet more than CO2 has.

  27. As has been pointed out in other comments this paper might be overstating the dependence of alarmists on modeling just a bit. However it does seem to at least draw attention to the general problem the alarmists have of ignoring actual evidence. One of the best recent collections of evidence is the recent paper on the natural processes that corals undertake to deal with the inevitable and constant changes in their environments. If that research does not at least quiet the bleaching alarmists, then there is no hope for them.

  28. Of COURSE they focus on modelling – that’s where the funding pig trough is for them to snout at!
    It’s like the Scottish wave power generation “testing” – it has been in testing for over 20 years now.
    Because they get over £5 million per year for testing, while the power generation would get them only around £1.5 million.
    So they keep “testing”, “researching”…

  29. It is no surprise to hear that there is so much ‘modelling’ going on in ‘Climate Science’. With ‘modelling’ you can be assured of getting the results that you want. If you conduct actual experiments on the weather and climate, you could get almost any result, even accurate ones. CAGW papers are usually based on models, which invariably give the result required by CAGW. When El Nino-caused spikes in temperature occur, they are dutifully used to support the CAGW theory and draw attention away from the Pause. Observable facts, like the fact that the planet is greening under extra CO2, that corals can adapt to temperature changes, that polar bears are increasing in numbers, or that people like warmer weather, are just so embarrassing to CAGW alarmism that they do not serve the Cause at all.

  30. From what I’ve seen, the reason that so many Climate Science papers use the word model or modelling, is because they use output from the CMIP models as actual data to use in their own research projects, many of which involve further modelling on what are probably regular desktop computers. This of course produces “projections” that are so far removed from the real world that they serve no purpose other than keeping researchers busy (and getting paid and published in “peer reviewed” journals).
    I haven’t noticed any “third generation” models being cited; that is a treat to be looked forward to for our future entertainment.

  31. I would imagine that the discovery of the model that predicts this random walk climate would be the holy grail of discoveries.

  32. Not all of science is corrupted. Here’s an astronomy article whose last paragraph reads:
    “Looking to the future research in this field, Professor Lattanzio highlighted the role that advanced computer simulations will play in the next stage of research.
    “Computer simulations do not agree with this observation; so as well as continuing observations, new computer models will need to be generated to better understand what is taking place in the cores of these stars,” Professor Lattanzio said.”

  33. The study uses the mention of models in papers as a proxy for the reliance on them. It seems at least as valid as tree rings for temperatures.

  34. I haven’t worked my way through all the comments yet, but there’s some concern over the term “model” and which type of model each reference may mean. May I suggest searching through again, this time looking for the term “empirical”? I suspect the number of hits will drop significantly in climate change papers, although undoubtedly there will be issues with this too.
    Just a suggestion.

  35. Why do they need a model? Can’t the temperature for any range of time at any location be calculated as the average temperature 50 years ago times a constant times the log2 of current ppm/280 ppm?
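    The back-of-envelope relation in the comment above is usually written additively rather than multiplicatively: a baseline temperature plus a sensitivity constant times log2 of the CO2 ratio. A minimal sketch, assuming a purely illustrative sensitivity of 2 °C per doubling (the commenter’s constant is unspecified):

```python
import math

def naive_temperature_estimate(t_baseline_c, co2_ppm,
                               sensitivity_c=2.0, baseline_ppm=280.0):
    """Back-of-envelope estimate: baseline temperature plus a constant
    times log2 of the CO2 ratio. sensitivity_c (deg C per doubling) is
    an illustrative placeholder, not a measured value."""
    return t_baseline_c + sensitivity_c * math.log2(co2_ppm / baseline_ppm)

# Example: a 14.0 C baseline and 400 ppm today gives ~15.03 C
print(round(naive_temperature_estimate(14.0, 400.0), 2))
```

    Whether such a two-parameter curve fit has any predictive skill is, of course, the commenter’s point.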

  36. I’m about to drop an apple from my hand that is poised three feet above the floor. I predict the apple will fall to the floor. I just used a scientific model to make a projection.

  37. Interesting concept. Simplistic to a fault.

    “…when we filter these results to only include items that also use the term climate change, something strange happens. The number of articles is only reduced to roughly 55% of the total…”

    Baying to the ‘climate change’ moon or bowing to the ‘climate change’ delusion is all too necessary for researchers chasing grants.
    You need to work out a search on more definitive terms; or read every paper to determine if the ‘climate change’ is an obeisance gesture or genuine delusion.
    Not that I think you’ll get different results as I doubt there are all that many disciplines where baying to the ‘climate change’ false science is as requisite as the enviro-nutty and climate science groups are.

  38. Dr. Michaels: The problem with the use of models in climate science is not the models themselves. It is the blatantly dishonest way that modeling results are presented. Steven Schneider once said that ethical scientists “are expected to tell the truth, the whole truth, and nothing but — which means that we must include all the doubts, the caveats, the ifs, ands, and buts.” Every conclusion derived from a climate model therefore should begin with “If our unvalidated climate model(s) correctly describes X, then X is projected to be Y.” For example, “If our unvalidated climate model(s) correctly describes global warming under RCP 8.5, then MGST will be 1–6 degC (95% CI) warmer in 2100 under that scenario than it is today.”
    When did scientists start fooling themselves by presenting 70% confidence intervals – 1/3 of which will be wrong by chance, and part of the remaining 2/3 will reflect systematic errors and confirmation bias?
    Why is warming reported relative to the pre-industrial conditions when we don’t know how cold it was back then – in the LIA? No living person has experienced pre-industrial climate.
    While climate models are based on well-tested physical theories, they require parameters that can’t be systematically optimized. Changes in the entrainment parameter alone have changed model ECS by 1 K/doubling without degrading model performance. The IPCC calls their models “an ensemble of opportunity” and recognizes that no conclusions should be drawn from the spread of the multi-model mean – and then uses their “expert judgment” to draw such conclusions.

  39. PATRICK J. MICHAELS and David E. Wojick are absolutely right about the climate
    “modeling” being divorced from reality. Not only are the modelers completely off
    the mark with predictions of their pseudo-science, they also change official
    temperature records at will which should count as a crime. An example follows. A mainstay of
    their doctrine is the belief that carbon dioxide is warming up our atmosphere by its
    greenhouse effect. There is no scientific proof of this so they trot out laboratory
    measurements of infrared absorption by carbon dioxide. A sneaky part of this is
    that carbon dioxide is neither the only nor the most abundant greenhouse gas in air.
    Atmospheric water vapor is both, comprising 95 percent of the total greenhouse gas
    in the atmosphere. Carbon dioxide, by comparison, is a miserable 3 percent.
    And yet the Arrhenius greenhouse theory they think of as justifying
    their work leaves water vapor completely out, and uses only three percent
    of existing greenhouse gases to predict the world’s future. This is an absurdity that
    Tim Ball pointed out and I agree with him. I made that same point in a comment
    I attached to Walter Dnes’ article in WUWT of April 30th. Below is an adaptation of my
    comments that covers their use of pseudoscience to create warming where none
    exists. Let’s begin with the existence of the hiatus in the eighties and nineties,
    something you probably never heard of. It is present in satellite data which is how
    I discovered it in 2008. But it has been covered up by an imaginary “late twentieth
    century warming” in all ground-based temperature curves. It is clear from satellite
    data that there simply was no warming from 1979 to 1997. These dates go from the
    beginning of the satellite era to the beginning of the giant super El Nino of 1998. You
    can see what the real curve looks like in Figure 15 of my book “What Warming?”
    Since no one was listening to me about this I decided to put a warning about it into
    the preface of my book when it came out in 2010. No one listened. I used that same figure again in
    an article I posted on October 29th last year in WUWT. That article criticized
    Karl et al.’s attempt to declare the twenty-first century hiatus non-existent.
    Amazingly, a Bob Tisdale, trying to defend the global warming cabal, added a
    comment accusing me of having fabricated the data in Figure 15. He is the same
    man who thinks that El Ninos are warming up the world. This act is of course
    pure libel which he has to publicly retract and apologize for. Fortunately, I
    was able to get NASA’s own description of what temperature was doing in the eighties
    and nineties, issued in 1997. This is what NASA had to say then:
    “Unlike the surface-based temperatures, global temperature measurements of
    the Earth’s lower atmosphere obtained from satellites reveal no definitive
    warming trend over the past two decades. The slight trend that is in the data
    actually appears to be downward. The largest fluctuations in the satellite
    temperature data are not from any man-made activity, but from natural
    phenomena such as large volcanic eruptions from Mt. Pinatubo, and from
    El Niño. So the programs which model global warming in a computer say the
    temperature of the Earth’s lower atmosphere should be going up markedly,
    but actual measurements of the temperature of the lower atmosphere reveal
    no such pronounced activity.”
    Note the fact that NASA specifically rejects the validity of computer-predicted
    temperature rise for this period. I can see now how, despite NASA’s warning,
    those modelers’ computer predictions became the seed for changing that section
    into a non-existent “late twentieth century warming.” With that, they effectively erased
    the first hiatus we had. (But not completely, it is still visible in satellite data). The
    second hiatus is the twenty-first century hiatus we are experiencing now.
    This is the one that Karl et al. were supposed to have buried. Two hiatuses gone
    with these two moves: is there any meaning or pattern to this? The answer is yes,
    when we follow through on it. What happens when a hiatus arrives is that from
    that point on there is no increase of global temperature while atmospheric carbon
    dioxide just keeps increasing. Why is this a big deal? you may ask. It is a big deal
    because according to the Arrhenius greenhouse theory, any increase of atmospheric
    carbon dioxide must be accompanied by an increase of global temperature. This is
    the greenhouse effect at work. But what we have experienced instead for the last
    18 years or so is a steady increase of atmospheric carbon dioxide with no
    corresponding increase of global temperature. If true, this means that Arrhenius
    greenhouse theory is simply not working – it predicts warming and we don’t get
    any. Therefore, that vaunted greenhouse effect the IPCC and 200 plus world
    governments are supposed to be fighting is simply not there! How can this be
    when the science is settled and our fate is sealed by the global greenhouse
    effect? The answer: there is no global greenhouse effect. With that, the theory
    of global warming by the greenhouse effect dies. And all multi-billion-dollar mitigation
    projects must be defunded because there is nothing to mitigate. The largest
    amount of global greenhouse gas is water vapor which
    makes up 95 percent of total global greenhouse gas by volume,
    as we saw. But the Arrhenius greenhouse theory leaves water vapor
    completely out. Small wonder that its predictions of warming are false. But
    there is another greenhouse theory that does include both carbon dioxide
    and water vapor as its subjects. It is the Miskolczi greenhouse theory or MGT. According to
    MGT the greenhouse effect predicted by Arrhenius does not exist.
    MGT does predict the existence of today’s hiatus accurately and should be used in
    place of the Arrhenius greenhouse theory that makes false predictions about
    a non-existent greenhouse effect. To understand why MGT is correct and Arrhenius is wrong read:
    Arno Arrak (2014) “The Miskolczi Greenhouse Theory.”

    • Thanks for the post, Arno. Good to read multiple perspectives. Still digesting, and will be for the rest of the evening.
      Beautiful, gorgeous night on the patio with my iPad here in Tucson, AZ. Near-full moon, Mars, and my puppy dog for company. Life’s good.

    • “PATRICK J. MICHAELS and David E. Wojick are absolutely right about the climate
      “modeling” being divorced from reality. Not only are the modelers completely off
      the mark with predictions of their pseudo-science, they also change official
      temperature records at will which should count as a crime. An example follows. A mainstay of
      their doctrine is the belief that carbon dioxide is warming up our atmosphere by its
      greenhouse effect. There is no scientific proof of this so they trot out laboratory
      measurements of infrared absorption by carbon dioxide.”

      It is worse than that:
      1) CO2 shows a logarithmic decay in its absorption. The data adjustments are to make the relationship between CO2 and temperature linear. That is clear fraud, and demonstrates that they are manipulating the data in a manner that will make sense to the average person who doesn’t understand math, and is contrary to the actual science.
      2) All the efforts are towards modeling CO2; H2O is completely ignored. That is like doing a study on lung cancer, ignoring smoking as a factor, and concentrating on how many candles someone has in a room.
      3) There are no laboratory experiments or theories as to how atmospheric CO2 can warm the oceans. If you can’t explain how CO2 warms the oceans, you can’t claim CO2 is causing the atmosphere above the oceans to warm.

      • It’s not that they ignore H2O, it’s that they ignore the effect H2O itself has on the atmosphere.
        They assume a simple relationship: that the relative humidity will stay the same regardless of actual temperature. Thus as temperature increases, total H2O in the atmosphere will also increase.
        A number of problems with this.
        First and foremost, the assumption of constant relative humidity was never experimentally confirmed, they just assumed it must be right. Real world experiments have disproven that assumption, but it’s still in most models.
        Secondly, they ignore the impact extra H2O has on the atmosphere: it makes the atmosphere unstable and hence promotes overturning, which carries the H2O and the surrounding air from ground level to the upper atmosphere, where the H2O condenses and releases its heat.
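        The constant-relative-humidity assumption described above can be sketched with the Magnus approximation for saturation vapor pressure (an empirical formula; the coefficients below are one common set, and the 50% RH figure is arbitrary):

```python
import math

def saturation_vapor_pressure_hpa(t_celsius):
    """Magnus approximation for saturation vapor pressure over water, in hPa."""
    return 6.112 * math.exp(17.62 * t_celsius / (243.12 + t_celsius))

def vapor_pressure_at_rh(t_celsius, rh_fraction):
    """Actual vapor pressure if relative humidity is held fixed."""
    return rh_fraction * saturation_vapor_pressure_hpa(t_celsius)

# If RH really stays constant, a 1 C warming raises absolute water
# vapor by roughly 6-7 percent (the RH factor cancels in the ratio).
e15 = vapor_pressure_at_rh(15.0, 0.5)
e16 = vapor_pressure_at_rh(16.0, 0.5)
print(round(100.0 * (e16 / e15 - 1.0), 1))
```

        That few-percent-per-degree increase is the water vapor feedback the models bake in; the comment’s objection is that the constant-RH premise itself was assumed rather than confirmed.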

  40. Far too many climate change papers are just the results of model runs with different sets of hypothetical input variables. The essential message is “Honey, I ran the climate model.”
    If they had empirical evidence they would not have to rely on modeling so much.
    more on this topic at

  41. @benben, May 19, 3: 64 pm,
    Anyway, the truth of the matter is that climate research is pretty well done nowadays. Websites like WUWT create this alternative narrative of a bunch of crazy power hungry bureaucrats abusing incredibly simplistic models to undermine democracy. It’s entertaining to read but it’s just really far removed from the boring daily grind of science, of which climate research is very much a part.
    Hey, you’re always free to follow a couple of MOOCs on the topic and see for yourself how things are done, instead of rely on stuff like whats posted here!
    And a little later, you said:
    Contrast that with the post on which we are commenting here. The guys at WUWT can’t even put together a decent google scholar search.
    Now, I understand that most of the stuff on this site is actually a super conservative American flamewar against liberal Americans, but its just weird to see science caught up in it.
    Just a wild-ass guess: are you a liberal? I am surprised you lower yourself to our level. Frankly, your contribution here should be a waste of your “valuable time”; get a grip (and leave).
    BTW, as a “redneck” it took me a while to simmer down, but after thinking about it and assessing your comments I had to speak up. A little later I thought about deleting this, but you need to get out of the basement.

    • hmmm well, it certainly wasn’t my intention to offend. My apologies, asybot. I’m not American so I don’t really follow the distinction American liberal vs. American conservative. I wrote that more as an observation.
      But, ok, so just out of honest curiosity. It seems to me that almost everybody here is an American Republican, or the occasional UK equivalent. And almost all the people being trashed on this website (e.g. Al Gore, Obama, etc.) are American liberals. So is it an accurate assessment that a lot of the stuff here is about the intersection of climate science and American politics?
      Again, I don’t want to offend, but I don’t really see how that observation is offensive.

  42. Apparently climate change research is a virtual reality career : )
    What has disturbed me for the entire history of this debate is the naïve faith in the accuracy of models.
    As a professional in the computer industry with a degree in engineering I have followed the use of computer modeling for aircraft design for some time. Finally after decades of development including countless rounds of calibrating program results against real world wind tunnel tests we have codes that can do a reasonable job of predicting the performance of an aircraft design in well-behaved flight before it is ever built.
    These same programs however do a poor job of predicting performance for extreme maneuvers because the codes cannot deal well with turbulent flow. The simplest way to describe this is that we can simulate how a plane flies pretty well but not how a parachute will open. The former case is essentially trivial by comparison to the latter (just ask the engineers at JPL trying to design the parachutes for our Mars lander missions) and I would argue that modelling the climate is a similar challenge. There are simply too many unknowns and too many uncertainties involved.
    To drive national policy based on the predictions of climate models is simply delusional. It’s like believing that you are a great athlete because you excel at video game sports.
    MHO anyway.

    • Mike Nelson wrote: “To drive national policy based on the predictions of climate models is simply delusional. It’s like believing that you are a great athlete because you excel at video game sports.
      MHO anyway.”
      I agree with your opinion.

    • Excellent comment.
      The point I’d make is that calculating a relatively long-term global mean is a much simpler task than calculating atmospheric dynamics.
      It’s more akin to calculating the mean temperature of a volume of gas from gas laws without any attempt to model the eddies within.
      This has led to an ignoring of the non-optional macro physical constraints, and careers spent tweaking the dials on the parameters influencing the eddies.

  43. An end to a means. In this case an agenda. Empirical data does not matter to these people.

  44. Once again, this documentary is way ahead of its time, and is being proven very prophetic. This clip highlights the graphic by Josh, and Dr. Christy has a great quote on model-based science. Unfortunately this documentary isn’t shown in schools.

  45. On a physics blog I read, “… experimental physicists and theoretical physicists must work together. Their symbiotic relationship – with theorists telling experimentalists where to look, and experimentalists asking theorists for explanations of unusual findings – is necessary, if we are to keep making discoveries.” There was a time when the distinction between the two sorts was not so stark. It seems that climate science suffers from the lack of such a distinction – or rather, from the relative difficulty on the experimentalist side of obtaining direct measurements of so much highly useful data. Would the money spent on Cray computers for modeling be better spent on [for example] a high-resolution real-time satellite at L2 monitoring outgoing nighttime LWIR? That sort of data would provide plenty of useful info for falsification of, or limits on, lots of model parameters, it would seem.

    • Randy Bork wrote: “Would the money spent on Cray computers for modeling be better spent on [for example] a high resolution real time satellite at L2 monitoring outgoing nighttime LWIR? That sort of data would provide plenty of useful info for falsification or limits on lots of model parameters it would seem.”
      What a good suggestion! Real data instead of speculation. Are you listening NASA? I bet Trump will listen. Then, NASA will listen for sure.

  46. And what do computers do, what are they if not a reflection of whatever mind or thought process programmed the thing?
    One of two possibilities is that the computer is the ultimate appeal to authority. No one is going to pick a fight with a computer. But computers do what computers do – you wouldn’t argue with a 300hp tractor in a ploughing contest, would you? So it is with computers.
    They are the modern-day perfect oracles. Digital = Right or Wrong. On/Off. Black/White. No compromise.
    And how do you know if its output is right or wrong?
    Simple. It is what you expect it to be.
    If it produces something unexpected, then it’s obviously wrong and must be re-programmed.
    The second possibility follows from the first: computers produce what is expected. They are then safe, predictable, not a threat. A return to the womb, if you like.
    Just how much damage can be done before this (insanity?) stops?
    Now *there’s* a thing to worry about.

    • Peta in Cumbria wrote: “Just how much damage can be done before this (insanity?) stops?
      Now *there’s* a thing to worry about.”
      Quite a bit of damage has already been done. Just look at how Europe is crippling their electricity production over this CAGW scam. Electricity prices in Europe are skyrocketing, and electricity availability gets more uncertain the farther down this road they go.
      All because of the unproven CAGW theory and the fear its advocates have instilled in people.
      CAGW is not only wasting enormous amounts of money, it is also putting people’s lives in real jeopardy by causing the costs of electricity to soar, and crippling the electrical grid. People die when they can’t get electricity.

      • As a European I just don’t see what you are talking about. Electricity availability is rock steady and has been for as long as I’ve been around, and prices are somewhat higher than in the US (+30%?) but have been going down the last couple of years. In my country at least. Now gasoline, that is 4x more expensive in my country than in the US. But that is mostly due to taxes, of course.
        And anyway, electricity is a negligible part of overall household expenditure, so it’s a pretty moot point.

    • I speak as someone who used computer model runs of weather out to T+120 professionally.
      They are merely a *possible* outcome, with a probability of success that ranges from 0 to 100%.
      The further forward in time we project we become increasingly uncertain of the outcome.
      We do more runs and end up with an ensemble (as happens in GCMs), each time altering the starting conditions a little.
      What does that tell us?
      It tells us the sensitivity of the atmosphere to initial conditions, and narrows the possibilities of error at some future time.
      IOW: Model runs can never be truly deterministic.
      We learn *stuff* from them.
      End of.
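      The ensemble procedure described above – same model, slightly perturbed starting conditions – can be illustrated with the Lorenz-63 toy system (a standard stand-in for atmospheric sensitivity, not an actual forecast model; the step size, run length, and perturbation size below are arbitrary choices):

```python
import random

def lorenz_step(x, y, z, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 system."""
    return (x + dt * s * (y - x),
            y + dt * (x * (r - z) - y),
            z + dt * (x * y - b * z))

def run(x, y, z, steps=3000):
    """Integrate and return the final x coordinate."""
    for _ in range(steps):
        x, y, z = lorenz_step(x, y, z)
    return x

random.seed(0)
# Ten members: identical model, starting states perturbed by ~1e-6.
ensemble = [run(1.0 + random.uniform(-1e-6, 1e-6), 1.0, 1.0)
            for _ in range(10)]
spread = max(ensemble) - min(ensemble)
print(spread > 1.0)  # micro-perturbations grow to a macroscopic spread
```

      The ensemble spread is informative about sensitivity to initial conditions; no single member is a deterministic forecast, which is the commenter’s point.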

  47. There is a parallel in post-modern philosophy, where the intellectual effort is fixated on creating a plausible alternative to the reality we must live in. Based on how all their proposed solutions seem to converge on socialism it appears that most climate scientists are also post-modernists. So for them their work is a “two-fer”.

  48. Maybe they use models because reality is too discomforting to their (and their superiors’) beliefs?

  49. I think real models should be hired to present the climate models to the public.
    And not the really skinny runway models .. and they should demonstrate clothing appropriate for global warming, such as a bikini.
    If the taxpayers money is going to be wasted, shouldn’t we get something of value?

  50. I know what you are saying, but strictly speaking all science is models. A scientific theory is a model!

  51. Whatever the field, the problem is that a computer’s output is taken as the conclusion rather than a piece of the puzzle used to arrive at a conclusion.
    Punch a bunch of numbers into a “super pocket calculator” and some will believe that they only have $2 in their wallet rather than the $20 they can count. It doesn’t occur to them that maybe “the puncher” got the decimal place wrong or their “super pocket calculator” is defective.

  52. “Isn’t science wonderful, ladies and gentlemen? You get such a wholesale return of conjecture from such a trifling investment in facts.” — Mark Twain
    This is not hard to understand. There is a lot of government grant money to be had, but acquiring facts in the climate change business is expensive and difficult. Who wants to freeze in Antarctica drilling holes in the ice, when one can sit comfortably at a keyboard sipping a latte and still publish? Models are wonderful sources of revenue.
    Kudos to the real scientists who still insist on gathering real data. Fie on the ones who rearrange said precious data to fit their models, instead of the other way around.

  53. The internal validity and accuracy of the climate models serve as the basis for the talking point / story line for the entire policy development. It is so full of holes it leaks like a sieve. The climate models are deterministic models based on assumptions (as Pat Michaels points out) and do not account for natural causes and misrepresent attribution of cause – the models are fit to manmade greenhouse gases because that is what the establishment wants the attribution to be.
    I think there is a need to focus on the logical fallacy of the approach of using climate models built with a predetermined stack of independent variables to give an output that is based on the establishment expectation of finding cause in the predetermined chosen independent variables. It is circular reasoning; it is misleading, bordering on deception, especially when global policies are being set based on the results. The policy push would be severely weakened if this were properly challenged.
    In this regard, I think there is a big need for studying climate modeling in a framework of probability and causal reasoning / statistical causality, along the lines of the work by Judea Pearl. Pearl won the ACM A.M. Turing Award in 2011 for fundamental contributions to artificial intelligence through the development of a calculus for probabilistic and causal reasoning. Judea Pearl created the representational and computational foundation for the processing of information under uncertainty and Bayesian networks, creating a mathematical framework for causal inference.
    The paper by Judea Pearl, “Causal Inference in Statistics,” discusses a framework which is based on:
    1. Counterfactual analysis
    2. Nonparametric structural equations
    3. Graphical models
    4. Symbiosis of counterfactual and graphical methods.
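    Pearl’s core concern can be shown with a toy confounding simulation (all variable names and coefficients here are invented for illustration): a hidden common driver makes two variables correlate strongly even though neither causes the other – exactly the trap a predetermined stack of independent variables can fall into.

```python
import random

random.seed(42)

# Hidden driver Z causes both X and Y; X has no causal effect on Y.
n = 10000
z = [random.gauss(0.0, 1.0) for _ in range(n)]
x = [zi + random.gauss(0.0, 0.5) for zi in z]        # X driven by Z
y = [2.0 * zi + random.gauss(0.0, 0.5) for zi in z]  # Y driven by Z, not X

def corr(a, b):
    """Pearson correlation coefficient."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    va = sum((ai - ma) ** 2 for ai in a)
    vb = sum((bi - mb) ** 2 for bi in b)
    return cov / (va * vb) ** 0.5

# Strong X-Y correlation (theoretically ~0.87) despite no X -> Y link;
# a regression that omits Z would wrongly attribute cause to X.
print(corr(x, y) > 0.7)
```

    Pearl’s graphical machinery formalizes when adjusting for a variable like Z recovers the true (here, null) causal effect.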

    • I think there is a need to focus on the logical fallacy of the approach of using climate models built with a predetermined stack of independent variables to give an output that is based on the establishment expectation of finding cause in the predetermined chosen independent variables. it is circular reasoning,

      It’s worse than this: it’s not just GCMs, the same hypothesis is used to infill and homogenize the surface series.
      BEST does out-of-band testing, but their process is to construct a field that represents climate. Mosh says they can get this field with only latitude, altitude, and whether it’s near a large body of water; the difference between the field and a measurement is weather.
      So two points. First, the field has to have the hypothesis baked into the field’s temperature. Second, because there is no global-average temperature measurement for any of the out-of-band stations (in fact they might only record a min and max temp), the measurements have to be processed before they can be compared to the field.
      Didn’t Michelangelo say, when looking at a block of stone, that you had to imagine the statue inside the block and remove what wasn’t statue?

  54. You say the science of climate processes is far from adequate – i.e., in this article you are arguing for more modelling. You say we need more understanding of natural vs. human factors – i.e., you want more modelling.

  55. To make an argument, you have to compare climate science with other types of science, to see how important modeling is.
    For instance, in studies of the heat resistance of nickel superalloys (important to jet engines), we can do real-world, relevant experiments.
    In studies of how changes in the DMD gene (responsible for Duchenne’s dystrophy) affect patients, we can do real-world studies.
    In studies of black hole radiation, we can’t do many experiments; it is all modeling, although the word modeling may not be used in that field.
    In other words, this post is the sort of sloppy, easy-to-do, not-a-lot-of-thought thing that you accuse warmers of doing. Irony, thy name is legion.

  56. Even back in the 1960’s the National Air Pollution Control Administration (now the EPA) was fixated on mathematical modeling, so the current fixation on modeling is no great surprise. It’s become tradition. A number of corporations like Battelle Memorial Institute were always submitting modeling proposals for review and (hopefully) government funding. Government Funding being the operative words.

  57. Look at the Cray picture carefully. Would it fit into your basement?
    That is the only question that matters.
    In Australia, as they fire the climate fellows, they are also looking for a home for their computer models. Presumably with the supercomputer and a hefty power bill.
    You could try to cover that big (fossil based) power bill by minting bitcoins.
    Of course you could also try to sell the computer model predictions, but that market is shrinking fast.

  58. Seriously speaking, a lot of people wrote above about computationally intensive models in other fields.
    In climate, the difference is that some of the main data, and even fundamental, dominating processes, are missing – especially concerning cloud cover and cloud seeding. So even a ten- or hundred-fold increase in computation power would not make much of a difference for the long-term predictions.
    The most interesting questions are the causes and mechanisms of the millennial oscillations, such as MWP-LIA-now, and of the multidecadal oscillations.
    Remember that when a journal tried to address the latter, they terminated it, like fascist thugs.
