Do Climate Projections Have Any Physical Meaning?

Guest essay by Pat Frank

This essay expands on a point made in a previous post here at WUWT, that climate models do not produce a unique solution to the energy state of the climate. Unique solutions are the source of physical meaning in science, and make a physical theory both predictive and falsifiable.

Predictive because a unique solution is a derived and highly specific statement about how physical reality behaves. It admits only one possibility, among an infinite number of possibilities, as the one that will occur. A unique solution asserts an extreme improbability, making it vulnerable to disproof by observation.

Falsifiable because if the prediction is wrong, the physical theory is refuted.

Figure 1 in the previous post showed that the huge uncertainty limits in projections of future global air temperatures make them predictively useless. In other words, they have no physical meaning. See also here (528 kB pdf), and see Figure 1 here, from a paper just out in Energy & Environment on the pervasive negligence that infects consensus climatology. [1]

This post will show that hindcasts of historically recent global air temperature trends also have no physical meaning.

The Figure below shows data from Figure SPM.5 of the IPCC AR4. [2] The dark red line in the top panel shows the multi-model average simulation of the 20th century global surface air temperature. The blue points are the 1999 version of the GISS land+sea global average air temperature record. [3] The correspondence between the simulated and observed temperatures is good (correlation R = 0.85; p<0.0001). The inset at the top of the panel shows the SPM.5 multi-model average as published in the AR4. The grey IPCC uncertainty envelope about the 20th century simulation is ± one standard deviation about the multi-model mean.

The IPCC’s relatively narrow uncertainty envelope implies that the hindcast simulation merits considerable confidence. The good correspondence between the observed and simulated 20th century temperatures is well within the correlation = causation norm of consensus climatology.

The bottom panel of Figure 1 also shows uncertainty bars about the 20th century multi-model hindcast. These represent the CMIP5 average ±4 Wm-2 systematic cloud forcing error propagated through the simulation. The propagation is carried out by inserting the cloud error into the previously published linear equation that accurately emulates GCM air temperature projections; also see here (2.9 MB pdf). Systematic error propagates as the root-sum-square.
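The root-sum-square rule described here can be sketched in a few lines. The function name and step count are illustrative; the ±4 Wm-2 figure is the CMIP5 average cloud forcing error cited in the text.

```python
import math

def rss_uncertainty(per_step_error, n_steps):
    """Root-sum-square accumulation of a constant per-step systematic error."""
    return math.sqrt(sum(per_step_error ** 2 for _ in range(n_steps)))

# A +/-4 W m^-2 error entering every annual step of a century-long simulation:
print(rss_uncertainty(4.0, 100))  # 40.0 -- the envelope widens as sqrt(n)
```

Because the accumulated uncertainty grows as the per-step error times the square root of the number of steps, the envelope necessarily widens with simulation time, which is the behavior shown in the bottom panel.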


Figure 1. Top panel (red line), the multi-model simulation of the 20th century global air temperature (IPCC AR4 Figure SPM.5). Inset: SPM.5 multi-model average 20th century hindcast, as published. Blue points: the GISS 1999 land+sea global surface air temperature record. Bottom panel: the SPM.5 multi-model 20th century simulation with uncertainty bars propagated from the root-sum-square CMIP5 average ±4 Wm-2 cloud forcing error.

The consensus sensibility will now ask: how is it possible for the lower panel uncertainty bars to be so large, when the simulated temperatures are so obviously close to the observed temperatures?

Here’s how: the multi-model average simulated 20th century hindcast is physically meaningless. Uncertainty bars are an ignorance width. Systematic error ensures that the further out in time the climate is projected, the less is known about the correspondence between the simulation and the true physical state of the future climate. The next part of this post demonstrates the truth of that diagnosis.

Figure 1 from Rowlands [4], below, shows “perturbed physics” projections from the HadCM3L climate model. In perturbed physics projections, “a single model structure is used and perturbations are made to uncertain physical parameters within that structure…” [5] That is, a perturbed physics experiment shows the variation in climate projections as model parameters are varied step-wise across their physical uncertainty.
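A toy version of such an experiment makes the mechanism concrete. Everything below (the linear response, the parameter values, the forcing increment) is invented for illustration and is not the HadCM3L physics.

```python
def toy_projection(feedback, years, forcing_per_year=0.04):
    """Toy linear climate response; `feedback` is the uncertain parameter."""
    temps, t = [], 0.0
    for _ in range(years):
        t += feedback * forcing_per_year  # warming accumulates each simulated year
        temps.append(t)
    return temps

# Step the uncertain parameter across its assumed range, one run per value:
ensemble = {f: toy_projection(f, 100) for f in (0.5, 0.75, 1.0, 1.25, 1.5)}

# The ensemble spread (max minus min projection) grows with simulation time:
spread_year10 = max(e[9] for e in ensemble.values()) - min(e[9] for e in ensemble.values())
spread_year100 = max(e[99] for e in ensemble.values()) - min(e[99] for e in ensemble.values())
print(spread_year10, spread_year100)
```

Every run uses the same model structure; only the uncertain parameter differs, yet the projections fan out ever more widely with time, exactly the pattern in Rowlands, et al., Figure 1.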


Figure 2. Original Legend: “Evolution of uncertainties in reconstructed global-mean temperature projections under SRES A1B in the HadCM3L ensemble.” The embedded black line is the observed surface air temperature record. The horizontal black lines at 1 C and 3 C, and the vertical red line at year 2055, are PF-added.

The HadCM3L model is representative of the behavior of all climate models, including the advanced CMIP3 and CMIP5 versions. Different sets of parameters produce a spread of projections of increasing deviation with simulation time.

Under the SRES A1B scenario, atmospheric CO2 increases annually. This means the energy state of the simulated climate increases systematically across the years.

The horizontal black lines show that the HadCM3L will produce the same temperature change for multiple (thousands of) climate energy states. That is, different sets of parameters project a constant 1 C temperature increase from every single annual climate energy state between 1995 and 2050. The scientific question is, which of the thousands of 1 C projections is the physically correct one?

Likewise, depending on parameter sets, a constant 3 C increase in temperature can result from every single annual climate energy state between 2030 and 2080. Which one of those is correct?

None of the different sets of parameters is known to be any more physically correct than any other. There is no way, therefore, to choose which temperature projection is physically preferable among all the alternatives.

Which one is correct? No one knows.

The identical logic applies to the vertical red line. This line shows that the HadCM3L will produce multiple (thousands of) temperature changes for a single climate energy state (the 2055 state). Every single Rowlands, et al., annual climate energy state between 1976 and 2080 has dozens of simulated air temperatures associated with it.

Again, none of the different parameter sets producing these simulated temperatures is known to be any more physically correct than any other set. There is again no way to decide which, among all the different choices of projected annual air temperature, is physically correct.

This set of examples shows that the HadCM3L cannot produce a unique solution to the problem of the climate energy state. No set of model parameters is known to be any more valid than any other set of model parameters. No projection is known to be any more physically correct (or incorrect) than any other projection.

This means, for any given projection, the internal state of the model is not known to reveal anything about the underlying physical state of the true terrestrial climate. More simply, the model cannot tell us anything at all about the physically real climate, at the level of resolution of greenhouse gas forcing.

The same is necessarily true for any modeled climate energy state, including the modeled energy states of the past climate.

Now let’s look back at the multi-model average 20th century hindcast in the top panel of post Figure 1. Map the spread of temperature projections in Rowlands, et al., Figure 1, which represents the ignorance width of the parameter sets, onto the single hindcast line of SPM.5. Doing so brings the realization that there must be an equally large set of equally valid but divergent hindcasts.

Each of the multiple models that produced that hindcast has a large number of alternative parameter sets. Those alternative sets are not known to be any less physically valid than whatever set produced each individual model hindcast.

There must exist a perturbed physics spread, analogous to Rowlands Figure 1, for the 20th century hindcast projection. The alternative parameter sets, all equally valid, would produce a set of hindcasts that would diverge with time. Starting from 1900, the individual perturbed physics hindcasts would diverge ever further from the known air temperature record through to 2000. But they have all been left out of Figure SPM.5.

The model states that produced the SPM.5 20th century hindcast, then, do not reveal anything at all about the true physical state of the 20th century terrestrial climate, within the resolution of 20th century forcing.

That means the multi-model average hindcast in SPM.5 has no apparent physical meaning. It is the average of hindcast projections that themselves have no physical meaning. This is the reason for the huge uncertainty bars, despite the fact that the average hindcast temperature trend is close to the observed temperature trend. The model states are not telling us anything about what caused the observed temperatures. Therefore the hindcast air temperatures have no physical connection to the observed air temperatures. The divergence of the perturbed physics hindcasts will increase with simulation time, in a manner exactly portrayed by the increasingly wide uncertainty envelope.

This conclusion remains true even if a given climate model happens to produce a projection that tracks the emergent behavior of observed air temperatures. Such correspondences are accidental, in that the parameter set chosen for that model run must have had offsetting errors. They were inadvertently assigned beneficial values from within their uncertainty margins. Whatever those beneficial values, they are not known to be physically correct. Nor can the accidental correlation with observations imply that the underlying model state corresponds to the true physical state of the climate.

The physical meaning of the recently published study of M. England, et al., [6] exemplified in Figure 3 below, is now apparent. England, et al., reported that some CMIP5 projections approximated the air temperature “hiatus” since 2000. They then claimed that this correspondence proved the “robust nature of twenty-first century warming projections” and that it “increase[s] confidence in the recent synthesized projections reported in the Intergovernmental Panel on Climate Change Fifth Assessment Report.”

Compare Figure 1 of England, et al., 2015, below, with Figure 1 of Rowlands, et al., 2012, above. The horizontal black lines and the vertical green line transmit the same diagnosis as the analogous lines in Rowlands, et al., Figure 1.

The England, et al., set of CMIP5 models produced constant air temperatures for multiple climate energy states, and multiple air temperatures for every single annual climate energy state. This, despite the fact that “all simulations follow identical historical forcings” ([6], Supplementary Information). The divergence of the projections, despite identical forcings, clearly reveals a spread in model parameter values.


Figure 3. Figure 1 from England, et al. 2015. [6] Original Legend: Global average SAT anomalies relative to 1880–1900 in individual and multi-model mean CMIP5 simulations. Blue curves: RCP4.5 scenario; red curves: RCP8.5 scenario. The horizontal black lines at 2 C and 3 C and the vertical green line at 2060 are PF-added.

The diagnosis follows directly from Figure 3: CMIP5 climate models are incapable of producing a unique solution to the problem of the climate energy state. They all suffer from internal parameter sets with wide uncertainty bands. The internal states of the models do not reveal anything about the underlying true physical state of the climate, past or future. None of the CMIP5 projections reported by England, et al., has any knowable physical meaning, no matter whether they track over the “hiatus” or not.

This brings us back around to the meaning of the huge uncertainty bars in the bottom panel of the 20th century hindcast in post Figure 1. These arise from the propagated CMIP5 model ±4 Wm-2 average cloud forcing error. [7, 8] Like parameter uncertainty, cloud forcing error also indicates that climate models cannot provide a unique solution to the problem of the climate energy state.

Uncertainty bars are an ignorance width. They indicate how much confidence a prediction merits. Parameter uncertainty means the correct parameter values are not known. Cloud forcing error means the thermal energy flux introduced by cloud feedback into the troposphere is not well-known. Models with internal systematic errors introduce that error into every single step of a climate simulation. The more simulation steps, the less is known about the correspondence between the simulated state and the physically true state.

The more simulation steps, the less knowledge, and the greater the ignorance about the model deviations from the physically true state. This is the message of the increasing width of the uncertainty envelope of propagated error.

Every single projection in England, et al.’s Figure 1 is subject to the ±4 Wm-2 CMIP5 average cloud forcing error. A proper display of their physical meaning should include an uncertainty envelope like that in post Figure 1, bottom. Moreover, the systematic error in the projections of individual models enters a multi-model average as the root-mean-square. [9] England, et al.’s multi-model mean projections — the dark red and blue lines — have even greater uncertainty than any of the individual projections. This is an irony that regularly escapes consensus climatologists.
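The root-mean-square combination invoked here can be sketched as follows (an illustrative sketch, not the reference [9] procedure verbatim). The point is that, unlike random error, a systematic uncertainty combined this way does not average away as more models are added.

```python
import math

def rms(uncertainties):
    """Root-mean-square combination of per-model systematic uncertainties."""
    return math.sqrt(sum(u ** 2 for u in uncertainties) / len(uncertainties))

# Averaging N models sharing a +/-4 W m^-2 systematic error leaves +/-4 W m^-2,
# whether N is 3 or 30:
print(rms([4.0] * 3), rms([4.0] * 30))
print(rms([3.0, 4.0, 5.0]))  # mixed errors: the RMS is pulled toward the largest
```

Contrast this with purely random errors, whose contribution to a mean shrinks as one over the square root of N; systematic error enjoys no such cancellation.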

So, when you see a figure such as Figure 4 top, below, supplied by the US National Academy of Sciences [10], realize that a presentation that fully conformed to scientific standards would look like Figure 4 bottom.


Figure 4. Top: Figure 4 from [10]; original legend: “Model simulations of 20th century climate variations more closely match observed temperature when both natural and human influences are included. Black line shows observed temperatures.” Bottom: the top-left US NAS panel showing the global 20th century air temperature hindcast, but now with uncertainty bars from the propagated ±4 Wm-2 CMIP5 average cloud forcing error.

It makes no sense at all to claim that an explanation of later 20th century warming is not possible without including “human influences,” when in fact an explanation of later 20th century warming is not possible, period.

Climate modelers choose parameter sets with offsetting errors in order to successfully hindcast the 20th century air temperature. [11] That means any correspondence between hindcast temperatures and observed temperatures is tendentious — the correspondence is deliberately built-in.

The previous post made the case that their own statements reveal that climate modelers are not trained as physical scientists. It showed that climate modeling itself is a liberal art in the manner of cultural studies, but elaborated with mathematics. In cultural studies, theory just intellectualizes the prejudices of the theorist. This post presents the other side of that coin: the lack of understanding that follows from the lack of professional training.

The fact that England, et al., can claim the “robust nature of twenty-first century warming projections” and ‘increased confidence’ in IPCC projections, when their models are obviously incapable of resolving the climate energy state, merely shows that they can have no understanding whatever of the source of physical meaning. This is why they exhibit no recognition that their models’ projections have no physical meaning. Likewise the editors and reviewers of Nature Climate Change, the management of the US National Academy of Sciences, and the entire IPCC, top to bottom.

The evidence shows that these people do not know how physical meaning emerges from physical theory. They do not know how to recognize physical meaning, how to present physical meaning, nor how to evaluate physical meaning.

In short, they understand neither prediction nor falsification; conjointly the very foundation of science.

Climate modelers are not scientists. They are not doing science. Their climate model projections have no physical meaning. Their climate model projections have never had any physical meaning.

To this date, there hasn’t been a single GHG emissions climate projection, ever, that had physical meaning. So, all those contentious debates about whether some model, some set of models, or some multi-model mean, tracks the global air temperature record, or not, are completely pointless. It doesn’t matter whether a physically meaningless projection happens to match some observable, or not. The projection is physically meaningless. It has no scientific content. The debate has no substantive content. The debaters may as well be contesting theology.

So, when someone says about AGW that, “The science is settled!,” one can truthfully respond that it is indeed settled: there is no science in AGW.


References:

1. Frank, P., Negligence, Non-Science, and Consensus Climatology. Energy & Environment, 2015. 26(3): p. 391-416.

2. IPCC, Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, S. Solomon, et al., Editors. 2007, Cambridge University Press: Cambridge.

3. Hansen, J., et al., GISS analysis of surface temperature change. J. Geophys. Res., 1999. 104(D24): p. 30997–31022.

4. Rowlands, D.J., et al., Broad range of 2050 warming from an observationally constrained large climate model ensemble. Nature Geosci, 2012. 5(4): p. 256-260.

5. Collins, M., et al., Climate model errors, feedbacks and forcings: a comparison of perturbed physics and multi-model ensembles. Climate Dynamics, 2011. 36(9-10): p. 1737-1766.

6. England, M.H., J.B. Kajtar, and N. Maher, Robust warming projections despite the recent hiatus. Nature Clim. Change, 2015. 5(5): p. 394-396.

7. Lauer, A. and K. Hamilton, Simulating Clouds with Global Climate Models: A Comparison of CMIP5 Results with CMIP3 and Satellite Data. J. Climate, 2013. 26(11): p. 3823-3845.

8. Frank, P., Propagation of Error and the Reliability of Global Air Temperature Projections; Invited Poster, in American Geophysical Union Fall Meeting. 2013: San Francisco, CA; Available from: http://meteo.lcd.lu/globalwarming/Frank/propagation_of_error_poster_AGU2013.pdf (2.9 MB pdf).

9. Taylor, B.N. and C.E. Kuyatt, Guidelines for Evaluating and Expressing the Uncertainty of NIST Measurement Results. 1994, National Institute of Standards and Technology: Washington, DC. p. 20.

10. Staudt, A., N. Huddleston, and I. Kraucunas, Understanding and Responding to Climate Change. 2008, The National Academy of Sciences USA: Washington, D.C.

11. Kiehl, J.T., Twentieth century climate model response and climate sensitivity. Geophys. Res. Lett., 2007. 34(22): p. L22710.


232 thoughts on “Do Climate Projections Have Any Physical Meaning?”

    • “In short, they understand neither prediction nor falsification; conjointly the very foundation of science.
      Climate modelers are not scientists. They are not doing science.”

      Oh, that is beautiful! Nearly brings a tear to my eye.

      “So, when someone says about AGW that, “The science is settled!,” one can truthfully respond that it is indeed settled: there is no science in AGW.”

      Another gem! Thanks for a very fine essay. These are getting posted on my office wall.

  1. While the end products cannot be falsified, what about the intermediate predictions on water vapor, tropical heat, CO2 transport time, etc.?

    • It suffers from the same large range of predictions because there are too many guesses in the calculations.

      • Even if the parameters were known exactly and the science were absolutely correct, an infinite number of different predictions are possible due to computer round-off errors.

        Even a very simple linear system with only three unknowns (x, y, z), such as you likely solved in high school math to calculate the intersection of a line and a plane, suffers from this problem. Likely the only reason you got the right answer in high school was because the test was contrived to be well behaved mathematically. And that is an extremely small and simple problem compared to a climate model.

        Round-off errors accumulate in virtually all computerized numerical solutions, and depending on the complexity of the problem these errors grow until they overwhelm the result. All you can say is that the answer lies somewhere between the error bounds; you most certainly cannot draw a line (average) half-way between the bounds and say “here is the correct answer” (unless, of course, you are climate science).
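A minimal demonstration of this accumulation in ordinary double-precision arithmetic: 0.1 has no exact binary representation, so every addition is rounded, and the rounding drift grows with the number of steps.

```python
# Add 0.1 ten million times; each addition rounds, and the drift accumulates.
total = 0.0
for _ in range(10_000_000):
    total += 0.1

exact = 1_000_000.0
print(total, abs(total - exact))  # the drift is nonzero and grew over many steps
```

The per-step rounding is tiny, yet after ten million steps the sum visibly differs from the exact answer, which is the commenter's point writ small.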

      • More likely the model mesh is too coarse to properly ‘model’ the small-scale but high-significance events like tropical cells, etc. This leads to convergence to a false solution.

        An analogy: imagine when you were a kid whipping a wave down a length of rope. Now imagine the rope is a light plastic chain of the same weight per length; the wave is pretty much the same. Next imagine that the links are progressively longer. The ‘wave’ will progressively distort from its ‘true’ form, becoming a clunky version that might exhibit some very strange behaviours, even locking back on itself. That is the mesh-size effect.

        The problem is that as mesh size decreases to get realistic modelling, the calculation time increases. Increase a 2D mesh density tenfold and calculation time goes up by a factor of 100.

        People doing CFD studies have the same problem (which is how I found out about it).
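The cost scaling behind this trade-off can be made concrete. This is a sketch; the function and the stability-limit assumption are illustrative, not taken from any particular model.

```python
def relative_cost(refine, spatial_dims=2, cfl_limited=False):
    """Relative compute cost when each spatial dimension is refined `refine`-fold.

    Cell count grows as refine ** spatial_dims; if the time step must also
    shrink with the cell size (a CFL-type stability limit), multiply by
    refine once more for the extra time steps.
    """
    cost = refine ** spatial_dims
    if cfl_limited:
        cost *= refine
    return cost

print(relative_cost(10, spatial_dims=2))                     # 100: the 2D figure above
print(relative_cost(10, spatial_dims=3, cfl_limited=True))   # 10000: 3D plus shorter steps
```

This is why halving the mesh spacing of a global 3D model is never a factor-of-two cost; it is closer to a factor of sixteen.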

      • How could climate projections have any physical meaning? They are the output of a computer program, and one that doesn’t get all the physical universe parameters included in its considerations.

        So no it is all fiction.

        You have to model something that is physically real if you expect the model behavior to have any physical meaning. And that modeling can only (if done correctly) replicate the past.

        Once you project it into the future, it can only tell you how surprised you might be once you get to the future and find out what really happened.

      • Given enough computer power you can make any model’s “mesh” as fine as you like; but that doesn’t do you any good.

        It is the size of the physical mesh from which you obtain measured values to put into the model that determines how good it is.

        The satellite based data gathering systems can at least scan most of the planet, to measure spatial samples, and temporal data as well.

        The ground based sampling system is a joke, and creating a fictitious model of arbitrarily fine mesh size doesn’t improve the model’s accuracy. You are just interpolating between data values that are inadequately sampled anyhow.

    • Good question, Jean. The physical parameter uncertainties are large enough that even the first step of a simulation has no predictive value. Parameter uncertainties are so wide that the incompleteness of the climate physical theory itself cannot be diagnosed from the predictive errors.

      The major problem, as I see it, is that climate modeling has abandoned the reductionist program. The field is trying to model the entire climate in one shot, rather than building up by combining understandings of the physics of climate subsystems. But understanding climate physical subsystems requires that modelers collaborate intimately with empirical climate physicists like Richard Lindzen.

      There needs to be a constructive back-and-forth between models and observational experiments on small climate phenomena, that are of a magnitude that can be modeled at a level tested by those observations. The field needs to learn to model small before it can model large.

      But climate modelers have scorned the reductionist program, no matter that it’s been wildly successful. Adopting that program would mean subjecting their work to the gritty and ruthless accounting of real physics. My perception is that most modelers are trained as mathematicians and are Platonist in outlook. The purity of the ideal is all that matters.

  2. “Hindcasts” are particularly unconvincing given the number of fudge factors that can be adjusted. With enough parameters to play with you can fit an elephant into a Mini Cooper, but this does not mean that the model says anything relevant about reality.

    • And what are they “hindcasting” to? The historical record has changed as much as the projections.

      • The future is known. It’s the past that keeps changing.

        The substantial and dynamic adjustments made to the temperature record are a crisis for climate science. Even if every adjustment is defensible, bias can creep in by not seeking with the same vigour those adjustments that might increase historic temperatures.

  3. Something that has always bugged me: how can a single physical experiment (i.e., our climate’s trajectory) give results that confirm an average of thousands of climate simulations?

    • You’ve got the elements but better to invert your question, bobbyvalentine: how can the average of thousands of inaccurate simulations give any physical information about the climate trajectory?

  4. See Section 1 at
    http://climatesense-norpag.blogspot.com/2014/07/climate-forecasting-methods-and-cooling.html

    for further discussion of the meaninglessness of climate model outputs for forecasting purposes.
    Here are some quotes.

    “1. The Problems with the IPCC – GCM Forecasting method.

    1.1 The Inherent Inutility of the Modeling Approach when Dealing with Complex Systems

    The CAGW meme and by extension the climate and energy policies of most Western Governments are built on the outputs of climate models. In spite of the inability of weather models to forecast more than about 10 days ahead, the climate modelers have deluded themselves, their employers, the grant giving agencies, the politicians and the general public into believing that they could build climate models capable of accurately forecasting global temperatures for decades and centuries to come. Commenting on this reductionist approach, Harrison and Stainforth say in: http://onlinelibrary.wiley.com/doi/10.1029/eost2009EO13/pdf

    “Reductionism argues that deterministic approaches to science and positivist views of causation are the appropriate methodologies for exploring complex, multivariate systems … where the behavior of a complex system can be deduced from the fundamental reductionist understanding. Rather, large, complex systems may be better understood, and perhaps only understood, in terms of observed, emergent behavior. The practical implication is that there exist system behaviors and structures that are not amenable to explanation or prediction by reductionist methodologies … the GCM is the numerical solution of a complex but purely deterministic set of nonlinear partial differential equations over a defined spatiotemporal grid, and no attempt is made to introduce any quantification of uncertainty into its construction … [T]he reductionist argument that large scale behavior can be represented by the aggregative effects of smaller scale process has never been validated in the context of natural environmental systems … An explosion of uncertainty arises when a climate change impact assessment aims to inform national and local adaptation decisions, because uncertainties accumulate from the various levels of the assessment. Climate impact assessments undertaken for the purposes of adaptation decisions (sometimes called end-to-end analyses) propagate these uncertainties and generate large uncertainty ranges in climate impacts. These studies also find that the impacts are highly conditional on assumptions made in the assessment, for example, with respect to weightings of global climate models (GCMs)—according to some criteria, such as performance against past observations—or to the combination of GCMs used. Future prospects for reducing these large uncertainties remain limited for several reasons. Computational restrictions have thus far restricted the uncertainty space explored in model simulations, so uncertainty in climate predictions may well increase even as computational power increases. … The search for objective constraints with which to reduce the uncertainty in regional predictions has proven elusive. The problem of equifinality (sometimes also called the problem of “model identifiability”) – that different model structures and different parameter sets of a model can produce similar observed behavior of the system under study – has rarely been addressed.”

    1.2 The impossibility of computing valid outcomes for GCMs

    The modelling approach is also inherently of no value for predicting future temperature with any calculable certainty because of the difficulty of specifying the initial conditions of a sufficiently fine grained spatio-temporal grid of a large number of variables with sufficient precision prior to multiple iterations. For a complete discussion of this see Essex: https://www.youtube.com/watch?v=hvhipLNeda4

    Models are often tuned by running them backwards against several decades of observation; this is much too short a period to correlate outputs with observation when the controlling natural quasi-periodicities of most interest are in the centennial and especially the key millennial range. Tuning to these longer periodicities is beyond any computing capacity when using reductionist models with a large number of variables, unless these long-wave natural periodicities are somehow built into the model structure.
    ………………….
    In summary, the temperature projections of the IPCC – Met Office models, and all the impact studies which derive from them, have no solid foundation in empirical science, being derived from inherently useless and specifically structurally flawed models. They provide no basis for the discussion of future climate trends and represent an enormous waste of time and money. As a foundation for governmental climate and energy policy their forecasts are already seen to be grossly in error and are therefore worse than useless. A new forecasting paradigm needs to be adopted.”
    Sections 2 and 3 at the same link provide estimates of the timing and extent of the coming cooling, based on the natural 60- and 1000-year periodicities in the temperature data and using the 10Be and neutron count data as the most useful proxy for solar activity.

    • I like your analysis. One thing we can do is calculate the positions of all the planets and the sun relative to the barycenter of the solar system. From this we can predict 1000-year and 60-year cycles. These cycles correctly predicted major changes in the climate in the past. They even predict the “pause” that is currently occurring. The theory also makes predictions that can be falsified.
      With regard to climate models, we have to look at the feasibility of using computational fluid dynamics to produce predictions that actually model reality. Without going into the theory of turbulence and the Navier-Stokes partial differential equations, we can ask: “Why do Boeing and Airbus have wind tunnels to test airflow over wings and other parts of their planes? Why not use a computer model?”
      And when all is said and done we do not even know how human CO2 emissions contribute to the total CO2 in the atmosphere.

    • I tend to disagree with Harrison’s and Stainforth’s rejection of the reductionist program, Norman. Their assessment implicitly assumes no possible fundamental advances in computing and modeling of complex systems, ever; that it will always be done using the methods we know today, maybe just bigger and more of them. Can anyone really say that the difference between computation now and in 2100, or 2300, will just be in the size, power, or parallelism of the computer architecture we know today?

      I think the reductionist approach is the only viable approach. Growth in knowledge of the physical system, over decades, will almost certainly be accompanied by advances in ways to compute, e.g., perhaps advanced heuristic computers employing some form of three-valued logic to account for deterministic chaos. Or maybe something else.

      The reductionist program must have a long-range view. It proceeds step-wise, and every forward step is small, fraught, and hard-earned. That means an ability to truly model the complete climate might be a century away. Or two. It seems to me that modelers today are naive and impatient with partial results (all we ever really get in physical science). They’ve abandoned the reductionist program and describe their work in the language of universal significance. It paints wonderful pictures for the naive, but is very premature and scientifically vacant. This basic mistake, to me, is yet one more piece of evidence that climate modelers are not scientists.

      • Reductionism sure but only because it’s all we’ve [reliably] got. Perhaps one day we will be able to predict emergent systems? A computer advance presumably.

      • Pat and Jon, we may speculate about future possibilities, but for the present, certainly, the reductionist approach is useless. I disagree strongly with Jon’s thinking that it is all we reliably have. Quasi-repetitive patterns are clearly present in the changing temperature data which we use as the symbol of climate change. You can think of these emergent patterns as the product of the real world acting as a virtual computer, if that makes the numerically and digitally minded more comfortable. Similar patterns are seen, e.g., in the solar data, the ocean data (PDO, AMO, etc.) and, as you well know, in the planetary orbits and the Milankovitch cycles. The human brain is at this time superior to computers in seeing these patterns. Think about it: computers cannot produce (or see) patterns unless they have been fed the input data and algorithms on which they run. Computer outputs at the core are always tautologous, i.e. circular, in the sense that they depend upon what was fed into them by human programmers.
        I think that if we stand back and view the climate data with the right time-scale perspective, and have a wide enough knowledge of the relevant data time series to judge their reliability, then the patterns are clearly obvious, their period and amplitude ranges can be reasonably estimated and projected forward, and the relationships between the driver and temperature data may be reasonably well inferred without necessarily being precisely calculated.
        The biggest mistake of the establishment was to ignore the longer-term cycles and to project several decades of data forward linearly when we are obviously approaching, at, or just past a peak in a millennial cycle. This is more than scientific inadequacy; it is a lack of basic common sense. The modelers’ approach is analogous to looking at a pointillist painting from 6 inches: they simply can’t see the wood for the trees, or the pattern for the dots. (In a recent paper Mann has finally, after much manipulation, managed to discover the 60 +/- year cycle which any schoolboy can see by looking at Fig. 15 at
        http://climatesense-norpag.blogspot.com/2014/07/climate-forecasting-methods-and-cooling.html )

  5. “Falsifiable because if the prediction is wrong, the physical theory is refuted.”

    Wrong.

    Even Popper knew this wasn’t the case.
    Even Feynman knew:

    http://www.pbs.org/wgbh/nova/physics/solar-neutrinos.html

    “Well, right from the beginning it was apparent that Ray was measuring fewer neutrino events than I had predicted. He came to Caltech in early 1968 to spend a week with me while he and I wrote our papers up, describing, for me, a refined calculation and, for him, the first measurement of the rate in his tank. It was clear that the rate that he was getting was a factor of three smaller than I was predicting, and that was a very serious problem.

    There was a famous meeting at Caltech, just a few physicists—Dick Feynman, Murray Gell-Mann, Willie Fowler, Bob Christie, and a couple of others—in a small meeting room, where Ray presented his results and I presented my calculations of what he should have measured. There was some discussion of it afterwards, and it was pretty inconclusive. There was a discrepancy; it looked like one of us was wrong.

    “I was very visibly depressed, I guess, and Dick Feynman asked me after the meeting if I would like to go for a walk. We just went for a walk, and he talked to me about inconsequential things, personal things, which was very unusual for him, to spend his time in quite idle conversation; it never happened to me in the many years that I knew him that he did that before or afterwards. And only toward the end of the walk, which lasted over an hour, he told me, “Look, I saw that after this talk you were depressed, and I just wanted to tell you that I don’t think you have any reason to be depressed. We’ve heard what you did, and nobody’s found anything wrong with your calculations. I don’t know why Davis’s result doesn’t agree with your calculations, but you shouldn’t be discouraged, because maybe you’ve done something important, we don’t know. I don’t know what the explanation is, but you shouldn’t feel discouraged.””

    But there were plenty of scientists who did think there was something wrong with your model of the sun.

    Well, initially very few people paid any attention to this discrepancy, but the discrepancy persisted. … And every year for 30 years I had to look at different processes that people would imaginatively suggest that might play a role in the sun, and it didn’t matter how convinced I was that they were wrong. I had to demonstrate scientifically that these processes were not important in order to convince people [that] yes, the expectation from the sun was robust and therefore you should take the discrepancy seriously. It took I would guess three and a half decades before I convinced everybody.

    ################################

    there was a model of the sun.
    the observations disagreed with the model.

    What did Feynman conclude? Did he say the theory is wrong or falsified?

    NOPE!

    He said “we just don’t know.”

    When Theory and its predictions conflict with observations, we just don’t know.

    A) The whole theory could be wrong
    B) Some part of the theory may be wrong.
    C) The theory may be incomplete
    D) The observations can be wrong.
    E) some combination.

    The Myth is that science does critical experiments and then throws theories out. Not.

    For 30 years the discrepancy between theory and observation was largely ignored.

    In other words, if you actually observe what scientists DO, if you take a scientific approach to the question of what science is, you find out that the path to understanding isn’t anything like simple falsification. In other words, the theory that science operates by falsification is, well, falsified.

      • If you have a closed system and are able to create an experiment where you hold every parameter constant apart from the one of interest then you can probably say that your theory is falsifiable. Climate, like all science outside of perhaps basic chemistry and physics, operates in an open system where you can never prevent changes in other parameters.

        Even if you get the results you expect you can’t say your theory is right because the results may be caused by something you’re unaware of (clouds? neutrinos?). And if you don’t get the results you expect other people can’t necessarily say your theory is wrong because the expected results may be being prevented by something they’re unaware of.

        Hence the hiatus doesn’t disprove AGW but there’s probably nothing that can prove it either. The important thing therefore is to seek understanding of underlying mechanisms and if necessary to humbly indicate possible outcomes all things being equal (which they never are) based on that understanding.

        As per Critical Realism, Bhaskar.

      • Evolutionary biology requires a heritable trait and would have been falsified had none been found. Geology requires temporal stratigraphic sequence, and would have been falsified had none been found. Even History requires an ordered time-sequence of events, and would be falsified were none found.

        Absent the central hypotheses, falsification is an intimate and immediate threat.

        Disciplines that rely on physical reality have such central hypotheses; even analytically historical disciplines. They can at least aspire to the status of science. So your exclusion of everything except physics and chemistry is badly overstated, Peter.

    • I wish that Steven would quit trying to practice the history and philosophy of science without a license. Of course amateurs can and do make contributions in fields outside their expertise, but only if they are disinterested in supporting a failed hypothesis, i.e., actually interested in discovering the truth.

      Feynman in this instance did not consider the model of the sun conclusively falsified, because there were legitimate questions about the observational method. Only when the results were repeatedly confirmed was the model considered falsified.

      Steven and Oreskes should stop trying to spread the rot of “climate science” corruption to the entire scientific enterprise, in support not only of a failed hypothesis but of an anti-human ideology, one which has already cost at least hundreds of thousands of lives and trillions in treasure.

      • If you’re going to get into the philosophy of science with climate models, you may as well invoke Occam’s Razor and go with Monckton’s pocket-calculator model, which contains only 8 parameters and outperforms all of the more complicated climate models.

      • sturgishooper,

        You miss the point of Mosher’s excellent comment. The model predicted 3 times as many neutrinos as were observed. That means that the model was basically correct. So were the measurements.

      • I’d be happy if he stopped flattering himself with the title “engineer” without a license. He could sit for the P.E. exam, but…

    • Steven
      “Falsifiability” depends on the number of variables, the uncertainty (“accuracy”) of the data, and the confidence of the results. E.g.:
      Ch 2 Origin of the Scientific Method

      The origin of modern scientific method occurred in Europe in
      the 1600s: involving (1) a chain of research events from Copernicus to Newton,
      which resulted (2) in the gravitational model of the solar system, and (3) the theory
      of Newtonian physics to express the model. . . .
      Science began in that intellectual conjunction of the research of six particular
      individuals: Copernicus, Brahe, Kepler, Galileo, Descartes, Newton. Why this
      particular set of people and their work? For the first time in history, all the component
      ideas of scientific method came together and operated fully as empirically
      grounded theory:
      1. A scientific model that could be verified by observation (Copernicus)
      2. Precise instrumental observations to verify the model (Brahe)
      3. Theoretical analysis of experimental data (Kepler)
      4. Scientific laws generalized from experiment (Galileo)
      5. Mathematics to quantitatively express theoretical ideas (Descartes and Newton)
      6. Theoretical derivation of an experimentally verifiable model (Newton)

      Comparing models against experiments has been used to show that some models are better than others, i.e. invalidating the older models: e.g., heliocentric vs. geocentric, or Kepler’s elliptical orbits replacing circular orbits and epicycles. (P.S. Kepler was expecting to prove a geocentric model. Tycho Brahe’s very accurate data persuaded Kepler that the geocentric model did not match, nor did the epicyclic or circular heliocentric models, compared to the elliptical.) So we now talk about those other models as falsified, compared to Kepler’s elliptical orbits.
      E.g., see Fowler on Kepler and on Newton.
      Similarly, see the recent discovery of the Higgs boson, announced when the data reached five-sigma and then six-sigma significance.
      With global climate models I understand there to be more than 100 parameters. Thus we have many parameters, with little data to validate or differentiate them, and very large uncertainties, as shown in this post above. From the above post, it appears we are not yet at the 2-sigma level, and probably not at the 1-sigma level. E.g.:

      1) What confidence do we have that climate will warm or cool for the next thirty years?

      2) What portion of that warming or cooling will be anthropogenic?
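      As an aside on the sigma levels mentioned above, here is a minimal sketch (my own illustration, not from the comment or the post) of what an N-sigma threshold means as a one-sided Gaussian tail probability, the convention used for the Higgs discovery announcement:

```python
# Illustrative only: convert "N sigma" into the one-sided tail
# probability of a standard normal distribution.
from math import erf, sqrt

def one_sided_p(sigma):
    """P(Z > sigma) for a standard normal variable Z."""
    return 0.5 * (1.0 - erf(sigma / sqrt(2.0)))

print(one_sided_p(1))  # ~0.159: a 1-sigma result is weak evidence
print(one_sided_p(2))  # ~0.023
print(one_sided_p(5))  # ~2.9e-7: the particle-physics discovery standard
```

      The contrast is the commenter’s point: a five-sigma discovery leaves under a one-in-a-million chance of a fluke, while the model comparisons discussed above are, by this reading, not yet even at the roughly 2% tail of a 2-sigma result.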

    • The consequences here:

      A) The theory is wrong
      B) The theory is wrong
      C) The theory is wrong
      D) The theory of the *instrument* is wrong
      E) Some or all of the above

      It’s worth noting here that the observations are *never* wrong — they simply are. However, our understanding of just what the instrument is telling us may be. Now, I’ll grant you that the theory in question here is not wrong if you’ll grant that the only possibility is that the theory behind the vast bulk of instruments is wrong.

      But if you grant that, then we’ve no manner to test Climate Theories at all. And so it’s a cute novelty in the same boat as all the other currently untestable theories. Entertaining, but meaningless. And our funding should be going towards the instruments themselves, and not the rampant, untestable speculations.

      But if the theory underlying enough of the instruments is right? Then the theory is simply wrong and we’ve got more work to do before pronouncing ourselves competent of any proper understanding of the system.

      • Jquip

        You say

        It’s worth noting here that the observations are *never* wrong — they simply are. However, our understanding of just what the instrument is telling us may be. Now, I’ll grant you that the theory in question here is not wrong if you’ll grant that the only possibility is that the theory behind the vast bulk of instruments is wrong.

        But if you grant that, then we’ve no manner to test Climate Theories at all. And so it’s a cute novelty in the same boat as all the other currently untestable theories. Entertaining, but meaningless.

        Yes, and I think the following anecdote is pertinent.

        In 2000 there were 15 scientists invited from around the world to give a briefing at the US Congress, Washington DC. The briefing consisted of three panels that each provided a briefing Session. In each Session each member of the panel gave a presentation and then questions were invited from the audience.

        Fred Singer chaired Session 1 that was about climate data.
        I chaired Session2 that was about climate models.
        David Wojick chaired Session 3 that was about political responses.

        When I opened Session 2 to questions, the first questioner asked in an aggressive manner,
        “The first session said we can’t trust the climate data and this session said we can’t trust the climate models: where do we go from here?”
        Gerd-Rainer Weber started to stand to provide a detailed answer but, as Chairman, I signalled him to sit, and I said,
        “Sir, the climate data are right or they are not.
        If the climate data are right then the climate models don’t adequately emulate past climate.
        If the climate data are not right then we cannot assess the climate models.
        In either case, we cannot use the models to predict future climate.
        So, I agree with your question, Sir. Where do we go from here?”

        The questioner studied his shoes and Gerd indicated he was satisfied the answer needed no addition, so I took the next question.

        Richard

      • Jquip, there’s also the possibility that the theory isn’t so much wrong, as incomplete.
        For an example, there are Newton’s laws of motion, which worked well for hundreds of years but broke down for really small or really fast objects. (None of which were observable in Newton’s time.)
        Einstein’s equations fixed these problems, but they still resolve to Newton’s equations when you solve for large objects travelling well short of the speed of light.

      • For anyone who has not read this 1950s dissertation by Prof. Irving Langmuir, it is heartily recommended.

        When reading about “N-Rays”, “Mitogenetic rays”, the “Allison effect”, and other unproven phenomena, keep in the back of your mind the “dangerous man-made global warming” conjecture. You will find many similarities.

        I have an open mind: AGW may well exist. There is certainly support for it, in radiative physics. But as I keep pointing out, there are no verifiable, testable measurements quantifying MMGW. Thus, it is not much different from the putative “N-Rays” or any of the other phenomena that Langmuir discusses.

        Langmuir describes these nebulous effects as:

        Symptoms of Pathological Science

        • The maximum effect that is observed is produced by a causative agent of barely detectable intensity, and the magnitude of the effect is substantially independent of the intensity of the cause.

        • The effect is of a magnitude that remains close to the limit of detectability; or, many measurements are necessary because of the very low statistical significance of the results.

        • Claims of great accuracy.

        • Fantastic theories contrary to experience.

        • Criticisms are met by ad hoc excuses thought up on the spur of the moment.

        • Ratio of supporters to critics rises up to somewhere near 50% and then falls gradually to oblivion.

        Apply the definitions to human CO2 emissions, and their claimed effect on global temperature. Does Langmuir’s Pathological Science not describe the “dangerous man-made global warming” conjecture? Is “dangerous MMGW” any different from his other examples?

      • I don’t think Langmuir (sp?) was a professor, at least not at the time. He was head of research at GE.

      • >> It’s worth noting here that the observations are *never* wrong

        Observations can be wrong in several ways. The easiest way is to measure the wrong thing, because we misunderstand the system.

      • But Richard, the models are all we have! So we have to use them!

        (I would be rich if I had a dollar for every time I have read one of those statements)

      • Vuk, have you seen the Compass Rose…… It is a graphic creation which provides a visual record of all the ‘Balloons’ that have occurred in the track of the PCM (Planetary Centre of Mass)

        A full ‘prime’ balloon is produced when the track of the PCM, which is moving around the Ecliptic Plane at a distance beyond the orbit of Jupiter, suddenly dives in towards the Sun, which it reaches in less than 5 years. It then swings around the Solar System Centre of Mass (SSCM) and, of course, the Sun, and then goes back out again, to almost the same position on the Ecliptic Plane that it started from, in another 5 years! The last time this happened was back in 1985–1995, No. 51 on the Compass Rose.
        http://solarchords.com/solar-chord-science/balloons-and-the-compass-rose/

        Matches your spikes well.

    • Steven M

      Your a clown and not a very funny one. Your not even trained as a scientist. You deliberately divert the arguments by sceptics right across the web, even at Steve Mc’s site. I respected you a few years back but since teaming up with that other clown Zeke Hausfather, well you lost me.

      • “Your a clown” It’s “you’re”

        That common grammatical mistake when insulting someone negates your point.

      • @ Paul re: “your” and “you’re”

        YOU are a clown, lol. As IF a scrivener’s error negates the substance of what was said.

        btw: you left off a “.” Your point was not negated by that mistake; I just thought you’d like to know.

      • Sorry Janice, it wasn’t meant to be derogatory. I saw that it was used twice, sometimes that’s how I learn.

        I often see “your an idiot”. That very well may be, but the lack of contraction in that usage really does deflate the insult, no? And I agree, missing or improper punctuation can often change the meaning of a sentence. YMMV.

      • Dear Paul,

        And I apologize for mis-reading your “tone” (my “how-dare-you-talk-like-that-to-that-kindhearted-Stephen Richards” emotions colored my perceptions). You do make a good point.

        What does “YMMV” mean?

        Thank you for taking the time to clarify. I’ll try to be more gracious (lololol)…. “try”… .

        Sincerely,

        Janice

      • Oh! lol, thanks, Paul (smile).

        So! “Buy our GCM-mobile. It runs great!” (YMMV)

        Heh.

      • I enjoy SM coming here to post comments. AW has hammered and warned him enough to stop (mostly) with the 1-word drive-bys. So his comments now, like the one above, are robust enough to generate a consideration of his critical thought(s). That IS one aspect of what correct science is supposed to be: challenging self-biases and one’s own conclusions/interpretations.
        So please, Steve M, comment as you feel needed.

        And please, everyone, stop with the “yeah but you’re not a PE or PhD”…. please. Intelligent, well-framed arguments can and do come from those outside established fields of study.

    • The Myth is that science does critical experiments and then throws theories out. Not.
      ===========
      You missed the point. The discrepancy between theory and observation means that you may have discovered something important.

    • Feynman on the Scientific Method

      Of course, part of your theory could be wrong, but that means the theory as stated is incorrect and has to be adjusted. Theories can only ever be mostly correct anyway; they can’t be proven, only disproven. Most here know that.

      Mosher is just doing drive-bys. Must be getting paid to post.

    • A better summation would have been that you can’t falsify a model with uncertain data, which is the case in the example you give.
      However, this is not the case with climate science. We have a pretty good handle on temperatures in the satellite era, and the error bars for the various proxies are small enough that the data from them are useful.

      • MarkW

        Oh yea. I’d say we all agree we have a “pretty good handle on temperatures in the satellite era”.

        This probably explains the unexplained data revisions.

    • “What did feynman conclude? Did he say the theory is wrong or falsified?

      NOPE!

      He said ‘we just dont know'”

      Steven M0sher
      at 9:50am today

      *****************************************

      “None of the different sets of parameters is known to be any more physically correct than any other. There is no way, therefore, to choose which temperature projection is physically preferable among all the alternatives.

      Which one is correct?

      No one knows.”

      Pat Frank

      *********************************

      The point, O One of Slippery Tongue, on which Richard Feynman and Pat Frank agree, is whether or not a theory or hypothesis CAN be falsified.

      Your money-making CO2 conjecture, Mr. M0sher, CAN-not be falsified. Thus, to claim it is physically meaningful is nonsense.

      ****************************************

      Oh. By — the — way, Mr. M0sher… what happened to Part III of the Climategate emails? You told us about 2 years ago that you were working on that…

      • Janice:
        If observations do not support the theory, then the theory is FALSE. This is definitional. It has nothing to do with Karl Popper’s Demarcation Criterion as to what constitutes a scientific theory.
        Scientific models were falsified by experimentation long before Karl Popper came along (Google the Michelson–Morley experiment).
        Also, there are many theories in mainstream modern physics that do not fall under Popper’s Falsification Criterion: String Theory, for example.

      • Hi, Walt D.,

        True! :)

        And the conjecture about CO2 emissions does not even rise to the level of a disprovable/falsifiable theory. It is, in short: junk!

        Janice

    • Popper said also that a theory might be the best we have, tested and verified many times, but that does not make it truth. A theory can never be taken as truth.

    • “Falsifiable because if the prediction is wrong, the physical theory is refuted.”

      I don’t agree with the author’s use of falsifiability. The best definition I found:

      “It is the principle that in hypothesis testing, a proposition or theory cannot be considered scientific if it does not admit the possibility of being shown to be false. Falsifiable does not mean false. For a proposition to be falsifiable, it must – at least in principle – be possible to make an observation that would show the proposition to be false, even if that observation has not actually been made.”[Psychology Wiki]

      So for example, as far as IPCC climate model predictions to the year 2100 are concerned, they are theoretically falsifiable, but not practically in our lifetime. So I do not consider them to meet the criteria of a valid scientific hypothesis. They are meaningless. Not falsifiable.

      • The uncertainty envelope shows that the year-2100 temperature projection is meaningless, which is exactly what is stated in my post. E.g., “Figure 1 in the previous post showed that the huge uncertainty limits in projections of future global air temperatures make them predictively useless. In other words, they have no physical meaning.”

        The falsifiable definition you quoted, SGW, is given as just the criterion establishing the scientific theory-status of an analytical proposition.

      • SkepticGoneWild

        Sorry, but I dispute your argument.

        You say

        So for example, as far as IPCC climate model predictions to the year 2100 are concerned, they are theoretically falsifiable, but not practically in our lifetime. So I do not consider them meet the criteria of being a valid scientific hypothesis. They are meaningless. Not falsifiable.

        “Theoretically falsifiable” is often sufficient.

        For example, when Halley proposed the hypothesis that comets are orbiting the Sun he predicted that a comet (which is now given his name) would return ~75 years later. A comet did appear when he had predicted. If a comet had not appeared when Halley had predicted then that would have been evidence to falsify Halley’s hypothesis at least in the form he specified it. Please note the specific nature of Halley’s prediction: he stated the specific year when ‘his’ comet would return.

        Which is not to say the climate model predictions are falsifiable: they are NOT falsifiable because they encompass such a large range of predictions that it is not possible to determine which predictions if any are right so it cannot be known which predictions are wrong. (This is equivalent to Halley having said his comet would return at some time between 50 and 500 years in the future: some comet(s) would appear in that time.)

        Richard

    • In a sense, the theory was wrong. It didn’t include the now-known fact that neutrinos have a tiny mass, allowing the electron neutrinos formed in solar fusion to transform into muon neutrinos, which could not be detected by Davis’ experimental apparatus. Very early on there were two schools of thought: the model was wrong, or neutrinos had mass. It took a long time to do the experiments measuring the neutrino mass that cleared up the discrepancy.

      In this case a conflict between the well understood nuclear physics that drives the sun and a very careful experiment led to new understanding about nature. There are other cases like this, for example the orbit of Mercury is slightly different from what is predicted by Newtonian mechanics. With the advent of Einstein’s General Relativity the discrepancy was precisely explained.

      In both these cases, an extremely well understood and thoroughly tested theory was in conflict with very careful measurements. Nothing of the kind exists in “Climate Science.” The data prior to the satellite era is mediocre to bad and very sparse. Even now there are only a few years of data from the satellites and the oceans are barely sampled even with the Argo floats. There are phenomena like ENSO that have a very large effect on the climate that can’t be predicted in either their timing, duration, or magnitude. Clouds are a huge problem and are only included via parameterizations, aka, hand waving. Yet somehow the models, if we take enough of them and do averaging of some sort, are capable of predicting the effect of CO2 on the climate many years into the future. Really?

      There is also a fundamental problem at the mathematical level that the climate models cannot circumvent. They acknowledge that the models are nonlinear dynamical systems and exhibit chaos, sensitive dependence on initial conditions that make long term prediction impossible. If I take two initial states with slightly different temperatures the time series will agree for a short while but after a long time will look completely different and be completely uncorrelated. In a simple enough system, the averages, such as average high and low, will be the same despite the fact that the time series don’t look anything alike. This is what the modelers hope for. They can then turn the CO2 knob and see what it does to the averages at long times.

      The problem is that all but the simplest dynamical systems are much more complex. Different initial conditions will lead to entirely different climates. Depending on which one you choose, an initial condition might lead to the climate of Europe, Jurassic Park, or an Ice Age. Worse yet, they are not neatly separated, but all jumbled together so tightly that there is no way of knowing what you’re going to get. They lie on a fractal. Suppose that you mark the initial conditions that lead to different climates by red for Europe, green for Jurassic Park, and blue for Ice Age, and now draw a little circle around some starting points. All three colors would be there. The climate averages you would get from different red dots would give the climate of Europe even though the time series would be different, just as in the case of a simple model. The same would be true for green and blue dots.

      Here’s where it gets interesting. Pick one red dot and magnify the area around it. It will contain red, green, and blue dots. Pick a red dot in the magnified area and magnify again. More red, green, and blue dots. Repeat. No matter how high the magnification, you will always see all three colors. The initial conditions leading to Europe, Jurassic Park, or Ice Age lie on a fractal and can’t be separated. All you can do is pick one starting point and hope.

      To summarize: not only can’t you predict the future, you can’t even predict which future you’ll get. And averaging the climate of Europe with that of Jurassic Park and an Ice Age is meaningless nonsense.
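      The sensitive dependence on initial conditions described above can be sketched with the textbook logistic map (a minimal illustration of deterministic chaos, not a climate model; the function name and the particular numbers are mine):

```python
# Logistic map x -> r*x*(1 - x) at r = 4, a standard example of
# deterministic chaos: tiny differences in initial conditions grow
# until the trajectories are completely decorrelated.

def logistic_trajectory(x0, steps, r=4.0):
    """Iterate the logistic map from x0; return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.400000, 60)
b = logistic_trajectory(0.400001, 60)  # perturbed by one part in 400,000

# Early on the two runs agree closely...
print(abs(a[5] - b[5]))    # still tiny
# ...but the separation grows roughly exponentially, so after a few
# dozen steps the two deterministic runs bear no relation to each other.
print(max(abs(x - y) for x, y in zip(a[30:], b[30:])))
```

      The same deterministic rule, run twice from starting points differing by one part in 400,000, yields trajectories that first track each other and then become uncorrelated, which is exactly the behavior that makes long-range prediction, and averaging over divergent runs, so questionable.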

      • Hi Paul, have there ever been runs done on various climate models that use the parameters from, say, 1900 to compare them with where we are now?

    • If you look through physics and chemistry, there are many examples of theories that have been overthrown by experimental observations.
      1) The corpuscular theory of light – (light travelling faster in a transparent fluid or solid than in vacuo).
      2) Phlogiston theory – (Lavoisier’s experiment).
      3) The classical interpretation of the photoelectric effect.
      4) The aether – (Michelson–Morley experiment).
      5) CP conservation – (Wu experiment).
      6) Gamma Ray Bursts – (only possible from inside the Milky Way – but then observed from distant galaxies)
      7) Ultimate Contraction of the Universe -(distant galaxies were observed to be moving away faster).
      8) Anthropogenic Global Warming – (no increase in temperature over the last two decades despite a monotonic increase in CO2).

    • Analysis of your argument would suggest that you have a special meaning of the word ‘wrong’.

      Whether the answer is A), B), C), D) or E), it is wrong! You can redefine ‘wrong’, if you wish, but a rose by any other name…

      And yes, Richard Feynman would use the word ‘wrong’. He said, “If it disagrees with experiment [here observation], it is wrong. In that simple statement is the key to science…” (see here https://www.youtube.com/watch?v=OL6-x0modwY The video is only one minute long.)

      We may not know why it is wrong, but it is still WRONG. Not knowing why it is wrong doesn’t make it any less wrong.

      BTW, you may wish to add an F) to your list… ‘the model was inconsistent with the hypothesis’. In that way, you could claim the hypothesis wasn’t falsified, just the hypothesis-model combination.

    • I have to concede I held Steven Mosher’s rather juvenile relativist view of science for nearly 25 years, but after contemplating these matters over that period, I eventually grew out of it. Falsifiability, like Occam’s razor, is a meta-rule. Meta-rules are useful but not infallible. They are not 100% guaranteed to work all the time, and they are subject to interpretation. What constitutes simplest? What constitutes a falsification of a theory (rather than merely an experiment)? In reality, working scientists use these rules and principles while knowing they are not infallible. Because Mosher can point to a few rare exceptions where the rules failed, this doesn’t mean (a) that you can simply ignore the meta-rules when it suits you, or (b) that the exceptions in any way invalidate the importance of the meta-rules within their understood limits.

      Falsifiability should continue to be used by working scientists. It’s not something to be ignored, as Mosher implies. The problem with people like Mosher is that it’s rather grating to be lectured by people who don’t really understand the subject they wish to lecture on. Climate science is a mess, and more of a junk science than a science now, exactly because of views held by people like Mosher.

      • Will,

        Mosher has long ago decided his position on AGW which is more alarmist than lukewarm and has been for the last few years using drive-by comments to poison the well of every discussion here that he feels threatens his belief. People waste their time with him because they do not realize they are arguing with someone who has a degree in English. I became aware of this lack of a scientific and technical education early on after he made some incoherent technical comments but apparently many here still do not know his liberal arts / marketing background. There are so many highly intelligent people to engage in discussions with on this topic I do not know why people waste their time with someone who does not even have a rudimentary education in the subject.

      • Mosher’s opinions don’t interest me but he did try to explain a certain point of view about science that I’ve always found interesting. It’s a very popular viewpoint among non-scientists and also now some scientists. (I recall Gavin Schmidt holds similar ideas and is a believer in the philosopher Feyerabend, who made this point of view somewhat fashionable in academia.) It took me a long time to reject Feyerabend, as many of his claims are rather seductive. The problem was that Feyerabend’s view of science is largely inconsistent with how science is actually practiced by actual practitioners making useful discoveries.

    • Everyone bookmark this post for anytime Mosher attempts to use his liberal arts education to criticize a skeptic theory and hold him accountable. He has just argued that he is the biggest hypocrite to ever post here.

    • From “The Unended Quest,” Karl Popper’s autobiography (p. 38 pbk): after discussing a conversation he’d had with the “brilliant young student of mathematics” Max Elstein about Einstein’s theory of relativity, Popper wrote, “But what impressed me most was Einstein’s clear statement that he would regard his theory as untenable if it should fail in certain tests. Thus he wrote, for example, “If the redshift of spectral lines due to the gravitational potential should not exist, then the general theory of relativity will be untenable.”

      “Here was an attitude utterly different from the dogmatic attitude of Marx, Freud, Adler, and even more so that of their followers. Einstein was looking for crucial experiments whose agreement with his predictions would by no means establish his theory; while a disagreement, as he was the first to stress, would show his theory to be untenable.

      “This, I felt, was the true scientific attitude. … Thus I arrived, by the end of 1919, at the conclusion that the scientific attitude was the critical attitude, which did not look for verification but for crucial tests; tests which could refute the theory tested, though they could never establish it.”

      That is about as clear a statement as could be imagined. A critically central issue is that the falsifiability idea came from Einstein. It is not an assumption of or a deduction from Popper’s philosophy of science. Popper’s view came from Einstein’s working-scientist empiricist approach to physical theory.

      So, you’re wrong about Popper and you’re wrong about science, Steve.

      In two places in my post I stressed, with bolding, that climate projections are meaningless at the level of resolution of greenhouse forcing. That is the central message. Nothing you’ve written contests that conclusion.

      Feynman is not here to provide the meaning of his comment about not being able to explain the disagreement between theory and result. However, your generalization of his comment as the foundational methodology of all of science is completely misguided.

      You wrote: “The Myth is that science does critical experiments and then throws theories out. Not.” I’ve done exactly that four times in my own work. No falsifications of earth-shaking news-worthy large scale physical theories, but of at-the-time guiding theories for the behavior of certain systems within Inorganic Biochemistry. Nevertheless, critical experiments –> falsification of theory.

      • As an addendum – that scientific attitude is a result of the public nature of science. If a scientist didn’t have to worry about others trying to disprove the claim, s/he wouldn’t be so concerned with falsification [no one to falsify it]. What’s crucial about the current situation is that, academically, people are NOT allowed to falsify AGW claims. So AGW ipso facto is not science.

      • “Critical experiments –> falsification of theory” puts it well; “critical experiments = falsification of theory,” not so.

    • Mosher writes ““Falsifiable because if the prediction is wrong, the physical theory is refuted.”
      wrong.”

      Way to go on completely misrepresenting that statement. You see, “prediction is wrong” implies “[measurements show the] prediction is wrong” and in turn there is room for the measurements being wrong. That is why Feynman wasn’t quick to throw out the theory because he knew there were still viable alternative explanations.

      Misrepresenting Feynman’s meaning to suit your own argument is poor form, Steve.

    • @Mosher
      It is also quite clear that the discrepancy was taken very seriously, debated in a respectful and useful way, and changes to the theory were being proposed and tested to explain and remove the discrepancy. Presumably the experimental data were being examined in the same manner.

      In short the discrepancy was being used to further scientific knowledge. That is the correct way forward. Were extra laws, taxation and the dismantling of the current economic system being based on this theory? No.

      Unfortunately, Mr Mosher (or is it Dr? Apologies if I use the wrong title), today a theory with obvious, long-term and growing discrepancies with reality is being used to completely alter the fabric of society. That is the point you miss. I do think there is a glimmer of doubt appearing, and climate scientists are starting to look seriously at the discrepancy between their forecasts (not allowed to call them predictions, according to the IPCC) and reality. Only when this is recognized will the science progress, by working on theory and data to explain the discrepancy.

      But meantime, lose the arrogance of certainty and stop trying to change the world based on a currently failing theory.

    • I don’t disagree – but the above sun model wasn’t the basis for world-spanning economic disruption.
      Had it been, I’m sure the spotlight would have been much more intense.

  6. Is there an easy source for the “CMIP5 average ±4 Wm-2 systematic cloud forcing error”. Not questioning the number, just hoping to use it elsewhere and unsure how to reference. Thanks for an interesting read.

    • The ±4 Wm^-2 cloud forcing error is in Lauer and Hamilton, post reference 7, MJB.

      Essentially, it’s the average cloud forcing error made by CMIP5-level GCMs, when they were used to hindcast 20 years of satellite observations of global cloud cover (1985-2005).

      The differences between observed and CMIP5 GCM hindcast global cloud cover were published in Jiang JH, et al. 2012. Evaluation of cloud and water vapor simulations in CMIP5 climate models using NASA “A-Train” satellite observations. J. Geophys. Res. 117(D14): D14105, doi:10.1029/2011jd017237.

  7. I am going to flesh out a brief comment I made regarding this topic some years ago. I will start with a question. Do the control limits in statistical process control have meaning? The answer is yes, as long as one understands the process one is monitoring and as long as that process remains under control. The analogy to climate models and the climate system should be apparent, but let me elaborate. Climate models represent an understanding of the process, or they do not. Let us assume for a moment that they fully explain the climate system. The projections of the various climate models, particularly the 95% limits, then represent a reasonable set of control limits both for the underlying physical process, the climate system, and for whether the system is operating according to the hypothesis of being under control, i.e. operating according to expectations.

    The observation that actual climate measurements are wandering across the control region and have crossed the lower 95% limit can mean only one of two things in manufacturing. First, that the process is no longer under control; that is, that some parameter of the process is no longer operating according to target value. Second, it is possible that our control limits were wrongly set in the first place because we did not understand the climate process or our models do not capture its characteristics. Either way, this observation is not good for those who think they understand the climate system fully, and believe they can monitor it fully.

    There is always the possibility that measurements are not really capturing the state of the climate system faithfully, i.e. that there is a bias in the measurements, and then one has to argue about whether this indicates a problem with models or with system behavior. But once again it spells trouble for the CAGW community. By the way, I think that Dr. Christy’s observations regarding the atmosphere as a whole, through balloons or satellites, which he presented in a hearing some month or two ago, are also highly pertinent to this argument – perhaps even more powerful evidence of something amiss with the statistical process control experiments going on here.
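The control-chart logic above can be made concrete in a few lines. A minimal Shewhart-style sketch with made-up numbers (nothing here is climate data): limits are set from a baseline period, and a later excursion outside them flags exactly the dichotomy described, either a process shift or mis-set limits.

```python
import statistics

# Baseline observations from the "in control" period (illustrative values)
baseline = [0.02, -0.05, 0.04, 0.01, -0.03, 0.05, -0.01, 0.03, -0.04, 0.02]
mean = statistics.fmean(baseline)
sigma = statistics.stdev(baseline)
lcl, ucl = mean - 3 * sigma, mean + 3 * sigma   # classic 3-sigma control limits

def in_control(x):
    """True if the observation falls inside the control band."""
    return lcl <= x <= ucl

new_obs = [0.03, -0.02, -0.30]   # the last point drifts well below the band
for x in new_obs:
    print(f"{x:+.2f}  {'OK' if in_control(x) else 'OUT OF CONTROL'}")
```

When the last point trips the lower limit, the chart cannot tell you *why*; as the comment says, either the process shifted or the limits were wrong from the start.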

    • the process control analogy is interesting along with the notion of an “out of control” process. it shows the value of other disciplines analyzing the results, rather than confining the analysis to “climate science”.

      • Control systems (manmade at least) rely on two principles, Controllability and Observability.

        This point is especially critical in consideration of Observables in global climate feedbacks. Maybe you could take a two part stab at it. One post on Observability in feedback system control. Another in controllability and where those parameters lie in the phase space of real and imaginary elements.

  8. Pat wrote: “These represent the CMIP5 average ±4 Wm-2 systematic cloud forcing error propagated through the simulation.”

    Cloud feedback is the reduction in outgoing OLR and reflected SWR caused by clouds that accompanies a change in surface temperature. It is measured in units of W/m2/K. What is cloud forcing (measured in units of W/m2)? Why is the uncertainty in cloud forcing +/-4 W/m2? (Lauer and Hamilton (2013) is paywalled.)

    Variations in cloud cover do not necessarily produce changes in global temperature. Low clouds cool the planet while high clouds warm the planet. A model with greater cloud cover than average may also be a model with higher clouds than average – errors that offset each other and possibly create equally good (or bad) representations of our current climate. When errors are not independent, the usual rules of error propagation don’t apply.

    • Frank, the most accepted value for global cloud radiative forcing is Hartmann DL, et al. 1992. The Effect of Cloud Type on Earth’s Energy Balance: Global Analysis. J. Climate 5: 1281-1304. It’s defined there as, “the effect of clouds on the radiation balance at the top of the atmosphere.”

      Hartmann derived an average cloud radiative forcing of -27.6 W/m^2 – a net cooling – as the overall average effect of clouds on global climate.

      You’ll find the ±4 Wm^-2 described as “cloud forcing error” throughout my post. So, why are you asking about cloud feedback?

      Lauer and Hamilton (post ref. 7), describe the difference between hindcasted and observed cloud forcing for 27 CMIP5 GCMs.

      The ±4 Wm^-2 is the average of the errors the models made in hindcasting global cloud forcing. Lauer and Hamilton define this as, “as the difference between ToA all sky and clear-sky outgoing radiation … in the thermal spectral range (LCF).” LCF is long-wave cloud forcing.

      Long wave cloud forcing is the contribution to atmospheric thermal energy flux most relevant to air temperature. Lauer and Hamilton directly give the average CMIP5 LCF error as ±4 Wm^-2, the root-mean-square error, not as ±4 Wm^-2K^-1.
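For readers wanting to see the mechanics, here is a minimal sketch of the root-sum-square propagation the post describes: a ±4 W m^-2 calibration error entering every annual step, with steps treated as independent, so the accumulated uncertainty grows as sqrt(n). The 0.42 K/(W m^-2) step sensitivity is an illustrative placeholder, not a value taken from Lauer and Hamilton or the post.

```python
import math

LCF_ERROR = 4.0        # W m^-2, CMIP5 annual long-wave cloud forcing RMSE
SENSITIVITY = 0.42     # K per W m^-2 -- assumed emulator coefficient

def temp_uncertainty(years):
    """+/- K after `years` annual steps, per-step errors summed in quadrature."""
    per_step = SENSITIVITY * LCF_ERROR
    return per_step * math.sqrt(years)

for n in (1, 25, 100):
    print(f"after {n:3d} years: +/-{temp_uncertainty(n):.1f} K")
```

With these assumed numbers the envelope reaches roughly ±17 K after a century, which is why the post argues projections at that horizon carry no predictive resolution.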

      • Pat Frank: Thanks for your reply. Cloud forcing has an impact on using models simply to reproduce today’s climate. Meehl (2007) reported that the spread of absolute GMST throughout the 20th century was about 3 degC, partly because of differences in cloud forcing. A 3 degK change in temperature (about 1%) is associated with a 4% change in emission – 10 W/m2 in terms of post-albedo irradiation! So the 4 W/m2 error in cloud forcing is about half of the problem models have representing today’s climate. You will find this figure illuminating; the spread is far greater than 20th century warming: https://curryja.files.wordpress.com/2013/07/presentation1.jpg

        Climate scientists avoid this problem by using temperature anomalies, assuming that the large errors in reproducing today’s climate will remain constant as GHG’s increase. IF climate models get the RATE of warming from forcing + feedbacks correct, the error in representing current climate is irrelevant (since we already know what current climate is). You and I debated the same issue several years ago with respect to calibration curves for SST proxies – whether the uncertainty in reconstructing temperature CHANGE could be smaller than the uncertainty in reconstructing absolute temperatures and then taking the difference. I provided a link showing that analytical chemists calculate calibration errors in a manner different from yours. So your disagreement is with more than the climate science community. We didn’t reach any agreement then, so I don’t want to repeat that debate. In the case of climate models, there is no good reason to assume that errors in representing today’s climate are irrelevant to representing climate sensitivity (the rate of warming with forcing).

        IMO, the IPCC’s big lie is to pretend that the spread of results between “an ensemble of opportunity” (the phrase used to describe the models in AR4) with different parameters comes close to representing the full uncertainty that would be uncovered by completely exploring “parameter space” and “initialization space” for all of these models. Then there are the systematic errors arising from using grid cells larger than weather phenomena.

      • Pat: I just noticed that your +/-4 W/m2 error in cloud forcing and my 10 W/m2 spread (i.e. +/-5 W/m2) are essentially the same. The result of this error is a +/- 1.5 degC range for GMST in climate model output.
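The arithmetic in the exchange above checks out against the linearized Stefan-Boltzmann relation. A quick sketch using standard round numbers (288 K mean surface temperature, 240 W m^-2 post-albedo irradiance, both conventional figures rather than values from the comments):

```python
# A 3 K spread on a ~288 K mean changes emission by roughly 4*dT/T ~ 4%,
# and 4% of the ~240 W m^-2 post-albedo irradiance is ~10 W m^-2.
T, dT = 288.0, 3.0
POST_ALBEDO = 240.0    # W m^-2 absorbed solar, global mean

frac = 4 * dT / T              # linearized d(sigma*T^4) / (sigma*T^4)
flux = frac * POST_ALBEDO      # equivalent flux spread
print(f"fractional change: {frac:.1%}, flux equivalent: {flux:.1f} W m^-2")
```

The ~10 W m^-2 result matches the spread quoted in the comment, which is indeed the same order as the ±4-5 W m^-2 cloud forcing error under discussion.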

    • Frank writes “errors that offset each other and possibly create equally good (or bad) representations of our current climate.”

      Current climate being the operative term, which is also why we can do “weather” quite well. If it’s parameter-driven rather than derived from first principles of physics (and it is), then how do these clouds change in a changing climate?

      Nobody knows, and so it’s not possible to calculate climate change due to that unknown alone. Let alone all the other parameterised values used in GCMs…

  9. Wonderful stuff!

    This could just be (and I hope that it is) the final stake through the heart of the AGW vampire that threatens to suck the life blood from Western economies and condemn poorer communities around the world to unnecessary hardship.

    Thank you Pat and WUWT

  10. The models just grow based on the assumptions about how CO2/GHGs will warm the planet. Each one has a slightly different built-in assumption and each one has some level of random variation which also occurs. Then there are 4 or 5 different trajectories for CO2/GHGs.

    Take any model and show me that is not the case.

    You can follow the high and low CO2/GHG assumed sensitivity models here from AR4.

    • The lowest assumed ECS is 2.1 degrees C per doubling, and the highest 4.5. Clearly, all those models assuming an ECS above 3.0 should be tossed, and models should be run with assumptions from 0.0 to 2.0 as well. On the best evidence, ECS lies between 0.0 and 2.0. Most likely positive and negative feedbacks roughly cancel each other out, so that actual ECS is around 1.0 to 1.2, the value for the radiative forcing of CO2 by itself. The feedbacks are probably net negative, so I’d go with 1.0 on an a priori basis.

    • Bill Illis

      You say

      The models just grow based on the assumptions about how CO2/GHGs will warm the planet. Each one has a slightly different built-in assumption and each one has some level of random variation which also occurs. Then there are 4 or 5 different trajectories for CO2/GHGs.

      Take any model and show me that is not the case.

      On WUWT I have repeatedly shown that it IS the case.
      I now explain it again.

      None of the models – not one of them – could match the change in mean global temperature over the past century if it did not utilise a unique value of assumed cooling from aerosols. So, inputting actual values of the cooling effect (such as the determination by Penner et al.) would make every climate model provide a mismatch of the global warming it hindcasts and the observed global warming for the twentieth century.

      This mismatch would occur because all the global climate models and energy balance models are known to provide indications which are based on
      1.
      the assumed degree of forcings resulting from human activity that produce warming
      and
      2.
      the assumed degree of anthropogenic aerosol cooling input to each model as a ‘fiddle factor’ to obtain agreement between past average global temperature and the model’s indications of average global temperature.

      Nearly two decades ago I published a peer-reviewed paper that showed the UK’s Hadley Centre general circulation model (GCM) could not model climate and only obtained agreement between past average global temperature and the model’s indications of average global temperature by forcing the agreement with an input of assumed anthropogenic aerosol cooling.

      The input of assumed anthropogenic aerosol cooling is needed because the model ‘ran hot’; i.e. it showed an amount and a rate of global warming which was greater than was observed over the twentieth century. This failure of the model was compensated by the input of assumed anthropogenic aerosol cooling.

      And my paper demonstrated that the assumption of aerosol effects being responsible for the model’s failure was incorrect.
      (ref. Courtney RS An assessment of validation experiments conducted on computer models of global climate using the general circulation model of the UK’s Hadley Centre Energy & Environment, Volume 10, Number 5, pp. 491-502, September 1999).

      More recently, in 2007, Kiehl published a paper that assessed 9 GCMs and two energy balance models.
      (ref. Kiehl JT, Twentieth century climate model response and climate sensitivity. GRL vol. 34, L22710, doi:10.1029/2007GL031383, 2007).

      Kiehl found the same as my paper except that each model he assessed used a different aerosol ‘fix’ from every other model. This is because they all ‘run hot’ but they each ‘run hot’ to a different degree.

      He says in his paper:

      One curious aspect of this result is that it is also well known [Houghton et al., 2001] that the same models that agree in simulating the anomaly in surface air temperature differ significantly in their predicted climate sensitivity. The cited range in climate sensitivity from a wide collection of models is usually 1.5 to 4.5 deg C for a doubling of CO2, where most global climate models used for climate change studies vary by at least a factor of two in equilibrium sensitivity.

      The question is: if climate models differ by a factor of 2 to 3 in their climate sensitivity, how can they all simulate the global temperature record with a reasonable degree of accuracy?
      Kerr [2007] and S. E. Schwartz et al. (Quantifying climate change–too rosy a picture?, available here ) recently pointed out the importance of understanding the answer to this question. Indeed, Kerr [2007] referred to the present work and the current paper provides the ‘‘widely circulated analysis’’ referred to by Kerr [2007]. This report investigates the most probable explanation for such an agreement. It uses published results from a wide variety of model simulations to understand this apparent paradox between model climate responses for the 20th century, but diverse climate model sensitivity.

      And, importantly, Kiehl’s paper says:

      These results explain to a large degree why models with such diverse climate sensitivities can all simulate the global anomaly in surface temperature. The magnitude of applied anthropogenic total forcing compensates for the model sensitivity.

      And the “magnitude of applied anthropogenic total forcing” is fixed in each model by the input value of aerosol forcing.

      Kiehl’s Figure 2 can be seen here.
      Please note that the Figure is for 9 GCMs and 2 energy balance models, and its title is:

      Figure 2. Total anthropogenic forcing (Wm2) versus aerosol forcing (Wm2) from nine fully coupled climate models and two energy balance models used to simulate the 20th century.

      It shows that
      (a) each model uses a different value for “Total anthropogenic forcing” that is in the range 0.80 W/m^2 to 2.02 W/m^2
      but
      (b) each model is forced to agree with the rate of past warming by using a different value for “Aerosol forcing” that is in the range -1.42 W/m^2 to -0.60 W/m^2.

      In other words the models use values of “Total anthropogenic forcing” that differ by a factor of more than 2.5 and they are ‘adjusted’ by using values of assumed “Aerosol forcing” that differ by a factor of 2.4.

      So, each climate model emulates a different climate system. Hence, at most only one of them emulates the climate system of the real Earth because there is only one Earth. And the fact that they each ‘run hot’ unless fiddled by use of a completely arbitrary ‘aerosol cooling’ strongly suggests that none of them emulates the climate system of the real Earth.

      Richard
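Kiehl’s compensation can be illustrated in a few lines with a toy zero-dimensional energy balance: pick any sensitivity, then tune the aerosol offset so that sensitivity times net forcing reproduces the same observed warming. All numbers below are illustrative round figures, not values from Kiehl (2007) or Courtney (1999).

```python
OBSERVED_WARMING = 0.6     # K over the 20th century (round figure)
GHG_FORCING = 2.4          # W m^-2, same GHG forcing for every toy model

for sensitivity in (0.3, 0.5, 0.8):            # K per W m^-2
    # aerosol forcing tuned so sensitivity * net forcing = observed warming
    aerosol = OBSERVED_WARMING / sensitivity - GHG_FORCING
    net = GHG_FORCING + aerosol
    print(f"sensitivity {sensitivity:.1f}: aerosol {aerosol:+.2f} W m^-2, "
          f"net forcing {net:.2f} W m^-2, warming {sensitivity * net:.2f} K")
```

Every toy model matches the same 0.6 K of warming, yet each uses a different net forcing and a different aerosol offset, which is the pattern Kiehl’s Figure 2 shows across the real models.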

  11. I’d say climate projections have less significance and worth than short-term and even long-term weather projections (like the Farmer’s Almanac and the like). At least weather predictions are sometimes proven right but I’m not aware of any predictive value of the CAGW climate models. Except maybe when they go back and change the original “predictions” like they’ve changed temperature data…

  12. What is represented by the shorter-term irregular jogs up and down in a climate model projection graph? Do they represent some anticipated imbalance based on fact, or are they just random variations programmed in to make the graph look like real climate variations?

    I suspect the latter, in which case the progression of error will most certainly increase over time. In my opinion they would appear more realistic if they were smooth lines.

  13. You make some good points about the propagation of systematic errors and uncertainty. However, the following statements seem very overstated:

    Climate modelers are not scientists. They are not doing science. Their climate model projections have no physical meaning.

    By your Boolean standards, a whole lot of scientific and engineering analysis would be declared “non science”. I believe that you’re implying that climate analysis is inherently impossible.

    I also think you’re making a fundamental weather vs. climate error. You are comparing a climate analysis to an observation of the weather. For example, a meteorologist tries to predict where and when a certain storm will go, while a climatologist tries to determine how many storms are happening all over the globe on a decade time frame. I believe that that average global temperature for any given month is closer to weather than it is to climate.

    • When you take a single statement which summed up everything he said before, and use it out of context, of course you can be led astray, and mislead others. Climate “science” is in a category of its own, and has nothing whatsoever to do with any real science since it isn’t based on anything real. It is pseudoscience. The GCMs don’t serve science; they are in the service of an ideology.
      I see absolutely nothing in the post which remotely suggests a confusion between weather and climate, so you are way out in left field on that one.

      • Bruce, I didn’t take anything out of context.

        Climate “science” is in a category of its’ own, and has nothing whatsoever to do with any real science since it isn’t based on anything real. It is pseudoscience. The GCMs don’t serve science; they are in the service of an ideology.

        Your statements are assertions without any real support. This is your agenda speaking. I probably share your agenda, but one must not let emotion cloud one’s critical thinking skills. It’s one thing to say that this small group (and yes, it’s a small group) is engaging in pseudoscience. It’s quite another to claim that climate science is pseudoscience. That would mean that scientists skeptical of AGW are also pseudoscientists.

        which remotely suggests a confusion between weather and climate, so you are way out in left field on that one.

        How about the whole post? The author’s premise is that climate modelers are trying to predict what the average temperature will be in a certain month. The author misses the point completely.

        Analogy: a scientist is trying to predict how a river will change its course over a century. The river slowly erodes its way into a different course. The model for this could include fluid dynamics, etc. However, there is a lot of chaos involved. If one drops a leaf, no one could predict its path on any given day. However, one might be able to predict that it will go downstream. That’s the difference between climate and weather.

        Anyone who claims that the river modeler has failed because the model did not predict the path of a leaf has completely missed the point. That was never the goal.

        The author writes “There is no way, therefore, to choose which temperature projection is physically preferable among all the alternatives”. The author seems oblivious to the fact that chaos is involved. By comparing the average temperature from these simulations to current observation, the author is confusing climate with weather.

        Now, I’m certainly not defending any particular climate model, and I believe that many of them are not doing a good job. However, the author comes to conclusions which are not supportable. If being 0.1% different from observation is grounds for being declared “physically meaningless”, then the law of gravity is in trouble, because recent empirical data was 2% off.

      • VikingExplorer, The author’s premise is that climate modelers are trying to predict what the average temperature will be in a certain month.

        Where does that premise appear anywhere in my post?

        You wrote, “By comparing the average temperature from these simulations to current observation, the author is confusing climate with weather.”

        Where in my post is there any comparison between model simulations and current observations of weather?

        Your claims are not substantiated in anything I wrote. It seems your statements are assertions without any real support. Could it be your agenda speaking?

      • Where does that premise appear anywhere in my post?

        The first synonym for “premise” is “assumption”.

        And so you are really asking where does that assumption appear in my post

    • See my post where the status of climate modelers as scientists is discussed, VikingExplorer. Perhaps you’ll come to agree with the assessment.

      My analysis has nothing to do with weather. It’s all about climate models, and temperature projections at the step-wise annual level.

      Nevertheless, the area of the earth is 510 million square kilometers. Let’s suppose that an individual weather system dominates 100×100 km = 10,000 km^2, and let’s suppose that any weather system lasts an average of 5 days. That makes a monthly global accounting an average of about 300,000 weather systems. Even if your diagnosis of monthly average were correct (it isn’t), a monthly global average is not about weather.
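The back-of-envelope count above is easy to reproduce. A quick sketch using the comment’s own assumptions (10,000 km^2 per weather system, 5-day mean lifetime, 30-day month):

```python
EARTH_AREA = 510e6        # km^2, surface area of the earth
SYSTEM_AREA = 100 * 100   # km^2 per weather system (assumed in the comment)
LIFETIME_DAYS = 5         # assumed mean lifetime of a system
DAYS_PER_MONTH = 30

simultaneous = EARTH_AREA / SYSTEM_AREA                    # systems at once
monthly = simultaneous * DAYS_PER_MONTH / LIFETIME_DAYS    # systems per month
print(f"{simultaneous:,.0f} systems at once, ~{monthly:,.0f} per month")
```

That gives 51,000 systems at any instant and about 306,000 over a month, matching the comment’s rounded figure of 300,000.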

      • >> Could it be your agenda speaking?

        What would my agenda be?

        >> a monthly global average is not about weather.

        The first reason why this is false is that the atmosphere is a tiny fraction of the system (Energy-Atmosphere =~ 1/1280 Energy-Ocean). The average surface air temperature would represent only 0.07% of the thermal mass of the system under study. Natural variability and chaos would cause sinusoid-like heat flows throughout the system.

        The second reason is timescales. Referring only to Figure 1 of this web site, one can see the timescale differences between weather and climate change. The timescale for climate change is on the order of centuries, with the smallest unit of climate change being a decade.

        Therefore, a monthly global, average, surface air temperature IS definitely about weather.
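The thermal-mass ratio quoted above can be sanity-checked with standard round numbers for the mass and specific heat of the atmosphere and ocean (approximate textbook values assumed here, not figures from the comment’s source); the result lands in the same ballpark as the 1/1280 claim:

```python
ATM_MASS, ATM_CP = 5.1e18, 1004      # kg, J/(kg K), whole atmosphere
OCEAN_MASS, OCEAN_CP = 1.4e21, 3990  # kg, J/(kg K), global ocean

ratio = (ATM_MASS * ATM_CP) / (OCEAN_MASS * OCEAN_CP)
print(f"atmosphere/ocean heat capacity ~ 1/{1 / ratio:,.0f}")
```

With these inputs the atmosphere holds roughly a thousandth of the ocean’s heat capacity, the same order of magnitude as the 1/1280 figure in the comment.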

      • What would my agenda be?

        Perhaps whatever agenda you had in mind for Bruce Cobb. You’re the only one who’d know what that is.

        Neither of your discursions about weather — minor heat flow compared to the total Earth system, and timescale — has any critical relevance to anything in my post, or indeed to any illumination of weather. It is beyond ludicrous to suppose, as you do, that an average of 3E5 systems reflects the physical behavior of an individual system.

        Here’s how your reference site describes weather: “Weather describes current atmospheric conditions, such as rainfall, temperature, and wind speed, at a particular place and time. It changes from day to day.” Your own reference doesn’t support your position.

        Every figure in this post is about simulated global average surface air temperature. They never were about weather, and certainly are not about weather.

    • Unique solutions are the source of physical meaning in science

      This seems to me to be an assertion without support. In fact, the article is full of such assertions, but this seems to be the primary one, upon which everything else depends. It doesn’t strike me as the least little bit true.

      It would seem that this statement excludes all sciences with chaotic elements. For example, geology, mathematics, microbiology, biology, chemistry, economics, engineering, finance, algorithmic trading, meteorology, physics, politics, population dynamics, psychology, and robotics. source: wiki

      In fact, Chaos theory seems to be the best explanation for the Tacoma Narrows Bridge Collapse

      The major reinsurance companies use many different simulations of economic and weather situations to determine their level of risk. I guess Actuarial Science isn’t really science either?

      • This seems to me to be an assertion without support.

        That seems to me a statement by someone who knows nothing whatever about science.

        So, here’s a question, VikingExplorer: If, as you have it, unique solutions are not central to physical meaning in science, what’s the point of physical error bars and uncertainty intervals?

        The unique solution with respect to chaos, VikingExplorer, is to derive to which systems chaos theory applies.

        “I guess Actuarial Science isn’t really science either?” Where is the falsifiable theory of Actuarial phenomena?

      • >> That seems to me a statement by someone who knows nothing whatever about science.

        So, asking for logical support for an assertion indicates to you that the person doesn’t know anything about science? An ironic response.

        >> So, here’s a question, VikingExplorer: If, as you have it, unique solutions are not central to physical meaning in science, what’s the point of physical error bars and uncertainty intervals?

        Error bars and Confidence intervals both seem to be related to observations. I don’t see how they provide support for your bald assertion.

        You claim that only “Unique Solutions” are science, yet you have not provided any support for this claim. I’ve never heard that before, and it seems extremely dubious. Examples such as this and many others are clearly scientific.

        You claim to have a definition for physical meaning but it’s even more dubious. Even Popper disagreed with the idea that non-falsifiable statements are meaningless.

        Obviously, if people were told that there was a 50% chance of rain today, it would have significant meaning. Yet, this information would be the result of multiple simulations of a model. Meteorology is one of those sciences that does not provide unique solutions. It’s rather bold of you to claim on a weather man’s site that Meteorology is not a science.

        >> The unique solution with respect to chaos, VikingExplorer, is to derive to which systems chaos theory applies.

        Huh? You seem lost. Take a look at the history section of Chaos theory to see all the ways that chaos theory has helped various scientific endeavors. Perhaps you’re unaware of Complex adaptive systems or Complexity Science.

        >> “I guess Actuarial Science isn’t really science either?” Where is the falsifiable theory of Actuarial phenomena?

        You seem to hold that Popper is the ultimate authority on truth, but he can’t be completely correct, because that would mean Anthropology is not a science. In fact, it would exclude many of the most interesting fields of scientific study.

        Your focus on “physical meaning” makes me wonder if you’re a verificationist.

        Most people recognize that there are many definitions of “science,” and that Popper’s view is not persuasive since it’s way too restrictive. Many people realize that scientists do not generally use inductive reasoning at all, but rather abductive reasoning, or inference to the best explanation.

        The bottom line is that most reasonable people have concluded that these kinds of scientific demarcation arguments are uninteresting and useless. This is because they are almost exclusively used by people to denounce others when they are too inadequate to criticize the ideas themselves.

        That’s exactly what you’re doing. Either make a valid scientific criticism of the AGW theory or of a specific climate model. Making a specious claim that all climate models are not science adds nothing to science, but is really just another form of ad hominem.

      • Look at it this way, VikingExplorer: if there isn’t a unique solution to the problem of lightning, how would you know a strike isn’t a warning from god?

        If there’s no unique solution to the problem of heredity (DNA, genes, and chromosomes), how would you know that biological evolution happened?

        If there’s no unique solution to the problem of electromagnetic waves, how would you know that cell phones aren’t magic?

        Definitive explanations require unique solutions. Science is in the business of supplying them. Technology would be rule-of-thumb without them.

        No unique solution leaves ambiguity. Pervasive ambiguity permits unchecked speculation.

        Unchecked speculation is superstition: that’s your no-unique-meaning.

        Bright light moving across the daytime sky? How would you know it’s not Apollo? Or Re? Or Mithra? Or space aliens?

        Let science step in, and only one answer is forthcoming. One answer = one unique solution, VikingExplorer. There can be no progress in understanding until ambiguity is removed.

        A falsifiable theory necessarily supplies a unique meaning. No unique meaning, no falsifiability. No falsifiability, no discovery of error in theory. No discovery of error, no improvement in theory.

        Absent a unitary meaning, any given result transmits an ambiguity. The result has no discrete meaning. Ambiguities permit no resolution. No resolution, no solved problems. No solved problems, no progress in knowledge.

        Every single advance in our civilization rests upon the removal of ambiguity. The strongest advances rest upon a unique meaning.

        The paper you referenced merely discusses how to proceed when theory is not sufficiently advanced and the data are not sufficiently accurate (or not present) to provide a unique solution. If anything, it proves my point.

        You clearly didn’t understand the point that knowing when to apply chaos theory is, ipso facto, the unique solution you deny is necessary to scientific meaning.

        I noted above that the falsifiable criterion came to Popper from Einstein. Popper didn’t assume the methodology, he adopted it from working science into his philosophy of science.

        You wrote that, “Most people recognize that there are many definitions of science.” Philosophers say all sorts of things about science. None of that is definitive; none of it impacts scientific methodology, none of it is important to science, none of it influences how scientists work, and none of it impacts the meanings derived from scientific practice.

        Science is not philosophy. The philosophy of science is not science. I co-authored a short paper about that (P. Frank and T.H. Ray (2004) “Science is Not Philosophy,” Free Inquiry 24(6), 40–42). It provides the discrete definition of science, derived from the methodology of science: science is theory and result (objectively qualified). That’s what makes science free of culture or opinion. That’s all one finds in scientific journals. Nothing else survives the ruthless winnowing of scientific practice.

        Scientific inferences are hypotheses, VikingExplorer. Hypotheses qualify as scientific when they offer an analytical solution to a problem, suggest explicit experiments regarding that problem, and predict the results of those experiments in the context of that problem.

        That is, scientific hypotheses are predictive and falsifiable, in exactly the same manner as bona fide theories. The only difference between a theory and a hypothesis in science, is that the theory derives from a hypothesis that has been overwhelmingly and persistently successful in its predictions. I.e., a theory is a hypothesis that has graduated.

        So, abduct all you like, VikingExplorer. Sorry to say, your comments evidence a clear absence of understanding about science. In that, you risk becoming akin to a consensus climate modeler in good standing. Avoid it with all due integrity.

      • You wrote that, “Most people recognize that there are many definitions of science.” Philosophers say all sorts of things about science. None of that is definitive; none of it impacts scientific methodology, none of it is important to science, none of it influences how scientists work, and none of it impacts the meanings derived from scientific practice.

        Pat, you seem oblivious to the fact that YOU are the one making a philosophy of science argument. Your post says absolutely NOTHING about how a certain climate model has incorrect science. It’s full of philosophical assertions. It’s only natural that I respond in the same subject.

        At one point (10 years ago), I also fell into the Popperian/falsificationist viewpoint. It was an attractive argument to make. However, I slowly evolved as I realized that a tremendous amount of scientific work would be excluded by this kind of demarcation argument. I also realized that it was the same approach that the AGW hockey team uses: they use ostracism as a tactic to demonize their opponents instead of making an actual scientific argument.

        The reality is that although Popper is not incorrect, it’s not the whole picture. Ben Lillie (Ph.D. in theoretical physics from Stanford University) explains it well:

        Second, while falsifiability is an important principle, it’s far from the final word. Two very simple (and again, simplified) examples are evolution and public health. In large areas of evolution and much of natural history, you simply can’t do experiments. These are clearly science, and there are ways of making strong inferences, it just isn’t by the principle of falsification. Or, in public health, in particular epidemiology. reference

        For example, recent empirical data contradicts the law of gravity, however, we’re not about to falsify the theory. It’s also well explained here:

        The standard Popperian view of falsifiability falls down on two grounds. Firstly and most simply, it is an absolutist position. A single negative result is sufficient to nullify a hypothesis. Much of modern science relies on statistical measures of certainty. For example, in a modern drug trial, you might have 33/50 patients healed with a drug and 12/50 patients healed with the placebo. From these numbers, you can then derive a confidence that your hypothesis (namely, the drug you are testing is effective against the disease) is correct. Under such a double blind study, there is no set of results that can render a hypothesis completely false. Even if you had 0/50 healed for your drug and 50/50 healed for the placebo, there is a trivial but non-zero chance that your drug might actually work. Thus, drug trials are not strictly falsifiable but are undoubtedly modern science.

        Secondly, Falsifiability only works for experiment based sciences, not evidence based ones. In experiment based sciences, you are allowed to some degree to control your variables and to run an experiment as many times as you want. In evidence based ones, artificial experiments are not possible and you must decipher the results of natural experiments. The example I like to use to explain this comes from Jared Diamond’s book, Collapse. In this, he posits that there is a correlation between the size of Polynesian islands (among other things) and how deforested they become. Now, this is clearly a scientific hypothesis. The problem is, there are only a finite number of islands (81 to be precise). The only way to falsify it would be to discover an 82nd island which showed an opposite trend, but this is clearly absurd. When asking some questions, the sample size is big enough such that you can essentially ignore the lack of falsifiability, i.e. dinosaur fossils are being discovered at a large enough rate that they essentially serve the same role as experiments in terms of falsifiability. However, in other cases, only a tiny handful of examples exist, or the examples differ so much from each other that essentially no hypothesis can be falsified. Hypotheses about a particular species for which only a few fossils exist. Or about whether a certain greenhouse gas is a significant cause of global warming. Or what factors affected planetary formation. In all of these cases and many more, the evidence that exists is not enough to conclusively falsify any of some mutually exclusive predictions, and there is no mechanism for gathering more, yet they are all undoubtedly science.

        Falsifiable science is merely a subset of science in general. Falsifiability hasn’t fallen by the wayside as a criterion; it has merely been superseded.

        You tell me to “abduct all you like,” but the reality is that abduction is the most common form of human reasoning. In fact, it’s been called the “cornerstone of scientific methodology.” It’s really no surprise to me that your article was rejected, since it’s about philosophy of science instead of science. Not only that, it’s clearly incorrect to draw a bright line of scientific demarcation that would exclude so many areas of science.

        Anyone reading this that agrees with you had better be prepared to declare the following areas to be non-science:

        Anthropology, Archaeology, Medicine, Geology, Astronomy

      • VikingExplorer, my post is about error analysis. How you conceive that error analysis is a “philosophy of science argument” is anyone’s guess. Your conception certainly has nothing to do with anything in the head post.

        You’re right that my post doesn’t say anything about how any certain climate model has incorrect science. That shouldn’t be a surprise though, as it did not set out to assess the science in any given climate model.

        The post concerns the cloud forcing error known to be made by all climate models. There is no question about that error. GCM cloud forcing error is well established and the subject of many papers. The post follows on with the impact of that error on the reliability of air temperature projections.

        So your criticism that the post does not concern “how a certain climate model has incorrect science” is a complete non-sequitur. It’s irrelevant.

        Nor is the head post “full of philosophical assertions.” Physical uncertainty is not about philosophy. Nor is assigning physical meaning through a unique solution. These have been standard scientific practice for the last 200 years.

        As a lower limit of error, known to be common to all CMIP5 GCMs (and all their predecessors), cloud forcing error alone is sufficient to show that climate models cannot resolve the effect of GHGs on air temperature.

        That annual ±4 Wm^-2 average GCM cloud forcing error is 110x larger than the 0.036 Wm^-2 annual average increase in GHG forcing, and ±114% of all the excess forcing of all the GHGs emitted since 1900. That error alone demonstrates, unequivocally, that climate models cannot resolve the thermal energy flux of the troposphere to sufficient accuracy to resolve the effect, if any, of emitted GHGs.
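The “110x” ratio quoted above follows directly from the two figures given in the comment itself (a check, not new data):

```python
# Ratio of the stated GCM cloud-forcing error to the stated annual increase
# in GHG forcing; both numbers are taken from the comment above.
cloud_forcing_error = 4.0     # W m^-2, annual average lower limit of error
annual_ghg_increase = 0.036   # W m^-2, annual average increase in GHG forcing
ratio = cloud_forcing_error / annual_ghg_increase
print(round(ratio))           # 111, i.e. roughly "110x larger"
```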

        This conclusion is obvious to anyone trained in the physical sciences. However, I have yet to meet a climate modeler who can grasp the concept that large errors do not permit tiny resolutions.

        You referenced, “Ben Lillie (Ph.D. in theoretical physics from Stanford University)…” as an authoritative source dismissing the falsifiability criterion of science. Ben Lillie is wrong in both his (your) selected instances. Evolutionary theory is falsifiable in that it predicts a physically heritable trait. Genes are that trait. Following Lillie, it is certainly true that one cannot experimentally reproduce the evolutionary history of any organism, for example. That does not remove evolutionary theory itself from the test of falsifiability, however.

        On the other hand, epidemiology is all about statistical methodologies and correlations. It’s analytical, but it’s strictly inductive and therefore it’s not science. Ben Lillie’s use of epidemiology as an example is entirely misguided. His use of it, actually, implies that he, himself, doesn’t understand what science is. Your use of it implies the same thing.

        Your drug trial exemplar as negating falsification also fails on the same grounds. Double blind studies are just epidemiology localized to a specific group.

        In your second example, from Jared Diamond’s Collapse: “In this, he posits that there is a correlation between the size of Polynesian islands (among other things) and how deforested they become. Now, this is clearly a scientific hypothesis.”

        Except that it is not. A scientific hypothesis posits a causal explanation and makes a deductive prediction of future behavior. Diamond’s correlation is strictly associational; it merely produces an inductive extrapolation. It’s not a scientific hypothesis, as it implies no causality.

        You wrote, “It’s really no surprise to me that your article was rejected, since it’s about philosophy of science instead of science.”

        One of my reviewers asserted that the distinction between accuracy and precision is philosophy. That put him thoroughly outside the pale of science. My reviewers have been climate modelers. Not one of them gave any evidence of understanding the accuracy–precision distinction that is absolutely fundamental to science. Or any evidence of understanding the meaning of propagated error. Or the meaning of physical error itself. It is no wonder they recommended rejection. They weren’t competent to understand the argument, an error analysis argument that is common in the physical sciences.

        You seem to have the same limited understanding.

        You wrote, “Not only that, it’s clearly incorrect to draw a bright line of scientific demarcation that would exclude so many areas of science.” Or else, one should have the intellectual integrity to admit some fields are not science, no matter their analytical rigor.

        Among those you listed, Astronomy is observationally falsifiable by reason of distance indicators and spectroscopy, as is Geology by reason (at least) of radio-dating and stratigraphy. Medicine depends on evolutionary theory, making its practice rooted in a falsifiable science. Both Archaeology and Anthropology are entirely dependent on physical methods, such as 14C dating, that allow falsification of the provenance of their discoveries.

        None of your examples are examples in the sense you intend.

      • >> How you conceive that error analysis is a “philosophy of science argument” is anyone’s guess. Your conception certainly has nothing to do with anything in the head post

        Premise: Science demarcation arguments ARE by definition a philosophy of science argument.

        Premise: Any reference to physical meaning indicates a philosophy of science argument (verificationism)

        You wrote:

        Climate modelers are not scientists. They are not doing science. Their climate model projections have no physical meaning.

        Conclusion: Your post contains a philosophy of science argument.

        >> cloud forcing error known to be made by all climate models. There is no question about that error

        Are you omniscient? It seems unlikely that you are personally familiar with every single climate model in existence.

        >> None of your examples are examples in the sense you intend

        Actually, all of your falsification examples are bogus, as they don’t deal with the central hypotheses of those fields of scientific study. Also, most of these sciences make extensive use of model simulations of problems without unique solutions. They are all based on abduction (inference to the best explanation).

        You also missed my point entirely. Your simple-minded approach was to respond, “oh yes they are sciences, based on XYZ.” My point was that these types of scientific demarcation arguments have been widely discredited because, ultimately, they add nothing to understanding the world around us. They are used almost exclusively for political purposes.

      • if there isn’t a unique solution to the problem of lightning, how would you know a strike isn’t a warning from god?

        If there’s no unique solution to the problem of heredity (DNA, genes, and chromosomes), how would you know that biological evolution happened?

        It is truly incredible that anyone would write such words. Lightning is generally explained by Coulomb’s law. However, the theory does not give us a unique solution to the where and when of lightning. Fortunately, most people don’t share your strange ideas about physical meaning and are creating computer models to help understand it better. I hope they don’t get discouraged when they realize that they aren’t scientists.

        The second sentence is truly ignorant. I can’t believe that anyone could possibly believe that heredity indicates that biological evolution happened. I’m shocked that anyone could believe that the theory of Darwinian evolution provides a “unique solution” that predicts that the current set of life forms would come into existence.

        The reality is that the theory of Darwinian evolution is based on abduction (inference to the best explanation). Although some may argue about whether it is still the best explanation, it is most definitely science, and it does provide physical meaning.

        However, your definition of science would exclude Lightning modelers and evolutionary theorists from the scientific realm. Your whole science demarcation argument is bogus.

      • VikingExplorer, you wrote, “Premise: Science demarcation arguments ARE by definition a philosophy of science argument.”

        Not correct. Science separated itself fully from philosophy when its theories became contingent upon observations. Since the time of Galileo. That is methodology not philosophy, and is sufficient to separate science from all other forms of inquiry.

        You wrote, “Premise: Any reference to physical meaning indicates a philosophy of science argument (verificationism).”

        Also not correct. Meaning in science resides in the interplay of physical theory and observations, as per the methodology. Theory explains the observations; explanation confers meaning; observations challenge the theory. The entire system is self-contained. Philosophy has no part in it.

        None of my discussion of science has ever been verificationist.

        Your mistaken views undermine your conclusion that “[my] post contains a philosophy of science argument.”

        It’s quite clear from the practice of climate modelers as previously noted, and following from the above, that their work is removed from the methodology of science, namely theory and result.

        The signal failure of climate modelers is that they are evidently untrained in evaluating physical error. They have no apparent approach to evaluating the physical reliability of their own models. They have, in other words, short-circuited the judgment of observation.

        You wrote, “Are you omniscient? It seems unlikely that you are personally familiar with every single climate model in existence.” The literature is filled with papers documenting that error. Every single model reported exhibits cloud forcing error.

        You’re welcome to publish a study showing some climate model that gets clouds right, at a resolution that will reveal a GHG effect on air temperature. When that’s done, I’ll change my view. And good luck with that.

        You wrote, “Actually, all of your falsification examples are bogus, as they don’t deal with the central hypotheses of those fields of scientific study.”

        So you think spectroscopy doesn’t deal with the central hypothesis of Astronomy that those lights in the night sky are sun-like bodies. You think that stratigraphy and radio-dating are not central to a Geology that must order and date strata and sediments to provide its basic context. You don’t think evolutionary biology is central to a medicine that deals with gene-based organisms.

        Your dismissal is thoughtless, VikingExplorer.

        You keep making the same mistake about abduction, generalizing it into some defining role in science. Prediction is deductive. Inference is not.

        Your mistake is in not realizing that science is a bootstrap method (very a-philosophical, no?). Bootstrap is most evident when theories are incomplete. Inferences can then be used to inform a hypothesis — your “best explanation.” “Best explanations” must be predictive to be useful. Predictive explanations are hypotheses, and the prediction is a deduction from that hypothesis.

        Your final paragraph is a crock. I gave explicit reasons why each of those disciplines belongs with science. You dealt with none of them. Your dismissal is hardly different from one based in personal incredulity. The demarcation of science from other ways of knowing is complete, in that science is based in observation and is explicitly not axiomatic. Everything is contingent. All of it is free of cultural opinion. That puts the demarcation argument outside of politics; politics being thoroughly culture-bound.

        More later.

      • you wrote, “Premise: Science demarcation arguments ARE by definition a philosophy of science argument.”

        Not correct. Science separated itself fully from philosophy when its theories became contingent upon observations. Since the time of Galileo. That is methodology not philosophy, and is sufficient to separate science from all other forms of inquiry.

        The fact that you think that what you wrote above supports your assertion that what I wrote is incorrect is a microcosm of the whole problem. You seem incapable of thinking abstractly about your argument.

        There is no doubt that the demarcation problem is part of the philosophy of science. There is no doubt that you claimed that climate modeling efforts are NOT science, and yet you deny that you have made a philosophy of science argument.

        Your ability to deny reality exceeds that of the AGW enthusiasts. This isn’t an argument, this is just contradiction.

      • VikingExplorer, you wrote, “There is no doubt that the demarcation problem is part of the philosophy of science.”

        Philosophy is necessarily axiomatic. Science is not.

        Grounding the content of physical theory in observation completely separates science from philosophy. Recognizing that fundamental difference is not an element of philosophy.

        Just to make it fully clear, “fundamental distinction” equates to demarcation.

        What philosophers do with that idea is their business. But whatever they do doesn’t have any impact on science.

        You finished with, “Your ability to deny reality exceeds that of the AGW enthusiasts. This isn’t an argument, this is just contradiction,” which is pretty ironic, given that it is your posts, not mine, that have descended into mere contradiction.

      • Don’t bother telling us again, because by now it’s clear that you are incapable of grasping that you hold a certain philosophy of science as axiomatic. You’re going to remain incognizant and assert that what you just wrote isn’t demarcation, or that demarcation isn’t philosophy of science, when in fact it is, by definition.

        noun: dramatic irony – a literary technique, originally used in Greek tragedy, by which the full significance of a character’s words or actions is clear to the audience or reader although unknown to the character.

      • “Definition” in your hands is indistinguishable from mere assertion, VikingExplorer. As recourse to conclusion by fiat, it doesn’t even rise to a poor argument.

        I have pointed out that methodological theory contingent on observation puts science forever outside philosophy. Philosophy is necessarily axiomatic. Science is necessarily not.

        That same methodological distinction must therefore necessarily demarcate science from philosophy, without itself being philosophy. That is, the demarcation is a consequence of orthogonal content. Supposing that a demarcation from philosophy is itself philosophy (your position) is to make as fundamental a category mistake as it is possible to make. Your position supposes a set includes its own negation.

        Your entire argument has rested upon a need to degrade science into the polysemous. Doing so muddies the language of science, allowing misrepresentation of the spurious as scientific. Spurious is what consensus air temperature projections are; represented as having physical meaning when in fact they do not. Physical meaning: the interplay of theory and observation; not philosophy, never philosophy, orthogonal to philosophy.

      • VikingExplorer, you wrote, “However, the theory does not give us a unique solution to the where and when of lightning.”

        The theory of lightning gives us an explanation of what lightning is (its physical meaning); which is what the theory is meant to do.

        Attacking a misrepresented position is called a straw man argument. Just so you know.

        You wrote, “The second sentence is truly ignorant. I can’t believe that anyone could possibly believe that heredity indicates that biological evolution happened.”

        The solution to the problem of heredity is the existence of an inheritable trait. Genes, chromosomes and DNA provide the solution to that problem. In short, you got the problem backwards.

        Biological evolution is the observable. Darwin began “Origin of Species” with a long discussion of the breeding outcomes of pigeons, as evidence for the existence of evolution. The theory of evolution provides the mechanism for the observation: organismal variation and natural selection. Organismal variation is found in the genes. Genes are the heritable trait required by the theory. QED, theory and result, and the threat of falsification in biology.

        Ignorance is on display here, VikingExplorer, but it’s not mine.

        You wrote, “I’m shocked that anyone could believe that the theory of Darwinian evolution provides a ‘unique solution’ that predicts that the current set of life forms would come into existence.”

        You’re asserting what is not in evidence. This seems to be a continuing problem.

  14. Based on the response of life on this planet over the last 30 years to CO2, weather, and climate, these were the best conditions to be a creature or part of the biosphere on planet Earth in at least 1,000 years (since the Medieval Warm Period).

    The only realm that shows otherwise is the one that exists in models.

  15. Has anyone tried to do an nth order polynomial or a Fourier series curve fit on the climate data? I wonder if such a curve fit would produce better results than the climate models? DaveW

    • A “random walk” may be the place to start with that. I haven’t delved into it much, but search on this site and the net for “random walk” and climate. Here are some starting points. Ross McKitrick did an article last month summarizing some of the work critiquing model fits and noted:

      http://www.rossmckitrick.com/uploads/4/8/0/8/4808045/mckitrick_greer-heard2015.pdf
      “Fildes et al. (2011) took the same data set and compared model predictions against a “random walk” alternative, which simply means using the last period’s value in each location as the forecast for the next period’s value. Basic statistical forecasting models with no climatology or physics in them typically got scores slightly better than a random walk. The climate models got scores far worse than a random walk, indicating a complete failure to provide valid forecast information at the regional level, even on long time scales. The authors commented ‘This implies that the current [climate] models are ill-suited to localised decadal predictions, even though they are used as inputs for policy making.’ ”

      Dr. Roy Spencer just did a relevant post on random walks and climate models:

      http://www.drroyspencer.com/2015/05/mystery-climate-index-2-explanation/

      and a statistician a few years ago:
      http://wmbriggs.com/post/257/

      which points to an old paper from 1991 on the topic which suggests a random walk matches:
      http://journals.ametsoc.org/doi/pdf/10.1175/1520-0442%281991%29004%3C0589%3AGWAAMO%3E2.0.CO%3B2

      as do these:

      https://www.academia.edu/7625150/Application_of_random_walk_model_to_fit_temperature_in_46_gamma_world_cities_from_1901_to_1998

      http://www.scirp.org/journal/PaperInformation.aspx?PaperID=5318#.VVSwjvtVhBc
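The "random walk" benchmark described in the McKitrick excerpt above is easy to sketch. The toy below uses invented synthetic data (it is not any of the cited analyses): it scores a persistence forecast, which repeats the last observed value, against a naive trend extrapolation.

```python
import random

random.seed(42)

# Synthetic "temperature" series: a pure random walk (invented data).
series = [0.0]
for _ in range(200):
    series.append(series[-1] + random.gauss(0.0, 0.1))

train, holdout = series[:150], series[150:]

# Persistence ("random walk") forecast: repeat the last observed value.
persistence = [train[-1]] * len(holdout)

# Naive trend forecast: extrapolate the mean step of the training period.
mean_step = (train[-1] - train[0]) / (len(train) - 1)
trend = [train[-1] + mean_step * (i + 1) for i in range(len(holdout))]

def mse(pred, obs):
    """Mean squared error of a forecast against observations."""
    return sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)

print("persistence MSE:", round(mse(persistence, holdout), 4))
print("trend MSE:      ", round(mse(trend, holdout), 4))
```

For a true random walk, persistence is the statistically optimal forecast, which is why a model that scores worse than persistence has conveyed no forecast information at all.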

    • gingoro, climate model hindcasts are little more than a complicated exercise in curve fitting. Their projections are then extrapolations of the curve fit into regions without data.

      • climate model hindcasts are little more than a complicated exercise in curve fitting

        By not specifying which climate models you are talking about, and making completely general statements like this, you are guaranteed to be wrong.

        You are actually making the unsupportable assertion that ANY climate model is INHERENTLY impossible to create in a scientific manner.

        According to your statement, even if one collected a group of extremely intelligent and intellectually honest scientists with no a-priori biases, they could not possibly create a climate model that would not be a “complicated exercise in curve fitting”.

      • The current post shows that all current climate models, right up to the CMIP5 level, fall under that diagnosis, VikingExplorer.

        You are actually making the unsupportable assertion that ANY climate model is INHERENTLY impossible to create in a scientific manner.

        Overwrought hyperbole.

      • They seem to be in essence curve fits of some variation on a random walk, tweaked to have an overall upward bias and to match some historical data in some ways (or, as some studies suggest, tweaked to match each other even more than to match reality). They therefore face the same question as any curve fit: is there any reason to believe the fit matches the real-world process generating the data? To argue that it does, one would have to show both that a model matches areas it was not tweaked to match (i.e. that it matches regionally and over different timescales) and that the underlying physics matches reality, which is hard to do when certain areas of the physics are poorly understood and differ between models (e.g. cloud details). The very fact that models with differing and incomplete/uncertain physics sort of vaguely match reality, getting the “right” (to some degree) answer for the wrong reasons, should make people wonder how they are tweaked to do so, and question why their future projections should be taken seriously: the “wrong reasons” are more likely to lead to the “wrong answers” in a future they cannot be tweaked now to match.
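The curve-fitting hazard described above can be made concrete with a toy example (invented data, not output from any actual GCM): a high-order polynomial reproduces the data it was tuned to almost perfectly, yet its extrapolation beyond the fitted range is unconstrained.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 0.1 * x + rng.normal(0, 0.2, size=x.size)  # weak trend plus noise

# High-order polynomial: excellent in-sample fit ("hindcast")...
coeffs = np.polyfit(x, y, deg=9)
fit = np.polyval(coeffs, x)
in_sample_rmse = float(np.sqrt(np.mean((fit - y) ** 2)))

# ...but the extrapolation ("projection") is governed by the highest-order
# terms, which the in-sample data barely constrain.
x_future = np.linspace(10, 15, 25)
projection = np.polyval(coeffs, x_future)

print("in-sample RMSE:", round(in_sample_rmse, 3))
print("projected value at x=15:", round(float(projection[-1]), 1))
```

The in-sample skill says nothing about the out-of-sample behavior, which is exactly the distinction between a hindcast match and a meaningful projection.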

      • VikingExplorer

        You dispute the true accurate and uncontroversial statement by Pat Frank that says

        climate model hindcasts are little more than a complicated exercise in curve fitting

        Please see my above post in this thread that is here. It explains this matter about which – in common with many other matters – you are misguided.

        Richard

      • Richard,

        You are missing a point which is clearly explained. I’m making a fairly obvious point of logic. If you were claiming that some or most of the climate models are just curve fitting (implication: full of fudge factors), then I would readily agree.

        However, as written, and as I said before:

        “You are actually making the unsupportable assertion that ANY climate model is INHERENTLY impossible to create in a scientific manner.”

      • VikingExplorer

        I asked you to read my above post and I linked to it.
        Either you did not read it or you lack reading comprehension skills.

        My post is about existing climate models and energy balance models. I did NOT discuss – nor mention – hypothetical other models.

        It is simply true that as Pat Frank says

        climate model hindcasts are little more than a complicated exercise in curve fitting

        His statement is true of ALL EXISTING climate models and my explanation is of why it is true.

        You raise a ‘straw man’ when you say

        You are actually making the unsupportable assertion that ANY climate model is INHERENTLY impossible to create in a scientific manner.

        NO! ABSOLUTELY NOT!
        You are claiming that the existing models are the totality of all possible climate models, and your claim is blatant nonsense. Pat Frank has not made that suggestion and I have not, either.

        Richard

      • >> His statement is true of ALL EXISTING climate models

        Richard, you and Pat seem incapable of refraining from making over generalized statements.

        Considering your statement above, how is that you have personal knowledge of every climate model in every university around the world?

        Your resorting to all caps and bold indicates that you are getting angry, and that I’m touching a nerve. Perhaps what I thought was just a minor issue is a lot more important to you. If it were me, I would have immediately agreed and inserted the word “most” or “many” in front of “climate models”.

        Instead, you react by doubling down on the “ALL EXISTING” claim. Strange…

  16. Everything else about CO2-CAGW theory is in the process of failing or has fallen apart.
    – Polar sea ice levels are not cooperating with their alarmist projections.
    – Global Storm freq-intensity measures are demonstrably flat or declining. (While regions are fluctuating, some high, some low.)
    – Droughts have always existed some place on continents (as Joe Bastardi regularly points out with data to substantiate). So pointing to an on-going drought (as evidence of CC) somewhere is always possible if you ignore the historical variability patterns across centuries.
    – Record world food crop productions continue, with most variability attributable to market choices (of price-supply/demand) and regional impacts (of weather and drought), rather than changing climate.
    – SLR is not accelerating, and continues at a leisurely pace (for an interglacial period) of ~2.8 mm/yr.
    – A modestly warming world (coming out of the LIA) is improving air quality of coastal cities as diurnal wind flows between land-sea improves lower troposphere air exchange.

    Throw out the GCM models from the IPCC AR’s, and they have nothing.
    The models are a success, not because they are crap (which they are), but because they produce the desired results to sell a problem-solution ideology. With solutions that involve new taxes, crony capitalism, ruling class controls, and the ensuing gobs of money to line pockets under the guise of re-distribution.

    So they (the CAGW establishment) will resist the logic and science attacks on model vs reality to the very bitter end. Call it some modern-day cross-mix of NCC, hubris, and pursuit of money and power that drives them to varying individual degrees. Against these realities, we can expect that the global temp data set machinations and manglings will get worse in the defense of the CAGW cause. In their minds, they have no other choice.

    • Besides the lying models, they sell the scare with other outright lies, such as “extreme weather”, which many in the public fall for because the media fail in their constitutional mission of calling BS on the regime and its sc*m du jours. Their failure is so extreme that they actually serve as propagandistic cheerleaders rather than the gadflies they’re supposed to be.

  17. The trouble is thinking you won because you proved their facts wrong; this does not work because it simply was not an argument about facts in the first place.

    • Thanks, Willis. Much appreciated. I figure if I’ve passed your sniff test, I’ve done well. :-)

  18. Do mathematical projections have any real meaning?
    The answer to this is nothing new. Huxley said it in 1869.
    “Mathematics may be compared to a mill of exquisite workmanship, which grinds you stuff of any degree of fineness; but, nevertheless, what you get out depends upon what you put in; and as the grandest mill in the world will not extract wheat-flour from peascod, so pages of formulae will not get a definite result out of loose data. ” -Thomas Henry Huxley

      • Except that TH Huxley had no ability to do math, so that his opinion was self serving. Also, he argued vehemently for a theory which was based purely on abduction, and is not falsifiable.

      • And yet, where is Huxley wrong?

        In science, inference is to best predictive (falsifiable by observation) hypothesis, not to best explanation. The difference is thoroughly fundamental.

        Evolutionary theory is indeed falsifiable. Darwin knew nothing of genes or genetics, and yet predicted a material mechanism of heredity (see gemmule), to account for the action of natural selection upon organismal variation.

        Evolutionary theory would be falsified were there no evidence for the physical mechanism of inherited traits. And yet Mendelian genetics was found, independently of Darwin and his prediction.

  19. If 100 climate models give 100 different climate scenarios, aren’t 99 wrong by definition? There is only one right answer, and if you know which model is right, why bother with the others? Using an average of 99 incorrect models and one perhaps correct model is meaningless. If 2 guys say they are Jesus, at least one is lying.

    • The warmists claim the science is settled and that CAGW is based upon sound principles and an understanding of the physics.

      If the science were settled and understood, why are there approximately 100 GCMs and not just one?

      Doesn’t the fact that there are about 100 GCMs demonstrate in itself that the science is not settled and understood?

      I frequently ask warmists to identify which of the roughly 100 GCMs is the one based upon settled science and the correct physics of the Earth’s climate system and its response to CO2.

      So far, no one has been able to answer that simple question.

    • Dr. Brown highlights this issue repeatedly.

      I believe the primary issue is the political nature of the IPCC. All countries (i.e. models) are equal, we must not discriminate. I am old enough to remember when “discriminating” meant something different.

    • Craig,

      If the weather report says that there is a 75% chance of rain, will you consider bringing your umbrella? This has falsified your hypothesis. How do you think they came up with the 75% figure? They ran many different simulations, and 75% of them showed rain.

      The list of sciences that make extensive use of computer simulation has grown to include astrophysics, particle physics, materials science, engineering, fluid mechanics, climate science, evolutionary biology, ecology, economics, decision theory, medicine, sociology, epidemiology, and many others.
      reference

  20. “So, when someone says about AGW that, “The science is settled!,” one can truthfully respond that it is indeed settled: there is no science in AGW.”.
    LOL

    • Yes! They do, indeed, provide meaningful “information”…

      WELL DONE, Dr. Pat Frank!

      Bottom line for those in a hurry:

      “… the correspondence is deliberately built-in.”

  21. Let me add that one of the most pernicious assumptions of the modelers is that the actual climate parameter space is adequately both explored and mapped by the variations in both the models and the parameters used in the model runs …

    This incorrect assumption is implicit in e.g. the equally incorrect idea that the average of an “ensemble” of untested, uncalibrated, unvalidated, and unverified models has some kind of statistical value.

    w.

    • The average of wrong is wrong.

      The averaging is to reduce the width of the error in the wrongness of it all, not to make the output correct. It does nothing to aid the reasonableness and reliability of the projection/prediction.
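The point that averaging narrows the spread without correcting the output can be shown numerically. A minimal sketch with invented numbers: an ensemble whose members share a common bias has a tight, confident-looking mean that is still wrong by the shared bias.

```python
import random

random.seed(1)

truth = 15.0        # the "true" value being modeled (invented)
shared_bias = 2.0   # error common to every ensemble member
models = [truth + shared_bias + random.gauss(0, 0.5) for _ in range(100)]

ensemble_mean = sum(models) / len(models)
spread = (sum((m - ensemble_mean) ** 2 for m in models) / len(models)) ** 0.5

print("ensemble mean:    ", round(ensemble_mean, 2))        # near 17, not 15
print("ensemble spread:  ", round(spread, 2))               # small
print("error of the mean:", round(ensemble_mean - truth, 2))  # near the bias
```

Averaging cancels only the independent part of the members' errors; whatever error the members share, because they rest on the same physical theory, survives the average intact.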

    • I totally agree with you, Willis.

      The implicit assumption underlying all those multi-model projections and averages is that the model physical theory itself is physically complete and would yield a physically true representation of the climate if only the parameters were exactly known, along with the initial conditions.

      • Somewhat like Newtonians [though I hate to glorify them as being that competent] locked into the orbit of Mercury and going round and round getting nowhere.

      • The implicit assumption underlying all those multi-model projections and averages is that the model physical theory itself is physically complete and would yield a physically true representation of the climate if only the parameters were exactly known, along with the initial conditions.

        Wait, are you saying that it’s impossible to create a model that is complete enough to predict the climate to a reasonable accuracy?

        I pride myself on being the most skeptical, but you people are starting to froth at the mouth with anti-science rhetoric.

        >> The average of wrong is wrong.

        Reality check: the models are off by .1%.

      • VikingExplorer May 21, 2015 at 10:50 am

        Wait, are you saying that it’s impossible to create a model that is complete enough to predict the climate to a reasonable accuracy?

        I pride myself on being the most skeptical, but you people are starting to froth at the mouth with anti-science rhetoric.

        Recognizing our own limitations is now “anti-science”?

        Viking, the real answer is that we do not know whether such a climate model is possible, even in theory. As even the IPCC has noted,

        “…we should recognise that we are dealing with a coupled nonlinear chaotic system, and therefore that the long-term prediction of future climate states is not possible.”

        Climate is far and away the most complex system that we have ever tried to model. It has at least six major subsystems, none of which are completely understood and some of which are hardly investigated—atmosphere, cryosphere, lithosphere, biosphere, ocean, and electrosphere. Modeling even one of these to the required level of detail is currently beyond our abilities, because they are all inter-related by feedbacks and chains of effect and non-linear couplings and individual and multi-system resonances, both known and unknown.

        So no, Viking, I’m afraid that today in 2015, we truly do not know if it is possible to “predict the climate to a reasonable accuracy” even in theory … however, we can confidently say that to date, we have completely failed to be able to do so in practice.

        w.

      • >> even in theory …

        Willis, I think your comment is plausible and perhaps likely correct up to the above words.

        All of the areas that you list may be problematic but with enough time, they are knowable. The problem is certainly not inherently impossible. Not understanding something is an issue for 2015, but the rest of what you wrote is just a matter of analysis and computer horsepower.

        With all due respect to your incredible writing skill and remarkable citizen scientific efforts, when someone with your background says it’s impossible even in theory, it rings hollow.

        My goal is to create a climate model someday. I don’t currently have a PhD (like my father, a PhD EE), but I’ve got a BSEE (as do my wife and brother). Yesterday, my two oldest kids (in school for Physics and Chem Engineering) were here calculating the behavior of a proton gas inside a metal container. So, from my point of view, it looks different.

      • VikingExplorer May 22, 2015 at 8:12 am

        >> even in theory …

        Willis, I think your comment is plausible and perhaps likely correct up to the above words.

        All of the areas that you list may be problematic but with enough time, they are knowable. The problem is certainly not inherently impossible. Not understanding something is an issue for 2015, but the rest of what you wrote is just a matter of analysis and computer horsepower.

        So you believe that all systems are inherently modelable and computable? Let me introduce you to my little friend … chaos. The future evolution of some chaotic systems seem to be uncomputable with anything less complex than a model which is essentially a totally identical parallel universe.

        That is to say, in some chaotic systems you literally need to know the location and velocity of every particle in the system to compute which way it will evolve.

        Which sounds doable, at least in theory … but as Heisenberg observed, simultaneously knowing both the location and velocity of even a single particle is not possible, even in theory.

        Best regards,

        w.

        … which leads to the following mathematician’s joke.

        A policeman pulled Werner Heisenberg’s car over because he was speeding. The cop asked him, “Do you know how fast you were going?”

        “No,” said Heisenberg, “… but I know where I was!”

      • >> my little friend … chaos

        That’s the great thing about climatology: it’s much easier than weather, because the timescales are much greater. In my previous example, the study of how a river changes its course over time is much, much easier than determining the course of a leaf thrown into the river. The latter is all about chaos, while the former is much more predictable.

        You should be aware that we’ve done it before. At the molecular level, it’s totally chaotic. However, this didn’t stop humanity from creating Thermodynamics and Circuit Theory. These sciences remain completely valid, although they summarize the net effects of a tremendous amount of chaotic behavior.

        But in the end, V = I*R.
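The appeal above to Thermodynamics and Circuit Theory, stable macroscopic laws emerging from chaotic microscopic motion, is essentially the law of large numbers. A hypothetical sketch with invented numbers: each particle's "thermal" velocity is unpredictable, but a small common drift, the analogue of a current, survives the average over many particles.

```python
import random

random.seed(7)

drift = 0.05   # small systematic drift common to all particles (the "field")
n = 100_000
# Individual velocities: large random thermal part plus the tiny drift.
velocities = [random.gauss(0, 1) + drift for _ in range(n)]

bulk = sum(velocities) / n
print("one particle:", round(velocities[0], 3))  # essentially random
print("bulk drift:  ", round(bulk, 4))           # close to 0.05
```

Whether the climate's macroscopic averages behave this tamely, with the chaotic part cancelling and only a deterministic residue remaining, is precisely the point in dispute in this thread.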

      • VikingExplorer May 22, 2015 at 9:55 am

        >> my little friend … chaos

        That’s the great thing about climatology: it’s much easier than Weather, because the timescales are much greater.

        Thanks, Viking. That could be true … but only if the weather is chaotic and the climate is not. However, Mandelbrot himself investigated the question and found as follows (emphasis mine):

        Among the classical dicta of the philosophy of science is Descartes’ prescription to “divide every difficulty into portions that are easier to tackle than the whole…. This advice has been extraordinarily useful in classical physics because the boundaries between distinct sub-fields of physics are not arbitrary. They are intrinsic in the sense that phenomena in different fields interfere little with each other and that each field can be studied alone before the description of the mutual interactions is attempted.

        Subdivision into fields is also practised outside classical physics. Consider for example, atmospheric science. Students of turbulence examine fluctuations with time scales of the order of seconds or minutes, meteorologists concentrate on days or weeks, specialists whom one might call macrometeorologists concentrate on periods of a few years, climatologists deal with centuries and finally paleoclimatologists are left to deal with all longer time scales. The science that supports hydrological engineering falls somewhere between macrometeorology and climatology.

        The question then arises whether or not this division of labour is intrinsic to the subject matter. In our opinion, it is not in the sense that it does not seem possible when studying a field in the above list, to neglect its interactions with others. We therefore fear that the division of the study of fluctuations into distinct fields is mainly a matter of convenient labelling and is hardly more meaningful than either the classification of bits of rock into sand, pebbles, stones and boulders or the classification of enclosed water-covered areas into puddles, ponds, lakes and seas.

        Take the examples of macrometeorology and climatology. They can be defined as the sciences of weather fluctuations on time scales respectively smaller and longer than one human lifetime. But more formal definitions need not be meaningful. That is, in order to be considered really distinct, macrometeorology and climatology should be shown by experiment to be ruled by clearly separated processes. In particular, there should exist at least one time span on the order of one lifetime that is both long enough for micrometeorological fluctuations to be averaged out and short enough to avoid climate fluctuations…

        It is therefore useful to discuss a more intuitive example of the difficulty that is encountered when two fields gradually merge into each other. We shall summarize the discussion in M1967s of the concept of the length of a seacoast or riverbank. Measure a coast with increasing precision starting with a very rough scale and dividing increasingly finer detail. For example walk a pair of dividers along a map and count the number of equal sides of length G of an open polygon whose vertices lie on the coast. When G is very large the length is obviously underestimated. When G is very small, the map is extremely precise, the approximate length L(G) accounts for a wealth of high-frequency details that are surely outside the realm of geography. As G is made very small, L(G) becomes meaninglessly large. Now consider the sequence of approximate lengths that correspond to a sequence of decreasing values of G. It may happen that L(G) increases steadily as G decreases, but it may happen that the zones in which L(G) increases are separated by one or more “shelves” in which L(G) is essentially constant. To define clearly the realm of geography, we think that it is necessary that a shelf exists for values of G near λ, where features of interest to the geographer satisfy G>=λ and geographically irrelevant features satisfy G much less than λ. If a shelf exists, we call G(λ) a “coast length”.

        After this preliminary, let us return to the distinction between macrometeorology and climatology. It can be shown that to make these fields distinct, the spectral density of the fluctuations must have a clear-cut “dip” in the region of wavelengths near λ with large amounts of energy located on both sides. But in fact no clear-cut dip is ever observed.

        When one wishes to determine whether or not such distinct regimes are in fact observed, short hydrological records of 50 or 100 years are of little use. Much longer records are needed; thus we followed Hurst in looking for very long records among the fossil weather data exemplified by varve thickness and tree ring indices. However even when the R/S diagrams are so extended, they still do not exhibit the kinds of breaks that identify two distinct fields.

        In summary the distinctions between macrometeorology and climatology or between climatology and Paleoclimatology are unquestionably useful in ordinary discourse. But they are not intrinsic to the underlying phenomena.

        Mandelbrot and Wallis, 1969. Global dependence in geophysical records, Water Resources Research 5, 321-340.

        So on the one hand, you claim without proof that there is a statistical difference between weather and climate such that although you agree that weather is chaotically unpredictable, you say climate is predictable.

        Mandelbrot, on the other hand, offers investigative observational proof that your claim is wrong. He looked at the statistics of 9 rainfall series, 12 varve series, 11 river series, 27 tree ring series, 1 earthquake occurrence series, and 3 Paleozoic sediment series. He found no evidence for your claim of distinctions between weather and climate. This means that if weather is chaotic, the climate is as well … and we know the weather is chaotic.

        There’s a good discussion of this from a decade ago over at Climate Audit.

        Best regards,

        w.
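The divider construction Mandelbrot describes in the coastline passage above can be checked exactly on the Koch curve, where the geometry is known in closed form: at ruler length G = 3**-n the measured length is L(G) = (4/3)**n. The sketch below just tabulates that relation; L(G) grows without bound as G shrinks, so no "shelf" ever appears.

```python
# Exact divider measurements of the Koch curve: halving the ruler never
# produces a plateau in the measured length, unlike a smooth curve.
lengths = [(3.0 ** -n, (4.0 / 3.0) ** n) for n in range(8)]

for G, L in lengths:
    print(f"ruler G = {G:.5f}   measured length L(G) = {L:.3f}")
```

This is the signature of scale-free behavior: with no characteristic scale separating "geographic" from "irrelevant" detail, the division into regimes is a labelling convenience, which is exactly Mandelbrot's argument about macrometeorology versus climatology.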

      • Viking, you might also enjoy (emphasis mine):

        On the credibility of climate predictions
        D. KOUTSOYIANNIS, A. EFSTRATIADIS, N. MAMASSIS & A. CHRISTOFIDES

        Abstract

        Geographically distributed predictions of future climate, obtained through climate models, are widely used in hydrology and many other disciplines, typically without assessing their reliability. Here we compare the output of various models to temperature and precipitation observations from eight stations with long (over 100 years) records from around the globe. The results show that models perform poorly, even at a climatic (30-year) scale. Thus local model projections cannot be credible, whereas a common argument that models can perform better at larger spatial scales is unsupported.

        The paper is here. Koutsoyiannis is always good, detailed, well cited, and persuasive.

        w.

      • >> This means that if weather is chaotic, the climate is as well … and we know the weather is chaotic.

        I don’t believe Mandelbrot is correct in the general case. Thermodynamics and Circuit Theory falsify his idea.

        >> weather is chaotically unpredictable, you say climate is predictable.

        1) When you said “without proof”, you missed a subtle point of logic. I’m not one of those people who believe that given enough time, man will be able to do anything, like redesign our own DNA and travel at warp speeds. I think we agree that it may be impossible. However, if someone says “it’s impossible”, that seems clearly wrong.

        2) I also think some people should be concerned about arguing themselves into a corner. If the climate were in fact chaotic, then the idea of a tipping point becomes more plausible. The long history of earth would seem to falsify this idea. This is also the pattern we see with other chaotic phenomena. No one can easily predict a path of a leaf, but we can say with some certainty that it will go downstream.

        3) We also have to consider that chaotic doesn’t mean totally unpredictable. Weather predictions are better today than they were 20 years ago.

        4) Thanks for the link to the CA discussion. I note that there was no significant argument in favor of a chaotic climate. One commenter pointed out that if climate is that which is caused by external forcing, then solar weather would introduce some chaos into the climate.

        5) Another point is that what we normally think of when we say weather actually involves only 0.07% of the thermal mass of the water/air system. Although there is still some chaos in the oceans, it’s far less than in the atmosphere.

        6) The issue with the Koutsoyiannis reference is one of logic. “Thus local model projections cannot be credible”. Clearly, climate is an average over the whole globe (land/sea/air), with the minimum timescale being a decade. Koutsoyiannis compares model results to a certain 8 places around the globe. This is weather by definition. My plan for a climate model would be to have N number of weather systems randomly traversing the globe, with no attempt to try to predict the where or when of any particular real life weather system. A climate model should NOT be considered an extension of a weather model.

      • VikingExplorer May 23, 2015 at 7:28 am

        >> This means that if weather is chaotic, the climate is as well … and we know the weather is chaotic.

        I don’t believe Mandelbrot is correct in the general case. Thermodynamics and Circuit Theory falsify his idea.

        Mandelbrot said nothing about a general case. He, just like us, is discussing whether there is a difference between weather and climate such that one is chaotic and one is not. He said no. He said that both are chaotic.

        Let me review the bidding. My statement that you seemed to find incorrect was:

        Viking, the real answer is that we do not know whether such a climate model is possible, even in theory.

        You said the first part was correct, up to “even in theory”.

        … even in theory …

        Willis, I think your comment is plausible and perhaps likely correct up to the above words.

        All of the areas that you list may be problematic but with enough time, they are knowable. The problem is certainly not inherently impossible. Not understanding something is an issue for 2015, but the rest of what you wrote is just a matter of analysis and computer horsepower.

        I said we don’t know if the climate is predictable in theory. So your claim is that we DO know that modeling the climate is possible in theory … perhaps you’d be so kind as to point to the study that in your mind shows that we can model chaotic systems over long time spans? Serious question, Viking. I see you believe we can model anything. I used to believe that. I’ve been writing computer programs of all types, including complex models of a variety of physical systems, for fifty-two years now. I no longer believe that anything can be modeled.

        I see also that you have not commented on the quote I gave from the IPCC:

        “…we should recognise that we are dealing with a coupled nonlinear chaotic system, and therefore that the long-term prediction of future climate states is not possible.”

        Heck, my statement was much weaker than that. I merely said we didn’t know if it was possible in theory … which is why your objecting was such a surprise. They flat-out state that it is NOT possible in theory.

        Finally, you say:

        We also have to consider that chaotic doesn’t mean totally unpredictable. Weather predictions are better today than they were 20 years ago.

        Weather predictions are incrementally better now than then, but much of that is because of the widespread use of ever more sophisticated satellite data. Better input gives better output. But even with all of the incremental increases, far too often you look at the forecast on Wednesday and plan your weekend barbecue … only to get rained out on Sunday. The problem is simply stated:

        In a chaotic system, divergence between model and reality increases with time.

        We can predict the weather five minutes from now with good accuracy. We can’t make much more than an educated guess about the weather five years from now. Divergence increases with time. And as Mandelbrot not only claimed but actually measured, this is true up to and including time spans of hundreds of years. The climate is no less chaotic than is the weather.

        Viking, the ugly reality is that there are systems for which we simply cannot compute the future evolution with any computer which is less complex than the system itself.

        w.
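Willis's statement that "in a chaotic system, divergence between model and reality increases with time" is easy to demonstrate numerically. A minimal sketch using the Lorenz '63 system with forward-Euler steps (standard parameters; the 1e-9 offset plays the role of model error):

```python
def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz '63 system."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)          # "reality"
b = (1.0, 1.0, 1.0 + 1e-9)   # "model": off by one part in a billion

for step in range(1, 40001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 10000 == 0:
        sep = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
        print(f"t = {step * 0.001:5.1f}  separation = {sep:.2e}")
```

The separation grows roughly exponentially until it saturates at the size of the attractor itself, at which point the "model" trajectory carries no information at all about where "reality" is.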

      • “The climate is no less chaotic than is the weather.”

        Not really. The average global temperature in 2024 won’t be more than a few tenths of a degree different than it was in 2014. The high temperature at my location 10 days from now is probably going to be a few degrees different than the high temperature today.

      • >> Mandelbrot said nothing about a general case

        Sorry, I misspoke. I disagree with Mandelbrot about climate being dominated by chaos. I believe this because the long term reconstructions of earth’s climate indicate as much.

        >> I said we don’t know if the climate is predictable in theory

        Ok, I see how we miscommunicated here. You meant we do not know whether such a climate model is possible (period). I agree with this rewording. Adding “in theory” is superfluous, and confused me, since by definition, a model is theory. It made me think that you meant that the underlying theory was unknowable.
        (In theory is also a synonym for unproven, but that would also be redundant.)

        >> I no longer believe that anything can be modeled.

        I agree with you that it may be impractical to model physical phenomena when the complexity of the model approaches the reality (e.g. at the molecular level). However, impractical is not the same as impossible.

        We can’t make much more than an educated guess about the weather five years from now. Divergence increases with time.

        Yes, and if we were trying to predict the weather, you would be 100% correct. However, can you please take a step back, put your abstract thinking cap on, and read this for comprehension:

        A climate model should NOT be designed as an extension of a weather model.

        One should not ask a climate model if it’s going to rain, or what the temperature is going to be on a certain day or month 10 years from now.

        If I asked for a prediction of future beach erosion, a scientist should not start by modeling fluid dynamics at the molecular level. If I asked for a prediction of what current would flow in response to a voltage, a scientist should not start by modeling electrodynamics at the molecular level.

        For an example of what I mean:

        …model fills a niche in between small-scale simulations that treat the detailed plasma physics of breakdown… lightning models

        As for reality, it’s not that ugly, because we’ve discovered that although it is truly chaotic at small scales, at the macro scale it typically follows well-defined rules. Close up, the sun is complete and total chaos, and yet some people consider it a constant.

        Bill 2 is correct about 2024. The long term history of our climate temperature falls into a fairly small range.

  22. It’s a noob praising a Jedi, but I really liked this essay, just as I have greatly admired Pat Frank’s other essays on climate science. I love the clarity of expression and the force of the logic presented. Here’s my favorite bit: “The England, et al., set of CMIP5 models produced constant air temperatures for multiple climate energy states, and multiple air temperatures for every single annual climate energy state.” It’s indeed “robust,” but not in the way England, et al. think.

  23. re: “The physical meaning of the recently published study of M. England, et al., [6] exemplified in Figure 3 below, is now apparent. ”

    I posted this a bit late on a prior page so I’ll pass it on here also. There is a post by England, on the un-skeptical site Skeptical Science where he talks about that paper and says:

    http://www.skepticalscience.com/climate-hiatus-doesnt-take-heat-off-global-warming.html
    “Until now, however, no evaluation has been made of the possible consequences for long-term projections. Specifically, if the variability controlling the current hiatus is linked to longer-term sequestration of heat into the deep ocean, this might require us to recalibrate future projections.

    With this in mind, we decided to test whether 21st century warming projections are altered in any way when considering only simulations that capture a slowdown in global surface warming, as observed since 2001.”

    So he thinks that models that weren’t dealing with long-term ocean sequestration of heat, but somehow accidentally predicted the pause, have relevance to claims about future warming if the ocean were involved in a way that wasn’t in the models? Wow is that absurd, as is of course the paper’s claim of the “robust nature of twenty-first century warming projections” when only a minuscule fraction of the model runs matched the pause. The fact that they are all tweaked to meander a bit but wind up around the same high range eventually and some of them accidentally happened to match the pause isn’t indicating anything “robust” about them.

    • One thing I haven’t come across: what were the ‘tweaks’ that produced accurate predictions of the pause?
      And I take it that all the other models have now been discarded?

      • The issue is that merely out of a large number of model runs, due to inbuilt random variations, they accidentally match the pause, just as you can accidentally flip a coin and get heads a few times in a row. Picture a random walk tweaked to have a slight upward bias, but small enough that for some periods in some runs it can cycle up and down for a while without rising much, generating a “trend” that is somewhat flat for a number of years, with that flat stretch accidentally falling in the right years.

        That isn’t exactly what is going on (the internal structure isn’t directly a random walk even if it essentially maps onto one in its results), but it illustrates the concept. Tuning the model to match the past likely makes it a bit more likely to reproduce the pause by chance, due to the characteristics of whatever random walk it in essence corresponds to.

        By analogy: in a data set actually generated by, say, a quadratic or other polynomial, there might be sections where the curve is almost flat and happens to match a linear fit, but that linear fit will then diverge from the more complicated reality.
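The coin-flip picture above can be made concrete. This is a toy random walk, not any actual climate model; the drift and noise values are arbitrary assumptions chosen for illustration:

```python
import random

random.seed(0)  # fixed seed so the illustration is repeatable

# A random walk with a small upward bias: noisy year-to-year steps
# scattered around a slight warming drift.
drift, noise = 0.02, 0.1
walk = [0.0]
for _ in range(1000):
    walk.append(walk[-1] + drift + random.gauss(0.0, noise))

# The long-run trend is upward...
print(walk[-1])  # well above zero

# ...yet some 15-step windows show no net rise at all: a "pause" by chance.
flat_windows = sum(1 for i in range(len(walk) - 15)
                   if walk[i + 15] - walk[i] <= 0.0)
print(flat_windows)  # a nonzero count of pause-like stretches
```

Runs like this trend upward over the long haul, yet routinely contain multi-step stretches with no net rise at all, pause-like intervals produced by chance alone.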

        The very fact that they can claim “oh the ocean must be swallowing the heat in a way we didn’t account for in the models” but then act as though they can still look at those models for guidance merely because they found some runs that randomly matched reality, suggests a lack of awareness on their part that to be credible models can’t merely accidentally curve-fit to a small set of data, that they actually need to credibly claim to model the underlying processes involved. Getting the “right” answer for the wrong reasons doesn’t lend credibility to any future predictions.

  24. As far as I know, the internals of the climate models are unknown to the public. Do these models have some theory behind them that they are trying to demonstrate? If so, where is an English-language version of this theory, so that we may look it over?

    Do they adequately consider winds, ocean currents, clouds, convection, conduction, advection, rain, storms, planetary motion, the effect of gravitation upon the mass of the atmosphere, variations in the output of the sun, and at least a dozen other factors I have seen mentioned in various places? Perhaps they do, but how would I know for sure?

    Has any group used the mega funds of government to run a model based on a theory other than the prevailing one of CO2 dominated back-radiation warming the surface? What if the present consensus is wrong? The only thing I know for sure is that the current consensus and the current models appear to be giving wrong results. Perhaps looking in another direction is warranted?

    • “[…] The only thing I know for sure is that the current consensus and the current models appear to be giving wrong results. Perhaps looking in another direction is warranted?”

      When it comes to climate models, we would be wise to avert our eyes. Studies have shown that prolonged gazing at a spaghetti graph of climate model ensembles reduces visual acuity by 38% and lowers the I.Q. by 42 full points.

      Links? Why should I provide links to the studies? People would just try to find something wrong with them ;o)

  25. Climate philosophy, including models, is potentially science when constrained to a limited but variable frame of reference (i.e. scientific domain) in both time and space, where phenomena can be observed, reproduced, and characterized through deduction.

    The innovation of the scientific method, which acknowledges the chaotic (i.e. incompletely or insufficiently characterized and unwieldy) nature of the system, was to establish a firm separation of science and other logical domains: philosophy, faith, and fantasy. Theories are first classified as philosophy until they are evaluated in the scientific domain. Any theory for which there is no probable path from philosophy to the scientific domain is either an article of faith or fantasy. The liberal use of inference (aka “post-normal science”), including simulation models in climate “science”, is the creation of knowledge, rather than its observation.

  26. There is much to criticize in climate models, so why come up with this sort of nonsense? With respect to the lower panel of Figure 1, the author asks “how is it possible for the lower panel uncertainty bars to be so large”. It is because they are calculated with the implicit assumption of infinite climate sensitivity. But that is wrong. What happens is that if more radiation reaches the surface, then it warms up and more radiation is emitted to space. As a result, the errors do not add up randomly.

    When you hear someone blithely claim that measurements have “no physical meaning”, you should suspect that you are listening to a windbag.

    • Re: You at 2:54pm today: “… measurements have ‘no physical meaning’, … ”

      I looked for that quote in Dr. Frank’s post and could not find it (were you quoting someone else?).

      I found this in paragraph 4:

      “… projections of future global air temperatures make them predictively useless. In other words, they have no physical meaning.” Dr. Pat Frank

      • Janice,

        “This post will show that hindcasts of historically recent global air temperature trends also have no physical meaning”. Oops. I screwed up and somehow conflated that with the people who make all sorts of ridiculous criticisms of the observed record. Pat Frank did not do that. So by misquoting him, I undermined my credibility. I hate it when I do that. Especially when my basic point was right anyway.

      • Dear Mike M.,

        By promptly and completely admitting your error, your credibility is completely intact. And your character was polished up a bit, too!

        And if your basic point was that the IPCC’s GCM’s are junk: I agree!

        Janice

    • Mike M, “[the uncertainty bars] are calculated with the implicit assumption of infinite climate sensitivity.

      Rather, they’re calculated using a successful GCM emulator; one that shows GCM air temperature projections are mere linear extrapolations of GHG forcing.

      The essay is about the behavior of climate models, not about the climate.

      How you conceive of these two sentences, “What happens is that if more radiation reaches the surface, then it warms up and more radiation is emitted to space. As a result, the errors do not add up randomly.” as a logical sequitur is anyone’s guess.

      There was nothing “blithe” about my claim. It’s all analytically justified (2.9 MB pdf). And the propagated error is systematic, not random.

  27. The models do not provide a robust measurement system. The variance is too high. I mean high compared to observed variance of historical temperatures.

    Obviously such variance means the scientists do not agree with each other. They may claim to agree but their results prove mathematically that in reality they do not agree. Claiming that they agree is just being nice to each other, but means nothing mathematically. Their nice words are logically inconsistent with their theories.

  28. The models are an art form.
    A prerequisite of the model projections is that they must all show some warming; after all, that’s the raison d’être for the IPCC.
    It’s no accident that the projected warming range is from the barely credible without being risible, to the not quite ignorable.

  29. Really enjoyed the story of Parkes Radio Telescope. The Alien signals turned out to be the microwave in the kitchen cooking the odd pie. When will we learn the short sighted thermometer reader breathes on the thermometer bulb. Can’t wait.

  30. Well really! Such gratuitous abuse, next you’ll be claiming Astrology isn’t a Science!

    • Here is the latest definition of science, which seems pretty good:

      Science is the pursuit of knowledge and understanding of the natural and social world following a systematic methodology based on evidence.
      What is science?

      Astrology fails this test. The definition makes no mention of unique solutions or that model simulations are outside of science.

      • VikingExplorer

        You provide a definition of science you like then say

        The definition makes no mention of unique solutions or that model simulations are outside of science.

        Which demonstrates you don’t have a clue what you are talking about.

        Richard

      • That’s an impressive argument Richard.

        The head post makes a demarcation argument, and excludes from science any analyses that don’t provide “unique solutions” or include multiple model simulations.

        I’m just pointing out that the head post definition of science contradicts the most widely held ideas about science. In fact, I would speculate that besides you and Pat, almost no one else on earth holds such an extremely narrow definition of science.

      • One can discuss the definitions of science and disagree, but Richard’s understanding of science is surely spot-on in one field – engineering. Since ultimately the issue is one of climate engineering, this strict understanding of science would be applicable. After all, would anyone build a bridge or a house in the same way climatologists build their ‘science’?

      • Jon,

        Yes, but not sure if you understand the narrowness of the head post definition. One, many engineering disciplines use computer model simulations to do their work, and to prevent Tacoma Narrows type situations. I know I did as an aerospace generator engineer.

        Two, according to his strict definition, engineering (applied science), along with medicine (applied epidemiology / biology), are not sciences. Engineers do not form hypotheses and perform falsification experiments.

        According to the widely held science council definition, engineers and doctors are doing science. According to the head post, they are not. As Peter Ward explained above, the head post definition essentially excludes everything except basic chemistry and physics.

      • VikingExplorer, guess what “… based on evidence” means in terms of measurement and prediction. That might (might) lead you to discover a conjunction between evidence and accuracy.

        Perhaps you will wonder at a connection, if any, between accuracy and answer. You may then explore how one determines limits of accuracy, and whether, to do so, one needs a strictly bounded solution. But then again, you may not.

        Reasoning sequentially through a problem is the mechanism necessary to rise above arguing from authority, which is all you’ve done here.

    • wwf, there’s no doubt that the radiation physics of CO2 is correct.

      CO2 absorbs the 15 micron radiation from the warm surface, and dumps it off into the kinetic energy of the atmosphere.

      The central question is how the climate responds to that kinetic energy. Climate models assume there’s only one response: atmospheric warming. But the real climate has many response channels. If a different one of them dominates (such as convection), there may be no detectable warming at all from extra CO2.

      So far, given the completely unremarkable behavior of the climate over the last 50 years, the latter eventuality seems much more likely.

      • Pat wrote: “The central question is how the climate responds to that kinetic energy.”

        CO2 both absorbs and emits outgoing OLR (and DLR). The net result of both processes is that about 3.7 W/m2 less radiation will reach space when CO2 has doubled IF nothing else changed. There is no doubt that the earth MUST respond by warming until it emits an additional 3.7 W/m2 – restoring radiative equilibrium. Radiation is the only way for energy to enter and exit the planet. The central question is how much the planet will need to warm to emit an additional 3.7 W/m2. If the earth behaved like a blackbody, the answer is about 1.2 degC. Satellites show that OLR from clear skies increases by about 1 W/m2 per degC of warming less than expected, due to changes in water vapor and lapse rate (two of your response channels). There is no doubt that surface albedo will decrease somewhat due to changes in snow and ice cover. The answer to the central question mostly depends on clouds.

      • Collisional decay of CO2* is about 10^5 times faster than radiative decay, at 1 atmosphere pressure, Frank. Collisional decay dominates the relaxation of CO2* throughout the entire troposphere. Radiative decay of CO2* is virtually absent. It doesn’t contribute anything to tropospheric warmth, or to loss of energy from the troposphere to space. (It does dominate in the stratosphere, where the gas is too dilute to give collisional decay much probability.)

        The “back radiation” everyone talks about is not 15 micron radiation from decay of CO2*. It’s black body radiation. The kinetic energy resulting from CO2* collisional decay is heat, and shows up as slightly increased tropospheric black body radiation, which of course radiates equally up and down. Black body radiation and kinetic energy are different manifestations of the same thing: thermal energy.

        Your comment that, “There is no doubt that the earth MUST respond by warming…” is not correct in its insistence on warming.

        It’s true that the extra energy (kinetic or black body) must be removed to restore energetic equilibrium. But TOA radiative loss can just as easily be through the latent heat of water vapor condensation in the upper atmosphere. Tropical precipitation need increase by only a couple percent to achieve that effect. There need be no perceptible increase in tropospheric sensible heat at all. If convection dominates the climatic response to the kinetic energy deposited into the troposphere by CO2*, loss of energy through latent heat of condensation is likely the dominant outcome.

        The physical theory of climate has nowhere near the resolution to decide which climate response channel will dominate in removing the excess energy. That’s why your insistence on warming has no weight. Earth may indeed warm from CO2. Or it may just as well not. No one knows. So far, there’s zero evidence that it has.
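The branching ratio implied by the 10^5 figure above can be written out in a couple of lines (a sketch using that round number, which itself varies with pressure):

```python
# If collisional de-excitation of CO2* is ~1e5 times faster than radiative
# decay at 1 atm (the round figure quoted above), the fraction of excited
# CO2 that relaxes by emitting a photon is set by the ratio of the two rates:
k_ratio = 1e5  # collisional rate / radiative rate (illustrative round number)
radiative_fraction = 1.0 / (1.0 + k_ratio)
print(radiative_fraction)  # about 1e-5, i.e. ~0.001% of relaxations radiate
```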

      • Pat: You and I agree that radiation is not trapped by GHGs. Collisional excitation and relaxation of the vibrational excited states of GHGs is much faster than absorption and emission throughout the troposphere AND most of the stratosphere (local thermodynamic equilibrium). Emission of OLR (to space) and DLR is controlled by local temperature, not local radiation. Climate models and radiative transfer calculations (MODTRAN) are based on this assumption.

        Pat wrote: “But TOA radiative loss can just as easily be through the latent heat of water vapor condensation in the upper atmosphere. Tropical precipitation need increase by only a couple percent to achieve that effect. There need be no perceptible increase in tropospheric sensible heat at all. If convection dominates the climatic response to the kinetic energy deposited into the troposphere by CO2*, loss of energy through latent heat of condensation is likely the dominant outcome.”

        Latent heat obviously cannot escape directly to space; it is first converted simply to heat by condensation and then to radiation (by collisional excitation of CO2 and other GHGs). The altitude where condensation occurs is warmer than it would be without latent heat and therefore emits more OLR and DLR than it would have otherwise.

        The surface of the earth would be cooler if latent heat were transferred faster by convection from the surface to the upper troposphere (where most photons escaping to space are emitted). However, latent heat can only escape faster when the upper troposphere is warmer. Spontaneous buoyancy-driven convection develops only when the rate of cooling with altitude (lapse rate) is greater than a critical threshold, so convection shuts down when the upper atmosphere get too warm through convection. Increasing humidity decreases the lapse rate (lapse rate feedback), allowing the upper atmosphere to warm more rapidly than the surface and more OLR to escape for a given rise in surface temperature. Water vapor feedback does the opposite. Instruments in space tell us how much OLR through clear skies varies with surface temperature, i.e. the combined effects of water vapor and lapse rate feedback. Observations agree with climate models that a 1.2 degC rise in surface temperature produces a 2.5 W/m2 increase in OLR, not the 3.7 W/m2 increase expected for a blackbody.

        Pat wrote: “The physical theory of climate has nowhere near the resolution to decide which climate response channel will dominate in removing the excess energy. That’s why your insistence on warming has no weight. Earth may indeed warm from CO2. Or it may just as well not. No one knows. So far, there’s zero evidence that it has.”

        We do know some things about how “excess energy” will be removed. 1) It will be by radiation to space. 2) Physics tells us that an instantaneous doubling of CO2 will reduce OLR by about 3.7 W/m2. It also tells us that a blackbody emitting 236.3 W/m2 will need to warm 1.2 degC to radiate 240 W/m2 (equal to post-albedo incoming SWR). 3) OBSERVATIONS tell us how our climate responds to surface warming* by increasing OLR through clear skies to space. See Figure 1B in http://www.pnas.org/content/110/19/7568.full.pdf The dotted line shows how a blackbody would behave. Notice the remarkably small error bars for climate science. Climate models are correct about OLR from clear skies. The rest of the paper shows that climate models are horrible at modeling the OLR response to surface warming from cloudy skies and the SWR response from clear (surface albedo) and cloudy skies (cloud albedo). 4) Common sense tells us that warming will reduce surface albedo (ice albedo feedback). 5) We don’t know how clouds will respond.

        * In this paper, surface warming is not “global warming”. Surface warming is “seasonal warming” associated with the asymmetric distribution of land (low heat capacity) and ocean (high heat capacity) between the hemispheres. (GMST does increase 3.5 degK every northern summer, but we eliminate this seasonal change when we calculate temperature anomalies.) Seasonal warming has limitations as a model for GW, but climate models should get both right. Seasonal changes in SWR from clear skies have little to do with the surface albedo feedback that will follow global warming (aka ice-albedo feedback).
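The blackbody arithmetic in point 2 can be checked directly. A sketch via the Stefan-Boltzmann law; it yields roughly 1 K for an ideal blackbody at these fluxes, with the commonly quoted ~1.2 degC depending on exactly which effective temperature and assumptions are used:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def bb_temperature(flux_w_m2):
    """Temperature (K) at which a blackbody emits the given flux (W/m2)."""
    return (flux_w_m2 / SIGMA) ** 0.25

# Warming needed for a blackbody to go from emitting 236.3 to 240 W/m2:
dT = bb_temperature(240.0) - bb_temperature(236.3)
print(dT)  # roughly 1 K for an ideal blackbody at these fluxes
```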

  31. An excellent post, thank you. The same conclusion is accessible from information theory which forms the basis of our modern communication infrastructure. It is an axiomatic premise of that discipline that information gain can only occur when a contingency is resolved – that is to say, we can only lean something when we are surprised by a result. The contingency created by running a model of some physical system whose parameters are under-constrained is illustrated beautifully by the model output-plots in this article. But the information gain required to increase our knowledge can only be obtained when that contingency is resolved into a particular evolution that matches experimental data. And while a given set of parameters yields a particular model-state evolution, the only information gained in that “experiment” is about the model itself! No information about the physical world can be conveyed by running a computer simulation, no matter how complex. Either the model is perfectly constrained and thus the outcome certain before the operator hits the run button or under-constrained in which case the outcome is determined completely by the parameter values guessed at by the programmer.

    Feynman diagnosed the problem long ago:
    “There is a computer disease that anybody who works with computers knows about. It’s a very serious disease and it interferes completely with the work. The trouble with computers is that you ‘play’ with them!”

  32. @Pat Frank

    This is a very interesting analysis and from what little I know about the Monte Carlo methods used in climate modelling, I know enough to ask what type of Monte Carlo method is being used. Is it applying a Monte Carlo method or making a Monte Carlo simulation, for example. Basically you are saying they treat all possible outcomes as valid. I am not suggesting they are equally valid, just that I understand you are saying they are all treated as equally valid possibilities. There are Monte Carlo methods that do not treat all outcomes as equally likely, informed either by past records of variation, or models of the frequency of the input values for that dimension (freedom to vary).

    Well, if they are considered equally valid, there are problems, because the likelihood of all known dimensions of the climate being simultaneously ‘at the limits’ becomes vanishingly small, to the point of being a ‘rare event’ which would normally be ignored. Are they doing that? Ignoring possible ‘rare events’? (which has a definition)
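The ‘simultaneously at the limits’ point is just multiplication of small probabilities. A sketch with made-up numbers, assuming independent dimensions each given a 5% chance of sitting at its limit (real model dimensions are not independent, so this is only the idealized case):

```python
# If each of n independent dimensions has a 5% chance of being "at its
# limit", the chance that ALL of them are there simultaneously collapses
# geometrically with n.
p_single = 0.05  # assumed per-dimension tail probability (illustrative)
for n in (1, 3, 5, 10):
    print(n, p_single ** n)
# By n = 10 the joint probability is below 1e-13: a rare event by any standard.
```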

    An alarmist will argue that a rare event must be catered for, like a 200 year storm, or a 2000 year storm, or an extinction event. But as Monckton points out, catering to manage a rare event has a cost and the probability is so low it will be cheaper and more progressive for society in general to cope rather than ‘prevent’, as if ‘prevention’ is possible in the first place.

    What your article confirms for me is that the average of a bunch of model outputs (a questionable procedure in the first place) is dependent for alarming content on the existence of high-end predictions from the likes of the Canadian simulator in British Columbia, the second hottest-running of all models. If they eliminated the worst and most off-the-mark models, the central estimate would drop considerably and be un-alarming. So we know where that is coming from.

    They need these ‘equal treatment and equally possible’ limits on each dimension of each simulation, and all the manifestly hot-running models, to maintain the fiction that the average of an estimated future world is hot, very hot, unless we hand over the keys to the global energy economy.

    Exposing the math of the models is a good plan. What can’t work, won’t work. What can’t predict, won’t predict.

  33. Finally someone who notices that the emperor has no clothes on. I have always regarded these model results as nonsense. They are made possible by giving these incompetents nice toys like supercomputers for free so they can pretend to do research. Without supercomputers this stupidity could not exist. In the sixties we did not have supercomputers. We did not even have computers for spectrochemical calculations and simply used graph paper. Direct readers were just coming in but they went to large steel or aluminum producers, not aerospace where I worked. The concept of throwing out hundreds or thousands of attempted graphs was foreign to me until I accidentally started doing climate research. I have called for abandonment of the entire climate modeling enterprise for the simple reason that in 27 years of trying they have never produced any meaningful predictions. That goes for everyone including Hansen whose first attempt in 1988 to predict global temperature was a disaster. Close down the operation and fire the operators. It can be done. Nixon fired ten thousand lunar lander workers for nothing when he cancelled the last three moon shots.

    • “Nixon fired ten thousand lunar lander workers for nothing when he cancelled the last three moon shots.”

      To pay for war…

  34. I happened to notice the following comment posted at the prestigious website “skeptical science” a couple days ago:

    “The IPCC has been quoted as saying ‘The chaotic nature of weather makes it unpredictable beyond a few days.’ However, they assert that ‘when weather is averaged over space and time, the fact that the globe is warming emerges clearly…’

    “This means IPCC climatologists fully understand that predicting weather beyond a few days with a computer model is exactly as effective as predicting it with, say, chicken entrails.

    “Knowing this, they go ahead and consult their entrails, examining them carefully to learn what the weather might be like in 50 years, if only entrails had predictive value.

    “But since they know this is silly, they don’t stop there. They go on to examine the entrails of a MILLION chickens. They average the results of the million chicken-entrails predictions together and, voila, pronounce the result ‘scientific.’

    “It is amazing what nonsense people will allow themselves to believe.”

    The scientistical skeptics at “skeptical science” were not about to let this so-called “argument” stand without a withering logical counterattack. No ma’am!

    Mustering all the facts and logic and computer models at their disposal, they immediately responded…
    … by deleting the post. Because it consists of “…worn-out sloganeering, strawmen, and argumentative language” that their readers should not have to see.

    They went on to explain:

    “If you are not prepared to read the actual science with view to understanding, then you are in no position to comment on it. Commentators here have attempted to explain the difference between weather and climate what is predictable or not, but apparently to no avail. … Moderating this site is a tiresome chore, particularly when commentators repeatedly submit offensive or off-topic posts.”

    http://www.skepticalscience.com/argument.php?p=3&t=116&&a=227#111619

    That pretty much nails it. Nothing could be more persuasive.

    I share their words of logic and wisdom with all of you WUWT readers so that you will recognize the obvious folly of your beliefs. Repent, you heretics!

    • SkS is a waste of time. Just stay away. If they are interested in a debate they can come here.
