Guest essay by Eric Worrall
In the wake of the science committee testimony, Climate Scientist Kevin Trenberth has insisted that Climate Science does follow the scientific method. But Trenberth himself may have strayed outside accepted scientific methodology.
Yes, we can do ‘sound’ climate science even though it’s projecting the future
Nobody can observe events in the future so to study climate change, scientists build detailed models and use powerful supercomputers to simulate conditions, such as the global water vapor levels seen here, and to understand how rising greenhouse gas levels will change Earth’s systems. NCAR/UCAR, CC BY-NC-ND
April 6, 2017 4.01am AEST
Authors
Kevin Trenberth
Distinguished Senior Scientist, National Center for Atmospheric Research
Reto Knutti
Professor, Eidgenössische Technische Hochschule (ETH) Zürich
Increasingly in the current U.S. administration and Congress, questions have been raised about the use of proper scientific methods and accusations have been made about using flawed approaches.
This is especially the case with regard to climate science, as evidenced by the hearing of the House Committee on Science, Space and Technology, chaired by Lamar Smith, on March 29, 2017.
…
Chairman Smith accused climate scientists of straying “outside the principles of the scientific method.” Smith repeated his oft-stated assertion that scientific method hinges on “reproducibility,” which he defined as “a repeated validation of the results.” He also asserted that the demands of scientific verification altogether preclude long-range prediction, saying, “Alarmist predictions amount to nothing more than wild guesses. The ability to predict far into the future is impossible. Anyone stating what the climate will be in 500 years or even at the end of the century is not credible.”
…
Why climate scientists use models
The wonderful thing about science is that it is not simply a matter of opinion but that it is based upon evidence and physical principles, often pulled together in some form of “model.”
In the case of climate science, there is a great deal of data because of the millions of daily observations made mostly for the purposes of weather forecasting. Climate scientists assemble all of the observations, including those made from satellites. They often make adjustments to accommodate known deficiencies and discontinuities, such as those arising from shifts in locations of observing stations or changes in instrumentation, and then analyze the data in various ways.
…
Projections, not predictions
With climate models as tools, we can carry out “what-if” experiments. What if the carbon dioxide in the atmosphere had not increased due to human activities? What if we keep burning fossil fuels and putting more CO2 into the atmosphere? If the climate changes as projected, then what would the impacts be on agriculture and society? If those things happened, then what strategies might there be for coping with the changes?
These are all very legitimate questions for scientists to ask and address. The first set involves the physical climate system. The others involve biological and ecological scientists, and social scientists, and they may involve economists, as happens in a full Intergovernmental Panel on Climate Change (IPCC) assessment. All of this work is published and subject to peer review – that is, evaluation by other scientists in the field.
The question here is whether our models are similar enough in relevant ways to the real world that we can learn from the models and draw conclusions about the real world. The job of scientists is to find out where this is the case and where it isn’t, and to quantify the uncertainties. For that reason, statements about future climate in IPCC always have a likelihood attached, and numbers have uncertainty ranges.
The models are not perfect and involve approximations. But because of their complexity and sophistication, they are much better than any "back-of-the-envelope" guesses, and their shortcomings and limitations are known.
…
Trenberth has a lot of faith in his models – so much so, a few years ago he demanded that the “null hypothesis” be reversed. If accepted, this would have meant a reversal of the burden of proof regarding the assumption of human influence on global climate.
…
“Humans are changing our climate. There is no doubt whatsoever,” said Trenberth. “Questions remain as to the extent of our collective contribution, but it is clear that the effects are not small and have emerged from the noise of natural variability. So why does the science community continue to do attribution studies and assume that humans have no influence as a null hypothesis?”
To show precedent for his position, Trenberth cites the 2007 report by the Intergovernmental Panel on Climate Change, which states that global warming is "unequivocal" and "very likely" due to human activities.
…
Read more: https://wattsupwiththat.com/2011/11/03/trenberth-null-and-void/
Trenberth’s demands for a reversal of the burden of proof with regard to climate were rejected by the scientific community. Even climate advocate Myles Allen, head of University of Oxford’s Atmospheric, Oceanic and Planetary Physics Department, thought Trenberth’s demands for a reversal of the burden of proof were wrong.
…
“The proponents of reversing the null hypothesis should be careful of what they wish for,” concluded Curry. “One consequence may be that the scientific focus, and therefore funding, would also reverse to attempting to disprove dangerous anthropogenic climate change, which has been a position of many sceptics.”
“I doubt Trenberth’s suggestion will find much support in the scientific community,” said Professor Myles Allen from Oxford University, “but Curry’s counter proposal to abandon hypothesis tests is worse. We still have plenty of interesting hypotheses to test: did human influence on climate increase the risk of this event at all? Did it increase it by more than a factor of two?”
###
All three papers are free online:
Trenberth, K., "Attribution of climate variations and trends to human influences and natural variability": http://doi.wiley.com/10.1002/wcc.142
Curry, J., "Nullifying the climate null hypothesis": http://doi.wiley.com/10.1002/wcc.141
Allen, M., "In defense of the traditional null hypothesis: remarks on the Trenberth and Curry opinion articles": http://doi.wiley.com/10.1002/wcc.145
Read more: Same link as above
The problem with climate science is there is no way to test the core prediction, that the Earth will heat substantially in response to anthropogenic CO2 emissions, other than to wait and see.
Important secondary predictions which should be observable by now, such as the tropospheric hotspot or a projected acceleration in sea level rise, have not manifested.
Even more embarrassing, mainstream models cannot even tell us what climate sensitivity to CO2 actually is.
Is equilibrium climate sensitivity a 1.5°C temperature increase per doubling of CO2? Or is it 4.5°C per doubling of CO2? The IPCC Fifth Assessment Summary for Policy Makers cannot give you that answer.
… The equilibrium climate sensitivity quantifies the response of the climate system to constant radiative forcing on multi-century time scales. It is defined as the change in global mean surface temperature at equilibrium that is caused by a doubling of the atmospheric CO2 concentration. Equilibrium climate sensitivity is likely in the range 1.5°C to 4.5°C (high confidence), extremely unlikely less than 1°C (high confidence), and very unlikely greater than 6°C (medium confidence). The lower temperature limit of the assessed likely range is thus less than the 2°C in the AR4, but the upper limit is the same. This assessment reflects improved understanding, the extended temperature record in the atmosphere and ocean, and new estimates of radiative forcing. {TS TFE.6, Figure 1; Box 12.2} …
Read more: IPCC Fifth Assessment WG1 Summary for Policy Makers (page 14)
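To make the spread concrete, here is a back-of-the-envelope sketch (my illustration, not a calculation from the IPCC report) of the equilibrium warming implied by the two ends of the likely range, using the standard logarithmic dependence of forcing on CO2 concentration:

```python
import math

def equilibrium_warming(sensitivity, c0_ppm, c_ppm):
    """Equilibrium warming (deg C) for a given climate sensitivity
    (deg C per doubling), using the standard logarithmic dependence
    of radiative forcing on CO2 concentration."""
    return sensitivity * math.log2(c_ppm / c0_ppm)

# Pre-industrial ~280 ppm to a full doubling at 560 ppm.
for s in (1.5, 4.5):
    print(f"S = {s} C/doubling -> {equilibrium_warming(s, 280, 560):.1f} C")
```

At a full doubling the two ends of the range differ by a factor of three, which is the essay's point: the range has not narrowed.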
Why is this range of possible climate sensitivities embarrassing? Consider the Charney Report, from 1979;
… We believe, therefore, that the equilibrium surface global warming due to doubled CO2 will be in the range 1.5°C to 4.5°C, with the most probable value near 3°C …
Read more: http://www.ecd.bnl.gov/steve/charney_report1979.pdf (page 16)
As theories are refined, key physical quantities should be resolved with greater accuracy. For example, the first measurement of the speed of light, conducted in 1676, was about 26% off the true value: a remarkable estimate for that period of history, but still wide of the mark. Further research, with better-quality measurements and calculations, resolved the original uncertainty, and the speed of light is now known to a high degree of accuracy.
This failure of climate science to follow the normal scientific progression to more accurate estimates should be a serious concern. This lack of convergence on a central climate sensitivity estimate, after decades of research effort, strongly suggests something is missing from the climate models.
Whatever the missing or mishandled factor is, it has a big influence on global climate. The evidence for this is the embarrassingly broad range of estimates for climate sensitivity to a doubling of CO2, and the failure of those estimates to converge.
If climate models were capable of producing accurate predictions, if they showed any sign of converging on a reasonable climate sensitivity estimate, if predicted secondary phenomena such as the tropospheric hotspot and sea level rise acceleration were readily observable, there would be a lot less resistance to Trenberth’s apparent demand that climate model projections be accepted as somehow equivalent to empirical observations.
It should be obvious to anyone there are way too many loose ends to even come close to such acceptance.
According to the Oxford English Dictionary, the scientific method is "a method of procedure that has characterized natural science since the 17th century, consisting in systematic observation, measurement, and experiment, and the formulation, testing, and modification of hypotheses."
Any suggestion that model projections should be accepted as a substitute for systematic observation and experiment, any suggestion that model output from models which have failed several key tests can be relied upon, any suggestion that defective model output constitutes proof of human influence on global climate, in my opinion utterly violates any reasonable understanding of what the scientific method should be.

Yes. I’m sure Lysenko would have defended his “methods”
The only reason this is relevant is because politicians and special interests have chosen to use raw science as if it is verified data that is safe to use.
Climate science is very much like Cosmology. But I don’t see cosmologists demanding that we dramatically change how we live because there may or may not be more dark matter than we thought. Or that dark matter even exists.
As a discipline it also appears too casual about the way the fundamental data are subject to significant revisions, with a concomitant disregard for how the previous 'understandings' were modelled on the now-revised data.
The term experiment has a specific meaning in science and running computational models is not covered by it. Therefore what they actually carry out is anything but an experiment.
Terrestrial climate is a non-equilibrium, irreproducible (chaotic), closed thermodynamic system, with only radiative exchange with its environment. It is far too big to be replicated in the lab, so at this level we are trying to understand a single run of a unique physical entity, an impossible task.
However, we could create other non-equilibrium, irreproducible (chaotic), closed thermodynamic systems in the lab at will: for example, by putting a semitransparent container with a fluid in it onto a thermally insulated rotating table, enclosed in a vacuum chamber with walls cooled by liquid nitrogen, and irradiating it with light. This kind of system belongs to one of the last uninvestigated fields of classical physics.
Then set up a computational model of that system and try to predict the effect of changing the infrared absorptivity of the fluid in it or whatever. That’s an experiment.
You can run it as many times as you wish and set its parameters at each run to any specific value, then observe the ensuing state.
I often look into a crystal ball to carry out my experiments. It has the same predictive power as a climate model.
ehhh
scratch
I think Trenberth's models still have not captured the true nature of what happens at the TOA…
https://wattsupwiththat.com/2017/04/07/questions-on-the-rate-of-global-carbon-dioxide-increase/comment-page-1/#comment-2474983
I was just talking about that.
It’s a travesty.
Sure you can, you just have to look in the right place, which isn’t after averaging all of the data away.
MEASUREMENTS 🙂
Actually, you just touched on a core truth
I found no warming in the SH and more warming in the NH, including more ice melt in the Arctic…
So, to me, it seems earth’s core has been moving, especially North east, going by the movement of earth’s magnetic north pole. The elephant in the room was all but forgotten…
Come down 1 km into a gold mine here and when you start sweating, you realize how big this elephant really is…
I mentioned curve-fitting data with computer models using a fudge factor in an earlier post. I was surprised that there were no comments. After thinking about it, however, I realize that there are extremely few people, even scientists, who understand what the term curve-fitting means when making models of a chaotic system.
This is the problem. Even most scientists have no idea what the pitfalls are when it comes to modeling a complex system. The people in climate science who do understand – will not talk about it. And, it is obvious why they won’t talk about it. I believe it can be called willful ignorance.
Kermit Johnson:
You wrote
With respect, I suggest the reason nobody commented is because everybody agreed.
However, since you want comments, I support your post with the following two points.
The fudge factor is the assumed value of negative forcing from aerosols. Refs:
Courtney RS, ‘An assessment of validation experiments conducted on computer models of global climate using the general circulation model of the UK’s Hadley Centre’. Energy & Environment, Volume 10, Number 5, pp. 491-502, September 1999
and
Kiehl JT, 'Twentieth century climate model response and climate sensitivity'. GRL vol. 34, L22710, doi:10.1029/2007GL031383, 2007
There are many variables in climate models, and that creates problems when curve fitting because, as John von Neumann said of curve fitting, "With four parameters I can fit an elephant, and with five I can make him wiggle his trunk."
Richard
Why do you say ‘may’ have strayed outside of scientific method?
Call it how it is sir.
Enrico Fermi was once asked if he followed the scientific method. "Yes, if I can't think of something better." 🙂
https://en.m.wikipedia.org/wiki/List_of_things_named_after_Enrico_Fermi
The scientific method, as widely described and promoted, is based on experiments to test hypotheses.
Many branches of natural science can only observe the world and construct theories and hypotheses, but experimental verification can be difficult to impossible (or it might be theoretically possible, but practically impossible).
Making a climate prediction about the effects of CO2 in the atmosphere on future temperature, based on educated guesses, and waiting 25 years to see if your prediction is correct isn't really an experiment, because there's no way of controlling any of the multiple conditions, all of which are varying all the time.
Laboratory simulation would be a valid, scientific approach to testing climate hypotheses or theories. With all the money that’s been thrown at collecting data and making computer models, it would surely be possible to build an atmospheric laboratory where you could simulate observed atmospheric conditions and start varying input conditions, one at a time, and measure their effects. Of course you could only simulate small parts of the atmosphere at any one time, but you could integrate multiple tests into a simulation of the whole atmosphere.
That would be a genuinely scientific approach to climate science. That it hasn’t been done is
disappointing / pathetic / outrageous. Or has it been done, the results didn't support AGW, and they got disappeared? In the present-day "climate" of opinion, that would not be surprising.
Of course it would be. And you don’t even need to emulate the atmosphere or a part of it, any non-equilibrium irreproducible closed thermodynamic system would suffice.
BTW, a system is irreproducible if microstates belonging to the same macrostate can evolve into different macrostates in a short time. Chaotic systems, including terrestrial climate, belong to this class.
The physics of these systems is unknown, because not even a Jaynes entropy can be defined on them.
see more here
Smart Rock:
Though there are many independent variables, each of them varying continuously, it is possible in concept to create a statistically validated model. It is possible to do so today, though it was impossible five decades ago, because five decades ago we did not have information theory at our disposal, and today we do. Information theory makes it possible for a model builder to deal successfully with missing information. Professional climatologists seem to be five decades out of date in their grasp of model-building technology.
Smart Rock:
You say
Sorry, but such a long-term prediction is a valid test of a hypothesis.
For example, Edmund Halley’s prediction in 1705 that the comet now named after him would be seen in 1758. He made this prediction because he hypothesised that the comets seen in 1531, 1607 and 1682 were the same comet and it had a regular orbit which was disturbed by the gravitational attractions of Jupiter and Saturn. His prediction proved correct (after his death in 1742) so his hypothesis was then elevated to a theory which has subsequently obtained much confirming evidence.
Richard
Eric writes: “This failure of climate science to follow the normal scientific progression to more accurate estimates should be a serious concern. This lack of convergence on a central climate sensitivity estimate, after decades of research effort, strongly suggests something is missing from the climate models.”
I disagree. This failure to converge on a narrow estimate for climate sensitivity tells us that IPCC scientists are accurately reporting the uncertainty in their understanding of climate sensitivity. There are a wide variety of parameterizations of climate models that provide equally good (or bad, if you prefer) representations of current climate. The IPCC’s wide confidence interval for climate sensitivity recognizes that they don’t know which parameterization is best.
However, by reporting QUANTITATIVE estimates of projected WARMING-related climate change derived ONLY from a selected subset of climate models (an "ensemble of opportunity" chosen by governments), the IPCC is underestimating the uncertainty associated with these projections. Even then, they use their "expert judgment" to report projected warming that formally qualifies as "very likely" according to models as merely "likely" (not that the public understands the difference). Therefore, the IPCC acknowledges more uncertainty in equilibrium climate sensitivity than in projected warming. This is partially because there is less uncertainty in TCR than in ECS. As long as CO2 is rising, TCR is a more relevant measure of climate sensitivity than ECS, and CO2 rises for most of the century in some scenarios.
Eric continues: “Whatever the missing or mishandled factor is, it has a big influence on global climate. The evidence for this is the embarrassingly broad range of estimates for climate sensitivity to a doubling of CO2, and the failure of those estimates to converge.”
I disagree here also. Models make different projections mostly because they use different parameterizations, not because something is missing from some models. If we knew something was missing from some models, we would simply include all the right things in one model. The problem is that the process of tuning parameters one-by-one can be done in many different ways and does not lead to a unique optimal set of parameters.
A weak analogy: The equation KE + PE = Total energy applies to some physics problems, but there are many different ways total energy can be partitioned. There are many equally good ways to parameterize climate models and we don’t know which is “right”.
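A toy numerical sketch of this degeneracy (invented numbers, in the spirit of Kiehl's 2007 finding cited elsewhere on this page): a high-sensitivity model paired with strong aerosol cooling and a low-sensitivity model paired with weak aerosol cooling can hindcast the same 20th-century warming, so the hindcast cannot tell the parameterizations apart.

```python
F2X = 3.7  # W/m^2 of forcing per CO2 doubling (approximate canonical value)

def hindcast_warming(ecs, f_ghg, f_aerosol):
    """Warming in the simplest energy-balance picture:
    dT = ECS * F_net / F_2x."""
    return ecs * (f_ghg + f_aerosol) / F2X

# Model A: high sensitivity, strong aerosol offset.
# Model B: low sensitivity, weak aerosol offset.
dt_a = hindcast_warming(4.5, 2.5, -1.68)
dt_b = hindcast_warming(2.0, 2.5, -0.65)
print(round(dt_a, 2), round(dt_b, 2))  # both come out near 1.0 C
```

Both models reproduce roughly the same observed warming, yet they project very different futures once aerosol emissions diverge from GHG emissions.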
Eric concludes: If climate models were capable of producing accurate predictions, if they showed any sign of converging on a reasonable climate sensitivity estimate, if predicted secondary phenomena such as the tropospheric hotspot and sea level rise acceleration were readily observable, there would be a lot less resistance to Trenberth’s apparent demand that climate model projections be accepted as somehow equivalent to empirical observations.
Trenberth has never demanded that “that climate model projections be accepted as somehow equivalent to empirical observations.” There are serious problems and limitations with our models AND with our empirical observations.
The hot-spot: The satellite record shows that the troposphere is warming more slowly than the surface, which is inconsistent with our understanding of the factors that control the lapse rate. One of the following three is therefore incorrect: 1) surface warming, 2) troposphere warming or 3) "our understanding". The absence of a hot-spot depends on the satellite record being correct.
The putative absence of acceleration in SLR: Your link points to data showing that global SLR has varied in the 20th century. Variation in a rate demands that acceleration and deceleration have occurred. Your link shows only that the current rate of SLR is not unprecedented – not that it hasn't accelerated recently.
Do climate models predict that we should have already been able to unambiguously detect an acceleration in SLR? Actually, climate models predict a wide range of acceleration in the rate of SLR. For RCP 6.0, the rate of SLR in 2100 is projected to be 4-10 mm/yr (equivalently, cm/decade). At the lower end, this is almost NO acceleration from today's rate of SLR. The midpoint represents an increase of 4 cm/decade, or 0.5 cm/decade per decade. That is about a 15% increase per decade, not a big change. (For RCP 8.5, the acceleration needs to be twice as big.) The acceleration in the satellite record is not quite statistically significant, but the central estimate for the increase over the past 24 years is 0.66 mm/yr, or about a 20% increase in the current rate. These observations are COMPLETELY CONSISTENT with climate models; they don't invalidate them.
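The arithmetic above can be checked in a few lines (the rates are the illustrative round numbers used in this comment, not an authoritative dataset; note 1 mm/yr equals 1 cm/decade):

```python
current_rate = 3.0     # mm/yr, roughly today's rate of sea level rise
rate_2100_mid = 7.0    # mm/yr, midpoint of the projected 4-10 mm/yr range
decades = 8.3          # roughly 2017 to 2100

# Average acceleration needed to reach the midpoint projection.
accel = (rate_2100_mid - current_rate) / decades   # mm/yr per decade
pct_per_decade = 100 * accel / current_rate

print(f"~{accel:.1f} mm/yr per decade, ~{pct_per_decade:.0f}% per decade")
```

The required acceleration works out to roughly half a cm/decade per decade, in the ~15% per decade ballpark the comment describes.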
The upper limit for SLR in the IPCC's models gets all of the headlines. The lower limit requires very little acceleration. Lack of acceleration will never invalidate AOGCMs. However, the hindcast SLR for the 20th century from all models could be inconsistent with observations. The IPCC publicizes the agreement between projected and observed warming, but never the agreement between observed and projected SLR.
Frank:
The equilibrium climate sensitivity (TECS) is the ratio of two numbers. The numerator is the change in the surface air temperature at equilibrium, aka steady state. The denominator is the change in the logarithm of the atmospheric CO2 concentration. The numerator is insusceptible to measurement. Thus, when a numerical value is assigned to TECS, this value is not falsifiable. As it is not falsifiable, TECS is not a "scientific" concept. In particular, to assign a value to TECS is not to gain any information that is pertinent to regulation of Earth's climate.
Terry: ECS is a falsifiable "theory". In its simplest form, we simply need to wait a century or so to measure the numerator of this ratio. It is admittedly hard to do a well-controlled experiment on our planet, but we are nearly at the equivalent of a doubling of CO2 from the combined forcing of all rising GHGs. The problem is that rising aerosols have complicated this "experiment". Energy balance models predict from observations of dF and dT that the best estimate for ECS is about 1.5-2.0 K/doubling, but the confidence interval is wide.
There is another way of approaching ECS and that is to ask how much more heat leaves that planet for every degC the planet warms. That is sometimes called the climate feedback parameter. It is the reciprocal of ECS (measured in W/m2/K). You don’t need to wait a century or more to reach equilibrium when you try to measure the climate feedback parameter from observations.
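A sketch of that reciprocal relationship, assuming the commonly quoted ~3.7 W/m2 of forcing per CO2 doubling (a figure not stated in the comment above):

```python
F2X = 3.7  # W/m^2 of forcing per CO2 doubling (approximate)

def feedback_parameter(ecs):
    """Climate feedback parameter in W/m^2 per K: the extra outgoing
    energy per degree of warming. Inversely related to ECS."""
    return F2X / ecs

for ecs in (1.5, 3.0, 4.5):
    print(f"ECS {ecs} K/doubling -> lambda ~ {feedback_parameter(ecs):.2f} W/m^2/K")
```

A high-sensitivity climate sheds extra heat only weakly per degree of warming, a low-sensitivity climate sheds it strongly, which is why the feedback parameter can be estimated without waiting for equilibrium.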
Frank:
The global temperature fluctuates. Thus, by the definition of terms, Earth does not reach an equilibrium temperature.
I used the slope of temperature as the extratropics go through the seasons. It’s straight forward to calculate a toa energy value for that location, and you have the surface response in temperature.
This does not work for the tropics, at least not as it’s written, so I only run it outside the tropics.
https://micro6500blog.wordpress.com/2016/05/18/measuring-surface-climate-sensitivity/
By the definition of ECS, the Earth reaches an equilibrium temperature when the long-term average radiative imbalance at the TOA is zero (or is negligible compared with the forcing that produced a temperature change). That means that the atmosphere and ocean on the average are neither warming nor cooling.
If you want to get picky, the radiative imbalance at the TOA is expected to become negligible before ice caps have fully responded to the new equilibrium temperature. To deal with this problem, climate scientists have created the concept of an "earth system sensitivity" which encompasses millennial changes in ice caps and the temperature change that follows this change in surface albedo. It took about 10 millennia for the rate of sea level rise to slow after the end of the last ice age. On the millennial time scale, Milankovic changes in the Earth's orbit also become important. ECS has been defined in such a way that these millennial issues are irrelevant.
Global temperature fluctuates. Weigh an object with an accurate scale or balance and the result fluctuates too – from motion, air currents and static electricity. We average measurements of both weight and temperature. The fluctuations in temperature do have somewhat different causes than the fluctuations in weight: seasons, chaotic fluctuations in wind, water currents, clouds, the 11-year solar cycle, etc. Nevertheless, averaging gives a useful answer.
Frank:
Your understanding of the operative principles is not exactly correct. First of all, the "concrete" Earth (the one on which you and I live) possesses a field of temperatures such that at each space point in this field the temperature is generally different. By the definition of terms, each such temperature is an "equilibrium temperature" if and only if the magnitude of the heat flux at the associated space point is 0. If the magnitude of the heat flux is 0 at every space point in the field, then the field of temperatures is a field of equilibrium temperatures, but not otherwise. Thus, even given that every temperature is an equilibrium temperature, the temperatures at the various space points generally vary.
The "concrete" Earth is never in a state in which every temperature is an equilibrium temperature and each such temperature is identical. It is a kind of "abstract" Earth that is capable of being in this state. One of the many errors in thinking that are made by the global warming climatologists is to confuse the abstract with the concrete Earth. To confuse the two Earths is to "reify" the abstract Earth by treating it as if it is the concrete Earth. Reification is regarded as a fallacy.
Terry: The fluctuations at individual locations are unimportant. Equilibrium is reached when the global radiative imbalance is zero (or negligible compared with the forcing causing warming).
From a practical point of view, since 93% of any radiative imbalance goes into the ocean, we could monitor our approach to equilibrium with the ARGO array. Current anthropogenic forcing is something like 2.5 W/m2 and the current radiative imbalance according to ARGO is about 0.7 W/m2. Say we follow RCP 6.0. The imbalance presumably is going to rise. When it drops below 0.6 W/m2 (averaged over a decade), we would be 90% of the way to equilibrium warming. That should be good enough to estimate where 100% of equilibrium warming lies and calculate ECS.
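A minimal sketch of that bookkeeping, using the rough forcing and imbalance figures quoted in the comment above:

```python
def fraction_realized(forcing, imbalance):
    """Fraction of equilibrium warming already realized in the simple
    energy-balance picture: warming is complete when the top-of-
    atmosphere radiative imbalance has decayed to zero."""
    return 1.0 - imbalance / forcing

# Today: anthropogenic forcing ~2.5 W/m2, ARGO-derived imbalance ~0.7 W/m2.
print(fraction_realized(2.5, 0.7))   # ~0.72
# Under RCP 6.0 forcing of ~6 W/m2, an imbalance of 0.6 W/m2 would mean
# 90% of the equilibrium warming has arrived.
print(fraction_realized(6.0, 0.6))   # ~0.9
```

So, by this reckoning, we are currently roughly 70% of the way to the equilibrium warming for today's forcing, and the ARGO array lets us track the remainder.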
How do they get a radiative imbalance from ARGO buoys floating in the ocean?
Frank:
You seem to think that TECS has a point value. What’s your argument?
Frank:
You contradict yourself when you write
You saying you “disagree” does not refute Eric’s correct statement.
You claiming "they don't know which parameterization is best" is an assertion that the models don't include knowledge of "which parameterization is best" (i.e. that knowledge "is missing from the climate models").
And you are asserting self-delusion when you claim without evidence that you know the “something” which is probably “missing from the climate models”. In reality, all we know is that the “lack of convergence on a central climate sensitivity estimate, after decades of research effort, strongly suggests something is missing from the climate models.”
Richard
I believe Eric was stating his OPINION and I expressed my contradictory opinion. My opinion is even based on some facts; in particular the ensembles of perturbed parameter models described by Stainforth et al and the climateprediction.net group in England. They tested thousands of variations of a simplified model where six (or more?) parameters were chosen at RANDOM from within a physically plausible range. ECS among the ensemble ranged from 1.5 to 11.5 K/doubling. Later they used a panel of eight climate observations (temperature, rainfall, albedo, etc) to systematically pick the best set of parameters. No global optimum could be found: Parameter sets that were good for precipitation would be inferior for albedo or temperature and vice versa. They also tried and failed to find a portion of the physically plausible range for any parameter that consistently gave inferior results (so it could be discarded). Worst of all, they found that parameters interacted in unexpected ways, making one-by-one tuning of parameters in more sophisticated models a dubious process that is unlikely to discover a global optimum. Change the order in which parameters are tuned and you probably will reach a different local optimum.
This work demonstrates that many different future climates are consistent with the laws of physics, an emissions scenario, and a set of parameters that reproduce today’s climate reasonably well.
More sophisticated models may not behave this badly, but it is too computationally expensive to thoroughly explore the parameter space of the sophisticated models used by the IPCC. However, the GFDL group has multiple variations of its basic model with different climate sensitivities. They recently found they could reduce the climate sensitivity of one model by 1 K/doubling by using the entrainment parameterization scheme from a lower-sensitivity model – apparently without reducing the model's ability to accurately reproduce current climate. As best I can tell, there are likely to be dozens of different parameterizations for a given climate model that are equally good at representing today's climate while having a wide range of climate sensitivity.
The “something” that is missing from today’s climate models is unambiguous evidence that the parameter set chosen for IPCC reports is superior to other possible parameter sets. Without a way to generate or identify a superior set of parameters, AOGCMs won’t be inconsistent with an ECS of 1.5 or 4.5 K/doubling. This wide range doesn’t invalidate AOGCMs, but it does mean that they don’t provide the useful narrow range of projections policymakers need.
A debate elsewhere I am having with someone about SLR may provide a useful analogy. He is fitting exponential and quadratic models to sea level data and projecting more than 1 m of SLR by the end of the century. However, these models and a linear model all fit the data equally well (R2 of 0.98+). And the 95% confidence interval for the acceleration coefficient for the quadratic model ranges from zero to twice the best estimate for that parameter. Though the three models produce very different central estimates for SLR, the range of futures they project is very wide and overlapping. Part of the problem here is that simple curve fitting does not model all of the physics needed to explain why sea level is rising. Climate models have a similar problem: they replace cloud microphysics and turbulent fluxes within a grid cell with parameters.
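The sea level analogy is easy to reproduce. Here is a sketch with synthetic data (the trend, acceleration, and noise values are invented for illustration, not actual tide-gauge records): a linear and a quadratic fit both achieve R2 near 1, while the covariance matrix shows how loosely the acceleration coefficient is constrained.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.arange(121.0)                                    # years since 1900
y = 1.8 * x + 0.005 * x**2 + rng.normal(0, 5, x.size)   # synthetic "sea level", mm

def r_squared(coeffs):
    # 1 - (residual variance / data variance); OLS residuals have zero mean
    resid = y - np.polyval(coeffs, x)
    return 1.0 - resid.var() / y.var()

lin = np.polyfit(x, y, 1)
quad, cov = np.polyfit(x, y, 2, cov=True)

print("R^2 linear   :", round(r_squared(lin), 4))
print("R^2 quadratic:", round(r_squared(quad), 4))

# 95% interval on the acceleration (x^2) coefficient from the fit covariance
a, se = quad[0], np.sqrt(cov[0, 0])
print(f"acceleration = {a:.4f} +/- {1.96 * se:.4f} mm/yr^2")
```

Both fits score nearly identically on R2, so goodness of fit alone cannot choose between a linear and an accelerating future, even though their century-scale extrapolations diverge sharply.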
Frank:
Eric stated his evidence-based judgement of what is probably true; i.e. he stated a scientific conclusion. I see no reason to doubt his conclusion.
Richard
Richard, Eric wrote: “This failure of climate science to follow the normal scientific progression to more accurate estimates should be a serious concern. This lack of convergence on a central climate sensitivity estimate, after decades of research effort, strongly suggests something is missing from the climate models.”
As best I can tell, neither your nor Eric’s words show any understanding about the problem of parameterization and the fundamental reasons why AOGCMs haven’t converged on a narrow range of ECS. He has provided no scientific evidence. He has expressed the OPINION that this is because AOGCM physics is wrong or incomplete. I’m commenting because my reading indicates he is wrong. Some links are given below.
If some models were more complete than others, it would be trivial to include all of the same physics in every model or simply report results only from the models that were “complete”. For example, only some AOGCMs include the interaction between aerosols and cloud droplet size and reflectivity – the indirect aerosol effect. However, the aerosol indirect effect is too small to account for the vast differences in model climate sensitivity, and the modelers who omit it believe it is negligible. If it were a big factor, every model would include it.
The laws of physics are not wrong, but cloud formation and turbulent flow occur on scales much too small to be calculated for each grid of an AOGCM. That means these phenomena must be represented by tunable parameters. The vast differences between models arise from this parameterization. We know that because changing the parameters of one model can change its ECS dramatically without always interfering with its ability to represent current climate accurately. The current method by which models are tuned does NOT yield a set of parameters which represent today’s climate better than any other possible sets. This has been proven by studies where the parameters of models were systematically varied.
http://www.climateprediction.net/climate-science/climate-ensembles/perturbed-physics-ensembles/
http://www.climateprediction.net/wp-content/publications/nature_first_results.pdf
http://www.climateprediction.net/wp-content/publications/ClimateDynamics_Feb2008.pdf
Frank:
From behind anonymity you write to me
I refer you to this post I made in this thread earlier today.
Get back to me when you have been studying and publishing on the matter for as long as I have.
Richard
Richard wrote: “Get back to me when you have been studying and publishing on the matter for as long as I have.”
Despite your experience, I suggest that you reply only after you have read the links I provided concerning perturbed parameter ensembles. The comments you linked are totally irrelevant to what has been learned from systematically exploring model parameterization.
As best I can tell from the abstract alone, your E&E paper has nothing to do with model parameterization.
The paper by Kiehl discusses the fact that different models produce different amounts of forcing from what should be the same inputs of aerosol and GHG change. This indeed may be part of the reason why different models produce different climate sensitivity. This is one reason why Hansen invented the concept of effective radiative forcing. Despite the fact that we commonly believe that doubling CO2 instantaneously slows radiative cooling to space by 3.7 W/m2, different models produce different quantities for this value. However, a model that produces a forcing for doubled CO2 that is bigger or smaller can produce a proportionally bigger or smaller equilibrium warming and therefore have exactly the same ECS.
Among the CMIP3 models, high climate sensitivity was associated with high sensitivity to aerosols cooling, but this is not true for the CMIP5 models.
However, when you take ONE model with ONE input of aerosols and GHGs and then change the model parameters (perturbed parameter ensembles), you get different ECSs. And the modified model parameterization won’t necessarily produce an inferior representation of current climate. IMO, this is the fundamental reason models haven’t converged on a single value for ECS. The compromises that must be made to model climate and weather in grid cells that are large enough to be practical computationally force modelers to use parameters they can’t optimize systematically. And for which an optimum may not exist.
The refusal of the climate model keepers to correlate their models to measured data tells us all that the purpose of these models is not to predict the future; their purpose is to create alarmism and generate support for more funding of climate change studies and projects. The model keepers could adjust the unknowns in their climate models, like feedback, so their predicted temperature responses match measured data over the last 30 to 50 years, but they don’t. They insist on keeping the parameters in their model, like feedback, that they cannot possibly know or measure at the values they are, even though adjusting them would result in more accurate predictions, because they gain nothing from having more accurate, less alarming model predictions. If they were trying to sell these models as a tool for predicting future temperatures, they would adjust their unknowns completely differently: they would adjust them so that the models make more accurate predictions. But the money generated from these models is from the alarmism they create, so there is no motivation to make them accurate.
The Scientific Method must start with a necessary and sufficient falsifiable hypothesis statement, to wit:
1) a list of observations, which if observed, mean a hypothesis is false;
2) a logical argument that the lack of those falsifications means that a hypothesis must be favored over all others (including the null).
Translation into plain English:
1) tell me what would change your mind;
2) tell me why, if the things that would change your mind aren’t there, the only explanation left is yours.
Uh, oh. There’s that word “distinguished” again – attached as part of Trenberth’s title, just as it is to Mann’s and McKibben’s.
In other words, a movement is afoot to represent a fake scientist as the real thing. Hear. Hear!
Here’s a crude 1-D “model” of the Earth: the cold of space, T0, on one side and the heat of the Sun, T1, on the other. Take them as Planck power spectra for the respective temperatures, e.g., 3 and 5800. Collapse the spectral filtering of the atmosphere (a simple */ , in Iverson’s, not Moore’s, notation) across the atmospheric spectral layers to a single spectrum, A, over transparency and absorptivity=emissivity by wavelength, ( transparency ; ae ). S is an opaque surface with a spectrum ( 0 ; ae ). Probably it’s as simple to implement the full Schwarzschild differential and understand it. I just want to see the spectral equations, equivalent to what you’d find in an Intro Heat Transfer text if it covered radiant transfer between surfaces of arbitrary spectra, which show how and by how much hotter S becomes than the value computed for the lumped “seen from the outside” ( A S A ). This is obviously a rather simple, experimentally and quantitatively testable configuration. And experiment trumps all of us.
I’m starting a http://cosy.com/Science/ComputationalEarthPhysics.html page building the computational “audit trail” between parameters at an APL level upon this core. I invite comments over there, and subscription to the discussion, for those interested in this dimension of the problem.
“With climate models as tools, we can carry out “what-if” experiments. What if the carbon dioxide in the atmosphere had not increased due to human activities? What if we keep burning fossil fuels and putting more CO2 into the atmosphere?”
Projections are just a bunch of “what if” statements. There is no reason to believe that any one of them will actually happen. If the conditions are properly addressed, and ALL conditions are met, then yes, one will come true. If there is no future where ALL the conditions in the projection are met, then no, they’re basically just science fiction. The IPCC makes projections and pretends they are something more certain. They are not.
Predictions are based on initial conditions and often assume no changes when they produce a trend line or whatever is being predicted. Initial conditions matter very much in predictions.
As far as I can tell, these are the definitions used by the IPCC and others in climate science. However, many do not use these definitions. While it’s not just semantics, to a large degree it’s the complete failure of the scientists to accurately state what they are doing. Trenberth seems correct in his usage, but fails to note that his “projections” are little more than science fiction. It seems very much like using a video game and changing the factors, which then change the outcome. Few would argue that playing a video game and using its outcome is useful science. Yet projections appear to be basically just that. People have been totally misled as to what “science” is involved.
It’s not okay to behave professionally in some paragraphs of an IPCC final report and unprofessionally in summary paragraphs that ignore the uncertainty of science. It’s also unprofessional and unscientific to look the other way on important documents that mix in the agendas being expressed. Show some backbone.
Trenberth has some valid thoughts, and some very invalid thoughts. There really is no way of “testing” a prediction, so a “prediction” needs to be considered as “almost data” until disproven. There is a real need to consider the consequences of what will happen if they are even partially right.
The problem is that the alarmist community has settled on a single facet and said this is the only factor that is immutable. If they opened their minds, then they could start to find that “convergence” spoken of in the article. But when you barricade yourself behind an immutable factor, you are not capable of objective or subjective change.
This is, of course, the essence of either a religious concern or a dogmatic scam. There are those that are willing to allow this to be a religious thing. I personally think this is a coldly calculated method to depopulate the world through starvation and hypothermia. And the same group that are running this scam are going to go to plan NW if they fail to depopulate the world in this manner. And you probably can guess that plan NW is nuclear war.
“These are just projections! They can’t be evaluated like scientific predictions!”
“We absolutely must spend trillions of dollars because of these projections!”
Some excellent comments here. But why aren’t you making them over at the Conversation, where Trenberth and his coauthor Reto Knutti (who replies to comments) might read them?
I’m banned from commenting at the Conversation, ever since I quoted an article from WUWT and they changed the rules to ban quotations from “sources considered unreliable.” But my good friend Ming Fangjian linked to this article in a comment 15 hours ago, and his comment has stayed up. If everyone here aired their opinions there it would do wonders for Dr Trenberth’s hit rate as a Conversation author, and might enlighten the readers of the Conversation.
Geoff Chambers:
You asked
Then you answered that yourself when you wrote
And why do you think anybody here (whose comments would probably be “banned” there) would want to “do wonders for Dr Trenberth’s hit rate as a Conversation author”?
Richard
Re: Scientific method. One alarmist argument goes like this: Over time, each scientific discipline becomes ever more specialized. Scientists are no longer able to cross over disciplines. Scientific practice within a discipline becomes ever more specialized. So only those within that discipline have the in-depth knowledge and experience to decide what is and is not acceptable scientific method within, say, ‘climate science’. As such, Popper is old hat. He’s a universalist in a world of particularisms. I suppose the post-modern version of this argument celebrates the multiplicity of sciences and philosophies of science.
That was how the argument was put to me. I would counter it by saying that all science is interlinked and disciplines depend on each other. The apparent atomicity of disciplines is just a function of how scientific research goes into ever more detailed terrain. Because science is still one thing, Popper is still valid.
There is no arcane part of so-called “climate science” requiring initiation into its cultic practices. It’s all worse than worthless, made-up GIGO modeling and false assumptions, lacking real science.
Anyone with an undergrad degree in any scientific or engineering discipline can easily show the whole corrupt enterprise to be hopelessly flawed and false.
Hence the need for appeal to Druidical authority and the ludicrous 97% lie.
If the IPCC really knew what they were doing then they would have only one climate model to contend with rather than a plethora of models. At the very least they would have by now thrown out the worst of the models, but they have not done that either. To simulate climate they have started with a weather simulation, have increased spatial and temporal sampling intervals, and have hard-coded in the concept that adding CO2 to the atmosphere causes warming. Their begging the question makes their simulations worthless. Another concern is that increasing spatial and temporal sampling intervals may make the simulation slightly unstable, so that the results are more a function of the induced numerical instability than of anything else.
The most important thing for the IPCC to do is to make an accurate determination of the climate sensitivity of CO2, yet after more than two decades of effort they have been unable to reduce the range of their guesses one iota. It is really a matter of politics and not science. One researcher has pointed out that the original calculations of the Planck climate sensitivity of CO2 are too great by a factor of more than 20, because the calculations neglect that a doubling of CO2 will cause a slight decrease in the dry lapse rate in the troposphere, which is a cooling effect. So instead of 1.2 degrees C, the climate sensitivity of CO2 should be less than 0.06 degrees C. Then there is the issue of H2O feedback. H2O is a net coolant in the Earth’s atmosphere, as evidenced by the fact that the moist lapse rate is significantly less than the dry lapse rate, so that rather than amplifying the effect of CO2 by a factor of 3, H2O attenuates the effect of CO2 by a factor of 3, yielding a climate sensitivity of less than 0.02 degrees C for a doubling of CO2. But the IPCC will never consider such low numbers for fear of losing their funding.
The reality is that the radiant greenhouse effect has not been observed anywhere in the solar system. The radiant greenhouse effect is science fiction as is the AGW conjecture.
They code in the adding of all the spectrums.
That’s not where they fix it; it’s in
3.3.6 Adjustment of specific humidity to conserve water
http://www.cesm.ucar.edu/models/atm-cam/docs/description/node13.html#SECTION00736000000000000000
This is for the CAM model 3; NASA’s Model D, and I’m pretty sure Model E, had a similar piece of code. They parameterize the code to make sure they get the evaporation they expect. My understanding of the old literature I read is that the GCMs, once this was added, went from running cold to running warm. And they used aerosols to tune the runs down. This worked until about 5 years ago, when we got good aerosol data and the tuning they used was not close to reality.
They code to add in the energy from all of the different spectrums. They have to.
But if the models match observations, they have to end up with cooling at night following dew points in the early morning. It’s why the tropics don’t drop much in temperature at night, and deserts do.
micro6500:
You say
Yes, none of the climate models – not one of them – could match the change in mean global temperature over the past century if it did not utilise a unique value of assumed cooling from aerosols. So, inputting actual values of the cooling effect (such as the determination by Penner et al.
http://www.pnas.org/content/early/2011/07/25/1018526108.full.pdf?with-ds=yes )
would make every climate model provide a mismatch of the global warming it hindcasts and the observed global warming for the twentieth century.
This mismatch would occur because all the global climate models and energy balance models are known to provide indications which are based on
1.
the assumed degree of forcing resulting from human activity that produce warming
and
2.
the assumed degree of anthropogenic aerosol cooling input to each model as a ‘fiddle factor’ to obtain agreement between past average global temperature and the model’s indications of average global temperature.
In 1999 I published a peer-reviewed paper that showed the UK’s Hadley Centre general circulation model (GCM) could not model climate and only obtained agreement between past average global temperature and the model’s indications of average global temperature by forcing the agreement with an input of assumed anthropogenic aerosol cooling.
The input of assumed anthropogenic aerosol cooling is needed because the model ‘ran hot’; i.e. it showed an amount and a rate of global warming which was greater than was observed over the twentieth century. This failure of the model was compensated by the input of assumed anthropogenic aerosol cooling.
And my paper demonstrated that the assumption of aerosol effects being responsible for the model’s failure was incorrect.
(ref. Courtney RS An assessment of validation experiments conducted on computer models of global climate using the general circulation model of the UK’s Hadley Centre Energy & Environment, Volume 10, Number 5, pp. 491-502, September 1999).
More recently, in 2007, Kiehl published a paper that assessed 9 GCMs and two energy balance models.
(ref. Kiehl JT, Twentieth century climate model response and climate sensitivity. GRL vol. 34, L22710, doi:10.1029/2007GL031383, 2007).
Kiehl found the same as my paper except that each model he assessed used a different aerosol ‘fix’ from every other model. This is because they all ‘run hot’ but they each ‘run hot’ to a different degree.
Kiehl’s paper makes the important point that the twentieth-century warming each model hindcasts depends on its “magnitude of applied anthropogenic total forcing”. And the “magnitude of applied anthropogenic total forcing” is fixed in each model by the input value of aerosol forcing.
Kiehl’s paper can be read here.
Please note Figure 2 in Kiehl’s paper showing data for 9 GCMs and 2 energy balance models.
It shows that
(a) each model uses a different value for “Total anthropogenic forcing” that is in the range 0.80 W/m^2 to 2.02 W/m^2
but
(b) each model is forced to agree with the rate of past warming by using a different value for “Aerosol forcing” that is in the range -1.42 W/m^2 to -0.60 W/m^2.
In other words the models use values of “Total anthropogenic forcing” that differ by a factor of more than 2.5 and they are ‘adjusted’ by using values of assumed “Aerosol forcing” that differ by a factor of 2.4.
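As a quick arithmetic check, the factors quoted here follow directly from the Figure 2 ranges given in this comment:

```python
# Forcing ranges read from Figure 2 of Kiehl (2007), as quoted above.
total = (0.80, 2.02)      # W/m^2, smallest and largest "Total anthropogenic forcing"
aerosol = (-1.42, -0.60)  # W/m^2, strongest and weakest "Aerosol forcing"

print(f"total forcing spread:   factor of {total[1] / total[0]:.2f}")    # just over 2.5
print(f"aerosol forcing spread: factor of {abs(aerosol[0] / aerosol[1]):.2f}")  # about 2.4
```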
So, each climate model emulates a different climate system. Hence, at most only one of them emulates the climate system of the real Earth because there is only one Earth. And the fact that they each ‘run hot’ unless fiddled by use of a completely arbitrary ‘aerosol cooling’ strongly suggests that none of them emulates the climate system of the real Earth.
Richard
@richardscourtenay
Good points. Climate models need to be evaluated ‘out of sample’, that is on data not used to calibrate the model parameters. A model can always be fitted to data, but does it stand up against data not used in the estimation of its parameters? A simple concept which seems lost on people like Trenberth, who proceed straight from fitted model to prediction/projection. They also have the gall to claim that having enough parameters to fit some historical data proves their model ‘right’. A possible reason for not appreciating this issue may be the experimental nature of the physical sciences. A theory/model can often be tested by designed experiment, so the idea of having to make provision to test against existing data is less appreciated. But climate research has limited scope for designed experiments (just a big undesigned one!)
basicstats:
You say that the models need to be “evaluated” on out-of-sample data. Actually, there is no such sample, as the population underlying each of the climate models does not exist. You may be confusing the idea of a global temperature time series with the idea of a statistical population. The time series exists, but it is not a population.
@basicstats
Evaluating a model on out-of-sample data is, as you say, necessary. I would only add that, if the model works well on this data, it does not mean that the model will necessarily continue to work well in real time. It does not mean that all independent variables are known and are accurately represented in the model. Of course, the more data available the better chance the model is a good model, and, in fact, the less need for a validation set also. But, isn’t this the one very big problem with climate models – not anywhere near enough data compared to the complexity?
But you have to pay attention to what out-of-band testing is done. For instance, if you’re testing measurements against a theoretical climate field, what exactly do you compare? What processing do you have to apply to your measurements prior to doing the test? If you apply to your data the same processing you built into the model, you’re not testing anything.
Kermit Johnson:
There can be no out-of-sample data, but there can also be no in-sample data, as the statistical population is not identified. Climatologists eliminate their need for probability theory and statistics through the unwarranted claim that the equilibrium climate sensitivity (TECS) is a constant.
Humans are NOT changing the climate… most people making that claim don’t even know what the climate is.
” It is de ned as the change in global mean surface temperature”
–>
It is defined as the change in global mean surface temperature
_______________________________________
“This assessment re ects improved understanding,”
–>
This assessment reflects improved understanding,
“The evidence for this is the embarrassingly broad range of estimates for climate sensitivity to a doubling of CO2, and the failure of those estimates to converge.”
Yes, the evidence for this is the embarrassingly broad range of estimates for climate sensitivity to a doubling of CO2, the failure of those estimates to converge, and the failure of climate scientists to reconcile differing assessments.