Steve McIntyre writes: IOP: expecting consistency between models and observations is an “error”
The publisher of Environmental Research Letters today took the bizarre position that expecting consistency between models and observations is an “error”.
The publisher stated that the rejected Bengtsson manuscript (which, as I understand it, had discussed the important problem of the discrepancy between models and observations) had “contained errors”.
But what were the supposed “errors”? Bengtsson’s “error” appears to be the idea that models should be consistent with observations, an idea that the reviewer disputed.
The reviewer stated that IPCC ranges in AR4 and AR5 are “not directly comparable to observation based intervals”:
One cannot and should not simply interpret the IPCCs ranges for AR4 or 5 as confidence intervals or pdfs and hence they are not directly comparable to observation based intervals (as e.g. in Otto et al).
Later he re-iterated that “no consistency was to be expected in the first place”:
I have rated the potential impact in the field as high, but I have to emphasise that this would be a strongly negative impact, as it does not clarify anything but puts up the (false) claim of some big inconsistency, where no consistency was to be expected in the first place.
Read Steve’s entire post here: http://climateaudit.wordpress.com/2014/05/16/iop-expecting-consistency-between-models-and-observations-is-an-error/
==========================================================
Wow, he’s basically saying “models have no inconsistency with reality”. Damn, I guess we just aren’t as qualified as members of the sacred order to read graphs like these:
Ross McKittrick writes in comments at CA:
I have no idea if Bengtsson et al. is a good paper, not having seen it. But the topic itself is an important one, and notwithstanding those attempts at gatekeeping mentioned above, there’s no stopping the flow at this point because the model/observational discrepancies are so large and growing. A few recent examples in print include:
– Fyfe, J.C., N.P. Gillett and F.W. Zwiers, 2013: Overestimated global warming over the past 20 years. Nature Climate Change, 3, 767-769, doi:10.1038/nclimate1972
– Swanson, K.L., 2013: Emerging selection bias in large-scale climate change simulations. Geophysical Research Letters, 40, DOI: 10.1002/grl.50562.
– McKitrick, Ross R. and Lise Tole (2012) Evaluating Explanatory Models of the Spatial Pattern of Surface Climate Trends using Model Selection and Bayesian Averaging Methods. Climate Dynamics DOI 10.1007/s00382-012-1418-9.
– Fildes, Robert and Nikolaos Kourentzes (2011) “Validation and Forecasting Accuracy in Models of Climate Change.” International Journal of Forecasting 27, 968-995.
– Anagnostopoulos, G. G., D. Koutsoyiannis, A. Christofides, A. Efstratiadis & N. Mamassis (2010). “A comparison of local and aggregated climate model outputs with observed data.” Hydrological Sciences Journal, 55(7) 2010.
– McKitrick, Ross R., Stephen McIntyre and Chad Herman (2010) “Panel and Multivariate Methods for Tests of Trend Equivalence in Climate Data Sets”. Atmospheric Science Letters, DOI: 10.1002/asl.290
And I know of another one nearly accepted that continues the theme. It may be that Bengtsson et al. had some flaws, though I agree that the reviewer didn’t point to any. Instead the reviewer tries to argue that models and observations are not meant to be compared, and the editor swallowed this nonsensical argument, no doubt happy for a straw to clutch at.
But nobody should be surprised that ERL has the slant that it does: this is a journal with Peter Gleick, Stefan Rahmstorf and Myles Allen on its editorial board:
http://iopscience.iop.org/1748-9326/page/Editorial%20Board
You can’t advertise a hard-line editorial stance any better than that. Well, maybe they could: they list as their #1 Highlight publication of 2013… Cook, Nuccitelli et al.
http://iopscience.iop.org/1748-9326/page/Highlights-of-2013
Strangely, they really seem to be objecting that the Times had the nerve to run the story after all the work that’s been done to convince the press about the supposed dangers of “false balance”:
With current debate around the dangers of providing a false sense of ‘balance’ on a topic as societally important as climate change, we’re quite astonished that The Times has taken the decision to put such a non-story on its front page.
Evidently they too subscribe to that editorial position: don’t print anything that might give the impression there’s actually a range of scientific views out there.
Make that “just five”: LB has four co-authors.
You’re right. I was thinking only of his recently rejected paper.
But the journal rejected his paper, not because its findings were “questionable,” but because they were old hat. The reviewer wrote, “The overall innovation of the manuscript is very low, as the calculations made to compare the three studies are already available within each of the sources, most directly in Otto et al.”
Strawman. Bengtsson didn’t “deny” “the role of human-caused CO2 emissions in current climate change”: He, like others recently, is placing less relative weight on it (a lower climate sensitivity).
So… they are saying that reality and real, empirically measured data are completely unimportant when it comes to climate alarm, and that we should strip trillions of dollars out of the global economy to mitigate what the models are suggesting could happen?
Here is a cheaper alternative: recode the models so that they do NOT show catastrophic warming, and that is all the mitigation we need.
It’s unlikely there were 2000 climate Attribution studies published last year. Probably 80% of them were Impact or Mitigation studies. Their authors have no expertise or presumptive scientific authority on “the role of human-caused CO2 emissions in current climate change.”
rogerknights says…It’s unlikely there were 2000 climate Attribution studies published last year. Probably 80% of them were Impact or Mitigation studies. Their authors have no expertise or presumptive scientific authority on “the role of human-caused CO2 emissions in current climate change.”
========================================================
Exactly. Most studies are by someone who, say, studies (pick your critter), goes to a drought region, reports the impact of drought on said critter, then extrapolates from the climate models the terrible, terrible critter harm of future CO2-caused droughts, and then applies for another grant.
I think your 80% may be way, way low. I doubt there were even 400 studies on the all-important climate sensitivity to CO2.
David A:
Despite the 400 or so studies, the climate sensitivity is a myth.
The entire UNFCCC – IPCC CAGW meme is based on their model forecasts. It has always been clear that these model outputs have no connection to the real world, because models containing such a large number of variables are inherently incomputable and, in addition, these models are structurally flawed, being purpose-built to produce the answers desired by the funding organisations. The whole global warming scare is a great scientific fiasco which eventually will destroy public confidence in science in general.
Another method of forecasting must be adopted.
For estimates of the probable coming cooling based on the natural quasi-periodicities in the temperature data, and using the neutron count and the 10Be record as a proxy for solar “activity”, see several posts over the last couple of years at
http://climatesense-norpag.blogspot.com
Dr Norman Page
That a model’s variables are large in number is no longer a barrier to the construction of a statistically validated model. That we do not yet have a statistically validated climate model is a product of ineptitude on the part of the designers of the study.
the obligation to make a systematic effort at mutual understanding among all the collectives, peoples, societies and communities in this global world
Getting individuals to understand each other is too easy? Or too hard?
Collectives? Well I’m not a joiner.
richardscourtney on May 17, 2014 at 2:35 am
Richard, what you describe is the difference between inductive and deductive science.
The rotting corpse of CAGW will be sealed in a mausoleum with the label “inductive science”. Under the corpse’s folded arms will rest the last edition of the Guardian.
Terry writes “sometimes possible to build a statistically validated model of a system which, like the climate, is “complex.” While it is available, this technology is not used in modern climatology.”
Statistical Validation of Complex Computer Models
Rima Izem, Harvard University
http://wenku.baidu.com/view/fbae99c14028915f804dc292.html
To validate a computer model, i.e., determine the degree to which the computer model is an accurate representation of the real world, results of computer model experiments need to be compared to real data. The comparison could be between the output of the model and past data, as for example evaluating a weather simulation model by comparing its output to past weather data (Covey et al. (2003))
Can you see the problem GCMs have when predicting unknown and different-to-anything-we’ve-ever-seen future climate states?
TimTheToolMan:
I see your point. Thank you for giving me the opportunity to address it.
In logic, the pertinent comparison is between the predicted and observed relative frequencies of the outcomes of the events underlying the model. If there is not a match, the model is “falsified.” Otherwise, it is “validated.”
No events underlie the current crop of climate models. Hence, these models are susceptible to neither falsification nor validation. However, they are susceptible to IPCC-style “evaluation.” In logical terms, an “evaluation” is an example of an equivocation, that is, it is an example of an argument from which a conclusion may not logically be drawn. Further information on the equivocation fallacy in global warming arguments is available at http://wmbriggs.com/blog/?p=7923 .
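The predicted-vs-observed relative-frequency comparison Terry describes can be illustrated with a toy goodness-of-fit test. Everything here is an assumption for illustration: the outcome categories, the predicted probabilities, the observed counts, and the 5% significance threshold are invented, and this is a generic chi-square sketch, not anything drawn from an actual climate model:

```python
# Toy falsification test: compare a model's predicted outcome
# probabilities against observed relative frequencies.
# All numbers here are invented for illustration.

predicted = {"cooler": 0.25, "near-normal": 0.50, "warmer": 0.25}
observed_counts = {"cooler": 18, "near-normal": 47, "warmer": 35}

n = sum(observed_counts.values())  # total number of observed events

# Pearson chi-square statistic over the outcome categories
chi2 = sum(
    (observed_counts[k] - n * p) ** 2 / (n * p)
    for k, p in predicted.items()
)

# Critical value for 2 degrees of freedom at the 5% level (from tables)
CRITICAL_5PCT_2DF = 5.991

verdict = "falsified" if chi2 > CRITICAL_5PCT_2DF else "not falsified"
print(f"chi-square = {chi2:.2f} -> model {verdict}")
```

With these invented counts the statistic exceeds the critical value, so the toy model is rejected; the point is only that such a test requires countable, independent events in the first place, which is exactly what the comment says the current models lack.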
You claim that future climate states differ from anything we’ve ever seen in past climate states. To address this claim I need to amend it to state that “future climate microstates differ from anything we’ve ever seen in past climate microstates.” In the construction of a statistically validated climate model, the model builder would abstract (remove) the state-descriptions from the details that distinguished the various microstate-descriptions, thus producing “macrostate-descriptions” that were not unprecedented. “Microstate” and “macrostate” are terms from statistical physics.
It is a Cult and “Models” are a sacrament.
To summarize the notes from the reviews:
“This paper is trying to compare satellite, energy budgets and surface temperatures which are derived in entirely different ways, and even though they all are trying to describe the same thing it is perfectly natural that they all disagree. Settled Science! DENIED!! Now excuse us while we go and graft observational records on to the end of dubious climate models. TTFN!!”
Terry writes “To address this claim I need to amend it to state that “future climate microstates differ from anything we’ve ever seen in past climate microstates.””
and “In the construction of a statistically validated climate model, the model builder would abstract (remove) the state-descriptions from the details that distinguished the various microstate-descriptions thus producing “macrostate-descriptions” that were not unprecedented.”
It’s simply not true, Terry. For example, no model is capable of resolving future ENSO activity, and so we cannot abstract those details from the model, because they matter – unless you assume they don’t matter. No model is capable of resolving future impacts due to differing cloud albedo or DLR increases, so again they can’t be abstracted away unless, again, you assume they don’t matter.
I’m sure you can see where this is going.
TimTheToolMan:
Thank you for taking the time to respond. Though the details matter, in constructing a model of the climate the model builder must respond to his/her lack of information about these details. One of the possible responses is to fabricate information; this is the response that has been taken. Another is abstraction that optimizes the missing information; this is the response that should be taken. Optimization of the missing information has produced a number of well known natural laws, one of which is thermodynamics.
I don’t know if anyone saw this, but the learned Gavin Schmidt at Real Climate pontificates that it’s OK for models to not match reality, and still not be refuted. His prime example: that “faster than light neutrinos” do not invalidate Einstein’s relativity.
http://www.realclimate.org/index.php/archives/2013/09/on-mismatches-between-models-and-observations/
Earth to Gavin: the “faster than light neutrino” would have invalidated Relativity, at least in part, if it were correct, which it is not. Sheesh!
http://www.nature.com/news/neutrinos-not-faster-than-light-1.10249
I recommend that people look at this link quickly: as soon as somebody at Real Climate realizes how stupid this is, they will take it down.
Just related link:
http://www.dailymail.co.uk/news/article-2631477/Revealed-How-green-zealots-gagged-professor-dared-question-global-warming.html
Terry writes “Another is abstraction that optimizes the missing information; this is the response that should be taken. ”
You can’t optimize that which you don’t know. We are nowhere near to being able to abstract our models to the point where they’re modelling molecular interactions to bypass any need for understanding coarse physical processes, and yet that may be what is needed to truly model our climate.
So whilst I appreciate your idea it really does come back to the “sometimes” you initially mentioned because it doesn’t apply to our GCMs. Not today. Probably not in our lifetimes and certainly not in a time frame that is relevant to the AGW debate.
TimTheToolMan:
Actually, we can optimize that which we do not know. The advance that makes this possible is information theory. The quantity which in information theory is called the “entropy” is the missing information per event; it is the information that would have to be supplied in order for one to reach a deductive conclusion about the outcomes of the events. As you may know, the second law of thermodynamics states that the entropy is maximized under the constraint of energy conservation. This is an example of an optimization.
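The entropy Terry invokes, "missing information per event", can be made concrete with a small sketch. The distributions below are invented for illustration; the only claim is the standard information-theoretic one, that Shannon entropy is maximized by the uniform distribution (no constraints) and shrinks to zero as the outcome becomes certain:

```python
import math

def entropy_bits(p):
    """Shannon entropy in bits: the missing information per event."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

# Three candidate distributions over four outcomes (invented numbers)
uniform = [0.25, 0.25, 0.25, 0.25]  # maximal ignorance about the outcome
skewed  = [0.70, 0.10, 0.10, 0.10]  # partial information
certain = [1.00, 0.00, 0.00, 0.00]  # deductive certainty: nothing missing

for name, p in [("uniform", uniform), ("skewed", skewed), ("certain", certain)]:
    print(f"{name:8s} H = {entropy_bits(p):.3f} bits")
```

The second-law analogy in the comment corresponds to choosing, among all distributions satisfying a given constraint (such as energy conservation), the one with the largest such entropy.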
Regarding richardscourtney says:
May 17, 2014 at 2:35 am
As several commentators (including Steve McIntyre) have pointed out, the recommended rejection of Bengtsson’s paper says the differences between the model results and reality are not novel information and have no importance: the Review Comment says;
“One cannot and should not simply interpret the IPCCs ranges for AR4 or 5 as confidence intervals or pdfs and hence they are not directly comparable to observation based intervals (as e.g. in Otto et al).”
“I have rated the potential impact in the field as high, but I have to emphasize that this would be a strongly negative impact, as it does not clarify anything but puts up the (false) claim of some big inconsistency, where no consistency was to be expected in the first place.”
Indeed, the AR4 repeatedly asserted that reality should be ignored if it did not concur with model results
———————————————————————————————————-
Richard, thank you for showing the historical record of the IPCC’s and climate science’s non-science, and this abuse of science continuing in the current reviews. It is my view that such general statements about models (cogent ONLY in the narrow sense that one should not expect models to be a perfect reflection of a complex chaotic process) are then MISAPPLIED to in effect say that we can ALWAYS ignore the failures of the models with regard to observations. Such statements have much in common with certain troll comments, where the attempt to confuse is deliberate.
Of course, as I and others have pointed out, the claim that “no consistency was to be expected in the first place” is indeed wrong. The models are consistent, amazingly so: consistently wrong in one direction. (Which, if they were real scientists, would be HIGHLY informative.)
To borrow from your analogy: if I were to try to shoot a gazelle which, unlike real gazelles, never moved, and I shot my arrow 40 times, each time missing the target by 5′ to 25′ to the RIGHT (think the warm side of error), I would NOT then aim 15′ to the right (the mean distance from the real target) and then claim that all humanity should accept my shots as a bullseye.
Yet this is exactly what the IPCC did by discussing the mean of the ensemble of models as a reference point of the expected future harms of CO2.
This, like a blog troll, is not an accidental scientific error, but a deliberate attempt to confuse for the purpose of serving their political master’s, who want a certain result for political reasons.
Terry Oldberg says:
May 17, 2014 at 7:45 am
David A:
Despite the 400 or so studies, the climate sensitivity is a myth.
============================================================
Well, Tim, I cannot quite agree with you. CO2 both cools and warms, yet I doubt that it is perfectly Newtonian and that the cooling and warming are exactly equal. It is in the real atmosphere, and it does cool, by receiving conducted energy which could not otherwise exit the atmosphere and zipping it out to space; and it warms, by taking some radiant energy which is zipping off to space and redirecting it back to the earth’s system, either through redirected radiation or by conducting that energy to a non-GHG molecule via collision. (In effect it can increase or decrease the residence time of some energy.) If it increases the residence time, then it warms, because the incoming energy keeps coming in; if it decreases residence time, then it cools. CO2 does both.
However, I think we can agree that thus far the observations clearly indicate that climate sensitivity to CO2 is far lower than the IPCC admits, and, most importantly, the “C” in CAGW is entirely MIA.
David A:
In IPCC-AR4 and IPCC-AR5 there is no evidence of the existence of events underlying any of the referenced climate models. If present, these events would support measurement of the counts of independent observed events that are called “frequencies” in statistics and the ratios of frequencies that are called “relative frequencies.” Testing of a model would be possible in which the predicted relative frequencies were compared to the observed relative frequencies. If there were a match, the model would be “statistically validated.” Otherwise the model would be “falsified by the evidence.”
In designing their field of study, climatologists replaced climatological events by scientifically and logically illegitimate innovations. One of these was the equilibrium climate sensitivity. Another was model “evaluation.”
Those X-Y plots that compare projected to observed global temperatures are examples of evaluations. An evaluation can be conducted in lieu of events, making it popular among event-bereft climatologists.
If climatologists were to identify the events underlying a model, they would start by dividing the time-line into non-overlapping parts. Each such part would have a starting time and an ending time and would provide a portion of the description of an independent event. The next step would be to identify the complete set of possible outcomes of an event. Defining the outcome of an event as the global temperature averaged over the duration of that event is worth thinking about. However, this alternative suffers from the drawback that no more than one event with a given outcome would ever occur. Thus, the model would be insusceptible to falsification by the evidence; yet falsifiability is a requirement of the scientific method of investigation. To avoid this pitfall, climatologists would have to define the outcomes in the complete set of possible outcomes more abstractly.
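The event construction described above can be sketched in a few lines. Everything here is assumed for illustration: the anomaly series, the five-year period length, and the binary "warmer / not warmer" outcome set are invented, and serve only to show what "non-overlapping parts of the time-line, each with an abstract outcome" could mean in practice:

```python
# Sketch of the event construction described above: partition the
# time-line into non-overlapping periods and assign each period an
# abstract outcome. Data and categories are invented for illustration.

# Hypothetical annual temperature anomalies (degrees C), one per year
anomalies = [0.10, 0.12, 0.08, 0.15, 0.20, 0.18, 0.25, 0.22, 0.30, 0.28,
             0.27, 0.33, 0.31, 0.35, 0.40]

PERIOD = 5  # years per event: non-overlapping, each with a start and end

def outcome(period_mean, previous_mean):
    """Abstract outcome: did this period warm relative to the last one?"""
    return "warmer" if period_mean > previous_mean else "not warmer"

events = []
prev_mean = None
for start in range(0, len(anomalies) - PERIOD + 1, PERIOD):
    mean = sum(anomalies[start:start + PERIOD]) / PERIOD
    if prev_mean is not None:
        events.append(outcome(mean, prev_mean))
    prev_mean = mean

print(events)  # each entry is the outcome of one independent event
```

Because the outcomes are drawn from a small, fixed set rather than being unrepeatable temperature values, the same outcome can recur, which is what makes observed relative frequencies, and hence falsification, possible.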
BTW, I would love to know how many annual climate studies actually attempt to determine the climate sensitivity. As I said, I do not think anywhere near 20% of the annual studies address this.
This would be another important and overlooked rebuttal to the 97% meme.
Terry states, …”No events underlie the current crop of climate models”
————————————————————————————————
I do not understand this. The models indicate many climatic events in the real world. The real world has climate. Is that climate not events?
Previously I asked, “BTW, I would love to know how many annual climate studies actually attempt to determine the climate sensitivity. As I said, I do not think anywhere near 20% of the annual studies address this.”
While not a direct answer, ( I do not know if all of these studies directly attempted to address the physical question of climate sensitivity to CO2) Christopher Monckton’s latest post states this…”However, Legates et al. (2013) have demonstrated that just 0.5% of 11,944 climate science abstracts published from 1991-2011 state that we are the major cause of recent warming. “Consensus” is lacking. The IPCC is wrong.”
So 0.5% of 2000 studies is 10 studies (not 400) that indicate human-caused warming, a lukewarmist position that in no way translates to CAGW regardless.
Please define “events” vs “evaluations”. Two or three examples of each should suffice to both define and differentiate between them. (All of the above within the field of climate studies, please.)
If you are willing to further demonstrate, respond to my earlier response to your assertions that there is no such thing as “climate sensitivity”. ( see David A says: May 18, 2014 at 1:43 am )
David A:
I’ve already taken a stab at defining “event” and “evaluation” in the text that lies immediately above your most recent post. I’ll be happy to clarify if necessary.
Regarding the equilibrium climate sensitivity (TECS), it is the ratio of the change in the global temperature at equilibrium to the change in the logarithm to the base 2 of the CO2 concentration. As the global temperature at equilibrium is not an observable feature of the real world, when a numerical value is assigned to TECS this assignment is not falsifiable. Thus, it is unscientific and illogical.
A consequence of belief in the existence of TECS is for policy makers to think they have information about the outcomes of their policy decisions when they have no information. By creating belief in the existence of TECS, climatologists fabricate information. To fabricate information is unscientific and unethical.
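The ratio definition above corresponds to the standard logarithmic formulation, in which the implied warming for a concentration change from C0 to C under an assumed sensitivity S is S times log2(C/C0). A minimal sketch, where the sensitivity value and concentrations are assumptions chosen purely for illustration, not claims about the real climate:

```python
import math

def warming_from_co2(sensitivity, c_new, c_old):
    """Temperature change (degrees C) implied by an ASSUMED equilibrium
    climate sensitivity, expressed per doubling of CO2 concentration."""
    return sensitivity * math.log2(c_new / c_old)

# Illustrative only: an assumed sensitivity of 1.5 C per doubling,
# and a rise from 280 ppm to 560 ppm (exactly one doubling), so the
# implied warming equals the assumed sensitivity itself.
print(warming_from_co2(1.5, 560, 280))
```

The sketch also makes the comment's objection concrete: every output is fixed entirely by whatever sensitivity number is assumed on the way in, and the equilibrium temperature that would test that number is not observed.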
David A:
I write to offer a warning and a suggestion.
Several people (including me, Willis Eschenbach, Robert G Brown, etc.) have attempted to get Oldberg to define the “events” which he claims are needed but do not exist. Nobody has managed to get him to define these “events” but several WUWT threads have been completely sidetracked by the attempts.
So, I suggest that you avoid much waste of effort (and tearing of hair) by ignoring Oldberg’s assertions.
Richard
richardscourtney:
Actually, I have already defined what I mean by “event” in this thread. Also, as “event” is a commonly used term in the literature of mathematical statistics, it needs no definition by me. Perplexity over the definition of “event” suggests a lack of background in mathematical statistics on the part of the person who is perplexed by it. Among those who are thusly perplexed is evidently you. In the future, you could avoid wasting the time of your fellow bloggers by reading up on the topic of the thread before entering the conversation.
Terry Oldberg:
At May 18, 2014 at 9:52 am you say to me
Really? You have? At long, long last you have done that?
Strange that you did not cite the definition, did not quote it, and did not copy it when claiming you have provided it.
Please state your definition of “events” which you claim are needed but do not exist.
Any reply other than a clear and succinct definition in response to my request will be a public declaration by you that you are (yet again) wasting space on the thread with meaningless nonsense.
Richard
PS And your claims to your statistical abilities are laughable.
richardscourtney:
As attempts at discourse with you are invariably unproductive, I’ll exit this conversation.