By Christopher Monckton of Brenchley
This time last year, as the honorary delegate from Burma, I had the honor of speaking truth to power at the Doha climate conference by drawing the attention of 193 nations to the then almost unknown fact that global warming had not happened for 16 years.
The UN edited the tape of my polite 45-second intervention by cutting out the furious howls and hisses of my supposedly grown-up fellow delegates. They were less than pleased that their carbon-spewing gravy-train had just tipped into the gulch.
The climate-extremist news media were incandescent. How could I have Interrupted The Sermon In Church? They only reported what I said because they had become so uncritical in swallowing the official story-line that they did not know there had really been no global warming at all for 16 years. They sneered that I was talking nonsense – and unwittingly played into our hands by spreading the truth they had for so long denied and concealed.
Several delegations decided to check with the IPCC. Had the Burmese delegate been correct? He had sounded as though he knew what he was talking about. Two months later, Railroad Engineer Pachauri, climate-science chairman of the IPCC, was compelled to announce in Melbourne that there had indeed been no global warming for 17 years. He even hinted that perhaps the skeptics ought to be listened to after all.
At this year’s UN Warsaw climate gagfest, Marc Morano of Climate Depot told the CFACT press conference that the usual suspects had successively tried to attribute The Pause to the alleged success of the Montreal Protocol in mending the ozone layer; to China burning coal (a nice irony there: Burn Coal And Save The Planet From – er – Burning Coal); and now, just in time for the conference, by trying to pretend that The Pause has not happened after all.
As David Whitehouse recently revealed, the paper by Cowtan & Way in the Quarterly Journal of the Royal Meteorological Society used statistical prestidigitation to vanish The Pause.
Dr. Whitehouse’s elegant argument used a technique in which Socrates delighted. He stood on the authors’ own ground, accepted for the sake of argument that they had used various techniques to fill in missing data from the Arctic, where few temperature measurements are taken, and still demonstrated that their premises did not validly entail their conclusion.
However, the central error in Cowtan & Way’s paper is a fundamental one and, as far as I know, it has not yet been pointed out. So here goes.
As Dr. Whitehouse said, HadCRUT4 already takes into account the missing data in its monthly estimates of coverage uncertainty. For good measure and good measurement, it also includes estimates for measurement uncertainty and bias uncertainty.
Taking into account these three sources of uncertainty in measuring global mean surface temperature, the error bars are an impressive 0.15 Cº – almost a sixth of a Celsius degree – either side of the central estimate.
The fundamental conceptual error that Cowtan & Way had made lay in their failure to realize that large uncertainties do not reduce the length of The Pause: they actually increase it.
Cowtan & Way’s proposed changes to the HadCRUT4 dataset, intended to trounce the skeptics by eliminating The Pause, were so small that the trend calculated on the basis of their amendments still fell within the combined uncertainties.
In short, even if their imaginative data reconstructions were justifiable (which, as Dr. Whitehouse indicated, they were not), they made nothing like enough difference to allow us to be 95% confident that any global warming at all had occurred during The Pause.
If one takes no account of the error bars and confines the analysis to the central estimates of the temperature anomalies, the HadCRUT4 dataset shows no global warming at all for nigh on 13 years (above).
However, if one displays the 2 σ uncertainty region, the least-squares linear-regression trend falls wholly within that region for 17 years 9 months (below).
The true duration of The Pause, based on the HadCRUT4 dataset, approaches 18 years. Therefore, the question Cowtan & Way should have addressed, but did not address, is whether the patchwork of infills and extrapolations and krigings they used in their attempt to deny The Pause was at all likely to constrain the wide uncertainties in the dataset rather than add to them.
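For readers who want to try the test described above for themselves, here is a minimal Python sketch. It is not the code used to produce the graphs: it assumes a monthly series of HadCRUT4 anomaly central estimates, treats the combined measurement, coverage and bias uncertainty as a flat ±0.15 Cº band, and asks for the longest recent window whose least-squares trend line stays inside that band. The band value and the 24-month minimum window are illustrative assumptions.

```python
import numpy as np

SIGMA_2 = 0.15  # illustrative combined 2-sigma uncertainty band, in Celsius degrees

def trend_within_band(anomaly, start):
    """Fit a least-squares trend from month `start` to the end of the record and
    test whether the fitted line stays inside the +/- SIGMA_2 band around the
    central estimates for every month in the window."""
    y = np.asarray(anomaly[start:], dtype=float)
    x = np.arange(len(y))
    slope, intercept = np.polyfit(x, y, 1)
    fitted = slope * x + intercept
    return bool(np.all(np.abs(fitted - y) <= SIGMA_2))

def longest_pause_months(anomaly, min_months=24):
    """Walk the start month back from the end of the record and return the length
    (in months) of the longest window whose trend never leaves the band."""
    n = len(anomaly)
    longest = 0
    for start in range(n - min_months, -1, -1):
        if trend_within_band(anomaly, start):
            longest = n - start
    return longest
```

Feeding the function a list of monthly anomalies (however obtained) returns the pause length in months under those assumptions; tightening or widening the band changes the answer, which is the whole point of the argument above.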
Publication of papers such as Cowtan & Way, which really ought not to have passed peer review, does indicate the growing desperation of institutions such as the Royal Meteorological Society, which, like every institution that has profiteered by global warming, does not want the flood of taxpayer dollars to become a drought.
Those driving the scare have by now so utterly abandoned the search for truth that is the end and object of science that they are incapable of thinking straight. They have lost the knack.
Had they but realized it, they did not need to deploy ingenious statistical dodges to make The Pause go away. All they had to do was wait for the next El Niño.
These sudden warmings of the equatorial eastern Pacific, for which the vaunted models are still unable to account, occur on average every three or four years. Before long, therefore, another El Niño will arrive, the wind and the thermohaline circulation will carry the warmth around the world, and The Pause – at least for a time – will be over.
It is understandable that skeptics should draw attention to The Pause, for its existence stands as a simple, powerful, and instantly comprehensible refutation of much of the nonsense talked in Warsaw this week.
For instance, the most straightforward and unassailable argument against those at the U.N. who directly contradict the IPCC’s own science by trying to blame Typhoon Haiyan on global warming is that there has not been any for just about 18 years.
In logic, that which has occurred cannot legitimately be attributed to that which has not.
However, the world continues to add CO2 to the atmosphere and, all other things being equal, some warming can be expected to resume one day.
It is vital, therefore, to lay stress not so much on The Pause itself, useful though it is, as on the steadily growing discrepancy between the rate of global warming predicted by the models and the rate that actually occurs.
The IPCC, in its 2013 Assessment Report, runs its global warming predictions from January 2005. It seems not to have noticed that January 2005 happened more than eight and a half years before the Fifth Assessment Report was published.
Startlingly, its predictions of what has already happened are wrong. And not just a bit wrong. Very wrong. No prizes for guessing in which direction the discrepancy between modeled “prediction” and observed reality runs. Yup, you guessed it. They exaggerated.
The left panel shows the models’ predictions to 2050. The right panel shows the discrepancy of half a Celsius degree between “prediction” and reality since 2005.
On top of this discrepancy, the trends in observed temperature compared with the models’ predictions since January 2005 continue inexorably to diverge:
Here, 34 models’ projections of global warming since January 2005 in the IPCC’s Fifth Assessment Report are shown as an orange region. The IPCC’s central projection, the thick red line, shows the world should have warmed by 0.20 Cº over the period (equivalent to 2.33 Cº/century). The 18 ppmv (201 ppmv/century) rise in the trend on the gray dogtooth CO2 concentration curve, plus other greenhouse-gas increases, should have caused 0.1 Cº of warming, with the remaining 0.1 Cº from previous CO2 increases.
Yet the mean of the RSS and UAH satellite measurements, in dark blue over the bright blue trend-line, shows global cooling of 0.01 Cº (–0.15 Cº/century). The models have thus already over-predicted warming by 0.22 Cº (2.48 Cº/century).
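The trend comparison behind these figures can be sketched in a few lines of Python. This is not the code used for the graph: it simply assumes two monthly anomaly series aligned from January 2005, one for the model-ensemble mean and one for the mean of the RSS and UAH satellite records, and returns the difference between their least-squares trends.

```python
import numpy as np

def trend_per_century(series, months_per_year=12):
    """Least-squares slope of a monthly anomaly series, in Celsius degrees per century."""
    x = np.arange(len(series)) / months_per_year   # time axis in years
    slope_per_year = np.polyfit(x, np.asarray(series, dtype=float), 1)[0]
    return slope_per_year * 100.0

def gap(modeled, observed):
    """Difference between predicted and observed warming rates (C/century) over the
    same months; a positive value means the models are running warm."""
    return trend_per_century(modeled) - trend_per_century(observed)
```

Any two aligned series will do; the sign and size of the result are what the argument about the Gap turns on.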
This continuing credibility gap between prediction and observation is the real canary in the coal-mine. It is not just The Pause that matters: it is the Gap that matters, and the Gap that will continue to matter, and to widen, long after The Pause has gone. The Pause deniers will eventually have their day: but the Gap deniers will look ever stupider as the century unfolds.
I see you’ve resorted to being pedantic.
If you want to disprove something statistically, you have to adopt the null hypothesis that it is true, and then show that that has to be rejected.
Sorry but if I were being pedantic I would have to correct you here; you don’t have to adopt anything, the null hypothesis is the default position. That default position is either accepted or rejected after experimentation via statistical inference (as stated). The converse must therefore also be true (relating to the hypothesis or alternative hypothesis).
To be picky about this, you are both right, but you both need to define the hypothesis in question:
http://en.wikipedia.org/wiki/Null_hypothesis
Nick is precisely correct in that one can state a hypothesis — “AGW is true” or (in my own work) “The RAN3 random number generator generates perfectly random numbers”. This then becomes the null hypothesis — the thing you wish to disprove by comparing its predictions with the data.
In the case of the random number generator a test is simple. Generate some statistic with a known distribution from a series of supposedly perfectly random values produced by the generator in question. Compute the probability of getting the empirical result for the statistic given the assumption of perfect randomness. If that probability — the “p-value” — is very low, reject the null hypothesis. You have grounds for believing that RAN3 is not a perfect random number generator (and, in fact, this generator fails certain tests in exactly this way!)
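A minimal Python sketch of the kind of test just described, using the standard-library generator as a stand-in (RAN3 itself is not tested here); the draw count, bin count and seed are arbitrary.

```python
import random
from scipy import stats

def uniformity_p_value(n_draws=100_000, n_bins=100, seed=12345):
    """Bin n_draws supposedly uniform deviates and return the chi-squared p-value
    under the null hypothesis that the generator is perfectly uniform."""
    rng = random.Random(seed)            # stand-in generator, not RAN3
    counts = [0] * n_bins
    for _ in range(n_draws):
        counts[int(rng.random() * n_bins)] += 1
    chi2, p = stats.chisquare(counts)    # equal expected frequencies by default
    return p

# A very small p-value is grounds to reject the null hypothesis of perfect
# randomness; a moderate p-value merely fails to reject it.
```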
In the case of AGW, each model in CMIP5 constitutes a separate null hypothesis — note well, separate. We should then — one at a time, for each model — take the distribution of model predictions (given small perturbations in initial conditions to allow for the phase-space distribution of possible future climates from any initial condition in a chaotic nonlinear system), compare it to actual measurements, and compute the fraction of those climate trajectories that “encompass” the observation and/or are in “good agreement” with it. This process is somewhat complicated by the fact that both the prediction and the observation have “empirical” uncertainties. Still, the idea is the same — models that produce few trajectories in good agreement with the actual observation are in some concrete sense less likely to be correct, in a way that must eventually converge to certainty as more data is accumulated (lowering the uncertainty in the data being compared) or the divergence between prediction and observation widens.
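A rough Python sketch of that per-model check, under simplifying assumptions: each run is reduced to a least-squares trend, and “good agreement” is taken to mean lying within two observational sigmas of the observed trend. The function names, the reduction to a trend, and the two-sigma criterion are illustrative choices, not Dr. Brown’s.

```python
import numpy as np

def trend_per_year(series, months_per_year=12):
    """Least-squares slope of a monthly series, in Celsius degrees per year."""
    x = np.arange(len(series)) / months_per_year
    return np.polyfit(x, np.asarray(series, dtype=float), 1)[0]

def fraction_consistent(ensemble_runs, observed, obs_trend_sigma):
    """Fraction of one model's perturbed-initial-condition runs whose trend lies
    within +/- 2 observational sigma of the observed trend over the same months.
    `ensemble_runs` is an iterable of monthly series; `obs_trend_sigma` is the
    1-sigma uncertainty of the observed trend (same units as the trend)."""
    obs = trend_per_year(observed)
    run_trends = np.array([trend_per_year(run) for run in ensemble_runs])
    return float(np.mean(np.abs(run_trends - obs) <= 2.0 * obs_trend_sigma))

# A model whose runs rarely fall inside the observational band is, in the sense
# described above, less likely to be a correct description of the climate.
```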
This is only one possible way to test hypotheses — Jaynes, in his book Probability Theory: The Logic of Science, suggests another, Bayesian approach, which is to begin with a hypothesis (based on a number of assumptions that are themselves not certain, the Bayesian priors). One then computes the posterior distribution (the predictions of the theory) and, as new data come in, uses Bayes’ formula to transform the posterior distribution into new priors. In Bayesian reasoning, one doesn’t necessarily reject the null hypothesis; one dynamically modifies it on the basis of new data so that the posterior predictions remain in good agreement with the data. Bayesian statistics describes learning and fits perfectly with (indeed, can be derived from) computational information theory. If one applied Bayesian reasoning to a GCM that gave poor results when its posterior prediction was compared to reality, one would modify its internal parameters (the priors) until the posterior prediction was in better agreement.
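A toy Python sketch of that prior-to-posterior updating, on a discretised grid for a single parameter with a Gaussian likelihood; the grid, the error width and the made-up observations are purely illustrative and are not drawn from any climate model.

```python
import numpy as np

def bayes_update(prior, grid, observation, obs_sigma):
    """One application of Bayes' rule on a discretised parameter grid:
    posterior is proportional to prior times likelihood, then renormalised.
    The returned posterior serves as the prior for the next observation."""
    likelihood = np.exp(-0.5 * ((observation - grid) / obs_sigma) ** 2)
    posterior = prior * likelihood
    return posterior / posterior.sum()

# Illustrative use: a broad, flat prior over a single parameter is sharpened
# as three hypothetical noisy observations arrive.
grid = np.linspace(-1.0, 5.0, 601)          # candidate parameter values
belief = np.ones_like(grid) / grid.size     # flat initial prior
for obs in (1.8, 2.1, 1.6):                 # made-up measurements
    belief = bayes_update(belief, grid, obs, obs_sigma=0.8)
# `belief` now holds the posterior after three updates.
```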
This isn’t a sufficient description of the process, because one can weight the hypothesis itself with a degree of belief in the various priors, making some of them much more immune to shift (because of separate observations, for example). There are some lovely examples of this kind of trade-off reasoning in physics — introducing a prior assumption of dark matter/energy (but keeping the same old theory of Newtonian or Einsteinian gravitation) versus modifying the prior assumption of Newtonian gravitation in order to maintain good agreement between certain cosmological observations and a theory of long range forces between massive objects. People favor dark matter because the observations of (nearly) Newtonian gravitation have a huge body of independent support, making that prior relatively immune to change on the basis of new data. But in truth either one — or an as-yet unstated prior assumption — could turn out to be supported by still more observational data, especially from still more distinct kinds of observations and experiments.
Although there exist some technical objections to the application of the Cox axioms to derive Bayesian statistics in cases where probabilities are non-differentiable, the Cox/Jaynes general approach to probability theory as the basis for how we accrue knowledge is almost certainly well-justified, since “knowledge” in the human brain is in some sense differentiable: the degree of belief in propositions concerning the real world is — after factoring in all the Bayesian priors for those propositions — inevitably not sharply discrete. In the case of climate science, one would interpret the failure of the posterior predictions of climate models as sufficient reason to change the model assumptions and parameters to get better agreement, retaining aspects of the model that we have a very strong degree of belief in, in preference to those that we cannot so strongly support by means of secondary evidence, to smoothly avoid the failure of a hypothesis test based on faulty priors.
Yet, you are correct as well. One can always formulate as a null hypothesis “the climate is not changing due to Anthropogenic causes”, for example, and seek to falsify that using the data (not alternative models such as GCMs). This sort of hypothesis is very difficult to falsify, as one is essentially looking for variations in the climate that are correlated with anthropogenic data and that did not occur when the anthropogenic data had very different values. Humans very often use this, the assumption that their current state of belief about the world is “normal”, as their null hypothesis. Hence we assume that a coin flip is an unbiased 50-50 when we encounter a new coin rather than a biased 100-0 tails to heads, even though we are aware that one can construct two-headed coins as easily as coins with one head and one tail, or coins that have a weighting and construction that biases them strongly towards heads even though they have two sides.
This is why the term “unprecedented” is arguably the most abused term in climate science. It is inevitably used as pure argumentation, not science, trying to convince the listener to abandon the null hypothesis of nothing to see here, variation within the realm of normal behavior. It is why climate scientists almost without exception make changes in methodology or in the process that selects observations for inclusion to exaggerate warming trends, never reduce them. It is why Mann’s hockey stick was so very popular in the comparatively brief period before it was torn to shreds by better statisticians and better science. It is why no one ever includes the error bars in presentations of e.g. GASTA, and why GASTA is always presented, never the actual GAST. It is why no one ever presents GASTA on an absolute scale against GAST on an absolute scale. It is why no one ever presents the full climate record of the Holocene with sufficient error bars in both directions, to correctly account for proxy uncertainty and errors in projecting what is inevitably a tiny sampling of the globe to a temperature anomaly on the whole thing in any given timeslice, and the fact that time resolution of the time slices goes from as fast as hourly in the modern era to averaging over centuries per observation in proxy samples representing the remote past. We can, perhaps, speak of monthly anomaly peaks and troughs over the last 30 years with similar resolution, but there is no possible way to assert some particular value, reliably, for monthly anomalies in 1870. The error bars in the latter are so large that the numbers are meaningless. It isn’t even clear that the modern numbers on a monthly scale are meaningful. The system has a lot of natural noise, and then there are the systematic measurement errors.
In actual fact, little of the modern climate record is unprecedented. It has been as warm or warmer (absolute temperature or anomaly, take your pick) in the past without ACO_2, on any sort of 1000 to 3000 year time scale, and much warmer on time scales longer than that stretching back to e.g. the Holocene optimum or the previous interglacials. There is no perceptible change in the pattern of warming over the last 160 years of thermometric records (e.g. HADCRUT4) that can be confidently attributed to increased CO_2 by means of correlation alone. The rate of warming in the first half of the 20th century is almost identical to the rate of warming in the second half, so that is hardly unprecedented, yet the warming in the first half cannot be reasonably attributed to increases in CO_2. The periods of neutral to cooling in the last 160 years of data are not unprecedented (again, they are nearly identical and appear to be similarly “periodically” timed) and in both cases are difficult to reconcile with models that make CO_2 the dominant factor in determining GASTA over all forms of natural variation.
Indeed, the null hypothesis of warming continuing from the LIA due to reasons that we understand no better than we understand the cause of the LIA in the first place simply cannot be rejected on the basis of the data. While warming in the second half of the data could be due to increased CO_2 and the warming in the first half could be due to other causes, until one can specify precisely what those other causes are and quantitatively predict the LIA and subsequent warming, we won’t have a quantitative basis for rejection. It is not the case that mere correlation — the world getting warmer as CO_2 concentrations increase — is causality, and nothing at all here is unprecedented in any meaningful sense of the term.
I won’t address the issue of null hypothesis and alternate hypothesis, as things start to get very complicated there (as there may be more than one alternate hypothesis, and evidence for things may be overlapping). For example, there is a continuum of hypotheses for CO_2-linked AGW, one for each value of climate sensitivity treated as a collective parameter for simplicity’s sake. It isn’t a matter of “climate sensitivity is zero” vs “climate sensitivity is 2.3 C”, it is “climate sensitivity is anywhere from slightly negative to largely positive”, and the data (as we accumulate it) will eventually narrow that range to some definite value. Bayesian reasoning can cope with this; naive hypothesis testing has a harder time of it. According to Bayes, climate sensitivity, to the extent that it is a Bayesian prior not well-established by experiment and different in nearly every climate model, should be in free-fall, with every year without warming pulling its most probable value further down. And to some extent that is actually happening in climate science, although “reluctantly”, reflecting a too-great weight given to high sensitivity for political, not scientific, reasons from the beginning of this whole debate.
rgb
Nick Stokes says:
November 20, 2013 at 2:50 am
TLM says: November 20, 2013 at 2:27 am
“Now please enlighten me how you measure “warming” without measuring the temperature?”
AGW has been around since 1896. Arrhenius then deduced that CO2 would impede the loss of heat through IR, and would cause temperatures to rise. There was no observed warming then. AGW is a consequence of what we know about the radiative properties of gases.
AGW predicted that temperatures would rise, and they did. You can’t do better than that, whether or not the rise is “statistically significant”.
—————————–
Temperatures have gone up & down. They were already rising in 1896, as indeed had been the temperature trend since c. 1696, i.e. the depths of the LIA. (The longer-term trend since c. 2296 BC, however, remains down.) The warming trend of the 1920s to ’40s reversed to cooling from the 1940s to the late ’70s, then reversed again back to warming c. 1977. It appears to be in the process of returning to cooling.
So you could in fact do much better than what has actually happened since 1896. If CACA were valid, i.e. 90% of observed warming in this century supposedly caused by human-released GHGs, then temperature would have gone up in lock step with CO2, but it hasn’t. Human activities may have some measurable effect, but natural fluctuations still rule.
Russ R. says:
November 20, 2013 at 6:38 am
The long run trend remains the only thing that matters.
Which one? Remember, the ’Larmists like to blame only the warming after 1950 on man, since most of the increase in CO2 occurred after that. Therein lies their big problem: how to explain the 17-year stop in warming now, with CO2 levels at their highest and continually increasing. The real reason is simple: CO2 wasn’t driving temperatures up in the first place. But the ’Larmies are, if nothing else, slow learners.
rgbatduke says:
November 20, 2013 at 9:15 am
I know I’ve written this before but…
jeez are your students lucky.
Jim says: November 20, 2013 at 7:23 am
Silver ralph says November 20, 2013 at 6:00 am
I will see your “differential temperatures” factor and raise you with a “divergent jet” overhead (in an affected area) … often we get HUGE swings in the nature of airmasses out here on the ‘great plains’ with LITTLE in the way of precip even …
___________________________________
Perhaps no precipitation where you are, but somewhere else…….?
A warm and a cold airmass residing side by side causes massive pressure differentials at high altitude. Those pressure differences cause the high altitude jetstreams, and the Earth’s spin will quickly have them moving in an easterly direction. And it is the jetstreams, and their massive movements of air, that drive the surface pressure differences that produce surface cyclonic conditions.
Thus no temperature differentials = no jetstreams = no cyclones. So a uniformly warm planet will produce …. not a lot really (in terms of weather). (And a waving jet stream produces the most active weather.)
rgbatduke: “This isn’t a sufficient description of the process, because one can weight the hypothesis itself with a degree of belief in the various priors, making some of them much more immune to shift ”
I’ve weighted the prior belief in Bayesian testing such that I can categorically state it’s absurd. And being asymptotically derived, it is wholly free of modifications of belief until every other source of probability becomes, to a one, uniformly absurd with respect to measurements. Then and only then must I reject the whole tapestry. And on the next use my ‘principle of sufficient bias’ restates that Bayesian hypothesis testing is absurd.
There are a lot of interesting things to say about Bayesian notions. Not the least of which is that ridiculously simple networks of neurons can be constructed as a Bayesian consideration. And, indeed, there are good reasons to state that it is a primary mode of statistically based learning in humans. Which, if you consider it at all, is exactly where we get confirmation bias from. When something is wholly and demonstrably false, but we have prior and strongly held beliefs, *nothing changes*, despite that the new information shows that the previous information is wholly and completely absurd.
But of another note, the ability to smoothly avoid falsification that you mention with regards to AGW, is precisely the use of weighted priors in a Bayesian scheme. That is, they are doing exactly what Bayes would have of them if they are stating anything other than a ‘principle of insufficient reason’ for anything not already and independently established. Bayes is not simply belief formation and learning, it is belief retention and absurdity as well.
The problem here is a rather old and basic one. Until you’ve proved your premises: Nothing follows.
Yet another breathtaking post from Professor Brown, whose contributions are like drinking champagne to the sound of trumpets. His profound knowledge, always elegantly expressed, is a delight. His outline of the purpose, methods and value of Bayesian probability is one of the clearest I have seen. And the economy with which he points out the fundamental statistical inadequacies in the case for worrying about CO2 is valuable. A few more like him and scientists would once again be treated with respect and admiration.
Mr. Bofill queries my conclusion that the Cowtan and Way paper does not establish to 95% confidence that the Pause has not happened, on the unusual basis that the authors themselves allowed the possibility that that conclusion was true. However, I was careful not to say that they had themselves allowed for the possibility that there has been a Pause: I said that on the basis of their work WE could not do so.
Their paper concluded that the terrestrial datasets exhibit a cooling bias in recent decades and that, therefore, the Pause might not have happened. That is what has been published in too many places, and the authors have not demurred.
I had hoped I had demonstrated that no such conclusion as that which they had drawn could legitimately be drawn from the patchwork of techniques they deployed. The fundamental mistake they made, which the reviewers should not in my submission have allowed, was to assume that their techniques constrained, rather than widened, the uncertainties in the surface temperature record.
Another commenter asserts, as trolls and climate extremists so often do, that in an unrelated discussion on a different blog on the other side of the world I had persisted in asserting something that anyone with elementary algebra ought to have accepted was incorrect. However, as so often, the troll in question did not specify what point he thought I had misunderstood. This is a particularly furtive instance of the ignoratio elenchi fallacy in two of its shoddy sub-species: ad hominem and irrelevancy to the matter at hand. If the troll would like to instruct me on a point that has little or nothing to do with this thread, let him not pollute this thread by fabricating a smear: let him write to me and say what it is he challenges and why.
Finally, there has been some discussion in this thread about my use of the word “prestidigitation”. I use it in its original meaning, sleight of hand, and in its metonymic derivative, trickery, with an implication of less than honest dealing.
OssQss says:
November 20, 2013 at 4:29 am
So,,,,,, who are these individuals that have written this paper?
What is their history in climate science?
What else have they written? …
Here’s a brief answer to some of your questions:
“Dr Kevin Cowtan is a computational scientist at the University of York, and Robert Way is a cryosphere specialist and PhD student at the University of Ottawa. … Dr Cowtan, whose speciality is crystallography, carried out the research in his spare time. This is his first climate paper.”
With respect, Nick Stokes and others have inverted null hypothesis and hypothesis to be tested in order to favor the AGW, especially the CAGW, view.
From Wikipedia: http://en.wikipedia.org/wiki/Null_hypothesis
Please note that it “refers to a general or default position: that there is no relationship between two measured phenomena”. With respect to AGW this means no relationship between anthropogenic carbon dioxide increase and global warming.
In science, logic and calculations prove nothing; they only show consistency with assumptions. The same is true of models which are nothing but complicated logic and calculation. In science the only things which provisionally prove anything are experiments and observations of the actual phenomena.
We are in the Holocene interglacial, a period of warm climate between glacials. There have been several previous interglacials. Whatever caused the other interglacials to warm probably caused the Holocene interglacial to warm. Natural warming is thus the null (default) hypothesis. It is totally intellectually dishonest to claim one’s newly invented preferred hypothesis is the null (default) hypothesis. AGW is what is being tested against the null (default) hypothesis of natural interglacial warming.
If it takes thirty years to define climate, then thirty years of warming proportional, as defined by the models, to anthropogenic carbon dioxide increase over previous levels, and outside the warming limits defined by the Holocene and earlier interglacials, is required to provisionally prove AGW, or at least highly statistically significant proportional warming outside the previously established norms for more than fifteen years.
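One hypothetical way to operationalise that test in Python, with every threshold and input left to the user; the 1% significance level, the series names and the natural-rate limit are illustrative assumptions, not part of the comment above.

```python
import numpy as np
from scipy import stats

def agw_test(temp, co2, natural_rate_limit, years):
    """Illustrative two-part check: (1) is warming significantly proportional to
    CO2 (p < 0.01 on the regression slope)?  (2) does the observed warming rate
    (C per decade) exceed a user-supplied limit on natural interglacial warming?"""
    slope, intercept, r, p, stderr = stats.linregress(co2, temp)
    time_years = np.linspace(0.0, years, len(temp))
    rate_per_decade = np.polyfit(time_years, temp, 1)[0] * 10.0
    proportional = p < 0.01
    outside_natural = rate_per_decade > natural_rate_limit
    return proportional and outside_natural
```

Both conditions would have to hold over the stipulated period before the proposed test counted AGW as provisionally proved.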
The next step is to prove it will be catastrophic or at least dire enough to support prevention rather than adaptation.
The entire CAGW community seems to me to have done this intellectually dishonest inversion of null hypothesis from the beginning, thus requiring skeptics to prove the actual null (default) hypothesis.
Nick – The problem with using Arrhenius in a pro-AGW discussion is, even he changed his mind regarding the amount of heating CO2 could be responsible for.
Lord Monckton,
I haven’t been following the hype or the authors’ handling of it, but I suspected this might be the case. OK, as I noted, my position may be naive.
You are quite correct that my criticism entirely misses the thrust of your argument. 🙂 Thank you for pointing that out; I was indeed sidetracked on a minor detail, and thanks so much for your response, sir.
_Jim 7.09 am.
The Conservation of Energy Law between kinetic energy change and EM radiative flux is:
qdot = -Div Fv where qdot is the monochromatic heat generation rate of matter per unit volume and Fv is the monochromatic radiation flux density. Integrate this over all wavelengths and the physical dimensions of the matter under consideration and you get the difference between two S-B equations.
This gives the vector sum of the two radiation fields. You can easily prove that Climate Alchemy’s version at the surface, which adds, as scalars, the net radiation flux not convected away to the ‘back radiation’, increases the local energy generation rate by 333 W/m^2, or twice the 160 W/m^2 from the Sun.
The models then assume that Kirchhoff’s Law of Radiation applied to ToA offsets about half of this extra flux, leaving Ramanathan’s ‘Clear Sky Atmospheric Greenhouse factor’ of 157.5 W/m^2. This is just less than 6.85x reality (23 W/m^2) and heats up the hypothetical seas, giving more evaporation; this is the phoney 3x feedback.
It’s phoney because 1981_Hansen_etal.pdf falsely claimed the GHE is the (lapse rate) difference of temperature between the Earth’s surface at a mean +15 deg C and the -18 deg C ‘zone in the upper atmosphere’ in radiative equilibrium at 238.5 W/m^2.
Two problems here: firstly there is no -18 deg C zone, it’s the flux weighted average of -1.5 deg C H2O band emission (2.6 km where T and pH2O are falling fast, so it’s that spectral temperature), the -50 deg C 15 micron CO2 band and the 15 deg C atmospheric window IR; secondly, the real GHE is the temperature difference if you take out the water and CO2 so no clouds or ice, 341 W/m^2 radiative equilibrium for which surface temperature in radiative equilibrium would be 4-5 deg C, a GHE of ~11 K. The ratio 33/11 = 3 is the phoney positive feedback.
To offset the extra atmospheric heating in the models, they are hindcast using about twice the real low-level cloud optical depth. This is a perpetual motion machine of the 2nd kind, the lower atmosphere using its own heat to cause itself to expand. No professional engineer (and I am one who has measured and modelled coupled convection and radiation many times, and made processes actually work) can accept this juvenile nonsense.
It’s time this farrago was ended.
Nick – The problem with using Arrhenius in a pro-AGW discussion is, even he changed his mind regarding the amount of heating CO2 could be responsible for.
No, the problem with using Arrhenius in any discussion of climate is that it is 2013 and we’ve done a few things in physics since 1906. Like invent quantum field theory and electrodynamics. Ditto Fourier. I’m just sayin’…
The modern theory of atmospheric radiation owes almost nothing to Arrhenius or Fourier, almost everything to statistical mechanics, thermodynamics in general, and things such as the Planck distribution and quantum radiation processes that Arrhenius had at best a dim grasp of. Postulating that absorptive gases interpolated between a warm surface and a cold surface will have a warming effect on the warm surface — that’s simple first law energy balance for almost any radiative model of the interpolated gas. So the idea of the GHE can be attributed to him. However, there is nothing remotely useful in his quantitative speculations given that he was completely ignorant of all of the details of the thermal radiative process, and mostly ignorant of the full spectral details of the atmosphere, its internal thermodynamics/statistical mechanics, the details of the DALR that is currently held to be an important aspect of the process, and so on.
I’m not sure what the point of any of this discussion is. Correlation is not causality. A simple one-slab model is not the GHE, especially when it isn’t even parametrically sophisticated. The climate is described by a coupled set of Navier-Stokes equations with nonlinear, complex couplings (including both radiation and substantial latent heat transport) and numerous persistent features we cannot predict, arguably cannot accurately or consistently compute at all, and do not understand, insofar as attempts to compute their solution do not agree with one another or the observed climate. Outside of this, everything is mere speculation.
rgb
Nick Stokes says:
November 20, 2013 at 2:55 am
robinedwards36 says: November 20, 2013 at 2:39 am
“So, what role do the temperature records actually play in model simulation? Nick’s answer seems to be “None”.”
Yes, that’s essentially true. GCMs solve the Navier-Stokes equations, with transport of materials and energy, and of course radiation calculations. A GCM requires as input a set of forcings, which depend on scenario. GISS forcings are often cited. But a model does not use as input any temperature record.
———————————————————
Baloney. They use the temperature records to “train” the simulators. This is the only reason that the models track the temperatures in considerable detail up until the end of the training period. Later model editions reset the starting point of projections. That is the only reason that their discrepancies with real world data do not show up as glaring. Anyone want to guess what the results would be if they truncated their training period in, say, 1970?
Robert Brown says: November 20, 2013 at 6:46 am
“So the big question for Nick is: Just how long does the pause have to continue for you to reject, or modify, the null hypothesis that the models are accurate predictors of future climate?”
Well, you’ve made my point, and put the null hypothesis the right way around. You can test whether the models’ predictions, taken as null, should be rejected. That’s what Lucia has been doing. I don’t think she has succeeded yet, but it’s the way to go.
But you’ve put in this framing “how long does the pause have to continue”. That’s irrelevant to the test, and in important ways. A period of inadequate rise could invalidate the models. A period of decline would do so faster. You test the discrepancy, not zero slope.
Isn’t the Gap actually even worse than the 0.22 C number? As I understand it, the range of model predictions includes models with a range of different assumptions about how much CO2 would rise. Since we now have actual data on how much CO2 has risen since 2005, the only models whose predictions are relevant in a comparison with the actual data are those whose assumptions matched the actual CO2 rise. The average temperature rise predicted by those models will be *larger* than 0.2 C, meaning the true Gap between the relevant models and the actual data is larger than 0.22 C, correct?
Monckton of Brenchley: “[T]he troll in question did not specify what point he thought I had misunderstood.”
Lord M. is correct; I did not in this thread specify what I (“the troll in question”) thought he had misunderstood. In an attempt to return his attention–with as little distraction as possible from this thread–to a subject about which he should be tightening up his game, I instead provided a link to a blog comment in which he gave an inapposite response to three different commenters who had separately attempted to disabuse him of the same misapprehension: that the residence time of 14CO2 after the bomb tests can inform us much about how long the CO2-concentration increase caused by a temporary emissions-rate bulge would take to dissipate.
I would be happy to discuss it separately if he would specify the venue. (Clicking on this name above only sent me to wnd.com.) Realistically, though, I think that my goal–which is to help him make his presentations more bullet-proof–would best be served in a forum in which others could show him that they, too, think he should reconsider his position. (Or maybe I’ll be educated instead.) Joanne Nova’s blog, where he prematurely broke off the discussion, would seem appropriate–particularly since, if memory serves, Dr. Evans has made (what several of us believe is) the same mistake.
Monckton of Brenchley: “Another commenter asserts . . . that I had persisted in asserting something that anyone with elementary algebra ought to have accepted was incorrect.” Actually, I don’t think that “anyone with elementary algebra” ought to have accepted it, at least without some reflection. I merely meant that it was the type of thing that doesn’t require a lot of background to appreciate; that’s not the same as saying it should be immediately apparent; it certainly wasn’t to me. (And, strictly speaking, a couple of differential equations would actually be involved; it’s just that the salient part is simply algebra.)
Monckton of Brenchley says: November 20, 2013 at 5:37 am
“For instance, it is one of the inputs that they use in their attempts to quantify the water vapor and other temperature feedbacks.”
GCMs don’t use either feedbacks or climate sensitivity. It’s not part of the way they work. You can try to deduce feedback from GCM results.
“Arrhenius, whom Mr. Stokes cites with approval, did indeed change his mind about the central question in the climate debate”
Well, as you say later, he revised his estimate of sensitivity. That’s hardly a complete change of mind.
“Mr. Stokes would earn more respect if he conceded that the discrepancy between what was predicted and what is observed is material”
The discrepancy is material, and is what should be tested. The appropriate test is whether the observations are an improbable outcome given the model. That would invalidate the model. But you keep talking about whether the observed trend is significantly different from zero. Statistical testing could affirm that is true, as it is for recent long periods, but I don’t think that’s what you want. Failing to show that it could not be zero doesn’t prove anything.
Finally, there has been some discussion in this thread about my use of the word “prestidigitation”. I use it in its original meaning, sleight of hand, and in its metonymic derivative, trickery, with an implication of less than honest dealing.
The modern use of the word “prestidigitation” is a polite way of calling someone a liar.
prestidigitation = deception.
Sleight of hand. Feel better now? Great word. Similar to prevarication, sleight of tongue.
oops, forgot to blockquote.
The modern use of the word “prestidigitation” is a polite way of calling someone a liar.
prestidigitation = deception.
rgbatduke says:
November 20, 2013 at 10:38 am
“I’m not sure what the point of any of this discussion is. Correlation is not causality. A simple one-slab model is not the GHE, especially when it isn’t even parametrically sophisticated. The climate is described by a coupled set of Navier-Stokes equations with nonlinear, complex couplings (including both radiation and substantial latent heat transport) and numerous persistent features we cannot predict, arguably cannot accurately or consistently compute at all, and do not understand, insofar as attempts to compute their solution do not agree with one another or the observed climate. Outside of this, everything is mere speculation.”
pearls before swine.
the analysis is, however, succinct and spot on.
you’ve been active posting today. thanks.
Jim Rose says: November 20, 2013 at 5:51 am
“@Nick Stokes
Serious question for information. Do the GCMs have any adjustable parameters? If so are these parameters fit to the prior history? By contrast, are the GCMs first principle models with well established inputs from known physical measurements?”
bones says: November 20, 2013 at 11:12 am
“Baloney. They use the temperature records to “train” the simulators. This is the only reason that the models track the temperatures in considerable detail up until the end of the training period.”
GCMs are first principle models working from forcings. However, they have empirical models for things like clouds, updrafts etc which the basic grid-based fluid mechanics can’t do properly. The parameters are established by observation. I very much doubt that they fit to the temperature record; that would be very indirect. Cloud models are fit to cloud observations etc.
The reason that models do quite well with backcasting is that they use known forcings, including volcanoes etc.
Of course people compare their results with observation (not just temperature), and if they are failing, try to do better, as they should. But that’s different to ‘use the temperature records to “train” the simulators’.
Steve Oregon says: November 20, 2013 at 8:42 am
“I presume Nick also believes ocean acidification is not deduced from alkalinity records?”
Yes. The reason that ocean acidification is expected is that CO2 in the air has increased, and we know the chemistry (there’s a calculator here). Observations may provide confirmation. If they turn out to be noisy, or hard to get, that doesn’t invalidate the expectation.
“But you’ve put in this framing “how long does the pause have to continue”. That’s irrelevant to the test, and in important ways. A period of inadequate rise could invalidate the models. A period of decline would do so faster. You test the discrepancy, not zero slope.” — Stokes
For the first installment on Dancing with Sophists: Note here that Mr. Stokes has not stated that the pause is a zero slope, or that a zero slope is not an ‘inadequate rise.’ Specifically he has introduced self-contradictory red herrings for people to chase, such as: “Are you claiming that some cooling is an inadequate rise?”
As Stokes has answered a question by not answering, he is hoping that the less cautious interlopers will complete his Red Herring for him by taking the discourse on a tangent. This is the sort of misdirection used by stage magicians, where they hope to distract the audience with production values and the choreography of their assistants.
@Stokes: So we’re all still curious — how long was it then?
rgbatduke says: November 20, 2013 at 10:38 am
“No, the problem with using Arrhenius in any discussion of climate is that it is 2013 and we’ve done a few things in physics since 1906.”
I used Arrhenius to show that theories of global warming are not deduced from the temperature record. In his day, there was no reliable global record available.