By Christopher Monckton of Brenchley
This time last year, as the honorary delegate from Burma, I had the honor of speaking truth to power at the Doha climate conference by drawing the attention of 193 nations to the then almost unknown fact that global warming had not happened for 16 years.
The UN edited the tape of my polite 45-second intervention by cutting out the furious howls and hisses of my supposedly grown-up fellow delegates. They were less than pleased that their carbon-spewing gravy-train had just tipped into the gulch.
The climate-extremist news media were incandescent. How could I have Interrupted The Sermon In Church? They only reported what I said because they had become so uncritical in swallowing the official story-line that they did not know there had really been no global warming at all for 16 years. They sneered that I was talking nonsense – and unwittingly played into our hands by spreading the truth they had for so long denied and concealed.
Several delegations decided to check with the IPCC. Had the Burmese delegate been correct? He had sounded as though he knew what he was talking about. Two months later, Railroad Engineer Pachauri, climate-science chairman of the IPCC, was compelled to announce in Melbourne that there had indeed been no global warming for 17 years. He even hinted that perhaps the skeptics ought to be listened to after all.
At this year’s UN Warsaw climate gagfest, Marc Morano of Climate Depot told the CFACT press conference that the usual suspects had successively tried to attribute The Pause to the alleged success of the Montreal Protocol in mending the ozone layer; to China burning coal (a nice irony there: Burn Coal And Save The Planet From – er – Burning Coal); and now, just in time for the conference, by trying to pretend that The Pause has not happened after all.
As David Whitehouse recently revealed, the paper by Cowtan & Way in the Quarterly Journal of the Royal Meteorological Society used statistical prestidigitation to vanish The Pause.
Dr. Whitehouse’s elegant argument used a technique in which Socrates delighted. He stood on the authors’ own ground, accepted for the sake of argument that they had used various techniques to fill in missing data from the Arctic, where few temperature measurements are taken, and still demonstrated that their premises did not validly entail their conclusion.
However, the central error in Cowtan & Way’s paper is a fundamental one and, as far as I know, it has not yet been pointed out. So here goes.
As Dr. Whitehouse said, HadCRUT4 already takes into account the missing data in its monthly estimates of coverage uncertainty. For good measure and good measurement, it also includes estimates for measurement uncertainty and bias uncertainty.
Taking into account these three sources of uncertainty in measuring global mean surface temperature, the error bars are an impressive 0.15 Cº – almost a sixth of a Celsius degree – either side of the central estimate.
The fundamental conceptual error that Cowtan & Way had made lay in their failure to realize that large uncertainties do not reduce the length of The Pause: they actually increase it.
Cowtan & Way’s proposed changes to the HadCRUT4 dataset, intended to trounce the skeptics by eliminating The Pause, were so small that the trend calculated on the basis of their amendments still fell within the combined uncertainties.
In short, even if their imaginative data reconstructions were justifiable (which, as Dr. Whitehouse indicated, they were not), they made nothing like enough difference to allow us to be 95% confident that any global warming at all had occurred during The Pause.
If one takes no account of the error bars and confines the analysis to the central estimates of the temperature anomalies, the HadCRUT4 dataset shows no global warming at all for nigh on 13 years (above).
However, if one displays the 2σ uncertainty region, the least-squares linear-regression trend falls wholly within that region for 17 years 9 months (below).
The true duration of The Pause, based on the HadCRUT4 dataset, approaches 18 years. Therefore, the question Cowtan & Way should have addressed, but did not address, is whether the patchwork of infills and extrapolations and krigings they used in their attempt to deny The Pause was at all likely to constrain the wide uncertainties in the dataset, rather than adding to them.
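For readers who want to try such a trend-within-uncertainty test for themselves, here is a minimal sketch of one simple version of it: find the longest period, counted back from the present, over which the total change implied by the least-squares trend is smaller than the ±0.15 Cº combined uncertainty. It is illustrative only, not necessarily the exact criterion used for the figures, and the anomaly series below is a random placeholder to be replaced with the actual HadCRUT4 monthly anomalies.

# Minimal sketch: longest recent period over which the least-squares trend in a
# monthly anomaly series implies a total change within the +/-0.15 C combined
# coverage/measurement/bias uncertainty. The "demo" series is a placeholder.
import numpy as np

def trend_total_change(y):
    """Least-squares slope times the period length: total change, in C."""
    x = np.arange(len(y))
    slope = np.polyfit(x, y, 1)[0]          # C per month
    return slope * (len(y) - 1)

def longest_pause(anomalies, uncertainty=0.15):
    """Months, counted back from the end, with |total trend change| <= uncertainty."""
    longest = 0
    for start in range(len(anomalies) - 24):          # require at least 2 years
        segment = anomalies[start:]
        if abs(trend_total_change(segment)) <= uncertainty:
            longest = max(longest, len(segment))
    return longest

rng = np.random.default_rng(0)
demo = 0.4 + 0.05 * rng.standard_normal(240)          # placeholder "anomalies"
print(longest_pause(demo), "months within the stated uncertainty")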
Publication of papers such as Cowtan & Way, which really ought not to have passed peer review, does indicate the growing desperation of institutions such as the Royal Meteorological Society, which, like every institution that has profiteered by global warming, does not want the flood of taxpayer dollars to become a drought.
Those driving the scare have by now so utterly abandoned the search for truth that is the end and object of science that they are incapable of thinking straight. They have lost the knack.
Had they but realized it, they did not need to deploy ingenious statistical dodges to make The Pause go away. All they had to do was wait for the next El Niño.
These sudden warmings of the equatorial eastern Pacific, for which the vaunted models are still unable to account, occur on average every three or four years. Before long, therefore, another El Niño will arrive, the wind and the thermohaline circulation will carry the warmth around the world, and The Pause – at least for a time – will be over.
It is understandable that skeptics should draw attention to The Pause, for its existence stands as a simple, powerful, and instantly comprehensible refutation of much of the nonsense talked in Warsaw this week.
For instance, the most straightforward and unassailable argument against those at the U.N. who directly contradict the IPCC’s own science by trying to blame Typhoon Haiyan on global warming is that there has not been any for just about 18 years.
In logic, that which has occurred cannot legitimately be attributed to that which has not.
However, the world continues to add CO2 to the atmosphere and, all other things being equal, some warming can be expected to resume one day.
It is vital, therefore, to lay stress not so much on The Pause itself, useful though it is, as on the steadily growing discrepancy between the rate of global warming predicted by the models and the rate that actually occurs.
The IPCC, in its 2013 Assessment Report, runs its global warming predictions from January 2005. It seems not to have noticed that January 2005 happened more than eight and a half years before the Fifth Assessment Report was published.
Startlingly, its predictions of what has already happened are wrong. And not just a bit wrong. Very wrong. No prizes for guessing in which direction the discrepancy between modeled “prediction” and observed reality runs. Yup, you guessed it. They exaggerated.
The left panel shows the models’ predictions to 2050. The right panel shows the discrepancy of half a Celsius degree between “prediction” and reality since 2005.
On top of this discrepancy, the trends in observed temperature compared with the models’ predictions since January 2005 continue inexorably to diverge:
Here, the 34 models’ projections of global warming since January 2005 in the IPCC’s Fifth Assessment Report are shown as an orange region. The IPCC’s central projection, the thick red line, shows the world should have warmed by 0.20 Cº over the period (equivalent to 2.33 Cº/century). The 18 ppmv (201 ppmv/century) rise in the trend on the gray dogtooth CO2 concentration curve, plus other greenhouse-gas increases, should have caused 0.1 Cº warming, with the remaining 0.1 Cº from previous CO2 increases.
Yet the mean of the RSS and UAH satellite measurements, in dark blue over the bright blue trend-line, shows global cooling of 0.01 Cº (–0.15 Cº/century). The models have thus already over-predicted warming by 0.22 Cº (2.48 Cº/century).
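As a back-of-envelope check on the period-to-century conversion, one can run the arithmetic directly. The period length below is an assumption (January 2005 to roughly mid-2013, about 8.6 years), and the figure’s own rates were evidently computed from unrounded monthly trends, so the rounded inputs here reproduce them only approximately.

# Rough check of the period-to-century conversions quoted above.
period_years = 8.6                  # assumed: Jan 2005 to ~mid-2013
predicted_change = 0.20             # C over the period (IPCC central projection)
observed_change = -0.01             # C over the period (mean of RSS and UAH)

def per_century(change):
    return change / period_years * 100.0

print(round(per_century(predicted_change), 2), "C/century predicted")     # ~2.3
print(round(per_century(observed_change), 2), "C/century observed")       # ~-0.1
print(round(per_century(predicted_change - observed_change), 2),
      "C/century over-prediction")                                        # ~2.4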
This continuing credibility gap between prediction and observation is the real canary in the coal-mine. It is not just The Pause that matters: it is the Gap that matters, and the Gap that will continue to matter, and to widen, long after The Pause has gone. The Pause deniers will eventually have their day: but the Gap deniers will look ever stupider as the century unfolds.
“The parameters are established by observation.” — Stokes in response to a question about adjustable parameters.
@Jim Rose: As there are a plethora of IPCC models, each with wide disagreement on the parameters, then yes, de facto they are tuned. Given the wide disagreements, it would be impossible for them to stay close in relation to each other unless they were tuned. But, of course, the temperature readings themselves are adjustable parameters, as the choice of interpolation and infilling to produce data that does not exist is, by definition, a tunable parameter.
Known forcings? Yuk-yuk. The post-facto fitting of values to said “forcings” reduces them to arbitrary unknown inputs, an ensemble of WAGs. Calling them SWAGs would be too kind.
rgbatduke says: November 20, 2013 at 9:15 am
“In the case of AGW, each model in CMIP5 constitutes a separate null hypothesis — note well separate. We should then — one at a time, for each model — compare the distribution of model predictions…”
I disagree. Earth and models both are systems of chaotic weather, for which after a period of time a climate can be discerned. The timing of GCM weather events is not informed by any observations of the timing of Earth events; they are initialized going way back and generate synthetic weather independently. This is true of medium-term events like ENSO; if the Earth happens to have a run of La Niñas (as it has), there is no way a model can be expected to match the timing of that.
The only thing we can expect them to have in common is that long run climate, responding to forcings. If you test models independently, that will take a very long time to emerge with certainty. If you aggregate models, you can accelerate the process.
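The statistical intuition here can be sketched with a toy calculation. It is purely illustrative, and it rests on the assumption that each model run is the same forced signal plus independent internal-variability noise, which is itself a contested assumption: if that holds, the multi-model mean exposes the forced trend with noise reduced by roughly one over the square root of the number of runs.

# Toy illustration of the variance argument for aggregating model runs.
# Assumes run = common forced trend + independent "weather" noise; real
# models are neither independent nor identically distributed.
import numpy as np

rng = np.random.default_rng(1)
months = 120
t = np.arange(months) / 12.0
forced = 0.02 * t                                            # assumed common signal
runs = forced + 0.15 * rng.standard_normal((34, months))     # 34 noisy "runs"

single_run_error = np.std(runs[0] - forced)
ensemble_error = np.std(runs.mean(axis=0) - forced)
print(single_run_error, ensemble_error)   # ensemble error ~ single / sqrt(34)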
Stokes;
Nope, sorry. Unless the existence and values of the forcings can be deduced deterministically from first principles (physical law), they are pure plugs plucked from presumed possibilities. The Parameters of Pseudo-Scientific Pretense.
Nick, please give this a read. The comments section has links to the actual research papers.
http://stevengoddard.wordpress.com/2011/05/26/1979-before-the-hockey-team-destroyed-climate-science/
Here we have real science in action. A prediction made in the mid to late 1970s that the following would occur:
1) The cold would continue until the early to mid 1980s
2) It would then warm until the end of the century
3) The warming would then stop
4) A drop of 1-2 degrees C would then occur.
3 out of 3 so far makes me a lot more confident in the 4th prediction than in models that have never been right.
I have a simple question for you. How long, with rising CO2 and flat or falling temperatures, must pass before you admit that CO2 doesn’t control the climate? 20 years? Almost there now. 30? Never?
” Earth and models both are systems of chaotic weather, for which after a period of time a climate can be discerned. ” — Stokes
“The only thing we can expect them to have in common is that long run climate, responding to forcings. If you test models independently, that will take a very long time to emerge with certainty.” — Stokes
On this installment of Dance with Sophists you will note that Stokes has confessed that Global Climate Models are Local Weather Models. And that, since it will take a long time to test the models, either the null hypothesis has not been discharged, and so there has never been a test for Global Warming; or the null hypothesis has been discharged, in which case he knows how long it takes to reject it when the correlation from the models is spurious.
“If you aggregate models, you can accelerate the process.” — Stokes
But in the very next sentence he states, in effect, that if a classroom of students is given the math question 1 + 1, then the average of the students’ wrong answers will produce the correct answer: such that, if 32 students are in the class and at most one states ‘2’, the average of all the other results is ‘2.’
The Sophist here is attempting, as Sophists do, to prevent judgement on any measure of metric by introducing nonsense. For if the Sophist believed this position credibly, then the average of all the wrong papers about the failures of the AGW hypothesis have certainly converged on the answer that the AGW hypothesis failed. Or, to have some sport with Einstein:
“A large amount of failed experiments prove me right, but no amount of failed experiments prove me wrong.” — Einstein
Brian H says: November 20, 2013 at 1:35 pm
“Stokes;
Nope, sorry. Unless the existence and values of the forcings can be deduced deterministically from first principles (physical law)”
I think you have the wrong idea about what the forcings are. The main ones are measured GHG concentrations (especially CO2), volcanic aerosols and TSI changes. Not much doubt about their existence and values (well, OK, maybe aerosols are not so easy).
TRM says: November 20, 2013 at 1:53 pm
“A prediction made in the mid to late 1970s that the following would occur:
1) The cold would continue until the early to mid 1980s
2) It would then warm until the end of the century
3) The warming would then stop”
3 out of 3? The cold didn’t continue – people talk of the 1976 climate shift. It warmed. But they didn’t say the warming would stop; they predicted a severe cold snap after 2000, while we had the warmest year ever in 2005, then 2010.
Nick Stokes: “3 out of 3? The cold didn’t continue – people talk of the 1976 climate shift. It warmed. But they didn’t say the warming would stop; they predicted a severe cold snap after 2000, while we had the warmest year ever in 2005, then 2010.”
So 10 years early, or 10 years late, is a suitable disproof. So you have now committed to 10 years as suitable for the purpose, and as the answer for rgb that you have avoided providing.
So by your self-professed metric, 17 years of failure is nearly twice the failure you use in disproving a hypothesis. The question then is whether you state AGW has failed, as it’s been nearly twice as long as the 10-year mark, or whether it has not failed, as 17 is adequately less than 10 + 10.
Jquip says:
November 20, 2013 at 9:53 am
“There are a lot of interesting things to say about Bayesian notions. Not the least of which is that ridiculously simple networks of neurons can be constructed as a Bayesian consideration. And, indeed, there are good reasons to state that it is a primary mode of statistically based learning in humans. Which, if you consider at all, is exactly where we get confirmation bias from. When something is wholly and demonstrably false, but we have prior and strongly held beliefs, *nothing changes* despite that the new information shows that the previous information is wholly and completely absurd.”
A simple Bayesian probability computation is stateless and therefore without memory. The memory or learned content of a neuronal network constructed from Bayesian computations must either be set by some form of back-propagation learning, i.e. set from the outside during a training phase as fixed constants (or parameters, as when we fix the parameters of a GCM during curve fitting), or else the network must retain information through some form of feedback (as a flip-flop does in the digital world).
The properties of such a special algorithmic implementation cannot be used to say anything general about the field of Bayesian probabilities at all. In other words, Bayesian probability computations as such have nothing to do with strongly held beliefs in the face of new information.
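To make the statelessness point concrete, here is a minimal sketch of a single Bayes update (the likelihood numbers are made up). The function returns a posterior that depends only on what is passed in; any persistence of belief between calls exists solely because the caller stores the posterior and feeds it back as the next prior.

# A single Bayes update is stateless: the output depends only on its inputs.
# Any "memory" lives outside the computation, with whoever carries the prior
# forward between calls. Likelihood values here are illustrative.
def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1.0 - prior) * likelihood_if_false)

belief = 0.99                      # strongly held prior
for _ in range(5):                 # repeated evidence favouring "false"
    belief = bayes_update(belief, likelihood_if_true=0.1, likelihood_if_false=0.9)
    print(round(belief, 4))
# The belief erodes step by step; the only "memory" is the prior carried
# forward by the loop, not anything inside the update itself.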
bones says:
November 20, 2013 at 11:12 am
“Anyone want to guess what the results would be if they truncated their training period in, say, 1970?”
Which is exactly what they should do as a standard validation test. And yet, I have never heard of any attempt by the world’s best GCM modelers at building a high-quality standard validation suite.
GCMs are the healthcare.gov of the scientific world (same attention to testing).
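As a toy illustration of the kind of truncated-training validation being asked for (nothing to do with any actual GCM code; the data below are purely synthetic): fit only on the data “before 1970,” then judge the fit on what follows.

# Out-of-sample validation by truncating the training period: calibrate on
# pre-1970 data only, then score the fit on the post-1970 data it never saw.
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1900, 2014)
series = (0.005 * (years - 1900) + 0.1 * np.sin((years - 1900) / 11.0)
          + 0.05 * rng.standard_normal(years.size))   # synthetic stand-in data

train = years <= 1970
coeffs = np.polyfit(years[train], series[train], 1)   # calibrate pre-1970 only
prediction = np.polyval(coeffs, years[~train])        # predict post-1970
rmse = np.sqrt(np.mean((prediction - series[~train]) ** 2))
print("post-1970 RMSE of the pre-1970 fit:", round(rmse, 3))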
Thank you all for this thick post of sharing. There is more than a day’s worth of reading and contemplation contained here!
The single most astonishing thing that I take away from all of the comments came from Mr. Stokes, with respect to the timing of temperature records in our past.
“In his day, there was no reliable global record available.”
Now I ask this:
Is not this entire post and associated paper about compensating for deficiencies in exactly the same thing over 100 years later?
Maybe it’s just me ... just sayin’ {©¿©}
Hyperthermania says: November 20, 2013 at 1:30 am
“‘prestidigitation’ – I quite like your writing normally, but that is a step too far. I can’t say that word, I’ve no idea what it means and I’m not going to bother to look it up in the dictionary on the basis that I’d never be able to use it in a sentence anyway! It is nice that you push our limits, but come on, give us a chance. I read it over and over again, then just when I think I’ve got the hang of it, I try to read the whole sentence again, and bam! Tongue well and truly twisted.”
Try “sleight of hand” as a pedestrian alternative. Personally, I like his Lordship’s choice of word.
OOoops!! Read Everything, before doing anything.
Alternate titles for this paper:
An Inconvenient Pause
Infusion of Data Confusion.
“Whack-a-Mole”: mole is in the Arctic.
I Reject Your Reality and Replace it With My Krigings
Mr. Donis asks whether the Gap between models’ predictions and observed reality is greater than the 0.22 K shown in the latest monthly Global Warming Prediction Index. He wonders whether the models had correctly predicted the rate of increase in CO2 concentration since 2005. The models had gotten that more or less right (a little on the high side, but not much): and, in any event, over so short a period as eight and a half years a small variation in the CO2 estimate would not make much difference to the temperature outturn in the models.
However, one could add the 0.35 K offset at the beginning of 2005 in Fig. 11.33ab of IPCC (2013) to the 0.22 K divergence since 2005, making the Gap almost 0.5 K. The divergence on its own, though, is more interesting, and I suspect it will continue. Indeed, the longer it continues the less likely it will be that the rate of global warming since January 2005 will ever reach, still less exceed, the 2.33 K/century rate that is the mid-range estimate of the 34 climate models relied upon by the IPCC.
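A rough magnitude check of the point about the CO2 path, using the conventional logarithmic forcing approximation ΔF = 5.35 ln(C/C0) and a transient sensitivity parameter of order 0.3 to 0.5 K per Watt per square meter: the specific ppmv values below are illustrative assumptions, not figures taken from any model.

# How much warming difference does a few-ppmv error in the assumed CO2 path
# make over ~8.5 years? Standard logarithmic forcing; illustrative numbers.
import math

c_modelled = 400.0      # ppmv assumed by a model at end of period (illustrative)
c_observed = 395.0      # ppmv actually observed (illustrative)

delta_forcing = 5.35 * math.log(c_modelled / c_observed)      # ~0.07 W/m2
for transient_param in (0.3, 0.5):                            # K per W/m2
    print(round(delta_forcing * transient_param, 3), "K")     # ~0.02-0.03 K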
Mr. Stokes says the models “solve the Navier-Stokes equations”. They may try to do so, but these equations have proven notoriously refractory. If I remember correctly, the Clay Institute has offered $1 million for anyone who can solve them. Mr. Stokes may like to apply (subject to my usual 20% finder’s fee). He also makes the remarkable assertion that the models do not attempt to quantify feedbacks. Of course they do. See Roe (2009) for an explanation. Gerard Roe was a star pupil of the formidable Dick Lindzen.
Briefly, the models begin by determining the Planck or instantaneous or zero-feedback climate-sensitivity parameter. The only reliable way to do this is to start with the fullest possible latitudinally-distributed temperature record (which Mr. Stokes incorrectly states is not used in the models). The models do it the way I did it after consulting Gerard Roe, one of the very few scientists who understands all this. John Christy kindly supplied 30 years of latitudinally-distributed satellite mid-troposphere temperature data, and I spent a happy weekend programming the computer to do the relevant radiative-transfer and spherical-geometry and solar-azimuth calculations, latitude by latitude. I determined that, to three decimal places, the value used in the models is correct, at 0.313 Kelvin per Watt per square meter (the value is given in reciprocal form, as 3.2 Watts per square meter per Kelvin, in a more than usually Sibylline footnote in IPCC, 2007, at p. 631; but Roe says the Planck parameter should really be treated as part of the climatic reference frame and not as a mere feedback, so he prefers 0.313 K/W/m2).
The Planck parameter, which, as I have explained, is indeed temperature-dependent, is used twice in the climate-sensitivity equation. First, it is used to determine the instantaneous or zero-feedback climate sensitivity, which (absent any change in the non-radiative transports, whose absence cannot at all be relied upon: see Monckton of Brenchley, 2010, Annual Proceedings, World Federation of Scientists) is 1.2 K per CO2 doubling. Then the product of the unamplified feedback sum and the Planck parameter is taken, for that constitutes the closed-loop feedback in the climate object.
The individual temperature feedbacks whose sum is multiplied by the Planck parameter to yield the loop gain are each also temperature-dependent, being denominated in Watts per square meter per Kelvin of temperature change over some period of study.
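Put as a worked sketch of the zero-dimensional feedback arithmetic just described (illustrative only, not the internal code of any GCM, and the feedback-sum values tried below are arbitrary): the doubling forcing 5.35 ln 2 ≈ 3.7 W/m², multiplied by the Planck parameter 0.313 K/W/m², gives the roughly 1.2 K zero-feedback sensitivity; the feedback sum in W/m²/K times the Planck parameter gives the dimensionless loop gain; and the closed-loop sensitivity is the zero-feedback value divided by (1 minus the loop gain), which blows up as the loop gain approaches unity.

# Zero-dimensional feedback arithmetic: Planck parameter, zero-feedback
# sensitivity, loop gain, and the Bode-style amplification 1/(1 - g).
import math

planck = 0.313                          # K per W/m2 (reciprocal of 3.2 W/m2/K)
forcing_2xCO2 = 5.35 * math.log(2)      # ~3.71 W/m2 per CO2 doubling

zero_feedback = planck * forcing_2xCO2  # ~1.16 K, the "1.2 K" quoted above
print("zero-feedback sensitivity:", round(zero_feedback, 2), "K")

for feedback_sum in (-0.5, 0.0, 1.0, 2.0):      # W/m2/K, arbitrary trial values
    g = planck * feedback_sum                   # dimensionless loop gain
    sensitivity = zero_feedback / (1.0 - g)     # singular as g -> 1
    print(f"feedback sum {feedback_sum:+.1f} W/m2/K -> loop gain {g:+.2f} "
          f"-> sensitivity {sensitivity:.2f} K per doubling")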
In my expert review of the IPCC’s Fifth Assessment Report, I expressed some disquiet that the IPCC had not produced an explicit curve showing its best estimate (flanked by error-bars) of the evolution of the Planck parameter from its instantaneous value (0.3 K/W/m2) to its eventual equilibrium value of 0.9 K/W/m2 some 3,500 years later. The shape of the curve is vital. Deduction based on examination of the models’ predictions under the then six standard emissions scenarios (IPCC, 2007, p. 803, fig. 10.26) indicates that the value of the climate-sensitivity parameter rises to 0.44 K/W/m2 after 100 years and to 0.5 K/W/m2 (on all scenarios) after 200 years. That implies quite a rapid onset of the feedbacks, which, however, is not observed in reality, suggesting either that the shape of the curve of the evolution of the climate-sensitivity parameter is not as the models think it is or that the feedbacks are not at all as strongly net-positive as the models need to imagine in order to maintain that there may be substantial global warming soon.
Frankly, the continuing absence of the time-curve of the climate-sensitivity parameter is a scandal, for it is chiefly in the magnitude of feedbacks that the models’ absurd exaggerations of global warming occur, and this, therefore, is the chief bone of contention between the climate extremists and the skeptics.
There are several excellent theoretical reasons for considering that feedbacks are likely to be net-negative, or at most very weakly net-positive. Not the least of these is the presence of a singularity in the curve of climate sensitivity against loop gain, at the point where the loop gain reaches unity. This singularity has a physical meaning in the electronic circuits for which the Bode feedback-amplification equation was derived, but – and this point is crucial – it has no meaning in the physical climate, and it is necessary to introduce a damping term into the equation to prevent the loop gain from reaching the singularity. The models, however, have no damping term. Any realistic value for such a term would reduce climate sensitivity by three-quarters.
Empirical confirmation is to be found in the past 420,000 years of temperature change. In all that time, global temperatures have varied by only 1% in absolute terms either side of the long-run median. Since something like today’s conditions prevailed during the four previous interglacials over that period, the implication is that feedbacks in the climate system simply cannot be as strongly net-positive as the modelers imagine: for otherwise global temperatures could not have remained as remarkably stable as they have.
The reason for the self-evident temperature homeostasis in the paleotemperature record is not hard to find. For the atmosphere is bounded by two vast heat-sinks, the ocean (5000 times denser than the atmosphere) and outer space (an infinite heat-sink). So one would not expect much perturbation of surface temperature: nor has much in fact occurred; nor is it at all likely to occur. Since the climate has been temperature-stable for almost half a million years, it would be a rash modeler who predicted major temperature change today. Yet that is what the modelers predict. And, though these obvious points are really unassailable, the IPCC simply looks the other way and will not address them.
It is the intellectual dishonesty behind the official story-line that concerns me. Science is supposed to be a search for truth, not a propaganda platform for international socialism or communism. In a rational world, the climate scare would have died before it was born.
Talking of “born”, Mr. Born says I am wrong to think that the removal of carbon-14 from the atmosphere after the bomb tests is any guide to the removal of carbon-12 and carbon-13 emitted by us in admittedly larger and more sustained quantities. But, apart from a dopey paper by Essenhigh, I do not know of anyone who claims that carbon-14 will pass out of the atmosphere any more quickly than carbon-12 or carbon-13. If Mr. Born would like to write to me privately he can educate me on what I have misunderstood; but he may like to read Professor Gosta Pettersson’s three thoughtful papers on the subject, posted here some months ago, before he does.
I like the move to the “gap” concept although “chasm” may become more descriptive. I’m doubtful we will see another El Niño anytime soon. The warming in the summer of 2012 appeared to be heading towards one but it fell apart. This could have taken enough energy out of the system to delay the next event until 2015-16. If that is the case there will be a lot fewer CAGW supporters.
Monckton of Brenchley says: November 20, 2013 at 6:10 pm
“Mr. Stokes says the models “solve the Navier-Stokes equations”. They may try to do so, but these equations have proven notoriously refractory.”
I have spent much of my working life solving the Navier-Stokes equations. CFD is a basic part of engineering. Here is a Boeing presentation on how they design their planes. N-S solution is right up there on p 2.
“He also makes the remarkable assertion that the models do not attempt to quantify feedbacks. Of course they do. See Roe (2009) for an explanation.”
Roe does not say anything about GCMs using feedbacks. Here is a well-documented GCM – CAM 3. It describes everything that goes in. You will not find any reference to sensitivity, feedbacks or the temperature record.
“Briefly, the models begin by determining the Planck or instantaneous or zero-feedback climate-sensitivity parameter….”
I’m sure there are simplified models that do this. But not GCMs.
Er … Hypothermia, I knew that word when I was a boy. But then, in those days we learned words from books and looked them up in the dictionary (Oxford, of course) when we didn’t understand them. Here’s the link.
http://www.oxforddictionaries.com/definition/english/prestidigitation
And here’s the thesaurus, so you can add legerdemain and thaumaturgy to your vocabulary.
http://thesaurus.com/browse/prestidigitation
Unfortunately, AGW proponents have not done due diligence in listing all the possible causes of temperature trends aside from CO2. We know that oceanic warming due to albedo (an indication of the amount of SW IR that gets through to the ocean surface) can and does eventually affect temperature when all that heat layers itself on the top surface, causing temperature trends up, down, and stable. These trends can be every bit as powerful as greenhouse warming or cooling, in fact more so. It is also true that the release of this heat can be jarringly disrupted by windy conditions, mixing the warmth again below the surface, holding it away from its ability to heat the air. It is easy to see the result in the herky-jerky stair steps up and down in the temperature series.
All scientists should examine all possible causes of data trends. The messy nature of the temperature trend matches the messy nature of oceanic conditions. It does not at all match the even measured rise in CO2. That a not small cabal of scientists jumped on that wagon anyway is food for thought.
Monckton of Brenchley: “Mr. Born says I am wrong to think that the removal of carbon-14 from the atmosphere after the bomb tests is any guide to the removal of carbon-12 and carbon-13 emitted by us in admittedly larger and more sustained quantities.”
It’s not an issue of which carbon isotopes we’re talking about. The issue is the difference between CO2 concentration, on the one hand, and residence time in the atmosphere of a typical CO2 molecule, of whatever isotope, on the other. The bomb tests, which tagged some CO2 molecules, showed us the latter, and I have no reason to believe that the residence time of any other isotope would be much different. But you’re trying to infer the former from the latter, which, as I’ve resorted to math below to explain, can’t be done.
Let’s assume that the total mass M of CO2 in the atmosphere equals the mass of 12CO2 and 13CO2 plus the mass M14 of 14CO2, where c(t) = M14/M is the 14CO2 concentration. Let’s also assume that CO2 is being pumped into the atmosphere at a rate E(t) and sucked out at a rate S(t), and that the concentration of 14CO2 in the CO2 being pumped in is c_E. Then the rate of CO2-mass change is given by:
dM/dt = E(t) − S(t),
whereas the rate of 14CO2-mass change is given by:
dM14/dt = c_E·E(t) − c(t)·S(t).
The first equation says that the total mass, and thus the concentration, of CO2 varies as the difference between source and sink rates. So, for example, if the source and sink rates are equal, the total mass remains the same – even if few individual molecules remain in the atmosphere for very long. Also, if the emission rate E(t) exceeds the sink rate, the total mass of atmospheric CO2 will rise until such time, if any, as the sink rate catches up, and, unless the sink rate thereafter exceeds the emission rate, the mass M will remain elevated forever.
The second equation tells us that, even if the emission rate E(t) were to remain equal to the sink rate S(t) and thereby keep the total CO2 concentration constant, the difference between the (initially bomb-test-elevated) 14CO2 concentration and the ordinary, cosmogenic 14CO2 concentration – i.e., the “excess” 14CO2 concentration – would still decay with a time constant of roughly M/S. That time constant therefore tells us nothing about how long the total CO2 concentration would remain at some elevated level to which it may previously have been raised by elevated emissions; in this scenario, for example, the level remains elevated forever even though the excess 14CO2 concentration decays.
In summary, the decay rate of the excess 14CO2 tells us the turnover rate of carbon dioxide in the atmosphere. It does not tell us how fast sink rate will adjust to increased emissions.
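A short numerical sketch of those two equations (with made-up source, sink, and decay numbers) shows the distinction: hold E equal to S and the total mass M never moves, yet the excess 14CO2 concentration still decays with the turnover time M/S.

# Numerical sketch of the two mass-balance equations above, with made-up units:
# emission rate E equal to sink rate S keeps total CO2 mass constant, while
# the "excess" 14CO2 concentration still decays with turnover time M/S.
M = 800.0            # total atmospheric CO2 mass, arbitrary units
S = 80.0             # sink rate per year (turnover time M/S = 10 "years")
E = S                # emission rate equal to sink rate: total mass constant
c_E = 1.0            # normalised 14CO2 concentration of the inflow
c = 2.0              # initial 14CO2 concentration, elevated by the bomb tests

dt = 0.01
for step in range(int(50 / dt)):          # integrate 50 "years"
    M14 = c * M
    dM = (E - S) * dt                     # first equation: stays zero here
    dM14 = (c_E * E - c * S) * dt         # second equation: excess decays
    M += dM
    M14 += dM14
    c = M14 / M

print("total mass after 50 years:", M)        # unchanged
print("14CO2 concentration:", round(c, 4))    # has relaxed back toward c_E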
@Joe Kirklin Born
This may be obvious and included and not noted, but does the cosmogenic-sourced 14CO2 production rate increase with an increased atmospheric concentration of target CO2 molecules? If it does, the decrease in 14CO2 concentration will be delayed and give the appearance of a longer turnover time.
And if the sun’s heliosphere has a significant indirect impact on the formation of 14CO2 that influence should first be subtracted lest the accidental or deliberate selection of ‘convenient’ starting and end points influence the calculation. During the coming cooling and the shrinking of the heliosphere we will have a chance to falsify one or more posits on this matter.
Crispin in Waterloo but really, etc.: “[D]oes the cosmogenic-sourced 14CO2 production rate increase with an increased atmospheric concentration of target CO2 molecules?”
Although I have an opinion (i.e., no), you should accord that opinion no more weight than that of the next guy at the bar; I’m a layman.
But in my youth I did take some required math courses (as presumably did most of this site’s other habitués), and you can witness here: http://wattsupwiththat.com/2013/07/01/the-bombtest-curve-and-its-implications-for-atmospheric-carbon-dioxide-residency-time/#comment-1352996 my conversion last July, by math alone, from Lord Monckton’s position (which I had earlier espoused on that page) regarding theoretical consistency between the bomb-test results and the Bern model to the one I expressed above (in a considerably condensed form so as not to tax Lord M’s patience).
Again, though, I don’t profess to be knowledgeable about carbon-14 generation, so I have no clue about whether additional information could be gleaned from secondary effects such as those about which you speculate.
Professor Brown,
“There are some lovely examples of this kind of trade-off reasoning in physics — introducing a prior assumption of dark matter/energy (but keeping the same old theory of Newtonian or Einsteinian gravitation) versus modifying the prior assumption of Newtonian gravitation in order to maintain good agreement between certain cosmological observations and a theory of long range forces between massive objects. People favor dark matter because the observations of (nearly) Newtonian gravitation have a huge body of independent support, making that prior relatively immune to change on the basis of new data. But in truth either one — or an as-yet unstated prior assumption — could turn out to be supported by still more observational data, especially from still more distinct kinds of observations and experiments.”
In your amazing silent-assassin way, did you just trash Dark Matter/Energy? I fervently hope so; I hate them both. Sorry to be slightly off-topic, but Professor Brown is so erudite I almost assume that whatever he says must be true!
Lord Monckton: “[Mr. Donis] wonders whether the models had correctly predicted the rate of increase in CO2 concentration since 2005.”
That’s not quite what I was wondering: as I understand it, the rate of increase of CO2 concentration is an *input* to the models, not a prediction of them. Different models make different assumptions about the rate of CO2 increase, and that difference in input makes some contribution to the difference in output.
What I was wondering was, *how much* contribution? If the Gap is 0.22 degrees looking at all the models, how much larger would it be if we only looked at the models whose assumptions about the rate of CO2 increase matched reality? I agree that difference wouldn’t be much over 8 years, but even a few hundredths of a degree would be a significant fraction of the total Gap.
I also think it’s worth bringing up this issue because when the IPCC draws its spaghetti graphs of model predictions, it *never* mentions the fact that many of those models made assumptions about the rate of CO2 rise that did *not* match reality, so they are irrelevant when comparing predictions to actual data.