Cowtan & Way off course

By Christopher Monckton of Brenchley

This time last year, as the honorary delegate from Burma, I had the honor of speaking truth to power at the Doha climate conference by drawing the attention of 193 nations to the then almost unknown fact that global warming had not happened for 16 years.

The UN edited the tape of my polite 45-second intervention by cutting out the furious howls and hisses of my supposedly grown-up fellow delegates. They were less than pleased that their carbon-spewing gravy-train had just tipped into the gulch.

The climate-extremist news media were incandescent. How could I have Interrupted The Sermon In Church? They only reported what I said because they had become so uncritical in swallowing the official story-line that they did not know there had really been no global warming at all for 16 years. They sneered that I was talking nonsense – and unwittingly played into our hands by spreading the truth they had for so long denied and concealed.

Several delegations decided to check with the IPCC. Had the Burmese delegate been correct? He had sounded as though he knew what he was talking about. Two months later, Railroad Engineer Pachauri, climate-science chairman of the IPCC, was compelled to announce in Melbourne that there had indeed been no global warming for 17 years. He even hinted that perhaps the skeptics ought to be listened to after all.

At this year’s UN Warsaw climate gagfest, Marc Morano of Climate Depot told the CFACT press conference that the usual suspects had successively tried to attribute The Pause to the alleged success of the Montreal Protocol in mending the ozone layer; to China burning coal (a nice irony there: Burn Coal And Save The Planet From – er – Burning Coal); and now, just in time for the conference, by trying to pretend that The Pause has not happened after all.

As David Whitehouse recently revealed, the paper by Cowtan & Way in the Quarterly Journal of the Royal Meteorological Society used statistical prestidigitation to vanish The Pause.

Dr. Whitehouse’s elegant argument used a technique in which Socrates delighted. He stood on the authors’ own ground, accepted for the sake of argument that they had used various techniques to fill in missing data from the Arctic, where few temperature measurements are taken, and still demonstrated that their premises did not validly entail their conclusion.

However, the central error in Cowtan & Way’s paper is a fundamental one and, as far as I know, it has not yet been pointed out. So here goes.

As Dr. Whitehouse said, HadCRUT4 already takes into account the missing data in its monthly estimates of coverage uncertainty. For good measure and good measurement, it also includes estimates for measurement uncertainty and bias uncertainty.

Taking into account these three sources of uncertainty in measuring global mean surface temperature, the error bars are an impressive 0.15 Cº – almost a sixth of a Celsius degree – either side of the central estimate.

The fundamental conceptual error that Cowtan & Way had made lay in their failure to realize that large uncertainties do not reduce the length of The Pause: they actually increase it.

Cowtan & Way’s proposed changes to the HadCRUT4 dataset, intended to trounce the skeptics by eliminating The Pause, were so small that the trend calculated on the basis of their amendments still fell within the combined uncertainties.

In short, even if their imaginative data reconstructions were justifiable (which, as Dr. Whitehouse indicated, they were not), they made nothing like enough difference to allow us to be 95% confident that any global warming at all had occurred during The Pause.

[Figure: HadCRUT4 central estimates with the least-squares linear-regression trend, showing no warming for almost 13 years]

If one takes no account of the error bars and confines the analysis to the central estimates of the temperature anomalies, the HadCRUT4 dataset shows no global warming at all for nigh on 13 years (above).

However, if one displays the 2σ uncertainty region, the least-squares linear-regression trend falls wholly within that region for 17 years 9 months (below).

[Figure: HadCRUT4 anomalies with the 2σ combined-uncertainty region; the least-squares trend falls wholly within the region for 17 years 9 months]
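The test being described can be sketched numerically. What follows is a minimal illustration only, not the method used to produce the graphs: it uses a synthetic monthly anomaly series, takes the ±0.15 Cº combined uncertainty quoted above as given, and simplifies "the trend falls wholly within the region" to "the total change along the fitted trend line is smaller than the full width of the uncertainty band".

```python
import numpy as np

def trend_within_uncertainty(anomalies, half_width=0.15):
    """Fit a least-squares linear trend to monthly anomalies and test
    whether the total change along the fitted line is smaller than the
    full width of the combined-uncertainty band (half_width either side
    of the central estimate)."""
    n = len(anomalies)
    t = np.arange(n) / 12.0                    # time in years
    slope, intercept = np.polyfit(t, anomalies, 1)
    total_change = abs(slope) * t[-1]          # trend change over the window
    return total_change <= 2.0 * half_width, slope

# Synthetic stand-in for 18 years of trendless monthly anomalies
rng = np.random.default_rng(0)
series = 0.45 + rng.normal(0.0, 0.08, 12 * 18)
ok, slope = trend_within_uncertainty(series)   # ok: trend lost in the band
```

On data like these the fitted trend is tiny compared with the band, so the test cannot distinguish the series from one with no warming at all, which is the point made in the text: wide uncertainties lengthen, not shorten, the period over which no warming can be affirmed.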

The true duration of The Pause, based on the HadCRUT4 dataset, approaches 18 years. Therefore, the question Cowtan & Way should have addressed, but did not address, is whether the patchwork of infills and extrapolations and krigings they used in their attempt to deny The Pause was at all likely to constrain the wide uncertainties in the dataset, rather than adding to them.

Publication of papers such as Cowtan & Way, which really ought not to have passed peer review, does indicate the growing desperation of institutions such as the Royal Meteorological Society, which, like every institution that has profiteered from global warming, does not want the flood of taxpayer dollars to become a drought.

Those driving the scare have by now so utterly abandoned the search for truth that is the end and object of science that they are incapable of thinking straight. They have lost the knack.

Had they but realized it, they did not need to deploy ingenious statistical dodges to make The Pause go away. All they had to do was wait for the next El Niño.

These sudden warmings of the equatorial eastern Pacific, for which the vaunted models are still unable to account, occur on average every three or four years. Before long, therefore, another El Niño will arrive, the wind and the thermohaline circulation will carry the warmth around the world, and The Pause – at least for a time – will be over.

It is understandable that skeptics should draw attention to The Pause, for its existence stands as a simple, powerful, and instantly comprehensible refutation of much of the nonsense talked in Warsaw this week.

For instance, the most straightforward and unassailable argument against those at the U.N. who directly contradict the IPCC’s own science by trying to blame Typhoon Haiyan on global warming is that there has not been any for just about 18 years.

In logic, that which has occurred cannot legitimately be attributed to that which has not.

However, the world continues to add CO2 to the atmosphere and, all other things being equal, some warming can be expected to resume one day.

It is vital, therefore, to lay stress not so much on The Pause itself, useful though it is, as on the steadily growing discrepancy between the rate of global warming predicted by the models and the rate that actually occurs.

The IPCC, in its 2013 Assessment Report, runs its global warming predictions from January 2005. It seems not to have noticed that January 2005 happened more than eight and a half years before the Fifth Assessment Report was published.

Startlingly, its predictions of what has already happened are wrong. And not just a bit wrong. Very wrong. No prizes for guessing in which direction the discrepancy between modeled “prediction” and observed reality runs. Yup, you guessed it. They exaggerated.

[Figures: left, the models’ global-warming predictions to 2050; right, the half-Celsius-degree discrepancy between “prediction” and observed reality since 2005]

The left panel shows the models’ predictions to 2050. The right panel shows the discrepancy of half a Celsius degree between “prediction” and reality since 2005.

On top of this discrepancy, the trends in observed temperature compared with the models’ predictions since January 2005 continue inexorably to diverge:

[Figure: 34 models’ projections of global warming since January 2005 (orange region) against the mean of the RSS and UAH satellite measurements]

Here, 34 models’ projections of global warming since January 2005 in the IPCC’s Fifth Assessment Report are shown as an orange region. The IPCC’s central projection, the thick red line, shows the world should have warmed by 0.20 Cº over the period (equivalent to 2.33 Cº/century). The 18 ppmv (201 ppmv/century) rise in the trend on the gray dogtooth CO2 concentration curve, plus other ghg increases, should have caused 0.1 Cº warming, with the remaining 0.1 Cº from previous CO2 increases.

Yet the mean of the RSS and UAH satellite measurements, in dark blue over the bright blue trend-line, shows global cooling of 0.01 Cº (–0.15 Cº/century). The models have thus already over-predicted warming by 0.22 Cº (2.48 Cº/century).
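The rate arithmetic above is a straightforward conversion of a total change over the period into a per-century rate. A quick sketch (the ~8.6-year span, January 2005 to roughly mid-2013, is an assumption inferred from the quoted rates; the observed figure of −0.15 Cº/century cannot be recovered exactly from the rounded −0.01 Cº total):

```python
# Convert total temperature changes over the comparison period
# into per-century rates, as in the model/observation comparison above.
years = 8.6                  # assumed: January 2005 to roughly mid-2013
model_total = 0.20           # Cº predicted by the models' central projection
obs_total = -0.01            # Cº observed (mean of RSS and UAH), rounded

model_rate = model_total / years * 100.0   # Cº/century, approx. 2.33
obs_rate = obs_total / years * 100.0       # Cº/century (rounding blurs this)
gap = model_rate - obs_rate                # the models' over-prediction rate
```

The model rate comes back out at about 2.33 Cº/century, matching the text; the observed rate and hence the gap differ slightly from the quoted −0.15 and 2.48 because the −0.01 Cº total is too coarsely rounded to invert exactly.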

This continuing credibility gap between prediction and observation is the real canary in the coal-mine. It is not just The Pause that matters: it is the Gap that matters, and the Gap that will continue to matter, and to widen, long after The Pause has gone. The Pause deniers will eventually have their day: but the Gap deniers will look ever stupider as the century unfolds.

Otter (ClimateOtter on Twitter)

Here’s hoping the next El Niño is at Least 3 years away.

Hyperthermania

“prestidigitation” – I quite like your writing normally, but that is a step too far. I can’t say that word, I’ve no idea what it means and I’m not going to bother to look it up in the dictionary on the basis that I’d never be able to use it in a sentence anyway ! It is nice that you push our limits, but come on, give us a chance. I read it over and over again, then just when I think I’ve got the hang of it, I try to read the whole sentence again, and bam ! tongue well and truly twisted.

substitute “sleight of hand”

“The fundamental conceptual error that Cowtan & Way had made lay in their failure to realize that large uncertainties do not reduce the length of The Pause: they actually increase it.”
I’d like to see a quote where C&W are making that conceptual error. In fact, the “length of the Pause” as formulated here is a skeptics’ construct, and you won’t see scientists writing about it. The period of “no statistically significant increase” is a meaningless statistical test. Rejecting the null hypothesis can lead to useful conclusions; failing to reject does not. It means the test failed.
Yes, HADCRUT takes account of the missing data in its uncertainty estimate, but does not correct for the bias in the trend. That’s what C&W have done.

M Courtney

Nick Stokes,

Rejecting the null hypothesis can lead to useful conclusions; failing to reject does not. It means the test failed.

So it does have meaning then. It means that there is no reason to reject the null hypothesis. And the null hypothesis is that there is no significant warming for the period under review.
So you accept that greater uncertainty leads to weaker statistical tests…
And so leads to less ability to detect changes in the measured temperature… Hmm.
But, from that, do you see any evidence at all that the models (which predicted a measurable change in temperature) are not failed and so should not be rejected?

Kon Dealer

Nick, I just love your ability to see the one tree (Yamal?) in the forest that just might prop up the failing theory of AGW.
I bet you can bake good (cherry) pies.

Mind the gap

cd

Lord Monckton
I agree to some degree but I think you might be talking around the point rather than hitting the nail on the head. I could be completely wrong…
If they grid the data they grid the data. Their value, whatever interpolation method they used is still just an estimate. The question is whether the method, and associated artifacts, creates something more or less reliable than other methodologies.
I’m probably teaching you to suck eggs here but in order to be comprehensive…
Kriging is absolutely fine so long as you can remove any structural component (such as trend) in order to produce a stationary data set. There are a variety of kriging methods that implicitly deal with any structural component, but in my experience, the best way is: to assume a variable trend (structural component is “non-linear”); create the structural surface using some fit such as a B-Spline; remove this; then Krige using the residuals; and finally add the structural component back into our gridded data to give us the temperature map.
The issue as I see it…
The important point here is that it is implicit within the Kriging algorithm that there is 100% confidence in any structural surface we might use; but of course the structural component relies on very sparse data, so our trend (structural component) is worthless, and any Kriged surface will have much larger uncertainties than the Kriging variance would suggest – they have likely been superficially deflated by the use of an expansive trend.
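The detrend → krige → retrend recipe described above can be sketched in one dimension. This is an illustrative stand-in only, not Cowtan & Way’s actual method: the exponential covariance model, its parameters, and a low-order polynomial in place of the B-Spline trend surface are all assumptions made for the sake of a short, self-contained example.

```python
import numpy as np

def simple_krige(x_obs, resid_obs, x_new, corr_len=2.0, jitter=1e-9):
    """Simple kriging of detrended (zero-mean) residuals using an
    exponential covariance model exp(-|d| / corr_len)."""
    def cov(a, b):
        return np.exp(-np.abs(np.subtract.outer(a, b)) / corr_len)
    K = cov(x_obs, x_obs) + jitter * np.eye(len(x_obs))  # stabilise solve
    weights = np.linalg.solve(K, resid_obs)
    return cov(x_new, x_obs) @ weights

# 1) observations: a "temperature" transect with a structural trend
x = np.linspace(0.0, 10.0, 25)
y = 0.3 * x + np.sin(x)

# 2) fit and remove the structural component (polynomial stand-in here)
coeffs = np.polyfit(x, y, 1)
resid = y - np.polyval(coeffs, x)

# 3) krige the stationary residuals onto a finer grid, 4) add trend back
x_new = np.linspace(0.0, 10.0, 101)
y_hat = simple_krige(x, resid, x_new) + np.polyval(coeffs, x_new)

# sanity check: the interpolator honours the observations themselves
back = simple_krige(x, resid, x) + np.polyval(coeffs, x)
```

Note that the kriging variance in such a scheme is conditional on the fitted trend being exactly right, which is the very point at issue above: uncertainty in a trend surface fitted to sparse data does not appear in the kriging variance at all.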

M Courtney says: November 20, 2013 at 1:54 am
“But, from that, do you see any evidence at all that the models (which predicted a measurable change in temperature) are not failed and so should not be rejected?”

Well, that’s the point. You can adopt the null hypothesis that the models are right, and if you can reject that, you’ve proved something. But AGW isn’t deduced from the temperature record, so isn’t dependent on rejecting a null hypothesis of zero warming.

TLM

Nick Stokes, you are like the shopkeeper in the Monty Python dead parrot sketch: “That global warming ain’t dead, it’s just sleeping”.
The “null hypothesis” is the default position: that there is no relationship between two measured phenomena.
Phenomena 1: CO2 increasing in the atmosphere.
Phenomena 2: Global mean surface temperature rises.
So the “null hypothesis” is that as CO2 in the atmosphere rises, the mean surface temperature fails to show any relationship to that rise – that is, it either stays the same, falls, or randomly changes in a way that does not suggest any linkage to rising CO2.
The result of this experiment over the last 10 to 18 years, depending on which data set you use, is that the mean global surface temperature has not risen.
As time goes by the idea that the null hypothesis has been disproved by the climate scientists looking for signs of AGW looks less and less tenable.

TLM

Nick Stokes, I just hit the floor in gales of laughter when I read this:

“But AGW isn’t deduced from the temperature record”

Now let us just remind ourselves what the letters AGW stand for:
“Anthropogenic Global Warming”.
Now please enlighten me how you measure “warming” without measuring the temperature?

robinedwards36

Nick says that AGW is not deduced from the temperature record. Now there’s a surprise, to me at least. I’ve understood that in the output of the numerous “climate models” AGW is invariably expressed as potential warming (in degrees Celsius), typically up to the end of the century.
So, what role do the temperature records actually play in model simulation? Nick’s answer seems to be “None”. The inference to be drawn is that the models rely /entirely/ on proxy measurements, and that their outcome is translated magically, after they have been run, into the familiar scale of temperatures as we experience them. I’d like a bit more instruction on this from someone who really knows what they are talking about.

Me thinks Nick Stokes just pops in near the front of any thread discussion to try and get folk off on a tangent and to try and wreck the thread by uttering confusing nonsense! Ignoring him is the best thing to do imo

geronimo

It looks as though the warmies haven’t learned the lessons of the past. We seem to be entering something akin to the hockeystick wars where any old rubbish that supported the disappearance of the MWP was immediately feted as great science. I believe this is the first shot in the “pause war” and will be followed with a plethora of faux papers demonstrating there is no pause, each one greeted with swooning and adulation by the climate science community and each one destroyed on the blogosphere by the so called “citizen scientists” until, like the hockeystick the warmies will try to quietly drop it.

me

[Snip. Bad email address. — mod.]

Bloke down the pub

Nick Stokes says:
November 20, 2013 at 2:10 am
As nothing that the warmists claim ever seems to be falsifiable, I suppose they don’t need to concern themselves with trivialities such as the temperature record.
By the way Lord M. Is it just a coincidence that since you started representing the Myanmar government, they’ve been welcomed back into the international community?

TLM says: November 20, 2013 at 2:27 am
“Now please enlighten me how you measure “warming” without measuring the temperature?”

AGW has been around since 1896. Arrhenius then deduced that CO2 would impede the loss of heat through IR, and would cause temperatures to rise. There was no observed warming then. AGW is a consequence of what we know about the radiative properties of gases.
AGW predicted that temperatures would rise, and they did. You can’t do better than that, whether or not the rise is “statistically significant”.

robinedwards36 says: November 20, 2013 at 2:39 am
“So, what role do the temperature records actually play in model simulation? Nick’s answer seems to be “None”.”

Yes, that’s essentially true. GCM’s solve the Navier-Stokes equations, with transport of materials and energy, and of course radiation calculations. A GCM requires as input a set of forcings, which depend on scenario. GISS forcings are often cited. But a model does not use as input any temperature record.

me

[Snip. Bad email address. — mod.]

John Law

“later, Railroad Engineer Pachauri, climate-science chairman of the IPCC,”
This shows at least some good practice within the IPCC.
In the nuclear construction industry, we attach great importance to people being suitably qualified and experienced (SQEP) for the task/ role they are performing.
Mr Pachauri sounds eminently qualified for running a “gravy train”

RichardLH

Nick Stokes says:
November 20, 2013 at 2:10 am
“But AGW isn’t deduced from the temperature record”
And as the “pause” either does not exist (Cowtan & Way) or is not (yet) long enough to actually invalidate the models, then AGW is still a potentially valid argument?
Is that your true position or have I misstated you?

cd

Nick
Your points sort of jump around the place.
Arrhenius then deduced that CO2 would impede the loss of heat through IR
No, that it should. And in a single-phase system where CO2 is the only variable, then yes. But that is not a good description of the atmosphere.
AGW predicted that temperatures would rise, and they did.
Firstly, temperature can only do three things: go up, stay the same or go down. Now climate is never static, so it’s a fifty-fifty chance that it will go up/down. Making a prediction that it will go up, and it does, does not mean that you understand why it does. You’re assuming correlation is causation; it is not.
You can’t do better than that, whether or not the rise is “statistically significant”
You’re easily impressed. If you claim to understand what causes climate change, then make predictions that there will be statistically significant warming with increasing CO2, and at a particular rate; if it then fails to materialise, then by all scientific standards the null hypothesis is accepted.

Lord Monckton-
I think the RMS allowing this paper needs to be put into the context of the RSA’s pursuit in earnest of the Social Brain Project. It also needs to be linked with the sponsorship last week of Roberto Mangabeira Unger to speak on “Freedom, Equality and a Future Political Economy: the Structural Change We Need.”
Listening to that sent me looking for Unger’s book and the democratic experimentalism being pursued in both the UK and the US hiding under federal agency spending but quite systematic. The bad science you so ably dissect is just the excuse to make the experimentation seem necessary and justified.
The “We want equity now” formal campaign I attended last night is closely related with the same funders but it doesn’t play well in the suburbs. Yet.

AlecM

Well said Lord M.
Climate Alchemy crystallised its false predictions in 1981_Hansen_etal.pdf (Google it). In this paper they changed the IR emission from the Earth’s surface from Manabe and Strickler’s correct but vastly exaggerated ~160 W/m^2 (SW thermalised at the surface) to ‘black body’. To do so, they assumed the ‘two-stream approximation’ applies at an optical heterogeneity, the surface. You can’t do that. Thus in 1988, Congress was misled. We know this from experiment, the real temperature record.
The key issue is: from when did ‘the team’ realise it was wrong? It seems to be 1997, when it was proved that CO2 follows warming at the end of ice ages. This begat the ‘Hockey Stick’ to get AR3.
In 2004, Twomey’s partially correct aerosol optical physics was substituted by the Sagan-origin claim of ‘more reflection from the higher surface area of smaller droplets’, untrue. This begat AR4.
In 2009, the revised Kiehl-Trenberth ‘Energy Budget’ introduced 0.9 W/m^2 ‘abyssal heat’, what I call ‘Pachauri’s Demon’, the magick whereby hotter than average sea surface molecules are miraculously transported below 2000 m, where you can’t measure the extra heat, without heating the first 2000 m of ocean! This begat AR5.
That suggests 16 years of knowing prestidigitation by people paid to be scientists when they weren’t following its most absolute condition – never deceive the punters.

RichardLH

Nick Stokes says:
November 20, 2013 at 2:50 am
“AGW predicted that temperatures would rise, and they did. You can’t do better than that, whether or not the rise is “statistically significant”.
Assuming that there are no other reasons for the temperatures to rise over the same period, such as natural variability.

Konrad

Nick Stokes says:
November 20, 2013 at 2:10 am
———————————————-
“You can adopt the null hypothesis that the models are right, and if you can reject that, you’ve proved something.”
Nice try Mr Stokes , but it won’t wash. Despite “travesty” Trenberth’s crazed proposal to the AMS that the null hypothesis should be reversed in the case of AGW, the null hypothesis “AGW is utter tripe” remains in place. And the null hypothesis still stands for not just AGW but the radiative greenhouse hypothesis underlying it.
“But AGW isn’t deduced from the temperature record, so isn’t dependent on rejecting a null hypothesis of zero warming.”
No, that won’t work either. AGW has not been “deduced”. It has been proposed, rejected, reanimated, hyped and used for blatantly political purposes. Anyone with any reasoning ability should be able to deduce that adding radiative gases to the atmosphere will not reduce the atmospheres radiative cooling ability.

The paragraph, “Those driving the scare have by now so utterly abandoned the search for truth that is the end and object of science that they are incapable of thinking straight.They have lost the knack.” brought to mind a quote of Alvin Toffler that was posted today on a FB science page that, quite ironically, is very pro-CAGW:
“The illiterate of the 21st century will not be those who cannot read and write, but those who cannot learn, unlearn, and relearn.”
It seems that many of the climate ‘science’ practitioners fail Toffler’s literacy test.

AlecM

RichardLH: I believe that excess warming, including the rise in OHC, in the 1980s and 1990s was because of Asian industrialisation and forest burning. The extra aerosols reduced cloud albedo.
The effect saturated about 1999 when the ‘Asian Brown Cloud’ appeared. This seems to have been the ‘false positive’ which encouraged ‘the team’ to continue its serious dissembling.
PS the physics behind this is the correction to Sagan’s incorrect aerosol optical physics. He misinterpreted the work of van der Hulst.

Lewis P Buckingham says:
November 20, 2013 at 1:18 am
“When Cowtan and Way infilled the Arctic temperature data, did they also calculate error bars inherent in that infilling?”
Did Cowtan and Way “infill” the Arctic temperature data? To me it looks as if Cowtan and Way made retrospective predictions about what Arctic temperatures would have been, rather than providing “actual data”. There is a lamentable tendency to treat predictions as “actual data”. By “actual data” I mean real temperature measurements made, and recorded, by flesh and blood people. Cowtan and Way have not provided any new “actual data.” There is another point. How does anyone know whether or not temperatures in the Arctic are rising rapidly, if, as is generally admitted, there is a scarcity of “actual data” for the Arctic?

Ken Hall

Nick Stokes, I have heard of “moving the goal posts” to win an argument, but you have taken the goal posts off the pitch entirely.
Look up the scientific method, or re-take science 101!
The CAGW hypothesis, as demonstrated by models, has been entirely and completely falsified, by true, unadjusted, untampered, real, empirical scientific measurements. The prediction of the accepted and established hypothesis (that a doubling in CO2 will result in a warming rate with a central estimate of 3 degrees warming) has NOT happened. The prediction is false; the hypothesis is falsified. Go back to the drawing board and find a hypothesis which is validated by empirical evidence. Stop adjusting the evidence to fit the hypothesis!

tom0mason

Bottom line –
CO2 has risen, is still rising, and temperatures are not.

TLM

Nick Stokes, I cannot believe you are even trying to defend your laughable position, particularly with this classic:

AGW predicted that temperatures would rise, and they did.

OK right, so if a theory is accepted by a scientific consensus as “proved” then it stays proven regardless of all subsequent evidence to the contrary? Galileo and Einstein might have something to say on that idea.

David Riser

Well, Nick is right in that AGW is not deduced from the temperature record; it’s a failed hypothesis that persists despite solid evidence to the contrary – i.e. models designed with AGW in mind invariably overestimate warming by large amounts. Natural variability is the primary driver of climate and if you can’t see that by now, you should really take a close look at why you believe in CAGW.

ColdinOz

Nick Stokes says “AGW has been around since 1896. Arrhenius then deduced that CO2 would impede the loss of heat through IR, and would cause temperatures to rise. There was no observed warming then.”
While the IPCC likes to show warming only from 1850, 46 years before 1896, longer time series show warming since the LIA. How much of that, if any, is AGW is yet to be demonstrated.

Alan the Brit

@ Nick Stokes:
AGW has been around since 1896. Arrhenius then deduced that CO2 would impede the loss of heat through IR, and would cause temperatures to rise. There was no observed warming then. AGW is a consequence of what we know about the radiative properties of gases.
One wee flaw, he completely reversed his opinion about 10 years later! AGW is still just a hypothesis, not even a theory, but once the all encompassing Precautionary Principle is invoked, anything is possible, even fairies at the bottom of your garden! 🙂

Nick Stokes

cd says: November 20, 2013 at 2:59 am
“If you claim to understand what causes climate change, then make predictions that there will be statistically significant warming with increasing CO2, and at a particular rate; if it then fails to materialise, then by all scientific standards the null hypothesis is accepted.”

No, statistical testing never leads to the null hypothesis being accepted. The outcomes are reject or fail to reject.
If you want to disprove something statistically, you have to adopt the null hypothesis that it is true, and then show that that has to be rejected.
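The reject/fail-to-reject asymmetry being argued over here can be made concrete with a zero-trend significance test. This is a sketch only: the series are synthetic (a deterministic alternating "noise" is used so the outcome is reproducible), and ~2 is taken as the rough two-sided 95% threshold for the t-statistic.

```python
import numpy as np

def trend_t_stat(y):
    """OLS slope of y against time and its t-statistic.
    |t| < ~2 means we fail to reject a zero-trend null at roughly the
    95% level -- the test is inconclusive, which is not the same thing
    as accepting that the trend is zero."""
    n = len(y)
    t = np.arange(n, dtype=float)
    slope, intercept = np.polyfit(t, y, 1)
    resid = y - (slope * t + intercept)
    s2 = resid @ resid / (n - 2)              # residual variance
    se = np.sqrt(s2 / np.sum((t - t.mean()) ** 2))  # slope standard error
    return slope, slope / se

months = np.arange(120)                        # ten "years" of monthly data
noise = 0.1 * np.cos(np.pi * months)           # deterministic +/-0.1 wiggle
_, t_flat = trend_t_stat(noise)                # flat series: fail to reject
_, t_warm = trend_t_stat(0.01 * months + noise)  # clear trend: reject
```

The flat series yields |t| well under 2 (fail to reject: inconclusive), while the trending series yields |t| far above 2 (reject the zero-trend null) – which is the distinction both sides of this exchange are drawing on.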

Nick Stokes

Alan the Brit says: November 20, 2013 at 3:33 am
“One wee flaw, he completely reversed his opinion about 10 years later!”

I’d like to see a citation for that.

As Dr Roy Spencer has stated, the climate Null Hypothesis has never been falsified.
The Null Hypothesis is a corollary of the Scientific Method. Because it has never been falsified, it means that the current climate remains well within historical norms. There is nothing either unusual or unprecedented happening, therefore all the arm-waving to the contrary is simply Chicken Little-type alarmism.

son of mulder

In a way Nick Stokes is correct in saying “AGW isn’t deduced from the temperature record”. The debate is actually about whether and, if so, how, when and where climate becomes significantly more dangerous overall, having taken account of demographic changes in the broadest sense.
Global average temperature is used as a simplistic proxy by both sides of the debate to try and justify political actions.
Taking the focus to the real issue and looking at existing data, apart from sea-level behaviour there is as yet nothing to indicate how, when and where things will get more dangerous.

steverichards1984

Nick Stokes says:
November 20, 2013 at 2:55 am
“But a model does not use as input any temperature record.”
I find this difficult to accept!
Do people write simulations with many variables and not seed the variables at the start of simulation?
Surely every simulation run ought to be preceded by an initialization step?

AlecM

To Alan the Brit: the Arrhenius hypothesis is based on the assumption of ‘black body’ surface-emitted real energy being absorbed by GHGs in the atmosphere with that energy being thermalised in the atmosphere.
Only one of these assumptions is valid; if there were IR emission in the self-absorbed GHG IR bands, that energy would be absorbed. However, anyone with sufficient statistical thermodynamics’ knowledge knows that this energy cannot be thermalised in the gas phase (assumes higher or equal temperature surface).
The bottom line is that it all comes down to Tyndall’s experiment having been seriously misunderstood. The GHGs absorb IR energy but the thermalisation has to be at optical heterogeneities, the interface with condensed matter for which the vibrationally activated density of states is much broader.
As for surface emission: the most basic radiative physics is that radiation fields are added vectorially, so there can be no net surface IR in most H2O or CO2 bands. That so many physicists accept the IPCC case proves that modern physics’ education is rubbish. I forgive the climate people because they are taught incorrect physics. No professional engineer with process engineering experience accepts this mistaken view because we have to get the right answer.

harbinger

“However, the world continues to add CO2 to the atmosphere and, all other things being equal, some warming can be expected to resume one day.”
Isn’t this conflation of the two and the implied cause and effect, an example of a logical fallacy? I’m sure it will get warmer again and it will get colder again, but without very much reference to CO2.

steverichards1984 says: November 20, 2013 at 3:42 am
“Do people write simulations with many variables and not seed the variables at the start of simulation?
Surely every simulation run ought to be preceded by an initialization step?”

Yes, they do initialize. But typically with a climate model, the initial state is set way back (many decades), and the model left to “wind up”. That’s an acknowledgement that the initial state is not well known, and probably contains unrealistic things that have to be left to settle down. The initial state would be based on climate norms.

Nick Stokes

dbstealey says: November 20, 2013 at 3:42 am
“As Dr Roy Spencer has stated, the climate Null Hypothesis has never been falsified.”

I presume that NH includes zero trend. And that just isn’t true. The fact that people are talking about 20 years or whatever without significant warming implies that the trend over longer times is significantly different from zero. Otherwise what does the number mean?

TLM says: November 20, 2013 at 3:25 am
“OK right, so if a theory is accepted by a scientific consensus as “proved” then it stays proven regardless of all subsequent evidence to the contrary?”

No, there’s a well established way of disproving it. Do it! If you want to do it statistically, posit some consequence of the theory as null hypothesis and try to reject it. Just saying that you have failed to disprove some alternative theory doesn’t work.

DJM

Surely whether there has been a pause in warming or not over the past 17 years is neither here nor there. The key point is that the GCM models predicted that temperatures would rise a lot faster than they have over the past 17 years due to increasing emissions/concentrations of CO2, and the fact that the temperature hasn’t risen as quickly as predicted, kind of suggests that the sensitivity of the surface temperature to CO2 concentrations is low, and hence future impacts will be less severe than currently predicted.
Surely that is a good thing? And thus climate change becomes something less to worry about? Or am I missing something?

Jim Clarke

Nick Stokes says:
November 20, 2013 at 2:55 am
“GISS forcings are often cited. But a model does not use as input any temperature record.”
The temperature record is used in the calculation of the most critical input of all in the models. And not the entire temperature record, but only a tiny fraction of the record that is extremely cherry-picked. The temperature record of late 20th Century warming is used in the calculation of climate sensitivity, which is the sole reason for any significant debate on AGW.
Now, the water vapor feedback hypothesis does not need a temperature record to become a hypothesis. One can hypothesize that the feedback is any number at all, from extremely negative to extremely positive. Yet it is absolutely critical that the feedback number be seen as potentially legitimate, and that it seems to equate with at least some actual temperature record. The late 20th century warming plus CO2 trend are the only time in history that we have any evidence that the current water vapor feedback hypothesis could be valid. Outside of this time, the hypothesis is falsified by evidence that is far more scientifically valid than Cowtan and Way’s Arctic temperatures.
Remove the cherry-picked temperature record from the assumption of a water vapor feedback and the AGW Theory becomes a 1 degree, largely beneficial, temperature rise in which the world can rejoice. So go ahead and remove the temperature record, Nick, and we can all go home and pretend the last 25 years of fear mongering never happened. However, you cannot defend the temperature record as justification for the input assumptions and then deny that the temperature record is relevant to the output.
If the temperature record is not relevant, then the AGW theory is not relevant, from beginning to end.

Jim Clarke says: November 20, 2013 at 4:15 am
“The temperature record is used in the calculation of the most critical input of all in the models. And not the entire temperature record, but only a tiny fraction of the record that is extremely cherry-picked. The temperature record of late 20th Century warming is used in the calculation of climate sensitivity, which is the sole reason for any significant debate on AGW.”

Climate sensitivity is not an input to GCM’s. You can use a GCM to estimate CS. People also try to independently estimate CS from the temp record, but it isn’t easy.

Patrick

Great thread!
“AlecM says:
November 20, 2013 at 3:43 am”
Could not have said it better myself. Give this man a VB!

climatereason

Hi Nick
Have you ever done the exercise whereby you remove the Arctic stations/data from the equation and then graphed the results?
That is to say that what is produced is ;
A) A ‘global’ record excluding the Arctic
B) A NH record excluding the arctic
C) JUST the Arctic itself?
Tonyb

OssQss

So, who are these individuals that have written this paper?
What is their history in climate science?
What else have they written?
What groups do they belong to?
What is their motivation for attempting to explain away the pause?
Who funds them?
Who reviews them?