Statistical proof of 'the pause' – Overestimated global warming over the past 20 years

Commentary from Nature Climate Change, by John C. Fyfe, Nathan P. Gillett, & Francis W. Zwiers

Recent observed global warming is significantly less than that simulated by climate models. This difference might be explained by some combination of errors in external forcing, model response and internal climate variability.

Global mean surface temperature over the past 20 years (1993–2012) rose at a rate of 0.14 ± 0.06 °C per decade (95% confidence interval)1. This rate of warming is significantly slower than that simulated by the climate models participating in Phase 5 of the Coupled Model Intercomparison Project (CMIP5). To illustrate this, we considered trends in global mean surface temperature computed from 117 simulations of the climate by 37 CMIP5 models (see Supplementary Information).

These models generally simulate natural variability — including that associated with the El Niño–Southern Oscillation and explosive volcanic eruptions — as well as estimate the combined response of climate to changes in greenhouse gas concentrations, aerosol abundance (of sulphate, black carbon and organic carbon, for example), ozone concentrations (tropospheric and stratospheric), land use (for example, deforestation) and solar variability. By averaging simulated temperatures only at locations where corresponding observations exist, we find an average simulated rise in global mean surface temperature of 0.30 ± 0.02 °C per decade (using 95% confidence intervals on the model average). The observed rate of warming given above is less than half of this simulated rate, and only a few simulations provide warming trends within the range of observational uncertainty (Fig. 1a).
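The kind of trend-plus-confidence-interval figure quoted above can be reproduced in form (not in value — the HadCRUT4 data are not included here) with an ordinary least-squares fit. A minimal sketch on synthetic annual anomalies, assuming a simple OLS trend and a Student-t 95% interval:

```python
import math
import random

random.seed(0)
years = list(range(1993, 2013))                 # 20 annual values
true_slope = 0.014                              # assumed trend, C per year
temps = [true_slope * (y - years[0]) + random.gauss(0, 0.08) for y in years]

# Ordinary least-squares slope and its standard error
n = len(years)
xbar = sum(years) / n
ybar = sum(temps) / n
sxx = sum((x - xbar) ** 2 for x in years)
slope = sum((x - xbar) * (t - ybar) for x, t in zip(years, temps)) / sxx
resid = [t - (ybar + slope * (x - xbar)) for x, t in zip(years, temps)]
stderr = math.sqrt(sum(r * r for r in resid) / (n - 2) / sxx)

t_crit = 2.101  # two-sided 95% Student-t critical value, 18 d.o.f.
print(f"trend: {slope * 10:+.2f} +/- {t_crit * stderr * 10:.2f} C per decade")
```

With only 20 points and realistic year-to-year noise, the ±interval is a sizeable fraction of the trend itself, which is why the paper's comparison hinges on the model spread as well as the observed uncertainty.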


Figure 1 | Trends in global mean surface temperature. a, 1993–2012. b, 1998–2012. Histograms of observed trends (red hatching) are from 100 reconstructions of the HadCRUT4 dataset1. Histograms of model trends (grey bars) are based on 117 simulations of the models, and black curves are smoothed versions of the model trends. The ranges of observed trends reflect observational uncertainty, whereas the ranges of model trends reflect forcing uncertainty, as well as differences in individual model responses to external forcings and uncertainty arising from internal climate variability.

The inconsistency between observed and simulated global warming is even more striking for temperature trends computed over the past fifteen years (1998–2012). For this period, the observed trend of 0.05 ± 0.08 °C per decade is more than four times smaller than the average simulated trend of 0.21 ± 0.03 °C per decade (Fig. 1b). It is worth noting that the observed trend over this period — not significantly different from zero — suggests a temporary ‘hiatus’ in global warming. The divergence between observed and CMIP5-simulated global warming begins in the early 1990s, as can be seen when comparing observed and simulated running trends from 1970–2012 (Fig. 2a and 2b for 20-year and 15-year running trends, respectively). The evidence, therefore, indicates that the current generation of climate models (when run as a group, with the CMIP5 prescribed forcings) do not reproduce the observed global warming over the past 20 years, or the slowdown in global warming over the past fifteen years.

This interpretation is supported by statistical tests of the null hypothesis that the observed and model mean trends are equal, assuming that either: (1) the models are exchangeable with each other (that is, the ‘truth plus error’ view); or (2) the models are exchangeable with each other and with the observations (see Supplementary Information).
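As a rough illustration of the ‘truth plus error’ view, the simulated trends are treated as exchangeable draws around a common mean, and one asks whether the observed trend is consistent with that mean. The sketch below uses made-up numbers (an ensemble mean of 0.30 °C per decade with a 0.11 spread) rather than the paper's actual trend distributions, and omits the observational-uncertainty term the authors include:

```python
import math
import random

random.seed(1)
# Stand-in ensemble: 117 simulated 1993-2012 trends (C per decade)
model_trends = [random.gauss(0.30, 0.11) for _ in range(117)]
observed = 0.14  # observed trend quoted in the commentary

n = len(model_trends)
mean = sum(model_trends) / n
sd = math.sqrt(sum((t - mean) ** 2 for t in model_trends) / (n - 1))

# Under "truth plus error", each simulation = truth + independent error,
# so the observed-minus-mean difference has variance sd^2 * (1 + 1/n).
t_stat = (observed - mean) / (sd * math.sqrt(1 + 1 / n))
print(f"ensemble mean {mean:.2f} C/decade, t = {t_stat:.2f}")
```

The paper's actual tests are more involved (see its Supplementary Information); this only shows the shape of the comparison, not its verdict.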

Brief: http://www.pacificclimate.org/sites/default/files/publications/pcic_science_brief_FGZ.pdf

Paper at NCC: http://www.nature.com/nclimate/journal/v3/n9/full/nclimate1972.html?WT.ec_id=NCLIMATE-201309

Supplementary Information (241 KB) CMIP5 Models
justsomeguy31167
September 5, 2013 12:07 am

Make this sticky? This is huge because of the journal and the authors. Maybe real scientists are seeing so much evidence against AGW that some will tell the truth.

The Ghost Of Big Jim Cooley
September 5, 2013 12:08 am

Does anyone know of any PRO-AGW websites that are commenting on the shift toward natural variability and the lack of warming? Or are they all turning a blind eye to it?

braddles
September 5, 2013 12:25 am

While the CMIP5 models may well have warming rates clustered around 0.3 degrees per decade, we shouldn’t forget that these are NOT the models that are being used to influence policy. The ones being used are much more extreme and should have been utterly discredited by now.
An example here in Australia is a CSIRO model that predicts ‘up to’ 5 degrees by 2070, almost one degree per decade. This was the figure quoted by (former) Prime Minister Gillard and used to justify the carbon tax introduced in 2012.
You can bet that President Obama does not read Nature Climate Change.
In short, the journals are comparing the milder models to the real world (and even then they are failing) while protecting from scrutiny the extreme models that are being presented to policy-makers.

Gösta Oscarsson
September 5, 2013 12:29 am

There are a few “model trends” which correctly describe “observed trends”. Wouldn’t it be interesting to analyse in what way they differ from the rest?

RMB
September 5, 2013 12:31 am

Try heating the surface of water with a heat gun. At 450 °C the surface should quickly boil; in fact it remains cool. You cannot heat water through the surface, and that’s why they are all having a problem.

AndyG55
September 5, 2013 12:33 am

And that is compared to the highly manipulated trend created in HadCrud.
I wonder how the models perform against actual reality !

el gordo
September 5, 2013 12:51 am

‘Or are they all turning a blind eye to it?’
Deltoid is in a death spiral; the blogmasta (Tim Lambert) departed the scene months ago and slowly the place is being taken over by contrarians. It’s also under a severe DoS attack.
The old warmist faithful are simply denying the new reality. They don’t even accept the hiatus, even after I pointed out that 97% of scientists agree that it’s real.

SideShowBob
September 5, 2013 1:00 am

RMB says:
September 5, 2013 at 12:31 am
“Try heating the surface of water with a heat gun. At 450degsC the surface should quickly boil, in fact it remains cool. You can not heat water through the surface… ”
Honestly that is such a moronic comment I think you were sent here to intentionally bring this website into disrepute !

richardscourtney
September 5, 2013 1:03 am

Friends:
The paper is reported to say

It is worth noting that the observed trend over this period — not significantly different from zero — suggests a temporary ‘hiatus’ in global warming.

NO! That is an unjustifiable assumption tantamount to a lie.
Peer reviewed should have required that it be corrected to say something like:
It is worth noting that the observed trend over this period — not significantly different from zero — indicates a cessation of global warming. It remains to be seen when and if warming will resume or will be replaced by cooling.
Richard

September 5, 2013 1:08 am

This ‘histogram’ is based on the actual temperatures
http://www.vukcevic.talktalk.net/CETd.htm

Rich
September 5, 2013 1:18 am

“This difference might be explained by … internal climate variability.” Surely if you’re modelling the climate you can’t say, “My model’s wrong because of internal climate variability” because that’s exactly what you’re supposed to be modelling. They keep saying this and I don’t get it.

Dr Darko Butina
September 5, 2013 1:24 am

It is amazing that all the ‘proofs’ of global warming trends are ‘validated’ by another model or misuse of statistics and NOT by thermometer. Vukcevic’s histogram is also based on the annual average and therefore not on ‘actual’ temperatures. The global temperature does not exist, it cannot be measured; not a single property of our atmosphere is global – all the properties are local, and the climate community should not ignore Essex et al (2007), Kramm-Dlugi (2001) and Butina (2012). Dr Darko Butina

Greg
September 5, 2013 1:35 am

“It is worth noting that the observed trend over this period — not significantly different from zero — suggests a temporary ‘hiatus’ in global warming. ”
What “suggests” that it is temporary?
Ah well we’re getting there slowly. No point in expecting a total and sudden 180. At least it does now seem to be polite to talk about it.

richardscourtney
September 5, 2013 1:55 am

Rich:
Your entire post at September 5, 2013 at 1:18 am says

“This difference might be explained by … internal climate variability.” Surely if you’re modelling the climate you can’t say, “My model’s wrong because of internal climate variability” because that’s exactly what you’re supposed to be modelling. They keep saying this and I don’t get it.

I will try to explain what they are saying, but please do NOT assume my attempt at explanation means I agree with the explanation because I don’t.
The models assume climate varies because of internal variability. This is “noise” around a stable condition.
The models calculate that climate varies in determined manner in response to “forcings”.
Thus, a change to a forcing causes the climate to adjust so a trend in climate parameter (e.g. global temperature) occurs during the adjustment.
If these assumptions are true then
(a) at some times internal variability will add to a forced trend
and
(b) at other times internal variability will subtract from a forced trend.
Until now the modellers have assumed effects of internal variability sum to insignificance over periods of ~15 years. But the ‘pause’ has lasted longer than that. So, internal variability must be significant to climate trends over periods of more than 15 years if the ‘pause’ is an effect of internal variability negating enhanced forcing from increased greenhouse gases (GHGs).
Unfortunately, this is a ‘double edged sword’.
If internal variability has completely negated GHG forced warming for the recent about two decades, then
internal variability probably doubled the warming assumed to have been GHG forced over the previous two decades.
And that ignores the fact that warming from the LIA has been happening for centuries so natural variability clearly does occur for much longer periods than decades (as is also indicated by ice cores). When that is acknowledged then ALL the recent global warming can be attributed to internal variability so there is no residual warming which can be attributed to GHG forced warming.
I hope this explanation is clear and helpful.
Richard
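Richard's point — that internal variability can alternately negate and double a forced trend — can be sketched with a toy model: a constant forced trend plus a slow sinusoidal oscillation. Both numbers below are arbitrary assumptions for illustration, not fitted to any dataset:

```python
import math

forced = 0.02              # assumed forced trend, C per year
amp, period = 0.3, 60.0    # assumed internal oscillation (C, years)

def temp(year):
    return forced * year + amp * math.sin(2 * math.pi * year / period)

def window_trend(start, length=15):
    """OLS trend over a `length`-year window starting at `start`."""
    xs = list(range(start, start + length))
    ys = [temp(x) for x in xs]
    xbar = sum(xs) / length
    ybar = sum(ys) / length
    sxx = sum((x - xbar) ** 2 for x in xs)
    return sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx

trends = [window_trend(s) for s in range(60)]   # one full cycle of starts
print(f"15-yr trends span {min(trends):+.3f} to {max(trends):+.3f} C/yr "
      f"around a forced rate of {forced:+.3f}")
```

Depending on where the 15-year window falls in the oscillation, the fitted trend ranges from below zero to well above the forced rate — the ‘double edged sword’ in numbers.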

Crispin in Waterloo
September 5, 2013 1:58 am

“It remains to be seen when and if warming will resume or will be replaced by cooling. ”
Right on Richard. That is exactly how it should be phrased. There are multiple indications that it will be cooling. You are also correct that a style editor should have picked that up even if the reviewers did not. Adopting the term ‘hiatus’ was to allow wiggle room for doom-laden forecasters to maintain the story that the heating will come back with more vigour after the ‘pause’.
‘Pause’ implies that the tape will roll when ‘Play’ is pressed again.
By someone.
Or something.
Or not.

Ken Hall
September 5, 2013 1:59 am

Rich (1:18am). You are correct. They should honestly say, my model is wrong. I do not particularly care why it is wrong, as that is for the coders and theoreticians to figure out to try to create a better model. All I care about is the policies which are being implemented, which are hurting millions of families and starving them of energy and money because those models are wrong. I want the politicians to recognise that the models are wrong and to change policy and to throw the warmists out of work and to stop basing dangerously expensive policies on unproven theories backed by fearmongering.

Gail Combs
September 5, 2013 2:07 am

richardscourtney says: @ September 5, 2013 at 1:03 am
Richard, if they had modified their statement to say “It remains to be seen when and if warming will resume or will be replaced by cooling,” the paper would never have made it out of Pal-Review Errr Peer-Review. Heck, they could have had something similar in the original submission and it got scrubbed.
What I find most intriguing is the admission:

The evidence, therefore, indicates that the current generation of climate models
(when run as a group, with the CMIP5 prescribed forcings) do not reproduce
the observed global warming over the past 20 years,
or the slowdown in global
warming over the past fifteen years.

So it is not just the last fifteen years, it is the last twenty years that the models “do not reproduce”.
EPIC FAIL! Now can we all go home and forget this nightmare?

Sleepalot
September 5, 2013 2:15 am

“Global mean surface temperature over the past 20 years (1993–2012) rose at a rate of 0.14 ± 0.06 °C per decade (95% confidence interval)1. ”
Bullshit.

Cheshirered
September 5, 2013 2:25 am

richardscourtney says:
September 5, 2013 at 1:03 am
Very good point. Another subtle way of presenting The Cause in a favourable light. ie this is only temporary and warming WILL start again soon, hence we cannot let up ‘tackling climate change’.
Translation: keep the funding flowing.

kadaka (KD Knoebel)
September 5, 2013 2:32 am

RMB spouted off on September 5, 2013 at 12:31 am:

Try heating the surface of water with a heat gun. At 450degsC the surface should quickly boil, in fact it remains cool. You can not heat water through the surface and thats why they are all having a problem.

This is standard “Sky Dragon Slayer” stuff you’re spewing, but, what the heck, tried it for myself.
Proposed: A heat gun applied to the surface of water cannot heat the water.
Experiment setup:
1 bowl of unknown plastic, semi-flexible, no recycling symbol indicating plastic type, 2 1/3 cups (US measure) capacity. Approximate dimensions: 5 1/2″ inside diameter top with 1/2″ wide rim, 2″ effective depth, circular arc curve (concave interior surface) to 2 7/8″ diameter flat bottom, with integral hollow cylindrical section base of 1/4″ height and 2 7/8″ diameter. Base design minimizes heat transfer with surface underneath. Usually used for cold to warm contents (ice cream to oatmeal) but not boiling hot items.
1 Master Forge Wireless Thermometer #0023557, originally purchased at Lowes, consists of display-less transmitting base with probe and receiving hand unit which displays temperature, set for °F. Normally used for grilling/roasting. Has timer count-up and count-down functions displaying minutes and seconds. Used for temperature readings and timing.
2 cups (US measure) room temperature tap water, from well.
1 Conair 1600W hair dryer, 125VAC, Model 064, used as heat gun.
Procedure:
Water in bowl, thermometer probe in water. Initial reading 74°F (no decimal), room temperature. Bowl resting on white porcelain-coated metal surface (stove top) at 74°F per probe, room temperature.
Heat gun on high, held by hand, outlet aimed at water surface of bowl, approximately 8 inches away at 45° from horizontal, aimed at center of surface. Water surface was notably agitated by the air flow, small quantity of water lost over edge of bowl.
Results in CSV format:
Time,Temperature
min:sec,°F
0:00,74
0:30,74
1:00,75
1:30,76
2:00,76
2:30,77
3:00,77
3:30,78
4:00,78
4:30,78
5:00,79
Discussion: Output of heat gun was applied to surface of water. Temperature of water increased.
Conclusion: A heat gun applied to the surface of water can heat the water. The proposition is falsified.
I tried it, showed to myself you were wrong. How should I have done the experiment so it will yield the result you are certain must happen?
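For what it's worth, fitting a straight line to the readings logged in the comment above gives a steady warming rate (the CSV data are copied verbatim; the fit is plain ordinary least squares):

```python
readings = """\
0:00,74
0:30,74
1:00,75
1:30,76
2:00,76
2:30,77
3:00,77
3:30,78
4:00,78
4:30,78
5:00,79"""

times, temps = [], []
for line in readings.splitlines():
    mmss, f = line.split(",")
    m, s = mmss.split(":")
    times.append(int(m) * 60 + int(s))   # elapsed seconds
    temps.append(int(f))                 # degrees F

# Ordinary least-squares slope, converted from F/second to F/minute
n = len(times)
tbar = sum(times) / n
fbar = sum(temps) / n
slope = sum((t - tbar) * (f - fbar) for t, f in zip(times, temps)) / \
        sum((t - tbar) ** 2 for t in times)
print(f"warming rate: {slope * 60:.2f} F per minute")  # -> 1.00 F per minute
```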

Gail Combs
September 5, 2013 2:32 am

Rich says: @ September 5, 2013 at 1:18 am
…. Surely if you’re modelling the climate you can’t say, “My model’s wrong because of internal climate variability” because that’s exactly what you’re supposed to be modelling. They keep saying this and I don’t get it.
>>>>>>>>>>>>>>>>
What they are alluding to but dare not say is “climate variability” = Chaos
FROM the WUWT article:

First of all, what is Chaos? I use the term here in its mathematical sense….
Systems of forces, equations, photons, or financial trading, can exist effectively in two states: one that is amenable to mathematics, where the future states of the systems can be easily predicted, and another where seemingly random behaviour occurs.
This second state is what we will call chaos. It can happen occasionally in many systems….
There are, however, systems where chaos is not rare, but is the norm. One of these, you will have guessed, is the weather….
So, what does it mean to say that a system can behave seemingly randomly? Surely if a system starts to behave randomly the laws of cause and effect are broken?
Chaotic systems are not entirely unpredictable, as something truly random would be. They exhibit diminishing predictability as they move forward in time, and this diminishment is caused by greater and greater computational requirements to calculate the next set of predictions. Computing requirements to make predictions of chaotic systems grow exponentially, and so in practice, with finite resources, prediction accuracy will drop off rapidly the further you try to predict into the future. Chaos doesn’t murder cause and effect; it just wounds it!….

In other words this study shows that climate is a Chaotic System (DUH!) and therefore “prediction accuracy will drop off rapidly the further you try to predict into the future.” However, when the whole scam (and your grant money) is dependent on computer models ‘predicting’ catastrophic warming (Oh my, we must act NOW!) the last thing you are going to announce is that you have figured out the system is chaotic and therefore all that money for all those computers and models has been wasted.
Why the heck do you think there has been such a big fight over whether or not the IPCC makes ‘Predictions’ or ‘Projections’?
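The “diminishing predictability” described in the quoted passage is easy to demonstrate with the logistic map, a standard toy chaotic system (the parameter and starting values below are arbitrary choices for illustration):

```python
def logistic(x, r=4.0):
    """One step of the logistic map, fully chaotic at r = 4."""
    return r * x * (1.0 - x)

a, b = 0.400000, 0.400001   # two starts differing by one part in a million
decorrelated_at = None
for step in range(1, 201):
    a, b = logistic(a), logistic(b)
    if abs(a - b) > 0.5:
        decorrelated_at = step
        break

print(f"trajectories decorrelated after {decorrelated_at} steps")
```

An initial error of one part in a million roughly doubles each step, so the two trajectories become unrelated within a few dozen iterations — the practical limit on predicting such a system forward in time.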

Laurie
September 5, 2013 2:34 am

John C. Fyfe, Nathan P. Gillett, & Francis W. Zwiers
I tried to find CVs of these authors and found nothing. Also, I’m ignorant of “Nature Climate Change”. Can someone provide information please?

richardscourtney
September 5, 2013 2:34 am

Gail Combs:
re your post addressed to me at September 5, 2013 at 2:07 am
http://wattsupwiththat.com/2013/09/05/statistical-proof-of-the-pause-overestimated-global-warming-over-the-past-20-years/#comment-1408393
I agree both your points except that your second point is even stronger than you express.
Actually the true but unstated finding is that the models do not work for any length of time.
This is implicit because of the LIA issue I mention in my explanation for Rich at September 5, 2013 at 1:55 am
http://wattsupwiththat.com/2013/09/05/statistical-proof-of-the-pause-overestimated-global-warming-over-the-past-20-years/#comment-1408386
And it is why I said to him

I will try to explain what they are saying, but please do NOT assume my attempt at explanation means I agree with the explanation because I don’t.

Richard

richard verney
September 5, 2013 2:36 am

“Global mean surface temperature over the past 20 years (1993–2012) rose at a rate of 0.14 ± 0.06 °C per decade (95% confidence interval”
//////////////////////////
The fact is that during this 20 year period, the rise in temperature has not been linear (even if one applies some light smoothing to account for year to year variability).
All, or almost all, of the rise in temperature these past 20 years has been associated with a one-off isolated event, namely the Super El Niño of 1998. Given the uncertainty of 0.14 ± 0.06 °C per decade, we cannot be certain that all the rise in temperature is due to this ENSO event, but certainly the vast majority is. When this is taken into account, it is clear that the models are further off target than even this paper suggests.
As regards the hiatus, of course it is temporary only. Sooner or later, it is inevitable that temperatures will begin to change. But as Richard observes, we do not know in which direction that change will take place.
One further point on the pause: if the CO2 warming theory is sound, it becomes ever more difficult for there to be a pause in circumstances of elevated CO2 levels. It would be easier for there to be a, say, 15-year pause (i.e., when natural variability counteracts the warming effect of CO2) when CO2 levels are in the range of say 310 to 335 ppm. It is more difficult when CO2 levels are in the range of 380 to 400 ppm. It will be even more difficult should CO2 levels reach say 420 ppm.
The higher the level of CO2 the greater the CO2 forcing. We are told (and, of course, this is a new development not mentioned in previous IPCC reports) that model runs do sometimes project lengthy pauses in the rise of temperature. However, we are not told at what level of CO2 this pause in the model projection occurs. Has any model shown a 17 or so year pause with CO2 levels in the range of 380 to 400ppm (and rising)?
I find it difficult to conceive how any model could project a lengthy pause when built on the assumption that CO2 is the dominant temperature driver and has dominion over natural variability. Of course they could contain a random number generator to input from time to time negative forcings from natural variability and another random number generator to input negative forcings from volcanoes and it is possible that these randomly generated negative forcings coincide to produce a pause, but this would only be short lived since the negative forcings claimed for volcanoes is only short lived. Ditto if they included a random generator to additionally throw La Nina into the mix.
Finally, this type of study is precisely the type of study which the IPCC itself should, right from the early days, have conducted when auditing the efficacy of its models and their projections. A report such as this should be included in AR5 irrespective of this type of paper.

Berényi Péter
September 5, 2013 2:44 am

The Ghost Of Big Jim Cooley says:
September 5, 2013 at 12:08 am
Does anyone know of any PRO-AGW websites that are commenting on the shift toward natural variability and the lack of warming? Or are they all turning a blind eye to it?

Note replies to comment #2, #6 & #11 by Dr. Gavin A. Schmidt under Unforced variations: Sept. 2013 at the RealClimate blog (Climate science from climate scientists).
1. Promises a future post on Fyfe et al. 2013 (as soon as it comes out will have to be addressed here)
2. Says that conflating model-observation mismatch to a contradiction “is a huge (and unjustified) leap” (whatever that’s supposed to mean)
3. Repeats old mantra “all theories are ‘wrong’ (as they are imperfect models of reality)” (therefore proving them wrong is not an issue, right?)
4. “Judging which one (or more) are falsified by a mismatch is non-trivial.”
5. Has “no problem agreeing that mismatches should be addressed”
6. Is a strong proponent of incorrect, but “useful” theories.
There you go.

Laurie
September 5, 2013 2:52 am

Nevermind 😉 I found what I was looking for concerning the authors. Is “Nature Climate Change” associated with the “Nature” journal?

Gail Combs
September 5, 2013 2:56 am

Sleepalot says: @ September 5, 2013 at 2:15 am
“Global mean surface temperature over the past 20 years (1993–2012) rose at a rate of 0.14 ± 0.06 °C per decade (95% confidence interval)1. ”
Bullshit.
>>>>>>>>>>>>>>>>>>
My thoughts exactly.

Australian temperature records shoddy, inaccurate, unreliable. Surprise!
….The audit shows 20-30% of all the measurements back then [before 1972] were rounded or possibly truncated. Even modern electronic equipment was at times, so faulty and unmonitored that one station rounded all the readings for nearly 10 years! These sloppy errors may have created an artificial warming trend. The BOM are issuing pronouncements of trends to two decimal places like this one in the BOM’s Annual Climate Summary 2011 of “0.52 °C above average” yet relying on patchy data that did not meet its own compliance standards around half the time. It’s doubtful they can justify one decimal place, let alone two….
It was the sharp eye of Chris Gillham who noticed the first long string of continuous whole numbers in a site record…. The audit team were astonished at how common the problem was. Ian Hill and Ed Thurstan developed software to search the mountain of data and discovered that while temperatures of .0 degrees ought to have been 10% of all the measurements, some 20 – 30% of the entire BOM database was recorded as whole number, or “.0″.…..

Anthony and his team of volunteers found problems with the US system. Since these two systems would be considered ‘Top of the Line’ the rest of the surface station data can only be a lot worse. A.J. Strata goes into an analysis of error in the temperature data based on information gleaned from the Climategate e-mails HERE.

kadaka (KD Knoebel)
September 5, 2013 2:56 am

Laurie said on September 5, 2013 at 2:34 am:

John C. Fyfe, Nathan P. Gillett, & Francis W. Zwiers
I tried to find CVs of these authors and found nothing. Also, I’m ignorant of “Nature Climate Change”. Can someone provide information please?

Sometime after Climategate, the previously well-respected journal Nature, while still somewhat respected, decided to divest itself of “climate science” and created the special Nature Climate Change journal, with the expected press release that this was done to highlight the global importance of the issue, give it the attention it is due, yada yada.
To search for published scholarly works, and from them discover the resumes of their writers, use Google Scholar: http://scholar.google.com/
The first name shows up as “JC Fyfe”. Looks like there’s two of them, one does biomedical. The other does climate science, here’s an example that was done for the American Meteorological Society (AMS):

Extratropical Southern Hemisphere Cyclones: Harbingers of Climate Change?
John C. Fyfe

Canadian Centre for Climate Modelling and Analysis, Meteorological Service of Canada, Victoria, British Columbia, Canada

Try Google Scholar for that and the other names.

johnmarshall
September 5, 2013 2:58 am

Or perhaps it is because the models have CO2 as an agent of warming when it cannot do this.

gnomish
September 5, 2013 2:59 am

kadaka
repeat the experiment with an infrared heater.
you may be able to heat the water slightly but you will find that the top layer absorbs most of the radiation and produces vapor in response – then the vapor absorbs the radiation and carries it off.
some numbers on that would make an interesting comparison

Nick Boyce
September 5, 2013 3:05 am

At the risk of repeating myself, in view of the admitted uncertainties in the global surface air temperature record, it is not at all clear how much, if any, global warming has taken place at the surface of the earth since about 1880.
http://lidskialf.blogspot.co.uk/p/global-warming-is-hoax-2.html

richardscourtney
September 5, 2013 3:20 am

Nick Boyce:
Your post at September 5, 2013 at 3:05 am says in total

At the risk of repeating myself, in view of the admitted uncertainties in the global surface air temperature record, it is not at all clear how much, if any, global warming has taken place at the surface of the earth since about 1880.
http://lidskialf.blogspot.co.uk/p/global-warming-is-hoax-2.html

Yes, I know. Indeed, I have been hammering the point in many places for many years; see e.g.
http://www.publications.parliament.uk/pa/cm200910/cmselect/cmsctech/memo/climatedata/uc0102.htm
So, I care about the inability to determine global temperature at least as much as you do.
However, that is NOT relevant to the discussion in this thread.
The climate models attempt to emulate climate change as indicated by global temperature (whatever that metric means). But the paper being discussed reports that the models fail in that attempt.
This failure is important because all IPCC predictions and projections are based on outputs of the climate models. Therefore, if the models do not emulate climate change – and the paper reports that they don’t – then everything the IPCC says is wrong so needs to be ignored.

Discussion of the failings of global temperature determination would disrupt the thread from its important subject. It should be avoided however much you, I or anyone else cares about the travesty which is determination of global temperature.
Richard

Gail Combs
September 5, 2013 3:45 am

richardscourtney says: @ September 5, 2013 at 2:34 am
…I agree both your points except that your second point is even stronger than you express.
Actually the true but unstated finding is that the models do not work for any length of time.
This is implicit because of the LIA issue I mention….
>>>>>>>>>>>>>>>
In the light of the geologic past the whole edifice crumbles. This study talks of “Global mean surface temperature over the past 20 years (1993–2012) rose at a rate of 0.14 ± 0.06 °C per decade”
That is a STABLE Climate and we should thank God for it.
From NOAA:

…Two different types of climate changes, called Heinrich and Dansgaard-Oeschger events, occurred repeatedly throughout most of this time. Dansgaard-Oeschger (D-O) events were first reported in Greenland ice cores by scientists Willi Dansgaard and Hans Oeschger. Each of the 25 observed D-O events consist of an abrupt warming to near-interglacial conditions that occurred in a matter of decades, and was followed by a gradual cooling….
http://www.ncdc.noaa.gov/paleo/abrupt/data3.html

(Note this not talking centuries but decades.)
How much of a ‘Warming?

Were Dansgaard-Oeschger events forced by the Sun?
Abstract
Large-amplitude (10–15 Kelvin), millennial-duration warm events, the Dansgaard-Oeschger (DO) events, repeatedly occurred during ice ages. Several hypotheses were proposed to explain the recurrence pattern of these events….

Not only were these drastic changes in temperature, but they still do not know what caused them.
These abrupt warmings also occur during Interglacials.
Again from NOAA.

A Pervasive 1470-Year Climate Cycle in North Atlantic Glacials and Interglacials: A Product of Internal or External Forcing?
Gerard C. Bond (Lamont-Doherty Earth Observatory…
New evidence from deep sea piston cores in the eastern and western subpolar North Atlantic suggests that regional climate underwent rapid sub-Milankovitch variability, not only during the last glaciation, as has been previously documented on a global scale, but also during the present interglacial (Holocene) and the previous interglacial (stage 5e). The evidence consists of recurring shifts in lithic grain concentrations, lithic grain petrology and percentages of foraminiferal species. Amplitudes of this cycle during interglacials are much smaller than during glacials, typically by a factor of 2 to 3 in temperature and by more than one order of magnitude in amounts of ice rafted debris…
Three features are especially noteworthy in our records. First, we find a persistent quasi-periodic cycle with a mean pacing of 1470 years in both glacials and interglacials, demonstrating that climate on that time scale oscillated independently of ice volumes….
The origin of the 1470-year cycle is far from clear. Its persistence across glacial- interglacial boundaries is evidence that it cannot have been produced by any internal process involving ice-sheet instabilities. On the other hand, the cycle pacing is close to the overturning time of the ocean, raising the possibility that it arises from an internal oscillation within the ocean’s circulation. External processes, such as solar forcing and harmonics of the orbital periodicities cannot be ruled out, but are, at least presently, difficult to test.
http://www.ncdc.noaa.gov/paleo/chapconf/bond_abs.html

Even at a factor of 2 to 3 smaller (of the 10–15 Kelvin amplitude) that still gives roughly a 3 to 7 Kelvin change “in a matter of decades”, a far cry from the ‘catastrophic’ 0.14 ± 0.06 °C per decade the Warmists are bleating on about.

steveta_uk
September 5, 2013 3:46 am

kadaka, you need to repeat this with an incandescent light heat source, and a dark base to the bowl, to verify that light cannot possibly heat water, as per another of the S*y Dr*gon rants.

Gail Combs
September 5, 2013 3:56 am

gnomish says: @ September 5, 2013 at 2:59 am
… repeat the experiment with an infrared heater.
you may be able to heat the water slightly but you will find that the top layer absorbs most of the radiation and produces vapor in response – then the vapor absorbs the radiation and carries it off.
some numbers on that would make an interesting comparison.
>>>>>>>>>>>>>>>>>>
Agreed.
A heat gun will ‘froth’ the water causing disruption of the surface boundary layer. (That type of disturbance is one of the arguments used by warmists to say the oceans absorb heat from CO2.)

richardscourtney
September 5, 2013 4:04 am

Gail Combs:
re your post at September 5, 2013 at 3:45 am
http://wattsupwiththat.com/2013/09/05/statistical-proof-of-the-pause-overestimated-global-warming-over-the-past-20-years/#comment-1408442
I fear we may be straying too far from the important subject of this thread. However, without meaning to start a side-track in the thread, I write to stress the importance of your point for the benefit of others.
The D-O Events indicate that the Earth has two stable conditions (i.e. glacial and interglacial). Transition between them consists of rapid ‘flickers’ between the two states until the climate stabilises in one of them.
This is consistent with the climate system being chaotic and having (at least) two strange attractors.
If that indication is correct then the fundamental assumption used in the climate models is wrong. The models assume climate change is driven by forcings.

However, the climate system has varying thermal input and varying temperature during each year so it is never in equilibrium. And, therefore, it oscillates (e.g. global temperature rises and falls by 3.8°C during each year).
If the chaotic climate system is constantly seeking its nearest strange attractor while the equilibrium it seeks is itself constantly changing, then ‘forcing’ is not relevant to climate change.
Richard

September 5, 2013 4:30 am

Gail Combs says:
September 5, 2013 at 2:07 am
So it is not just the last fifteen years, it is the last twenty years that the models “do not reproduce”

An interesting and accurate description. I would like to see the push back from the warmists if the reality of your statement is shown to them.

KevinM
September 5, 2013 4:33 am

117 simulations
114 high
3 on target
+ 0 low.
—————————
Groupthink

September 5, 2013 4:38 am

This is what happens when you try to model 1/f noise believing it to be a signal.
As the IPCC kindly showed in their graph of the frequency distribution of temperature, it is 1/f noise. Like “normal” white noise it is completely random, but unlike white noise, where the value at any time is independent of the last, in 1/f noise there is a high correlation between successive time points.
So e.g. if it becomes “hot” … it stays hot (for a while). If it is cold … it stays cold, and if there is a trend … it tends to stick around.
In other words, to the naive academic who wants to mine data for their next paper, it is full of quirks that can be claimed to be “something” but are all just random noise.
The only reason they got away with it so long is that the climate takes so long to change … long enough that their bogus claims of finding “something” were not immediately tossed in the trash.
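The point about persistent noise producing apparent trends can be sketched numerically. This is a rough illustration only (not from any paper discussed here); the spectral-shaping generator, the 20-sample window, and all parameters are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(42)

def pink_noise(n, rng):
    """Approximate 1/f ('pink') noise: scale the FFT of white noise
    so that power falls off as 1/f, then transform back."""
    white = rng.standard_normal(n)
    spectrum = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n)
    spectrum[0] = 0.0                   # drop the mean (DC) component
    spectrum[1:] /= np.sqrt(freqs[1:])  # power ~ 1/f
    x = np.fft.irfft(spectrum, n)
    return x / x.std()                  # normalise to unit variance

def decadal_trend(x):
    """OLS slope per 10 'years' for an annually sampled series."""
    slope = np.polyfit(np.arange(len(x)), x, 1)[0]
    return 10 * slope

# 1000 independent 20-'year' segments of pink vs white noise,
# both with unit variance:
pink_trends = [decadal_trend(pink_noise(20, rng)) for _ in range(1000)]
white_trends = [decadal_trend(rng.standard_normal(20)) for _ in range(1000)]

print("spread of 20-yr trends, pink :", np.std(pink_trends))
print("spread of 20-yr trends, white:", np.std(white_trends))
```

Because the pink series concentrates its variance at low frequencies, its short-window trends scatter much more widely than white noise of the same variance, i.e. persistent noise routinely looks like “something”.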

Steven Hill from Ky (the welfare state)
September 5, 2013 4:50 am

So, when this is all over, who’s going to take Gore to court for all the damages he has caused? I’d say that 200 million could pay off some coal miners and pay refunds for elevated electric bills people have been paying. I want to see people like Gore punished for all the lies he has been spewing. Take that Peace Prize away from him. It’s about time to take this country back.

September 5, 2013 5:07 am

Rich says: Surely if you’re modelling the climate you can’t say, “My model’s wrong because of internal climate variability” because that’s exactly what you’re supposed to be modelling.

The difference between what these academics do and what a professional engineer would do is simple. Suppose the model is M(t) and the climate variability function is V(t), written as an error factor E(t) = 1 + V(t).
Then the academic view of the world is that
Temperature = M(t) × E(t)
whereas the engineer would see this as:
Temperature = V(t) + M(t)
In essence there is almost no difference between these two, but the assumptions on which they are based lead to a very big difference. The academic, in a cosy out-of-touch world which only cares about curve-fitting to data, doesn’t need to worry about being sued if the “bridge falls down”. So they can assume that the model is right, dismiss that awkward thing called “natural variability”, ignore the errors in E(t) and, with a wave of the hand, magically assume they “averaged out”. (1/f noise doesn’t average out.)

In contrast, the engineer (who deals with the real world, where people die if they are not right) would start from the premise that nothing was known for sure unless or until they were confident they knew how big M(t)’s contribution was. This is in our culture: “expect the unexpected” … expect natural variability. Engineers are trained to be cautious in real-world situations (not ivory towers and grant applications) and are drilled in the true meaning of “confidence” (models that don’t fail, bridges that don’t collapse, weather forecasts that aren’t disastrously wrong). We want models which attain the engineer’s meaning of “confidence”, and everything else is “natural variability”.
For the academic, “confidence” is only a paper exercise that the curve fitted.
This leads to two very different viewpoints:
Academic: Temperature = M(t) …. Global temperature is the model, and confidence = “it fitted”.
Engineer: Temperature = V(t) …. Global temperature cannot be modelled unless or until we are sure there is a model that works, and confidence is your credibility at getting it right first time.

Claude Harvey
September 5, 2013 5:07 am

Note the “spin” in the linked Pacific Climate article summarizing the paper:
“Over long time scales, global climate models successfully simulate changes in a variety of climate variables, including the global mean surface temperature since 1900. However, over shorter time scales the match between models and observations may be weaker.”
Translation: “We’re still all going to burn up and die if we don’t drown first!”

Editor
September 5, 2013 5:13 am

RMB says:
September 5, 2013 at 12:31 am

Try heating the surface of water with a heat gun. At 450 °C the surface should quickly boil; in fact it remains cool. You cannot heat water through the surface, and that’s why they are all having a problem.

Yes, but you can heat water with a stream of air with a dew point higher than the temperature of the water, and possibly with a wet bulb temperature greater than the water temperature.
Hint – if you see fog forming over an ocean, you can be pretty confident that relatively warm, moist air is advecting over the water surface and that moisture is condensing on the surface. That releases heat that warms the water; wave action mixes it downward.
The wet bulb temperature is the temperature that an air mass can bring water to by conduction and evaporation. The reason the heat gun doesn’t work well is that the hot dry air evaporates the water surface.
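The dew-point rule of thumb above can be put into rough numbers. A sketch using the Magnus approximation for dew point (the constants are one common parameter set, good to a few tenths of a degree in ordinary conditions; the air conditions and water temperature below are made-up examples):

```python
import math

def dew_point_c(temp_c, rh_percent):
    """Dew point (deg C) via the Magnus approximation."""
    b, c = 17.62, 243.12  # one common Magnus parameter set
    gamma = math.log(rh_percent / 100.0) + (b * temp_c) / (c + temp_c)
    return (c * gamma) / (b - gamma)

water_c = 15.0  # hypothetical sea-surface temperature
for air_c, rh in [(25.0, 90.0), (30.0, 30.0)]:
    td = dew_point_c(air_c, rh)
    effect = ("condensation releases heat into the water"
              if td > water_c else "evaporation cools the water")
    print(f"air {air_c} degC at {rh}% RH -> dew point {td:.1f} degC: {effect}")
```

Warm, very moist air (the fog case) has a dew point above the water temperature, so moisture condenses and warms the surface; the heat gun's hot but dry air has a dew point far below it, so it mostly drives evaporation instead.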

Steve Keohane
September 5, 2013 5:15 am

Gail Combs says:September 5, 2013 at 3:56 am
gnomish says: @ September 5, 2013 at 2:59 am
… repeat the experiment with an infrared heater.
you may be able to heat the water slightly but you will find that the top layer absorbs most of the radiation and produces vapor in response – then the vapor absorbs the radiation and carries it off.
some numbers on that would make an interesting comparison.
>>>>>>>>>>>>>>>>>>
Agreed.
A heat gun will ‘froth’ the water causing disruption of the surface boundary layer. (That type of disturbance is one of the arguments used by warmists to say the oceans absorb heat from CO2.)

Gail, isn’t the ‘normal’ state of the ocean surface ‘frothed’, due to wave and wind action?

September 5, 2013 5:16 am

1. When temperature anomalies are used, is the temperature of the reference period (which is subtracted from the reading to give the anomaly) also adjusted when the rest of the data are adjusted?
2. When it is stated that Earth is recovering from the Little Ice Age by getting warmer, where is the source of more heat and is it a long-term source (like a warmed ocean portion releasing heat) or is it a quick-changing source, like a radiation imbalance in the atmosphere?
I think it is weak to argue that the Earth is recovering from an LIA unless a mechanism is given, one that is consistent with measurements.
For those who query the actual temperature change in the last 20 years, do try the UAH or RSS satellite record. Note, however, that there is no compelling argument that temperatures taken from a Stevenson screen 2.5 m above the surface of the Earth should be the same as (not offset from) those from a satellite measuring microwaves from a thickness of oxygen some distance above the Earth.

Bruce Cobb
September 5, 2013 5:16 am

Except no one’s claiming that there has been a “pause” for 20 years. A rise in temperature at a rate of 0.14 ± 0.06 °C per decade sure doesn’t sound like a “pause”, although it could be termed a slowdown. And there it is. By cherry-picking the last 20 years, instead of the last, say, 17 years, they can claim a “slowdown”. It’s a way of back-pedaling, and thus keeping their precious CO2-centric models alive for at least a while longer.

September 5, 2013 5:22 am

only one complaint…
“For this period, the observed trend of 0.05 ± 0.08 °C per decade is more than four
times smaller than the average simulated trend of 0.21 ± 0.03 °C per decade (Fig. 1b).”
it’s kinda pissant, but… the rate is one fourth that of the average simulated trend…
you can’t be four times smaller than anything… once you get to one time smaller, you’re at zero.
just saying it ’cause it’s true.
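The arithmetic behind the complaint, using the two figures quoted above from the commentary:

```python
obs, sim = 0.05, 0.21  # degC per decade, observed vs simulated (Fig. 1b period)
ratio = obs / sim
print(f"observed/simulated = {ratio:.2f}")
# about 0.24: the observed trend is roughly one quarter OF the simulated
# trend, which is what "four times smaller" is trying (badly) to say.
```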

bit chilly
September 5, 2013 5:25 am

UK banks were made to pay back customers for mis-sold policies. I trust the government will be paying us all back the 15% green energy tax we are currently paying, and the inflated vehicle tax for vehicles producing higher amounts of CO2, along with the funding diverted from important research into cancer etc.?
Is there any organised, concerted effort in the US or the UK to petition government with the now constant stream of information falsifying the cAGW hypothesis? If not, it is time it was organised by ordinary citizens.
In the UK a petition with 100,000 signatories must be discussed in parliament. Is there such a petition active at the moment?

Editor
September 5, 2013 5:26 am

gnomish says:
September 5, 2013 at 2:59 am

kadaka
repeat the experiment with an infrared heater.
you may be able to heat the water slightly but you will find that the top layer absorbs most of the radiation and produces vapor in response – then the vapor absorbs the radiation and carries it off.
some numbers on that would make an interesting comparison

That’s a completely different experiment.
What I expect will happen is that evaporation will occur and raise the dew point and wet bulb temperature of the air in the room (kitchen, a stove top was in use). We can ignore the wet bulb temperature as there is little wind. As the dew point goes above the water temperature, the water will begin to warm and conduction will transport heat downward.
A completely different experiment.

R Taylor
September 5, 2013 5:30 am

richardscourtney says:
September 5, 2013 at 1:03 am
——————————————————
It’s no lie. Voodoo priests that truly serve the tribal chief can divine a future that serves his interest, no matter what they have said in the past.

Editor
September 5, 2013 5:32 am

steveta_uk says:
September 5, 2013 at 3:46 am

kadaka, you need to repeat this with an incandescant light heat source, and a dark base to the bowl, to verify that light cannot possibly heat water, as per another of the S*y Dr*gon rants.

This is also a completely different experiment. Light energy that isn’t absorbed at the surface will warm the bottom of the bowl and then heat the water. (In the ocean some will be absorbed by water and stuff in it.) Of course, there’s the claim that visible light doesn’t heat objects, only infrared does that, probably the most blatantly idiotic claim.

Bill Illis
September 5, 2013 5:35 am

Good place to start the numbers from: 1993, the deepest part of the temperature downturn from the Pinatubo eruption. One gets the maximum warming trend by starting there.
To be fair, the authors then go on to remove the volcanic and ENSO signals and find less warming of course. Then they note the temp trends are similar to the AMO cycles.
At least the climate scientists are no longer ignoring the difference between the models and the observations.

Snowlover123
September 5, 2013 5:38 am

A lot of you are noting and criticizing the paper for calling the hiatus “temporary.” But I would just like to point out that this is a huge step for some climate scientists: acknowledging that the data do in fact show that the rate of warming is not statistically different from zero over the last 15 years. We’re making baby steps. At first there was vehement denial that such a pause existed, and many who acknowledged it were chastised. Now we are getting “mainstream” confirmation, which IMO is huge. This is also considering that the 1990s saw some pretty quick rates of warming. Even including that rate, the models still grossly overestimate temperature rise.

Chuck L
September 5, 2013 5:39 am

Rich says:
September 5, 2013 at 1:18 am
“This difference might be explained by … internal climate variability.” Surely if you’re modeling the climate you can’t say, “My model’s wrong because of internal climate variability” because that’s exactly what you’re supposed to be modeling. They keep saying this and I don’t get it.
RICH, I do not think the modelers and their enablers are capable of admitting that “maybe the models are wrong” because
a. They want the money to keep flowing
b. The models have become articles of faith, rather than tools for exploring the science

kadaka (KD Knoebel)
September 5, 2013 5:49 am

gnomish said on September 5, 2013 at 2:59 am:

kadaka
repeat the experiment with an infrared heater.
you may be able to heat the water slightly but you will find that the top layer absorbs most of the radiation and produces vapor in response – then the vapor absorbs the radiation and carries it off.
some numbers on that would make an interesting comparison

Proposed: Thermal radiation from an infrared heater applied to the surface of water cannot heat the water.
Experiment setup:
1 bowl of unknown plastic, semi-flexible, no recycling symbol indicating plastic type, 2 1/3 cups (US measure) capacity. Approximate dimensions: 5 1/2″ inside diameter top with 1/2″ wide rim, 2″ effective depth, circular arc curve (concave interior surface) to 2 7/8″ diameter flat bottom, with integral hollow cylindrical section base of 1/4″ height and 2 7/8″ diameter. Base design minimizes heat transfer with surface underneath. Usually used for cold to warm contents (ice cream to oatmeal) but not boiling hot items.
1 Master Forge Wireless Thermometer #0023557, originally purchased at Lowes, consists of display-less transmitting base with probe and receiving hand unit which displays temperature, set for °F. Normally used for grilling/roasting. Has timer count-up and count-down functions displaying minutes and seconds. Used for temperature readings and timing.
2 cups (US measure) room temperature tap water, from well.
1 Sears Kenmore 30″ Electric Free Standing Range, 240VAC, Model # 911.93411000, broiler element used as infrared heater.
Procedure:
Water in bowl, thermometer probe in water. Initial reading 75°F (no decimal), room temperature. Bowl placed inside oven chamber. Bowl resting at center of factory-original steel grid oven rack with water surface at approximately 8″ from broiler element.
Broiler element was turned on, door was left ajar to minimize heating of the bowl and water by heated air in the chamber. Water surface was still.
Results in CSV format:
Time,Temperature
min:sec,°F
0:00,75
0:30,75
1:00,75
1:30,75
2:00,75
2:30,75
3:00,75
3:30,75
4:00,75
4:30,76
5:00,77
5:30,77
6:00,78
6:30,78
7:00,79
7:30,80
8:00,80
8:30,81
9:00,83
9:30,84
10:00,85
Experiment was terminated due to concern over notable acceleration of rate of warming. After shutting off the infrared heater and examining the chamber with the goal of removing the bowl, it was determined the bowl rim had begun thermal-based deformation. Containment of the water had not been lost. After partial cool-down the bowl with water was removed from the chamber. It was observed that the integral base did not have any deformation marks from the individual rods of the steel rack, indicating the steel rack was cooler than the bowl rim.
Except for the rim, there is no noticeable deformation of the bowl. As the rim deformed and solidified into a flexible state apparently unchanged from before, the material is identified as a thermoplastic plastic, not a thermoset plastic.
There was no noticeable production of steam or any other form of water vapor.
Discussion: Output of infrared heater was applied to surface of water. Chamber was not preheated. Temperature of water increased after an apparent warm-up period. It was already known that excessively prolonging the experiment would likely lead to catastrophic containment failure, thus it was planned to terminate the experiment upon signs of possible container deformation, and it was.
It is clear the water was warmed. But it is also clear the plastic of the bowl absorbed the emissions of the infrared heater, as the plastic nearest to the heater that was not able to effectively use the water as a thermal sink did deform.
As it cannot be determined how much of the heating of the water was due to direct absorption by the water of infrared heater emissions and how much of the heating was from the heating of the bowl due to the emissions, it is evident the bowl used was not made of the proper material for use with this heat source.
Conclusion: Due to deficiencies in experimental apparatus, the proposition has been neither confirmed nor falsified.
Additional: A proper design for this experiment would use a container that will be unaffected by the expected temperatures and that will not directly absorb the emissions of the infrared heater, which would indicate a metal like stainless steel, that will not allow the water to absorb ambient chamber heat, which would not indicate metal.
The recommendation would therefore be for a stainless steel bowl (or similar container) that sits on a base of an insulating and non-heat retaining material such as that used for lightweight fire bricks (examples) or a supporting mat of a material like Kaowool (examples). The insulating material would have to cover the exterior of the bowl up to at least the water level. Ideally the insulation would go to the rim with the water level up to the rim, but a relatively small amount of container above the insulation and the water surface would yield a negligible difference.
Without such a setup to control the confounding possibility of the container heating the water, it is unlikely any meaningful conclusion can be drawn from the attempted heating of water by an infrared heater.
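For what it’s worth, a quick linear fit to the readings reported above gives the post-warm-up heating rate (a rough estimate only; as the discussion notes, it cannot separate direct absorption by the water from heating via the bowl):

```python
import numpy as np

# time (minutes) and temperature (deg F) transcribed from the run above
t = np.arange(0, 10.5, 0.5)
f = np.array([75, 75, 75, 75, 75, 75, 75, 75, 75,       # 0:00-4:00 plateau
              76, 77, 77, 78, 78, 79, 80, 80, 81,       # steady warming
              83, 84, 85])                               # acceleration at end

# OLS slope over the post-warm-up portion (4 min onward), deg F per minute
warm = t >= 4.0
slope = np.polyfit(t[warm], f[warm], 1)[0]
print(f"warming rate after warm-up: {slope:.2f} degF/min")
```

The fit confirms the qualitative conclusion: the water was clearly warming, at well over a degree Fahrenheit per minute by the end of the run.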

September 5, 2013 6:15 am

Interestingly enough, my data confirm a trend of about 0.15 °C per decade warming in both maxima and mean temperatures (note that my tables are laid out in degrees C per annum).
http://blogs.24.com/henryp/2013/02/21/henrys-pool-tables-on-global-warmingcooling/
However, a problem turns up if we look at the warming from 2000 onward.
Stop worrying about the global warming, start worrying about the global cooling.
http://blogs.24.com/henryp/2013/04/29/the-climate-is-changing/

Steven Hill from Ky (the welfare state)
September 5, 2013 6:20 am

1st it was Hansen’s ice age, followed by the boiling planet. Can we get him some Prozac? In fact, Obama needs some as well. It’s treason what these people have done to our nation. All the weather channel talks about is climate change this and that. Tornadoes are down, hurricanes are down, and the insurance companies are cleaning up on Gore’s constant lying. Wake up people, nobody can even get close to what the earth is going to do next…….dah!!!!!

Steven Hill from Ky (the welfare state)
September 5, 2013 6:21 am

Man is nothing more than an ant in a tiny corner of the universe….that’s it, nothing more, nothing less.

September 5, 2013 6:25 am

Steven Hill says
Man is nothing more than an ant in a tiny corner of the universe….that’s it, nothing more, nothing less.
henry@steven
Where is your faith?
http://blogs.24.com/henryp/2013/03/01/where-is-your-faith/

Rich
September 5, 2013 6:34 am

richardscourtney: Thank you for trying to make that clear. Can I summarize it as, “There’s more noise in the system than we assumed”? If so, aren’t we just back with Lorenz’s discovery that chaotic systems produce output that looks like noise? If that’s the case then it’s the noise that has to be modelled not condensed into “error bars”. (I do know it’s not you I’m arguing with. Thanks for your efforts to explain the climate modellers’ thinking).

Bruce Cobb
September 5, 2013 6:34 am

It’s as though they mean to say “stopped”, but somehow it comes out as “slowdown”. Probably something to do with knowing on which side their bread is buttered.

Nick Boyce
September 5, 2013 6:35 am

Reply to Richard Courtney
richardscourtney says:
September 5, 2013 at 3:20 am
You say that my comment is both irrelevant and disruptive in this thread. I don’t see how it can be both, although it might be one or the other. As it happens, my comment seems to have passed by without disrupting the discussion. So that leaves its irrelevancy. Part of the main article in question makes the following claim.
“Global mean surface temperature over the past 20 years (1993–2012) rose at a rate of 0.14 ± 0.06 °C per decade (95% confidence interval)1.”
I’m no expert in 95% confidence intervals as used by statisticians, but even so I’m 100% confident that the ±0.06 °C margin of error mentioned is ludicrously small. I’m just an old twerp who only became interested in global warming etc. upon becoming an old age pensioner, so what do I know(?), except that these margins of error are of capital importance when estimating mean global surface temperatures. If I have a bee in my bonnet, it’s that these margins of error are always ridiculously small, and I’ll not apologise for that.
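One concrete reason trend error bars can come out too tight: the naive white-noise formula for a trend’s standard error understates the uncertainty when the residuals are autocorrelated. A toy Monte Carlo sketch (the AR(1) persistence, noise level and trend here are illustrative assumptions, not estimates for the real climate record):

```python
import numpy as np

rng = np.random.default_rng(0)
n, phi, sigma = 20, 0.6, 0.1  # years; AR(1) persistence; innovation std
true_trend = 0.014            # degC/yr (0.14 degC/decade), illustrative

def ar1(n, phi, sigma, rng):
    """AR(1) 'weather' noise around the trend line."""
    x = np.zeros(n)
    for i in range(1, n):
        x[i] = phi * x[i - 1] + rng.normal(0.0, sigma)
    return x

t = np.arange(n)
slopes = [np.polyfit(t, true_trend * t + ar1(n, phi, sigma, rng), 1)[0]
          for _ in range(2000)]
spread = np.std(slopes)  # actual slope uncertainty under persistent noise

# naive standard error assuming white residuals of the same marginal std:
sigma_marg = sigma / np.sqrt(1 - phi**2)
naive = sigma_marg / np.sqrt(np.sum((t - t.mean())**2))

print(f"actual slope spread {spread:.4f} degC/yr vs naive SE {naive:.4f}")
```

With persistence this strong, the true scatter of fitted 20-year trends is substantially larger than the textbook white-noise standard error, so a CI built from the naive formula would indeed be "ridiculously small".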

Julian Flood
September 5, 2013 6:36 am

Steve Keohane says:
quote
Gail, isn’t the ‘normal’ state of the ocean surface ‘frothed’, due to wave and wind action?
unquote
While we’re doing experiments:
Do various heating-from-above experiments with water that has been rigorously cleaned and the same water that has been polluted with a mix of light oil and surfactant.
Difficult to simulate wave action though, as the bowl won’t be big enough, but we can observe from nature that the mix suppresses waves. I wonder what happens to heating when the surface frothing is suppressed?
JF

Gene Selkov
September 5, 2013 6:36 am

Steven Hill from Ky (the welfare state) says:
> So, when this is all over, who’s going to take Gore to court for all the damages he has caused?
I hope somebody properly skilled tries that, but I am sceptical of the outcome. How can you punish somebody for delivering something that was so universally acclaimed? With ecstatic audiences screaming for more? Can’t charge one for rape if it was consensual, I’m afraid.
The real damages were caused by Gore and nearly half the population of the planet. Can we sue them all?

ATheoK
September 5, 2013 6:45 am

“Commentary from Nature Climate Change, by John C. Fyfe, Nathan P. Gillett, & Francis W. Zwiers

Recent observed global warming is significantly less than that simulated by climate models. This difference might be explained by some combination of errors in external forcing, model response and internal climate variability.

Global mean surface temperature over the past 20 years (1993–2012) rose at a rate of 0.14 ± 0.06 °C per decade (95% confidence interval). This rate of warming is significantly slower than that simulated by the climate models participating in Phase 5 of the Coupled Model Intercomparison Project (CMIP5). To illustrate this, we considered trends in global mean surface temperature computed from 117 simulations of the climate by 37 CMIP5 models …

These models generally simulate natural variability — including that associated with the El Niño–Southern Oscillation and explosive volcanic eruptions — as well as estimate the combined response of climate to changes in greenhouse gas concentrations, aerosol abundance (of sulphate, black carbon and organic carbon, for example), ozone concentrations (tropospheric and stratospheric), land use (for example, deforestation) and solar variability. By averaging simulated temperatures only at locations where corresponding observations exist, we find an average simulated rise in global mean surface temperature of 0.30 ± 0.02 °C per decade (using 95% confidence intervals on the model average). The observed rate of warming given above is less than half of this simulated rate, and only a few simulations provide warming trends within the range of observational uncertainty …

… For this period, the observed trend of 0.05 ± 0.08 °C per decade is more than four times smaller than the average simulated trend of 0.21 ± 0.03 °C per decade (Fig. 1b).

It is worth noting that the observed trend over this period — not significantly different from zero — suggests a temporary ‘hiatus’ in global warming.

The divergence between observed and CMIP5-simulated global warming begins in the early 1990s, as can be seen when comparing observed and simulated running trends from 1970–2012 (Fig. 2a and 2b for 20-year and 15-year running trends, respectively). The evidence, therefore, indicates that the current generation of climate models (when run as a group, with the CMIP5 prescribed forcings) do not reproduce the observed global warming over the past 20 years, or the slowdown in global warming over the past fifteen years.

This interpretation is supported by statistical tests of the null hypothesis that the observed and model mean trends are equal”

Worthless. Waffle and weasel words that appear scientific, but are not.

Begin with the divergence, which began in the early 1990s, when the models began running. The models were wrong from the get-go and should have been tested, qualified and certified before ever running simulations for use.

With uncertified, unqualified models that have no demonstrated accuracy against observations, these folks then have the nerve to claim that the models “…generally simulate natural variability…”. The operative word is “generally”, meaning in their opinion, not verified testing.

“…By averaging simulated temperatures only at locations where corresponding observations exist…” Is this a statistically valid method? Do the errors from the other locations carry on through? This phrase looks like ‘cherry pick’ in capitals, and they still can’t get what they want.

This averaging is after running the models 117 times. Why 117 runs? Why not 125 runs or 300 runs or 10 runs… Such an odd number, 117 runs; smells like…

They have what can only be termed massive observational evidence against the models, and in their final sentences they slip in the word ‘suggests’ and then the phrase ‘temporary hiatus in global warming’, with ‘hiatus’ in quotes for emphasis.

Only absolute faith in the unproven theory of anthro global warming can underlie that word temporary, as it certainly isn’t in the evidence. Instead, the authors should have declared the models useless until corrected and independently certified. They should also be seriously considering whether anthro contributions to global warming can truly be accurately identified against natural variability. I agree with richardscourtney about how this phrase is correctly described (a lie), but differ slightly on what the authors should have said.

MikeN
September 5, 2013 6:46 am

I’m confused by the numbers here. If the 15-year trend in the models is 0.21 °C per decade, while the 20-year trend is 0.30 °C per decade, then that would mean the models calculated 0.315 °C of warming over the 15 years and 0.60 °C over the 20 years, and thus an extra 0.285 °C of warming from 1993 to 1998, a rate of 0.57 °C per decade.
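Working the two quoted trends through as simple endpoint differences (a back-of-envelope decomposition; real OLS trends over overlapping windows do not subtract quite this cleanly) gives warming, not cooling, for the first five years:

```python
trend_20yr = 0.30  # degC/decade, model mean 1993-2012 (from the commentary)
trend_15yr = 0.21  # degC/decade, model mean 1998-2012

total_20 = trend_20yr * 2.0   # 0.60 degC over 20 years
total_15 = trend_15yr * 1.5   # 0.315 degC over 15 years

first_5 = total_20 - total_15       # implied change over 1993-1998
rate_first_5 = first_5 / 0.5        # expressed per decade

print(f"implied 1993-1998 change: {first_5:.3f} degC "
      f"({rate_first_5:.2f} degC/decade)")
```

So the numbers imply roughly 0.285 °C of model warming in 1993–1998, i.e. 0.57 °C per decade, plausible for a post-Pinatubo rebound rather than an inconsistency.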

JJ
September 5, 2013 6:47 am

Listen to this crap:
It is worth noting that the observed trend over this period — not significantly
different from zero — suggests a temporary ‘hiatus’ in global warming.

The observed trend does not suggest that the cessation of warming is temporary. That’s a lie. And the use of ‘hiatus’ makes the lie redundant.
The evidence, therefore, indicates that the current generation of climate models
(when run as a group, with the CMIP5 prescribed forcings) do not reproduce
the observed global warming over the past 20 years, or the slowdown in global
warming over the past fifteen years.

“Do not reproduce the observed global warming”? WTF kind of stilted sentence construction is that? And ‘slowdown in global warming’? It didn’t slow down. It stopped. These sort of linguistic tricks to hide the truth and imply lies are propaganda techniques, not honest scientific communication.
Stated plainly, the evidence indicates that the current climate models grossly exaggerated the observed warming over the last 20 years, and predicted even greater warming still, when in fact there was none at all for the past 15 years. This, therefore, demonstrates that the models’ predictions were bad, and have become even worse. The observed trend over this period suggests that anything that these models predict for the future is absolute bullshit.

richardscourtney
September 5, 2013 6:56 am

Geoff Sherrington:
In your post at September 5, 2013 at 5:16 am
http://wattsupwiththat.com/2013/09/05/statistical-proof-of-the-pause-overestimated-global-warming-over-the-past-20-years/#comment-1408491
you say

I think it is weak to argue that the Earth is recovering from an LIA unless a mechanism is given, one that is consistent with measurements.

It seems you don’t understand this thing called ‘science’.
The first stage of a scientific investigation is to admit you don’t understand an observed effect.
After that you can start the process of determining what you don’t understand.
And that process is prevented by pretending that
(a) the effect doesn’t exist because it is not understood
or
(b) that you understand the effect when you don’t.
I wonder where you obtained your mistaken and anti-science idea that an observed effect should be ignored unless its mechanism is understood. Perhaps from climastrologists?
In reality it is a STRENGTH (n.b. not “weak”) to acknowledge what is observed but not understood because that can lead to understanding “which is consistent with measurements”. Indeed, if an understanding is not “consistent with measurements” then it is not a true understanding.
And that is what this thread is all about. The modellers built their climate models to represent their understandings of climate mechanisms. If their understandings were correct then the models would behave as the climate does. The fact that the climate models provide indications which are NOT “consistent with measurements” indicates that the understanding of climate mechanisms of the modellers is wrong (or, at least, the way they have modeled that understanding is in error).
Richard

Gail Combs
September 5, 2013 7:02 am

richardscourtney says: @ September 5, 2013 at 4:04 am
I fear we may be straying too far from the important subject of this thread….
>>>>>>>>>>>>>>>>>
I think it is all related since the inability of the models to perform as advertised is because they completely miss the boat on how the climate actually works.

September 5, 2013 7:05 am

richardscourtney says:
September 5, 2013 at 1:55 am
Until now the modellers have assumed effects of internal variability sum to insignificance over periods of ~15 years.
====================
that really is the crux of the problem. the assumption that natural variability is simply noise around a mean. and thus will average out to zero over short periods of time. chaos tells us something entirely different.
chaos tells us that averages are an illusion of your sample period. as you increase the sample period longer term attractors will come to dominate, changing the long term average without any change in the forcings.
this is completely overlooked in the climate models, which assume that any long term change can only be a result of a change in the forcings.
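ferd berple's point that "averages are an illusion of your sample period" can be illustrated with any chaotic system. Below is a minimal sketch (my own illustration, not anything from the thread) using the classic Lorenz-63 equations: the equations, i.e. the "forcings", never change, yet consecutive window averages of the same trajectory differ because the trajectory lingers on one lobe of the attractor or the other.

```python
def lorenz_deriv(x, y, z, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Time derivatives of the Lorenz-63 system with standard parameters."""
    return sigma * (y - x), x * (rho - z) - y, x * y - beta * z

def rk4_step(x, y, z, dt):
    """Advance the state one step with classical 4th-order Runge-Kutta."""
    k1 = lorenz_deriv(x, y, z)
    k2 = lorenz_deriv(x + 0.5 * dt * k1[0], y + 0.5 * dt * k1[1], z + 0.5 * dt * k1[2])
    k3 = lorenz_deriv(x + 0.5 * dt * k2[0], y + 0.5 * dt * k2[1], z + 0.5 * dt * k2[2])
    k4 = lorenz_deriv(x + dt * k3[0], y + dt * k3[1], z + dt * k3[2])
    x += dt / 6.0 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    y += dt / 6.0 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    z += dt / 6.0 * (k1[2] + 2 * k2[2] + 2 * k3[2] + k4[2])
    return x, y, z

def window_means(n_windows=8, window_steps=2000, dt=0.01):
    """Average x over consecutive windows of one trajectory.

    Nothing in the equations changes between windows, yet the window
    averages differ: the 'mean' depends on the sample period chosen."""
    x, y, z = 1.0, 1.0, 1.0
    for _ in range(5000):              # discard the initial transient
        x, y, z = rk4_step(x, y, z, dt)
    means = []
    for _ in range(n_windows):
        total = 0.0
        for _ in range(window_steps):
            x, y, z = rk4_step(x, y, z, dt)
            total += x
        means.append(total / window_steps)
    return means

if __name__ == "__main__":
    print([round(m, 2) for m in window_means()])
```

The window averages wander even though nothing external has changed, which is exactly the behaviour that an "internal variability sums to zero" assumption rules out over short periods.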

Scott
September 5, 2013 7:09 am

We sometimes use a 150 gallon metal cattle trough heated with a propane weedburner as a makeshift jacuzzi at our cabin in the woods. The first time we tried to heat it in the winter, we blasted the weedburner at the side of the trough for hours and to our amazement it barely heated the water at all. We wised up, placed the trough over a shallow trench in the sand, blasted it lengthwise across the bottom and it nicely heated up to temperature in 45 minutes. I suspect if we attempted to blast the weedburner at the water’s surface we’d still be waiting for the water to heat up.
I concluded that a large volume of water is best heated from the bottom.

September 5, 2013 7:13 am

I am pleasantly astounded at how quickly discussion of the ‘pause’ has passed from heresy to mainstream. Now all someone has to do is publish the ultimate taboo: natural variability can push temperatures up as well as down.
I am also hugely enjoying KD Knoebel’s rather off-topic but superbly dry experimental reports. There is some ground-breaking determination of the properties of plastics going on right before our eyes: “… the material is identified as a thermoplastic plastic, not a thermoset plastic.” Insightful. I’m sure the Slayers are learning a lot, if they can keep up.

Gail Combs
September 5, 2013 7:14 am

Steve Keohane says: @ September 5, 2013 at 5:15 am
Gail, isn’t the ‘normal’ state of the ocean surface ‘frothed’, due to wave and wind action?
>>>>>>>>>>>>>>>>
It varies. The Horse Latitudes (between 30 and 35 degrees, north and south) were called that because of all the dead horses tossed overboard when the sailing ships got stuck in a no wind situation.
That is why both experiments are of interest.

richardscourtney
September 5, 2013 7:16 am

Nick Boyce:
Your post at September 5, 2013 at 6:35 am begins

Reply to Richard Courtney
richardscourtney says:
September 5, 2013 at 3:20 am
You say that my comment is both irrelevant and disruptive in this thread. I don’t see how it can be both, although it might be one or the other.

Say what!?
The subject of this thread is far too important for semantic disputes.
This links to my post so anybody can easily read what my post said
http://wattsupwiththat.com/2013/09/05/statistical-proof-of-the-pause-overestimated-global-warming-over-the-past-20-years/#comment-1408432
Richard

Gail Combs
September 5, 2013 7:19 am

Geoff Sherrington says: @ September 5, 2013 at 5:16 am
I think it is weak to argue that the Earth is recovering from an LIA unless a mechanism is given, one that is consistent with measurements.
>>>>>>>>>>>>>>>>>>>>
See my comment above on Dansgaard-Oeschger (D-O) events. They are called Bond events during an interglacial.

September 5, 2013 7:28 am

The initial wildly exaggerated claims of climate disaster were deliberate, so that draconian restrictions on human freedom could be quickly imposed. Had that been successful, the inevitable pause could then have been credited to the success of the freedom-killing regime imposed on the people, which in turn would have been used to justify making it permanent. Fortunately they failed in their efforts and exposed the big lie of AGW and ACC.

richardscourtney
September 5, 2013 7:32 am

Rich:
Thankyou for your reply to me at September 5, 2013 at 6:34 am which says in full

richardscourtney: Thank you for trying to make that clear. Can I summarize it as, “There’s more noise in the system than we assumed”? If so, aren’t we just back with Lorenz’s discovery that chaotic systems produce output that looks like noise? If that’s the case then it’s the noise that has to be modelled not condensed into “error bars”. (I do know it’s not you I’m arguing with. Thanks for your efforts to explain the climate modellers’ thinking).

As to your first question; viz.
“Can I summarize it as, “There’s more noise in the system than we assumed”?”
I answer, Yes.
But your second question is a bit more tricky. It asks,
“If so, aren’t we just back with Lorenz’s discovery that chaotic systems produce output that looks like noise?”
The answer is, possibly.
Please note that I am not avoiding your question. A full answer would contain so many “ifs” and “buts” that it would require a book. However, I addressed part of the answer in my post which supported Gail Combs and is at September 5, 2013 at 4:04 am
http://wattsupwiththat.com/2013/09/05/statistical-proof-of-the-pause-overestimated-global-warming-over-the-past-20-years/#comment-1408455
Indeed, that linked post leads to the entire issue of what is – and what is not – noise.
(Incidentally, I point out that when Gail Combs gets involved in a thread it is useful to read her posts although they are often long: she usually goes to the real crux of an issue.)
I fully recognise that this answer is inadequate and trivial, but I think it is the best I can do here. Sorry.
Richard

Bob L.
September 5, 2013 7:37 am

kadaka: I’m thoroughly enjoying your posts. If the IPCC brought 1/10th of the scientific rigor and honesty to climate issues that you invest in spur-of-the-moment experiments in your kitchen, there would be no AGW movement. Cheers!

richard verney
September 5, 2013 7:38 am

Geoff Sherrington says:
September 5, 2013 at 5:16 am
“..Note, however, that there is no compelling argument that temperatures taken from a Stevenson screen 2.5 m above the surface of the Earth should be the same as (not offset from) those from a satellite measuring microwaves from a thickness of oxygen some distance above the Earth”
///////////////////////
One would not expect the temperature measurement (ie., the absolute temperature) to be the same since as you state, they are measuring temperatures at different locations. However, one would expect the trend of their respective temperature anomalies to be the same. If not, where is the temperature increase that has been observed 2.5m above the ground going, if not upwards to where the satellite is making measurements?

Gail Combs
September 5, 2013 7:38 am

Ric Werme says: @ September 5, 2013 at 5:32 am
….Of course, there’s the claim that visible light doesn’t heat objects, only infrared does that, probably the most blatantly idiotic claim.
>>>>>>>>>>>>>>>>>>>
That claim is quickly refuted by touching a white versus a black surface in the southern sun, right before you get treatment for the burns.

richard verney
September 5, 2013 7:43 am

Steven Hill from Ky (the welfare state) says:
September 5, 2013 at 6:21 am
Man is nothing more than an ant in a tiny corner of the universe….that’s it, nothing more, nothing less.
//////////////////////////////////
And ants and termites emit more CO2 than man!
Dangerous thing ants.

Gail Combs
September 5, 2013 7:47 am

Gene Selkov says: @ September 5, 2013 at 6:36 am
….The real damages were caused by Gore and nearly half the population of the planet. Can we sue them all?
>>>>>>>>>>>>>>>>>>>>>>>
Depends on whether or not you can equate it to someone yelling FIRE in a crowded theater. It’s called Reckless Endangerment and is illegal in all US states. You have to prove it was done intentionally, knowing that there was no such danger.

Gene Selkov
Reply to  Gail Combs
September 5, 2013 8:34 am

Gail: Thank you for reminding me of Reckless Endangerment. I hoped something like that would apply. Now I recall there were efforts made at one time to trap the persons triggering fire alarms:
http://blog.modernmechanix.com/fire-box-traps-pranksters/

TomRude
September 5, 2013 7:49 am

Got to love it: “This difference might be explained by some combination of errors in external forcing, model response and internal climate variability.”
Yeah Gillett and Co… simply put, your pal AGW science is hardly settled.

richard verney
September 5, 2013 7:50 am

All of those discussing warming the oceans by heat from above are overlooking that the temperature of the air above the open oceans is at about the same temperature as the ocean below.
It is rare for there to be as much as 1 degC difference (usually far less), so nothing like a hot hair drier, or hot IR lamp over a bowl or bucket of water.

September 5, 2013 7:53 am

Gösta Oscarsson says:
September 5, 2013 at 12:29 am
There are a few “model trends” which correctly describe “observed trends”. Wouldn’t it be interesting to analyse in what way they differ from the rest?
####################
Yes, that’s what some of us are doing. Contrary to popular belief, “the models” are not falsified.
The vast majority overestimate the warming and need correction. The question is: are those that match observations any better when you look at additional metrics and additional time periods? Or can you learn something from those that do match observations to correct those that don’t?
If you are interested in looking at model “failures” with a mind toward improving our understanding, then this is what you do. If you are interested in preserving the IPCC storyline, then you ignore the failures; and if you are just interested in opposing the IPCC storyline, then you ignore the fact that some do better and argue that the whole lot are bad.
So in between the triumphalism of “the models are falsified” and blind allegiance to the IPCC storyline, there is work to do.

BrianR
September 5, 2013 7:53 am

How could the error range for modeled data be a third of observational data? That just seems counterintuitive to me.

Ian L. McQueen
September 5, 2013 7:58 am

david eisenstadt wrote about the incorrect phrase “is more than four times smaller than…..” David, you stole my thunder.
I see this kind of error frequently, and was prepared to comment here. I wrote to Scientific American some time ago about their (mis)use of the phrase and then saw it repeated several months later, so they obviously did not pay attention.
As you point out, if anything becomes one time smaller, it disappears.
IanM
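For what it’s worth, the intended meaning of the criticised phrase is a plain ratio. A trivial check using the 1998–2012 trends quoted from the Fyfe et al. commentary elsewhere in this thread (0.05 and 0.21 °C per decade):

```python
# Trends quoted in-thread from Fyfe, Gillett & Zwiers (1998-2012):
observed = 0.05     # °C per decade, HadCRUT4 observations
simulated = 0.21    # °C per decade, CMIP5 multi-model average

ratio = simulated / observed
print(f"simulated trend is {ratio:.1f} times the observed trend")  # prints 4.2
```

So the unambiguous phrasing would be “the simulated trend is more than four times the observed trend”, not “four times smaller”.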

milodonharlani
September 5, 2013 8:01 am

Gene Selkov says:
September 5, 2013 at 6:36 am
Steven Hill from Ky (the welfare state) says:
Re suing Gore:

Theo Goodwin
September 5, 2013 8:01 am

The posts above show that people who post at WUWT have achieved a degree of clarity about the differences between the views of modelers and skeptics that does not exist elsewhere. Richard S Courtney deserves a large portion of the credit for this. I want to emphasize just a point or two and I am confident that Richard will correct my errors.
1. What modelers mean by “internal variability” has nothing to do with what everyone else understands as natural variability. Take ENSO as an example. For modelers, ENSO is not a natural regularity that exists in the world apart from their models; at least, it is not worthy of scientific (empirical) investigation as a natural regularity in the world. Rather, it is a range of temperatures that sometimes runs higher and sometimes runs lower and is treated as noise. Modelers assume that these temperatures will sum to zero over long periods of time. They have no interest in attempting to predict the range of temperatures or lengths of periods. In effect, ENSO is noise for modelers. Given these assumptions, it is clear that the natural regularity cannot serve in any fashion as a bound on models. That is, a natural regularity in the real world cannot serve as a bound on models.
2. Obviously, the way that modelers think about ENSO is the way that they think about anything that a skeptic might recognize as a natural regularity that is worthy of scientific investigation in its own right and that serves as a bound on models. Modelers think of clouds the same way that they think of ENSO. They admit that the models do not handle clouds well and maybe not at all. But this admission does not really matter to them. If they could model clouds well they would treat them as noise; that is, they would assume that cloud behavior averages to zero over longer periods of time and amounts to noise. Consequently, no modeler has professional motivation to create a model that ingeniously captures cloud behavior. (Clouds are an especially touchy topic for them because changes in albedo directly limit incoming radiation. However, if you are assuming that it all sums to zero then there is no problem.)
3. Modelers care only for “the signal.” The signal, in practical terms for modelers, is the amount of change in global average temperature that can be assigned to CO2. Theoretically, the signal should include all GHGs but modelers focus on CO2. So, what are modelers trying to accomplish? They are trying to show that some part of global temperature change can be attributed to CO2. Is that science?
4. Modelers’ greatest nightmare is a lack of increase in global average temperature. If there is no increase then there is no signal of CO2 forcing. If there is no signal for a lengthy period then that fact counts, even for modelers, as evidence that their models are wrong. The length of that period cannot be calculated. Why?
5. The length of period cannot be calculated because models embody only “internal variability” and not natural variability. Recall that internal variability is noise. If all representations of natural regularities, such as ENSO, must sum to zero over long periods of time then models cannot provide an account of changes to temperature that are caused by natural variability. In other words, modelers assume that there is not some independently existing world that can bound their models.
6. The only hope for modelers is to drop their assumption that ENSO and similar natural regularities are noise. Modelers must treat ENSO as a natural phenomenon that is worthy of empirical investigation in its own right and do the same for all other natural regularities. They must require that their models are bounded by natural regularities. Modelers must drop the assumption that the temperature numbers generated by ENSO must sum to zero over a long period of time. Once they can model all or most natural regularities then they will have a background of climate change against which a signal for an external forcing such as CO2 will have meaning.

September 5, 2013 8:09 am

“Anthony and his team of volunteers found problems with the US system. Since these two systems would be considered ‘Top of the Line’ the rest of the surface station data can only be a lot worse.”
Actually there is little evidence that the US system is “top of the line”.
In terms of long-term consistency the US system is plagued by several changes that almost no other country has gone through, the most notable being the TOBS change.
Only a couple of other countries have had to make TOBS adjustments, and in no case is the adjustment as pervasive as it is in the US.
On the evidence one could argue that while the US has a very dense network of stations, the homogeneity of that network and the adjustments required put it closer to the BOTTOM of the station pile than the top of the line.
Of course that can also be answered objectively, by looking at the number of break points that the US system generates as opposed to the rest of the world.
I’ll leave it at this: there is no evidence that the US system is top of the line. There is more evidence that it has problems that other networks don’t have; for example, you have to TOBS-adjust the data. And finally, there is an objective way of telling how “top of the line” a network is. I suppose when I get some time I could take a look at that. But for now I think folks would be wise to suspend judgement (it’s not settled science) about the quality of the US network as opposed to others. Could be; could not be.

Gunga Din
September 5, 2013 8:09 am

Speaking of climate models, I made this comment some time ago.
http://wattsupwiththat.com/2012/05/12/tisdale-an-unsent-memo-to-james-hansen/#comment-985181

Gunga Din says:
May 14, 2012 at 1:21 pm

joeldshore says:
May 13, 2012 at 6:10 pm
Gunga Din: The point is that there is a very specific reason involving the type of mathematical problem it is as to why weather forecasts diverge from reality. And, the same does not apply to predicting the future climate in response to changes in forcings. It does not mean such predictions are easy or not without significant uncertainties, but the uncertainties are of a different and less severe type than you face in the weather case.
As for me, I would rather hedge my bets on the idea that most of the scientists are right than make a bet that most of the scientists are wrong and a very few scientists plus lots of the ideologues at Heartland and other think-tanks are right…But, then, that is because I trust the scientific process more than I trust right-wing ideological extremism to provide the best scientific information.

=========================================================
What will the price of tea in China be each year for the next 100 years? If Chinese farmers plant less tea, will the replacement crop use more or less CO2? What values would represent those variables? Does salt water sequester or release more or less CO2 than freshwater? If the icecaps melt and increase the volume of saltwater, what effect will that have year by year on CO2? If nations build more dams for drinking water and hydropower, how will that impact CO2? What about the loss of dry land? What values do you give to those variables? If a tree falls in the woods allowing more growth on the forest floor, do the ground plants have a greater or lesser impact on CO2? How many trees will fall in the next 100 years? Values, please. Will the UK continue to pour milk down the drain? How much milk do other countries pour down the drain? What if they pour it on the ground instead? Does it make a difference if we’re talking cow milk or goat milk? Does putting scraps of cheese down the garbage disposal have a greater or lesser impact than putting it in the trash or composting it? Will Iran try to nuke Israel? Pakistan India? India Pakistan? North Korea South Korea? In the next 100 years what other nations might obtain nukes and launch? Your formula will need values. How many volcanoes will erupt? How large will those eruptions be? How many new ones will develop and erupt? Undersea vents? What effect will they all have year by year? We need numbers for all these things. Will the predicted “extreme weather” events kill many people? What impact will the erasure of those carbon footprints have year by year? Of course there’s this little thing called the Sun and its variability. Year by year numbers, please. If a butterfly flaps its wings in China, will forcings cause a tornado in Kansas? Of course, the formula all these numbers are plugged into will have to accurately reflect each one’s impact on all of the other values and numbers mentioned so far plus lots, lots more.
That amounts to lots and lots and lots of circular references. (And of course the single most important question: will Gilligan get off the island before the next Super Moon? Sorry. 😎)
There have been many short range and long range climate predictions made over the years. Some of them are 10, 20 and 30 years down range now from when the trigger was pulled. How many have been on target? How many are way off target?
Bet your own money on them if want, not mine or my kids or their kids or their kids etc.

richardscourtney
September 5, 2013 8:09 am

Steven Mosher:
At September 5, 2013 at 7:53 am
http://wattsupwiththat.com/2013/09/05/statistical-proof-of-the-pause-overestimated-global-warming-over-the-past-20-years/#comment-1408576
you assert

Contrary to popular belief “the models” are not falsified.

Oh, dear! NO!
It seems I need to post the following yet again on WUWT.
None of the models – not one of them – could match the change in mean global temperature over the past century if it did not utilise a unique value of assumed cooling from aerosols. So, inputting actual values of the cooling effect (such as the determination by Penner et al.
http://www.pnas.org/content/early/2011/07/25/1018526108.full.pdf?with-ds=yes )
would make every climate model provide a mismatch of the global warming it hindcasts and the observed global warming for the twentieth century.
This mismatch would occur because all the global climate models and energy balance models are known to provide indications which are based on
1.
the assumed degree of forcings resulting from human activity that produce warming
and
2.
the assumed degree of anthropogenic aerosol cooling input to each model as a ‘fiddle factor’ to obtain agreement between past average global temperature and the model’s indications of average global temperature.
More than a decade ago I published a peer-reviewed paper that showed the UK’s Hadley Centre general circulation model (GCM) could not model climate and only obtained agreement between past average global temperature and the model’s indications of average global temperature by forcing the agreement with an input of assumed anthropogenic aerosol cooling.
The input of assumed anthropogenic aerosol cooling is needed because the model ‘ran hot’; i.e. it showed an amount and a rate of global warming which was greater than was observed over the twentieth century. This failure of the model was compensated by the input of assumed anthropogenic aerosol cooling.
And my paper demonstrated that the assumption of aerosol effects being responsible for the model’s failure was incorrect.
(ref. Courtney RS An assessment of validation experiments conducted on computer models of global climate using the general circulation model of the UK’s Hadley Centre Energy & Environment, Volume 10, Number 5, pp. 491-502, September 1999).
More recently, in 2007, Kiehl published a paper that assessed 9 GCMs and two energy balance models.
(ref. Kiehl JT, Twentieth century climate model response and climate sensitivity. GRL vol. 34, L22710, doi:10.1029/2007GL031383, 2007).
Kiehl found the same as my paper except that each model he assessed used a different aerosol ‘fix’ from every other model. This is because they all ‘run hot’ but they each ‘run hot’ to a different degree.
He says in his paper:

One curious aspect of this result is that it is also well known [Houghton et al., 2001] that the same models that agree in simulating the anomaly in surface air temperature differ significantly in their predicted climate sensitivity. The cited range in climate sensitivity from a wide collection of models is usually 1.5 to 4.5 deg C for a doubling of CO2, where most global climate models used for climate change studies vary by at least a factor of two in equilibrium sensitivity.
The question is:
if climate models differ by a factor of 2 to 3 in their climate sensitivity, how can they all simulate the global temperature record with a reasonable degree of accuracy.
Kerr [2007] and S. E. Schwartz et al. (Quantifying climate change–too rosy a picture?, available at
http://www.nature.com/reports/climatechange, 2007)
recently pointed out the importance of understanding the answer to this question. Indeed, Kerr [2007] referred to the present work and the current paper provides the ‘‘widely circulated analysis’’ referred to by Kerr [2007]. This report investigates the most probable explanation for such an agreement. It uses published results from a wide variety of model simulations to understand this apparent paradox between model climate responses for the 20th century, but diverse climate model sensitivity.


And, importantly, Kiehl’s paper says:

These results explain to a large degree why models with such diverse climate sensitivities can all simulate the global anomaly in surface temperature. The magnitude of applied anthropogenic total forcing compensates for the model sensitivity.

And the “magnitude of applied anthropogenic total forcing” is fixed in each model by the input value of aerosol forcing.
Thanks to Bill Illis, Kiehl’s Figure 2 can be seen at
http://img36.imageshack.us/img36/8167/kiehl2007figure2.png
Please note that the Figure is for 9 GCMs and 2 energy balance models, and its title is:

Figure 2. Total anthropogenic forcing (W m^-2) versus aerosol forcing (W m^-2) from nine fully coupled climate models and two energy balance models used to simulate the 20th century.

It shows that
(a) each model uses a different value for “Total anthropogenic forcing” that is in the range 0.80 W m^-2 to 2.02 W m^-2
but
(b) each model is forced to agree with the rate of past warming by using a different value for “Aerosol forcing” that is in the range -1.42 W m^-2 to -0.60 W m^-2.
In other words the models use values of “Total anthropogenic forcing” that differ by a factor of more than 2.5 and they are ‘adjusted’ by using values of assumed “Aerosol forcing” that differ by a factor of 2.4.
So, each climate model emulates a different climate system. Hence, at most only one of them emulates the climate system of the real Earth because there is only one Earth. And the fact that they each ‘run hot’ unless fiddled by use of a completely arbitrary ‘aerosol cooling’ strongly suggests that none of them emulates the climate system of the real Earth.
Richard
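As a quick check, the factor-of-2.5 and factor-of-2.4 figures in the comment above follow directly from the endpoints quoted from Kiehl’s Figure 2 (a trivial sketch; only those four numbers are used):

```python
# Ranges quoted from Kiehl (2007), Figure 2, in W/m^2:
total_anthro_min, total_anthro_max = 0.80, 2.02    # total anthropogenic forcing
aerosol_min, aerosol_max = -1.42, -0.60            # assumed aerosol forcing

total_factor = total_anthro_max / total_anthro_min       # ~2.5
aerosol_factor = abs(aerosol_min) / abs(aerosol_max)     # ~2.4

print(f"total anthropogenic forcing spans a factor of {total_factor:.2f}")
print(f"assumed aerosol forcing spans a factor of {aerosol_factor:.2f}")
```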

Gene Selkov
September 5, 2013 8:14 am

ferd berple says:
> that really is the crux of the problem. the assumption that natural variability is simply noise around a mean and thus will average out to zero over short periods of time.
This assumption is taught at school but is almost never tested. There is something profoundly counter-intuitive in the way averages are assessed today. I would have allowed some slack a hundred or two hundred years ago, when all measurements were tedious, time-consuming, and difficult to track, so we had to replace actual data with the central limit theorem.
There is no such hurdle today, in most cases. Many different types of measurements can be automated and the question of whether they converge or not, and how they vary (chaotically or not), can be resolved in straightforward ways. Instead, everybody still uses estimators, often preferring those that hide the nature of variability.
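Gene Selkov’s point that convergence can be tested in straightforward ways might look like the following minimal sketch (synthetic data, purely illustrative): compare a series that really is noise around a fixed mean with one that is integrated noise, by asking whether the running mean settles down as more data arrive.

```python
import random

def running_mean_gap(series):
    """Gap between the running mean at the halfway point and at the end.

    If the series is noise around a fixed mean, the gap shrinks as the
    series grows; if it has slow structure (e.g. a random walk), the
    'average' never settles and the gap stays large."""
    n = len(series)
    half_mean = sum(series[: n // 2]) / (n // 2)
    full_mean = sum(series) / n
    return abs(full_mean - half_mean)

random.seed(42)
n = 100_000
noise = [random.gauss(0.0, 1.0) for _ in range(n)]   # i.i.d. noise about 0

walk, s = [], 0.0
for _ in range(n):                                   # integrated noise: no fixed mean
    s += random.gauss(0.0, 1.0)
    walk.append(s)

print(running_mean_gap(noise))   # small: the average is converging
print(running_mean_gap(walk))    # much larger: 'the average' keeps moving
```

The same diagnostic could be run on any automated measurement stream instead of assuming convergence from the outset.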

Jim G
September 5, 2013 8:18 am

I guess Niels Bohr was right when he said, “Prediction is very difficult, especially about the future”. And Yogi Berra said much the same: “It’s tough to make predictions, especially about the future”. A philosopher and a scientist in agreement.

richardscourtney
September 5, 2013 8:26 am

Theo Goodwin:
Thankyou for your obviously flattering mentions of me (I wish they were true) in your post at
September 5, 2013 at 8:01 am.
You ask for me to comment on points I disagree in that post. I have several knit-picking points which do not deserve mention, but there is one clarification which needs to be made.
The models parametrise effects of clouds because clouds are too small for them to be included in the large grid sizes of models. Hence, if clouds were understood (they are not) then their effects could only be included as estimates and averages (i.e. guesses).
Also, I have made a post which refutes the climate models on much more fundamental grounds than yours but – for some reason – it is in moderation.
Richard
PS Before some pedant jumps in saying “knit-picking” should be “nit-picking” because nits are insects, I used the correct spelling. Knit-picking was a fastidious task in Lancashire weaving mills. Small knots (called “knits”) occurred and reduced the value of cloth. For the best quality cloth these knits had to be detected, picked apart and the cloth repaired by hand. It was a detailed activity which was pointless for most cloth and was only conducted when the very best cloth was required.

milodonharlani
September 5, 2013 8:26 am

ENSO variability during the Little Ice Age & the “Medieval Climate Anomaly”, as the MWP is now politically correctly called:
http://repositories.lib.utexas.edu/handle/2152/19622
Climate scientists are only now getting around to addressing the question of natural variability that should have preceded any finding of an “unprecedented human fingerprint”.

Jean Parisot
September 5, 2013 8:27 am

“See my comment above on Dansgaard-Oeschger (D-O) events. They are called Bond events during an interglacial.”
What we really need is a tool or decision matrix that attempts to identify the start of one of these D-O or Bond events. All of the effort invested in trying to measure, explain, and manage the change in slope of the global temperature trend isn’t important in comparison to the need for a tool to detect these events as soon as possible. I’ve been impressed with how modern agriculture in the US and Canada responded to this year’s cooling change, but a global event will take more time.
We know they happen regularly and we know the magnitude; that seems a bit more important than a tiny warming trend, regardless of the cause.

September 5, 2013 8:34 am

If Bart said that 2+2 was 3 and Sally said it was 5, would we conclude that “on average” they’d been taught good math skills?
The notion of averaging the output of different models and then comparing the result to observations is ludicrous in itself.
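The Bart-and-Sally example above is easy to make concrete; a two-line sketch of why a well-behaved ensemble mean says nothing about the members:

```python
# Bart and Sally from the comment above: both wrong, average "right".
truth = 2 + 2                        # 4
answers = [3, 5]                     # the two "models"
ensemble_mean = sum(answers) / len(answers)

print(ensemble_mean == truth)        # True: the ensemble mean matches reality
print([a - truth for a in answers])  # yet every member errs: [-1, 1]
```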

Theo Goodwin
September 5, 2013 8:35 am

richardscourtney says:
September 5, 2013 at 8:26 am
Thanks for your clarification. I look forward to your post. I did not mean to flatter you. You are a tireless and gifted explainer. That is not flattery. (Oh, it occurs to me I can offer a bit of advice. Beware the trolls lest they distract you.)

David S
September 5, 2013 8:38 am

How many times have we seen new evidence that AGW is baloney? Many times of course. And yet the government continues its claim that AGW is a huge problem that must be dealt with. So here’s the problem: We live in something similar to George Orwell’s 1984. Reality and truth no longer matter. The correct answer is whatever the government says it is, reality notwithstanding. Anyone who disagrees gets electric shocks until he does agree. Ok they haven’t started the electric shocks yet but the skeptics are labeled “deniers” and some folks suggest they be sent to re-education camps.

Theo Goodwin
September 5, 2013 8:42 am

Gene Selkov says:
September 5, 2013 at 8:14 am
I know. I want to ask them if they have not heard of computers. On the other hand, it is no surprise that data management is the weakest link in their computing chain. Setting aside the question of their basic assumptions for the moment.

Theo Goodwin
September 5, 2013 8:45 am

Jean Parisot says:
September 5, 2013 at 8:27 am
Yes, work on the matrix is very important. However, to do that you must have reasonably good historical data and you must believe that nature exists outside your computer. Alarmists have trouble with both ideas.

JJ
September 5, 2013 8:50 am

ferd berple says:
that really is the crux of the problem. the assumption that natural variability is simply noise around a mean.

That, and the assumption that natural variability started in 1998…

September 5, 2013 9:07 am

Commentary from Nature Climate Change, by John C. Fyfe, Nathan P. Gillett, & Francis W. Zwiers said,
“For this period, the observed trend of 0.05 ± 0.08 °C per decade is more than four times smaller than the average simulated trend of 0.21 ± 0.03 °C per decade (Fig. 1b). It is worth noting that the observed trend over this period — not significantly different from zero — suggests a temporary ‘hiatus’ in global warming. The divergence between observed and CMIP5-simulated global warming begins in the early 1990s, as can be seen when comparing observed and simulated running trends from 1970–2012 (Fig. 2a and 2b for 20-year and 15-year running trends, respectively). The evidence, therefore, indicates that the current generation of climate models (when run as a group, with the CMIP5 prescribed forcings) do not reproduce the observed global warming over the past 20 years, or the slowdown in global warming over the past fifteen years.”

– – – – – – – –
Why use the term ‘global warming’ in that passage when the dispassionate and / or indifferent term would be something like ‘temperature changes’. If one constructs that passage with ‘temperature changes’ as a context instead of ‘global warming’ as a context then the passage would be clearer science communication with minimum implied presumption of things like ‘global warming’.
For me, the repeated use of ‘global warming’ (4 times in just that passage alone) is the essence of hidden flawed premises. Where there are hidden flawed premises one expects circumstantial conclusions at best and at worst convenient conclusions.
A good strategy is to now stop playing the already gamed GW game. To do so one needs to disallow the biased terminology that predetermines a general context and spin of the outcomes.
Better to use a different set of terms which are scientifically dispassionate and non-spun.
John
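The 20-year and 15-year running trends discussed in the quoted passage are straightforward to compute; a minimal sketch, using an invented synthetic series rather than the actual HadCRUT4 data:

```python
import numpy as np

def running_trends(years, temps, window):
    """OLS slope over each `window`-year sliding span, in deg C per decade."""
    out = []
    for start in range(len(years) - window + 1):
        y = years[start:start + window]
        t = temps[start:start + window]
        slope = np.polyfit(y, t, 1)[0]            # deg C per year
        out.append((y[0], y[-1], slope * 10.0))   # convert to per decade
    return out

# Synthetic series: 0.2 C/decade underlying trend plus noise, 1970-2012
rng = np.random.default_rng(0)
years = np.arange(1970, 2013)
temps = 0.02 * (years - 1970) + rng.normal(0.0, 0.1, years.size)

for y0, y1, tr in running_trends(years, temps, 20)[-3:]:
    print(f"{y0}-{y1}: {tr:+.2f} C/decade")
```

Plotting these sliding-window slopes against their end year, on the real observed and simulated series, is what produces running-trend comparisons like the commentary's Fig. 2a and 2b.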

James Strom
September 5, 2013 9:08 am

richard verney says:
September 5, 2013 at 2:36 am
Bingo! A “pause” in warming is compatible with theory, but a pause coincident with a substantial increase in the main forcing factor is much less so.

John F. Hultquist
September 5, 2013 9:08 am

Gail Combs says:
September 5, 2013 at 7:14 am
>>>>>>>>>>>>>>>>
“It varies. The Horse Latitudes (between 30 and 35 degrees, north and south) were called that because of all the dead horses tossed overboard when the sailing ships got stuck in a no wind situation.”

This is one of at least 3 explanations for the term “Horse Latitudes” and not my favorite.
First – Spanish ships perhaps carried many horses over the years but these sailors were not inclined to sail into the subtropical high pressure zones as they knew they were there, and a Spaniard would be disinclined to throw horses overboard. Some of the crew might go over first. Also, why is the zone named in English and not Spanish? [Paintings exist of unloading horses where there were no docks by forcing them off the deck and into the ocean and then leading them on to land. The difference was important to the horse.]
Next – the English ships carried “napped” crew members.
http://www.worldwidewords.org/topicalwords/tw-nap1.htm
Some of these were taken from pubs where a bar bill was owed – paid by the ship’s crew-gathering agents. The new “sailor” did not earn wages until this “dead horse” payment was recovered by the ship’s purse. About 2–3 weeks out from England the “dead horse” was paid off and the sailor would begin to be paid. Paying off the dead horse was cause for celebration, so an effigy of a horse (a straw horse) would be hoisted over the water and cut loose to drift in the sea. Songs were sung – shanties. This one is well known – The Dead Horse Shanty:
http://shanty.rendance.org/lyrics/showlyric.php/horse
—–
Another explanation for the term “horse latitudes” comes from the phrase “to horse” in the sense of “to push” or “to pull” something that doesn’t want to go. Sails without wind would present such an occasion and might induce a crew to try to pull (by rowing) a ship out of a calm area. This explanation requires that one believe the English sailors were unaware of the STHP zones and frequently found themselves therein. Thus would begin an argument about whether the Spanish or the English were the better sailors. Don’t go there. But it would explain the use of English words for the phrase.
There is also the confusion between the “doldrums” and the horse latitudes.
Day after day, day after day,
We stuck, nor breath nor motion;
As idle as a painted ship
Upon a painted ocean.

See the Rime of the Ancient Mariner by the English poet Samuel Taylor Coleridge – in the lines above speaking of the equatorial area doldrums and not the STHP “horse latitudes.”

September 5, 2013 9:10 am

Steven Mosher says:
September 5, 2013 at 7:53 am
…..if you’re just interested in opposing the IPCC storyline then you
just ignore the fact that some do better and you argue that the whole lot are bad.
so…which models do you feel do the better job?

Theo Goodwin
September 5, 2013 9:17 am

david eisenstadt says:
September 5, 2013 at 9:10 am
Steven Mosher says:
September 5, 2013 at 7:53 am
…..if you’re just interested in opposing the IPCC storyline then you
just ignore the fact that some do better and you argue that the whole lot are bad.
“so…which models do you feel do the better job?”
Interesting question because it might elicit an interesting answer. But, as you know, all the models are based on the same circular reasoning. What is the probability that a worthless model will produce a curve that seems to match reality?

Theo Goodwin
September 5, 2013 9:24 am

ferd berple says:
September 5, 2013 at 7:05 am
Very well said. The “radiation-only theory” used by all Alarmists is purely deterministic. No chaos there, no attractors. Worse, it is simply unwilling to posit the existence of natural regularities that affect temperatures. It is not bounded by reality.

richardscourtney
September 5, 2013 9:28 am

davidmhoffer:
I am disappointed that there have been no congratulations for your excellent post at September 5, 2013 at 8:34 am which says in total

If Bart said that 2+2 was 3 and Sally said it was 5, would we conclude that “on average” they’d been taught good math skills?
The notion of averaging the output of different models and then comparing them to observations is ludicrous unto itself.

Perhaps this will help people to understand your profound point.
Average wrong is wrong.
Richard
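The “average wrong is wrong” point has a trivial numerical illustration; the trend numbers below are invented for the example, loosely echoing the article’s figures:

```python
# Two hypothetical "models" of a warming trend (units: C/decade).
observed = 0.14
model_a = 0.02    # far too little warming
model_b = 0.26    # far too much warming

ensemble_mean = (model_a + model_b) / 2
print(ensemble_mean)    # essentially matches the observation...

# ...yet each individual model is off by the same large amount:
errors = [abs(m - observed) for m in (model_a, model_b)]
print(errors)
```

The ensemble mean scores perfectly while neither model is any good, which is exactly why agreement of a multi-model average with observations, on its own, validates nothing.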

Frank K.
September 5, 2013 9:28 am

“The point is that there is a very specific reason involving the type of mathematical problem it is as to why weather forecasts diverge from reality. And, the same does not apply to predicting the future climate in response to changes in forcings. It does not mean such predictions are easy or not without significant uncertainties, but the uncertainties are of a different and less severe type than you face in the weather case.”
No they are NOT, but we’ve been through this before [sigh]…
* Climate models are highly non-linear, coupled sets of differential equations, with associated boundary and initial conditions which are, for many variables, poorly known.
* Climate models are NOT boundary value problems but initial value problems, and are prone to numerical instabilities and error after running for many time steps. To squash these errors, modelers introduce unphysical smoothing and other numerical tricks.
* There are NO guaranteed solutions to these equations, numerically or otherwise. The models as formulated may even be ill-posed, though that is often difficult to assess due to the very poor documentation provided by the developers in some cases (the most prominent of which is NASA GISS and their awful “Model E”).
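The instability point is easy to demonstrate on a toy problem; a sketch of explicit Euler applied to a stiff linear ODE (the equation and step sizes are illustrative assumptions, not taken from any actual GCM):

```python
# dy/dt = -k*y with k large (a stiff problem): the true solution decays
# smoothly to zero, but explicit Euler is only stable for dt < 2/k.
def euler(k, dt, steps, y0=1.0):
    y = y0
    for _ in range(steps):
        y = y + dt * (-k * y)   # one explicit Euler time step
    return y

k = 100.0
print(euler(k, 0.001, 1000))  # dt well under 2/k = 0.02: decays toward 0
print(euler(k, 0.05, 50))     # dt > 2/k: each step multiplies y by -4,
                              # so the numerical "solution" explodes
```

Restoring stability requires smaller steps, implicit schemes, or artificial damping, which is one reason modelers resort to the smoothing tricks mentioned above.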

Gail Combs
September 5, 2013 9:31 am

David S says: @ September 5, 2013 at 8:38 am
…. Ok they haven’t started the electric shocks yet but the skeptics are labeled “deniers” and some folks suggest they be sent to re-education camps.
>>>>>>>>>>>>>>>>>>
NAH, they will just send in a SWAT team to scare you.

David Ball
September 5, 2013 9:35 am

This is a much better experiment;

Richard Barraclough
September 5, 2013 9:40 am

Knit-picking ??? Unlike climate science, in language, consensus is all-important. Nitpicking, with no hyphen, is the accepted word.

James Strom
September 5, 2013 9:42 am

kadaka (KD Knoebel) says:
September 5, 2013 at 5:49 am
gnomish said on September 5, 2013 at 2:59 am:
kadaka, your experiment will not make your desired point unless a) your container has no bottom and b) your “water” has no contaminants–just like the ocean.

Stephen Rasey
September 5, 2013 9:52 am

@Steven Mosher 8:09 am

On the evidence one could argue that while the US has a very dense network of stations the homogeneity of that network and the adjustments required put it more toward the BOTTOM of the station piles than the Top of the line.
Of course that can also be answered objectively by looking at the number of break points that US systems generate as opposed to the rest of the world.

A counter-hypothesis is that the Berkeley Earth scalpel runs amok with high-density data, because the homogeneity of the network is an invalid assumption.
…what to me appears to be a minimally discussed wholesale decimation and counterfeiting of low-frequency information happening within the BEST process. Dec. 13, 2012 (Circular Logic….)
—-
The [AGU Dec 2012] poster does NOT assuage my concerns. It reinforces that I have not misunderstood the BEST process. “Results” amounts to comparing two untrustworthy methods with similar assumptions against each other. ….
The Rohde 2013 paper uses synthetic error free data. The scalpel is not mentioned. My concern is the use of the scalpel on real, error riddled data.
Jan 21, 2013 5:58pm (Berkley Earth finally….)
A class of events called Recalibration…. A property of this “recalibration class” is that there is a slow buildup of instrument drift, then a quick, discontinuous offset to restore calibration [the scalpel cuts at what appear to be discontinuities]. Not only are instrument drift and climate signal inseparable, we have multiplied the drift in the overall record by discarding the correcting recalibration at the discontinuities. The scalpel is discarding the recalibrations, keeping the drift. Jan 23, 2013 11:30 am (ibid)
The denser the network, the more likely the scalpel will make the cut at recalibration events because most of the neighbors are not recalibrated at the same time.
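The concern can be caricatured in a few lines; a sketch (not the actual BEST algorithm) of a drifting sensor that is periodically recalibrated, with a “scalpel” that cuts the record at the recalibration steps:

```python
import numpy as np

# A sensor measuring a constant (zero-trend) quantity, but with slow upward
# instrument drift that is wiped out by a recalibration every 25 steps.
t = np.arange(100)
drift_rate = 0.01                       # spurious warming per step
record = np.zeros(100)
offset = 0.0
for i in t:
    if i % 25 == 0:
        offset = 0.0                    # recalibration: discontinuous reset
    record[i] = offset                  # true signal is identically zero
    offset += drift_rate

# "Scalpel": cut at the discontinuities and estimate a trend per segment.
segment_trends = [np.polyfit(np.arange(25), record[s:s + 25], 1)[0]
                  for s in range(0, 100, 25)]

print(np.mean(segment_trends))          # ~drift_rate: the drift survives,
                                        # the correcting resets are discarded
print(np.polyfit(t, record, 1)[0])      # full-record trend is far smaller
```

In this caricature the mean of the segment trends equals the drift rate even though the measured quantity never changed, while the uncut record shows almost no trend: cutting at the resets keeps the drift and throws away the corrections.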

September 5, 2013 9:53 am

Steven Mosher
Simple steps of research: The null hypothesis is that there is no significant difference between models and observations. Define the measure – in this case, the degree to which models are discrepant from observations. Run the experimental models and take measures of discrepancy between models and observations of temperature and CO2. Look at the results. Dang. We must reject the null hypothesis and accept that there is a significant discrepancy. The models are falsified in that they do not reflect observed temperature measures in the face of observed increasing CO2. The next phase should be to examine the whys and do more work on the models, possibly even rejecting or severely trimming the case for CO2.
There is no reason to go all adolescent angsty over the term “falsified”. If really good experimenters did that we would not have a lightbulb that works. And let me add, one that works a %$#*& of a lot better than the “is the light on I can’t tell” twisty ones.
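With the numbers quoted in the article (0.14 ± 0.06 °C per decade observed versus 0.30 ± 0.02 °C per decade simulated), the null-hypothesis check described above can be written down directly; a crude interval-overlap sketch, not a formal two-sample test:

```python
def intervals_overlap(center_a, half_a, center_b, half_b):
    """True if [a - half_a, a + half_a] and [b - half_b, b + half_b]
    share any common ground."""
    return abs(center_a - center_b) <= half_a + half_b

obs_trend, obs_ci = 0.14, 0.06   # observed trend, 95% CI (per the article)
mod_trend, mod_ci = 0.30, 0.02   # CMIP5 model-mean trend, 95% CI (per the article)

if intervals_overlap(obs_trend, obs_ci, mod_trend, mod_ci):
    print("cannot reject the null: models consistent with observations")
else:
    print("reject the null: the intervals do not even touch")
```

With these numbers the gap between the centers (0.16) exceeds the combined half-widths (0.08), so the intervals are disjoint; a formal test would be even stronger, since interval overlap is a conservative criterion.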

richardscourtney
September 5, 2013 9:56 am

Richard Barraclough:
Thanks for your post at September 5, 2013 at 9:40 am.
I enjoyed that.
Richard

September 5, 2013 9:58 am

Or, the modeled average is what you expect but never get. ROTFLMAO!

September 5, 2013 10:00 am

If they had plotted the SST data,(which is the best metric for climate change) from 2003 when the warming peaked they would see the current cooling trend – but that would be a step too far for Nature. For an estimate of the coming cooling see
http://climatesense-norpag.blogspot.com/2013/07/skillful-so-far-thirty-year-climate.html

wobble
September 5, 2013 10:03 am

I always thought the CAGW modelers were claiming that their failed predictions were simply a matter of variability.
I would “correctly” model winnings from a coin toss game by predicting that I make nothing on each toss. Obviously, this isn’t going to happen. I’m obviously going to win some and lose some. In the long term, I should win/lose nothing, but in the near term I might win or lose quite a bit. For example, after 15 tosses, it’s possible that I’ve won 10 and only lost 5 for net winnings of 5. This would erroneously suggest a trend of 0.33 wins per toss.
Likewise, I thought the modelers were claiming that their models are correct and that time will eventually prove this.
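The coin-toss intuition is easy to simulate; a sketch with 15 tosses per run, as in the comment:

```python
import random

def net_winnings(n_tosses, rng):
    """+1 for a win, -1 for a loss; cumulative net after n_tosses fair tosses."""
    return sum(1 if rng.random() < 0.5 else -1 for _ in range(n_tosses))

rng = random.Random(42)
runs = [net_winnings(15, rng) for _ in range(10_000)]

mean = sum(runs) / len(runs)
print(f"mean net after 15 tosses over 10,000 runs: {mean:+.3f}")  # near zero
print(f"largest single-run excursion: {max(abs(r) for r in runs)}")
# Individual 15-toss runs routinely show a net of +/-5 or more -- a spurious
# "trend" of a third of a win per toss, exactly as described above.
```

The long-run mean sits near zero while short runs wander widely, which is the modelers' variability defense in miniature; the question is how long the wandering can continue before the "fair coin" model itself is in doubt.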

Gunga Din
September 5, 2013 10:05 am

davidmhoffer says:
September 5, 2013 at 8:34 am
If Bart said that 2+2 was 3 and Sally said it was 5, would we conclude that “on average” they’d been taught good math skills?
The notion of averaging the output of different models and then comparing them to observations is ludicrous unto itself.

==================================================================
Depends who you ask.
http://www.foxnews.com/us/2013/08/30/new-age-education-fuzzy-math-and-less-fiction/

Stephen Rasey
September 5, 2013 10:13 am

@richardscourtney at 9:56 am
Re: Richard Barraclough: at 9:40 am. Knit-picking vs. Nitpicking
I think Knit-picking is the better term.
Pull loose threads of logic and data and see what comes unraveled.

September 5, 2013 10:29 am

At least dr. Norman Page got it right
http://wattsupwiththat.com/2013/09/05/statistical-proof-of-the-pause-overestimated-global-warming-over-the-past-20-years/#comment-1408688
henry says
but there are only a few of us
http://blogs.24.com/henryp/2013/04/29/the-climate-is-changing/
who really know what is coming:
HOW CAN WE STOP THIS GLOBAL COOLING?
It looks like all the media and the whole world still believe that somehow global warming will soon be back on track again. Clearly, as shown, this is just wishful thinking. All current results show that global cooling will continue. As pointed out earlier, those that think that we can put more carbon dioxide in the air to stop the cooling are just not being realistic. There really is no hard evidence supporting the notion that (more) CO2 is causing any (more) warming of the planet, whatsoever. On the same issue, there are those that argue that it is better to be safe than sorry; but, really, as things are looking now, they are also beginning to stand in the way of progress. Those still pointing to melting ice and glaciers, as “proof” that it is (still) warming, and not cooling, should remember that there is a lag between energy-in and energy-out. Counting back 88 years, i.e. 2013 − 88, we are in 1925.
Now look at some eyewitness reports of the ice back then:
http://wattsupwiththat.com/2008/03/16/you-ask-i-provide-november-2nd-1922-arctic-ocean-getting-warm-seals-vanish-and-icebergs-melt/
Sounds familiar? Back then, in 1922, they had seen that the arctic ice melt was due to the warmer Gulf Stream waters. However, by 1950 all that same ‘lost’ ice had frozen back. I therefore predict that all lost arctic ice will also come back from 2020–2035, as also happened from 1935–1950. Antarctic ice is already increasing.
To those actively involved in trying to suppress the temperature results as they are available on-line from official sources, I say: let fools stay fools if they want to be. Fiddle with the data they can, to save their jobs, but people still having to shovel snow in late spring will soon begin to doubt the data… Check the worry in my eyes when they censor me. Under normal circumstances I would have let things rest there and just be happy to know the truth for myself. Indeed, I let things lie a bit. However, chances are that humanity will fall into the pit of global cooling, and later I would blame myself for not having done enough to try to safeguard food production for 7 billion people and counting.
It really was very cold in 1940′s….The Dust Bowl drought 1932-1939 was one of the worst environmental disasters of the Twentieth Century anywhere in the world. Three million people left their farms on the Great Plains during the drought and half a million migrated to other states, almost all to the West. http://www.ldeo.columbia.edu/res/div/ocp/drought/dust_storms.shtml
I find that as we move back up from the deep end of the 88-year sine wave, there will be a standstill in the speed of cooling at the bottom of the wave, and therefore, naturally, there will also be a lull in pressure difference at that >40 latitude where the Dust Bowl drought took place, meaning: no wind and no weather (read: rain). However, one would apparently note this from an earlier change in direction of the wind, as was the case in Joseph’s time. According to my calculations, this will start around 2020 or 2021….. i.e. 1927 = 2016 (projected, by myself and the planets…) > add 5 years and we are in 2021.
Danger from global cooling is documented and provable. It looks like we have only ca. 7 “fat” years left……
WHAT MUST WE DO?
1) We urgently need to develop and encourage more agriculture at lower latitudes, like in Africa and/or South America. This is where we can expect to find warmth and more rain during a global cooling period.
2) We need to tell the farmers living at the higher latitudes (>40) who already suffered poor crops due to the cold and/ or due to the droughts that things are not going to get better there for the next few decades. It will only get worse as time goes by.
3) We also have to provide more protection against more precipitation at certain places of lower latitudes (FLOODS!),

milodonharlani
September 5, 2013 10:38 am

Richard Barraclough says:
September 5, 2013 at 9:40 am
Knit-picking ??? Unlike climate science, in language, consensus is all-important. Nitpicking, with no hyphen, is the accepted word.
————————
Accepted because it’s the actual word, referring to the eggs of lice. “Knit-picking” is bogus folk etymology with no historical basis whatsoever. There is however a form of knitting called picking.

Bruce Cobb
September 5, 2013 10:53 am

@milodon, yes. It helps to know the entomology of a word.

milodonharlani
September 5, 2013 10:55 am

Bruce Cobb says:
September 5, 2013 at 10:53 am
Ouch!

Tad
September 5, 2013 11:01 am

I feel that these types of analyses aren’t appropriate because the mean temperature does not follow a linear process over time. It’s some combination of orbital movements, weather patterns, ocean circulation, perhaps volcanic activity, and a bit due to mankind’s activities of one sort or another. That said, the author is using the alarmists’ own methods against them and I guess it’s fine for that.

September 5, 2013 11:08 am

David Ball says: September 5, 2013 at 9:35 am
……………….
An excellent experiment.
Heat absorbed by the world oceans is moved around by the major currents, most notably by the Gulf Stream and its extension in the North Atlantic, the Kuroshio–Oyashio current system in the North Pacific and the equatorial currents in the Central Pacific.
In order to influence global climate, these major currents’ heat transport (the current’s velocity, volume or both) has to change (assuming relatively steady solar input).
One could speculate about causes of such changes, either of global or local proportions.
It is somewhat odd to think that a local cause is the primary factor, but from the data I have looked at, that appears to be the case, as listed here:
AMO – Far North Atlantic Tectonics
PDO – Kamchatka – Aleutian Archipelago Tectonics
ENSO (SOI) – Central Pacific Tectonics
http://www.vukcevic.talktalk.net/TDs.htm

jbird
September 5, 2013 11:09 am

@Jonnya99: “I am pleasantly astounded at how quickly discussion of the ‘pause’ has passed from heresy to mainstream. Now all someone has to do is publish the ultimate taboo: natural variability can push temperatures up as well as down.”
Good observation, although there are still a few who claim that there is no actual pause. The AGW faithful will cling to the idea for as long as they can without addressing the obvious questions:
(1) Was natural variability addressed in the models? If not, why not?
(2) If the models cannot accurately address natural variability are they reliable at all?
(3) If the pause is caused by natural variability then (as you note) can they “push temperatures up as well as down?”
My guess is that the MSM will quietly let this issue die by simply publishing less and less about it in the coming months. Funding for continued “research” and for anti-fossil fuel, environmental “advocacy” will dry up. Both of these things are happening now.

September 5, 2013 11:14 am

@TheoGoodwin Thank you for your description.
Question: Do the models essentially assume that all temperature variability is caused by CO2 levels plus noise?
If so and the correct model is that there are a multitude of factors (sun, oceans, clouds, volcanos…..) affecting temperatures, then we could observe the results we have observed.
When the alarmists say “the models are based on the laws of physics”, how can they make that claim and leave out the “forcings” from the sun, the oceans, clouds, etc.?

BBould
September 5, 2013 11:29 am

Richardscourtney: The Bayesian Treatment of Auxiliary Hypotheses – This paper examines the standard Bayesian solution to the Quine-Duhem problem, the problem of distributing blame between a theory and its auxiliary hypotheses in the aftermath of a failed prediction.
http://joelvelasco.net/teaching/3865/strevens%20auxhyp.pdf
This is a bit over my head and I may have missed the mark and it is not relevant, but it may be why the models have not been falsified.

M Courtney
September 5, 2013 11:53 am

Nits or Knits will tie you in knots or not… if you remember the purpose of words.
That is, they communicate.
And they carry several forms of knowledge;
-The literal meaning (we all know what words mean)
-The emotional content (for which remembrances of things past are important)
-The tone (for which a confrontational change of rhythm may be important)
-The beat (for which word and sound length matter and it helps readability)
-Probably more…
So nits or knots or gnats are nuts.
What matters is the ease of conveying your message as persuasively, or as entertainingly, as possible.

Stephen Rasey
September 5, 2013 11:55 am

@kellyhaughton at 11:14 am
When the alarmists say “the models are based on the laws of physics”, how can they make that claim
They are based upon laws of physics. Just not ALL known Laws of physics.
Isaac Newton modeled the flight of cannonballs using his laws of motion….. neglecting air resistance. Cannoneers weren’t impressed.

milodonharlani
September 5, 2013 12:00 pm

M Courtney says:
September 5, 2013 at 11:53 am
I have to agree that in context “knit” is more entertaining than “nit” at communicating the same message, although Mann does like to compare skeptics to pine bark beetle larvae.

milodonharlani
September 5, 2013 12:02 pm

PS: I eschewed inserting the term nit-wit into the above copy.

M Courtney
September 5, 2013 12:05 pm

milodonharlani says at September 5, 2013 at 12:02 pm… Prudent call. My father does like a fight if any is offered. Or even if it just seems to be.
You know, it’s fun – in a way.

September 5, 2013 12:06 pm

Dr Darko Butina says:
September 5, 2013 at 1:24 am
Vukcevic’s histogram is also based on the annual average and therefore not on ‘actual’ temperatures.
Hi Darko
The only actual temperature I take seriously is my own, when it goes to 38 °C or above.
Three or four months ago I saw your contribution on this blog, but most of it is far beyond my expertise, which is, to put it mildly, usually less than superficial, regardless of the subject under discussion.
All the best.

Bob Greene
September 5, 2013 12:09 pm

kadaka: repeat your IR experiment with an unwaxed paper cup with water. You can boil water in a paper cup on hot coals without burning the cup. Every Boy Scout knows that trick. I leave it to the experts whether the heat transfer to water is by convection through the container, transfer from the heated air, radiative transfer or a combination of the three. In any event enough heat is transferred from the paper to the water to prevent the paper from burning.

milodonharlani
September 5, 2013 12:10 pm

M Courtney says:
September 5, 2013 at 12:05 pm
milodonharlani says at September 5, 2013 at 12:02 pm… Prudent call. My father does like a fight if any is offered. Or even if it just seems to be.
You know, it’s fun – in a way.
—————————–
I was thinking of Little Mikey Mann, not the distinguished Senior Courtney.

September 5, 2013 12:18 pm

@ Bruce Cobb says: September 5, 2013 at 10:53 am
*GROAN* – Bad pun! 😉

Theo Goodwin
September 5, 2013 12:26 pm

kellyhaughton says:
September 5, 2013 at 11:14 am
In practical terms, they are looking for forcing from CO2 and treating everything else as noise. They would dispute my claim. They would say that they are aware of the “forcings and feedbacks calculation” that must be done. However, they will get nowhere on that calculation. As explained in my first post above, cloud behavior is a natural regularity and their handling of cloud variation in their models will be subject to the same circularity as their handling of ENSO. They will treat cloud variation as summing to zero over the long run. The proof is in the pudding. How many models are trumpeting their skill at reproducing cloud behavior and, among them, how many are trumpeting their novel conclusions showing that cloud behavior is an important negative feedback (that cloud behavior seriously lowers the effects of CO2)?

Theo Goodwin
September 5, 2013 12:27 pm

M Courtney says:
September 5, 2013 at 12:05 pm
Please suggest to him that his time is better spent explaining the failings of models and other parts of CAGW.

Theo Goodwin
September 5, 2013 12:33 pm

kellyhaughton says:
September 5, 2013 at 11:14 am
“When the alarmists say “the models are based on the laws of physics”, how can they make that claim and leave out the “forcings” from the sun, the oceans, clouds, etc.?”
The only physics that they consider is the physics of radiation among Sun, Earth, and GHGs. They have no place for an experimental physics of natural regularities. They do not cover the physics of ENSO, AMO, you name it except to treat them as numerical indexes that will sum to zero. Several Alarmists have published articles arguing that the AMO and ENSO must sum to zero and cannot influence climate.

M Courtney
September 5, 2013 12:34 pm

Theo Goodwin says at September 5, 2013 at 12:27 pm…
I agree entirely and my work email records my conversation with my father on much the same theme (well, in the particular) even though I am paid to have other priorities at that time.
But I am not my father’s minder. He is his own man – don’t ask me to be responsible for focussing him, please (pretty please).

Theo Goodwin
September 5, 2013 12:36 pm

milodonharlani says:
September 5, 2013 at 10:38 am
I was there when ‘knit’ became ‘nit’. The invention of the nit was a very Sixties thing.

richardscourtney
September 5, 2013 12:40 pm

BBould:
Thank you for your post addressed to me at September 5, 2013 at 11:29 am, which says in total:

Richardscourtney: The Bayesian Treatment of Auxiliary Hypotheses – This paper examines the standard Bayesian solution to the Quine-Duhem problem, the problem of distributing blame between a theory and its auxiliary hypotheses in the aftermath of a failed prediction.
http://joelvelasco.net/teaching/3865/strevens%20auxhyp.pdf
This is a bit over my head and I may have missed the mark and it is not relevant, but it may be why the models have not been falsified.

Firstly, for some reason my computer locks up when I try to download that link. So, at the moment I cannot answer your specific question.
Do you have another link or a reference so I can access the paper another way?
For the moment, I draw your attention to a recent excellent post from Robert Brown on another thread. It starts by (rightly) chastising me for failing to caveat the ‘all other things being equal fallacy’ but if you get past that he deals with the oversimplification of models. It is here
http://wattsupwiththat.com/2013/09/03/another-paper-blames-enso-for-global-warming-pause-calling-it-a-major-control-knob-governing-earths-temperature/#comment-1406638
Richard

richardscourtney
September 5, 2013 12:47 pm

Friends:
It seems that some want to address me through my son.
That is not reasonable. Do any of you have a son who agrees with you – and would you want one?
Please talk to him about his views and to me about mine. Otherwise he and I may lose the fun of our arguments with each other 🙂
Richard

M Courtney
September 5, 2013 12:54 pm

richardscourtney says at September 5, 2013 at 12:47 pm…
Obviously, I disagree.

Theo Goodwin
September 5, 2013 1:07 pm

richardscourtney says:
September 5, 2013 at 12:47 pm
How could I have forgotten that most fundamental point? My bad. Never again will I address you through your son.

Theo Goodwin
September 5, 2013 1:09 pm

M Courtney says:
September 5, 2013 at 12:34 pm
You are correct. Please pardon me.

M Courtney
September 5, 2013 1:18 pm

Theo Goodwin. No Worries, Sir.
But I really do agree with you when you imply that my father should focus more on the real issues rather than smashing everyone who is wrong on the internet.
He would get to bed earlier.

BBould
September 5, 2013 1:20 pm

Richardscourtney: Thanks for the link. I’m sure this is obvious to you, but it’s a PDF file and needs Adobe Acrobat Reader to download. Other than that I can’t understand why you can’t access the link, as it works fine on all my computers except tablets.

milodonharlani
September 5, 2013 1:26 pm

Theo Goodwin says:
September 5, 2013 at 12:36 pm
At least in the US, “nitpicking” has been in the language since the 1950s, if not before.

richardscourtney
September 5, 2013 1:44 pm

BBould:
Repeated attempts to download the file have each locked up my computer, so I have had to restart it.
As you suspected, I do have Adobe Acrobat and that loads before the problem arises. I notice from the header that the paper is 42 pages so I am wondering if the problem has something to do with the large file size.
If you cannot provide a reference for me so I can try to access it elsewhere, perhaps – as a start – you can copy its abstract to here so I can at least understand what you are asking about?
Sorry to be a nuisance about this.
Richard

rgbatduke
September 5, 2013 1:46 pm

I’ve said this on other threads, but it is especially true on this one. CMIP5 is an aggregation of independent models. There is no possible null hypothesis for such an aggregate, nor is the implied use of ordinary statistics in the analysis above axiomatically supportable.
GCMs are not independent, identically distributed samples.
Consequently, the central limit theorem has absolutely no defensible application to the mean or variance obtained for a single projective parameter extracted from an ensemble of GCMs. This is equally true in both directions. One cannot reject “CMIP5” per se or any of the participating models on the grounds of a hypothesis test based on an assumed normal distribution and the error function used to obtain a p-value, nor can one assert that the mean of this projective distribution and its variance enable any statement about how “likely” it is to have any given temperature in the underlying distribution.
What one can do is take each model in the collection, where each model typically produces a spread of outcomes for a Monte Carlo random perturbation of the initial conditions and parameters, analyze the mean of that spread and its statistical moments and properties, and compare the result to observation because in this case the Monte Carlo perturbation is indeed selection of random iid samples from a known distribution and hence both the central limit theorem applies and — given a Monte Carlo generated statistical envelope of model results — one doesn’t really need it. One can assess the p-value directly by comparing the actual trajectory to the ensemble of model generated trajectories even if the latter is not Gaussian.
It is this last step that is never done. Based on the spaghetti snarl of GCM results that I’ve seen in e.g. AR4 or AR5 for specific models (compared to the actual temperature), most of them would individually fail a correctly implemented hypothesis test when compared to the data, where now there is a meaningful null hypothesis and p-value for each model separately (the hypothesis being that the model itself is correct, with the contingent probability of it producing the observed result). Indeed, a lot of them would fail with a very high degree of confidence (very small p-values, well below an e.g. 0.05 cutoff).
If those models were removed from CMIP5, one would — at a guess, since I do not have access to the actual distribution of trajectories for all of the contributing models and have to generalize from the samples I’ve seen — give up pretty much all of the models to the right of the primary peak, throwing them into the garbage can (especially the secondary and tertiary peaks, as the distribution isn’t even cleanly unimodal in Figure 1b). I’m guessing that while one could not actively fail all of the ones in between that cutoff and reality, a lot of the remaining ones would have systematically poor p-values — low enough to ensure that they aren’t all right for sure, without necessarily being able to tag specific ones as wrong.
Even this analysis is faulty, because executing the strategy above is a form of data dredging, and the criterion for passing the hypothesis test has to be accordingly strengthened because you have so many opportunities to pass the hypothesis test that it isn’t surprising that some models might seem to do so even though they truly fail as a statistical accident. This would knock off a few more. I’m guessing that by the time one was done with this process, one would have a much, much smaller set of models that survived the “I guess I need to reasonably agree with empirical observation, don’t I?” cut, and the model mean would be much, much closer to reality (and still irrelevant).
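[Editor’s note: a rough sense of the data-dredging problem above. The numbers are illustrative only, and the calculation assumes the tests are independent — which, as the comment itself stresses, real GCMs are not — so treat it as a lower bound on the inflation.]

```python
m = 20        # hypothetical number of models each given a chance to "pass"
alpha = 0.05  # nominal per-test significance level

# Chance that at least one of m independent tests clears the 0.05 level
# purely by accident: 1 - (1 - alpha)^m.  For m = 20 this is roughly 0.64,
# so an accidental "pass" somewhere in the ensemble is more likely than not.
family_wise = 1 - (1 - alpha) ** m

# A Bonferroni-style correction strengthens the per-test criterion
# accordingly, from 0.05 down to alpha/m.
corrected_cutoff = alpha / m

print(f"chance of at least one accidental pass: {family_wise:.2f}")
print(f"per-test cutoff after correction:       {corrected_cutoff:.4f}")
```

This is the standard multiple-comparisons adjustment; stricter or looser corrections (Holm, false-discovery-rate methods) exist, but any of them tightens the criterion in the direction the comment argues for.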
At that point, though, one could examine the moderately successful (or at least, “close”) models to see what features they have in common and to help craft the next generation of models. It would also be an excellent time to re-assess the input variables (thinking about omitted variables in particular) and consider backwards (Bayesian) inference of parameters like the sensitivity, finding a value of the sensitivity that (for example) makes the centroid of the Monte Carlo distribution agree with the observed record. At least, it would be an excellent time to do this sort of thing if the GCM owners were doing science instead of performing political theater. And some of them are! Climate sensitivity is in free-fall for precisely that reason, because the clearly evident warming bias in almost all the models has driven a lot of honest (and formerly honestly mistaken) researchers back to the drawing board to determine the highest value the sensitivity could reasonably be expected to take without making the p-value TOO small, a form of Bayesian analysis that perhaps overweights the former prior. This IMO will only mean that they have to move it again in the future (sigh) but that is their call. That’s what the future is for, anyway — to validate the better and falsify the worser.
rgb

Editor
September 5, 2013 1:46 pm

kadaka (KD Knoebel) says:
September 5, 2013 at 5:49 am (replying to )
gnomish said on September 5, 2013 at 2:59 am:

Proposed: Thermal radiation from an infrared heater applied to the surface of water cannot heat the water.

Referencing the problems in the experimental setup above (melting bowl, little effect, difficulty in setup and measurements).
Try it again, but with a much larger “area” of the water exposed to the heat relative to the total volume of water. That is, use a steel baking pan much larger than the IR heater area. That way, the IR heats the water under the heater, but the sides of the aluminum or steel pan are far from the edges of the IR heater. If the water is as high as possible in the pan, then the pan edges will be both further away (increasing r^2 losses) and have less area exposed to the IR radiation coming from the center of the IR heater.

RMB
Reply to  RACookPE1978
September 7, 2013 5:58 am

I would argue with your proposal. Radiation enters water but physical heat does not.

John F. Hultquist
September 5, 2013 2:03 pm

M Courtney says:
September 5, 2013 at 11:53 am
“-The literal meaning (we all know what words mean)

Not so. Explaining, I think, the recent gaffe of Tony Abbott –
“No one — however smart, however well-educated, however experienced — is the suppository of all wisdom.” Mr Abbott appeared to mix up the word “suppository” with the word “repository”.
http://www.independent.co.uk/news/world/australasia/australian-election-tony-abbott-hits-bum-note-with-suppository-of-all-wisdom-gaffe-8757527.html
Now I return you to your regular programming.

Aphan
September 5, 2013 2:04 pm

richardscourtney –
I’ve been thinking for a long time now that those of us who are not satisfied with the science on climate change need to do some simple, strategic “marketing” to better represent ourselves. For example, we do not deny the climate changes-we’re the ones who usually point that out. We do not deny that it warms when it actually warms, nor do we deny that it cools when it cools etc. But we’ve allowed the “other side” to define us for so long, that even the general public accept their definitions of us. It has to stop. But we have to have other things to fill that void. Definitions and statements that reflect the truth. Things that are simple and solid and consistent and completely irrefutable that we just keep saying and saying and saying and saying until their definitions of us lose all traction.
With that said, I LOVED something you said earlier:
“The modellers built their climate models to represent their understandings of climate mechanisms. If their understandings were correct then the models would behave as the climate does. The fact that the climate models provide indications which are NOT “consistent with measurements” indicates that the understanding of climate mechanisms of the modellers is wrong (or, at least, the way they have modeled that understanding is in error).”
I wanted to ask you if I could use that statement-over and over and over again? (I’ll credit you every time if you wish.) If we, as those who are “skeptical about what is being called climate science” just state something like this over and over again, we drive home several truths all at once. If we could cause people in general to start asking themselves…and then others…:
“Is this study done with models?” or
“Why are scientists using models they know are wrong?” or
“Do the scientists even know their models are wrong?” or
“So how much of what we’re being told is based on EVIDENCE and FACT and how much is based on flawed models”?
…we’d be creating a whole world full of skeptics and critical thinkers. Imagine…..:)

Gunga Din
September 5, 2013 2:22 pm

John F. Hultquist says:
September 5, 2013 at 2:03 pm
M Courtney says:
September 5, 2013 at 11:53 am
“-The literal meaning (we all know what words mean)
Not so. Explaining, I think, the recent gaffe of Tony Abbott –
“No one — however smart, however well-educated, however experienced — is the suppository of all wisdom.” Mr Abbott appeared to mix up the word “suppository” with the word “repository”.
http://www.independent.co.uk/news/world/australasia/australian-election-tony-abbott-hits-bum-note-with-suppository-of-all-wisdom-gaffe-8757527.html
Now I return you to your regular programming.

==========================================================================
Well, I can think of a few for whom “suppository” would be the word to describe the source of some of their bits of “wisdom”.

richardscourtney
September 5, 2013 2:25 pm

Aphan:
re your question to me at September 5, 2013 at 2:04 pm.
Yes, of course you can use it if you want to. I would not have written it if I did not want people to read it.
And feel free to adopt it as your own if you want. I take great pleasure when I see phrases I invented or first applied long ago. And the pleasure is greatest when people tell me it is something I should hear (I smile inside and say nothing).
Richard

Aphan
Reply to  richardscourtney
September 5, 2013 2:59 pm

richardscourtney-
“And the pleasure is greatest when people tell me it is something I should hear (I smile inside and say nothing).”
LOL! You scamp you! I think that at least once, you should respond along the lines of “Why…that is a truly brilliant point!” If you do it within earshot of your son, you can tell him I put you up to it. 🙂

BBould
September 5, 2013 2:39 pm

Richardscourtney: Here is the abstract.
This paper examines the standard Bayesian solution to the Quine-Duhem problem, the problem of distributing blame between a theory and its auxiliary hypotheses in the aftermath of a failed prediction. The standard solution, I argue, begs the question against those who claim that the problem has no solution. I then provide an alternative Bayesian solution that is not question-begging and that turns out to have some interesting and desirable properties not possessed by the standard solution. This solution opens the way to a satisfying treatment of a problem concerning ad hoc auxiliary hypotheses.

richardscourtney
September 5, 2013 2:42 pm

BBould:
In light of my difficulty downloading the file to comment on a paper as you requested, I write to say I now think you probably have an answer to the question you are really asking.
I think that answer is probably provided by considering the information in my post at September 5, 2013 at 8:09 am
http://wattsupwiththat.com/2013/09/05/statistical-proof-of-the-pause-overestimated-global-warming-over-the-past-20-years/#comment-1408595
together with the information in the short post from davidmhoffer at September 5, 2013 at 8:34 am
http://wattsupwiththat.com/2013/09/05/statistical-proof-of-the-pause-overestimated-global-warming-over-the-past-20-years/#comment-1408620
and the information in the long post from rgbatduke at September 5, 2013 at 1:46 pm
http://wattsupwiththat.com/2013/09/05/statistical-proof-of-the-pause-overestimated-global-warming-over-the-past-20-years/#comment-1408910
I especially commend study of the long post from Prof Brown, aka rgbatduke (see, I told you his comments are good).
Richard

Steve Keohane
September 5, 2013 2:49 pm

Gail Combs says: September 5, 2013 at 7:14 am
Thanks for the reminder about the Horse Latitudes, hadn’t heard of them in, I guess, decades.

richardscourtney
September 5, 2013 2:50 pm

BBould:
Thank you for your post addressed to me at September 5, 2013 at 2:39 pm. It came in while I was writing my post to you at September 5, 2013 at 2:42 pm.
OK. I see why you want me to read the paper: its abstract claims

The standard solution, I argue, begs the question against those who claim that the problem has no solution.

Well, “those who claim that the problem has no solution” certainly includes me and I think includes Prof Brown, so I really do need to find a way to get at that paper.
Richard

Theo Goodwin
September 5, 2013 2:53 pm

BBould says:
September 5, 2013 at 2:39 pm
Richardscourtney: Here is the abstract.
“This paper examines the standard Bayesian solution to the Quine-Duhem problem, the problem of distributing blame between a theory and its auxiliary hypotheses in the aftermath of a failed prediction.”
Quine created the Duhem-Quine thesis but would have no patience for Bayesians. The Duhem-Quine thesis does not reference auxiliary hypotheses. It applies to all the hypotheses in the theory and to the evidence for the theory.
“The standard solution, I argue, begs the question against those who claim that the problem has no solution. I then provide an alternative Bayesian solution that is not question-begging and that turns out to have some interesting and desirable properties not possessed by the standard solution. This solution opens the way to a satisfying treatment of a problem concerning ad hoc auxiliary hypotheses.”
This might be good work but it is not part-and-parcel of Quine’s work. Seems to me that it just takes Quine’s logic and adds to it. To those who pursued probabilities for various hypotheses, he remarked that he had no interest in colored marbles in an urn.

Theo Goodwin
September 5, 2013 2:56 pm

milodonharlani says:
September 5, 2013 at 1:26 pm
Do you have first hand evidence? I do not trust dictionaries on the matter of street language.
If you are right, I still claim that the nit was a creation of the Sixties. Do you remember nits?

Theo Goodwin
September 5, 2013 3:01 pm

rgbatduke says:
September 5, 2013 at 1:46 pm
Thanks so much for this very important work. Some among mainstream climate scientists who are usually willing to take skeptics seriously do not understand what you have just explained.

Theo Goodwin
September 5, 2013 3:05 pm

M Courtney says:
September 5, 2013 at 1:18 pm
When Lincoln referred to “the better angels of our nature” he was thinking of the first hour after a long, good night’s sleep.

Editor
September 5, 2013 3:19 pm

rgbatduke says:
September 5, 2013 at 1:46 pm

I’ve said this on other threads, but it is especially true on this one. CMIP5 is an aggregation of independent models. There is no possible null hypothesis for such an aggregate, nor is the implied use of ordinary statistics in the analysis above axiomatically supportable.
GCMs are not independent, identically distributed samples.

OK. So check me here, and see if I have interpreted what you wrote correctly.
You CANNOT add all of the GCM outputs together and “average” them for each year, because they are not independently measured properties subject to statistical theory. That is, if they really were accurate computer models of an accurately modeled physical process, EVERY run with the same model parameters would be identical: a calculator does NOT add 2+2+2 = 6 differently every time. Further, this future temperature vs CO2 growth is NOT a statistical biological value like tree height and trunk diameter: you CANNOT measure a lot of them and get a “more accurate” average diameter or height, because these are not models based on “average growth rates per ton of fertilizer or per thousand gallons of water”, right?
However, since model inputs DO vary statistically because of their Monte Carlo internal calculators, it IS a valid comparison to run each model separately several thousand times, then compare THAT “average” model output to itself to see if it is putting out random or valid predictions. (Ignore Oldberg for the rest of this, OK?) At this point, one should compare the 24-odd average model runs against real world (no volcanos since 1993, known aerosol changes, known ENSO and PDO changes, known Arctic and Antarctic polar ice cover changes) and see which are most accurate.
If any are not-as-bad-as-the-rest-but-not-right (within 2 std deviation at least), we should throw out the worst 20 models, modify the remaining 4 and continue to re-run them until they duplicate the past 50 years accurately. Then wait and see which of the corrected 4 is best. In the meantime, fix the bad 20 that were originally trashed.
Correct?

Gail Combs
September 5, 2013 3:24 pm

John F. Hultquist says: @ September 5, 2013 at 9:08 am
….This is one of at least 3 explanations for the term “Horse Latitudes”
>>>>>>>>>>>>>>>>>>>
I always figured they would eat the beasts not throw them over board….
Isn’t the rewriting of history great? (Just don’t tell all those kids sweating through their history finals)

jorgekafkazar
September 5, 2013 3:49 pm

Theo Goodwin says: “Very well said. The “radiation-only theory,” used by all Alarmists is purely deterministic….”
And yet these same Alarmists claim there is such a thing as thermal inertia in a system driven by instantaneous radiative transfer.

September 5, 2013 3:52 pm

rgbatduke on September 5, 2013 at 1:46 pm
I’ve said this on other threads, but it is especially true on this one. CMIP5 is an aggregation of independent models. There is no possible null hypothesis for such an aggregate, nor is the implied use of ordinary statistics in the analysis above axiomatically supportable.
GCMs are not independent, identically distributed samples.
[. . .]

– – – – – – – –
rgbatduke,
Your comment was helpful. Thanks.
Considering your whole comment, what if a large voluntary group of academic institutions, independent of government, decided to start an evaluation of climate, and decided to do it without relying on the current body of IPCC bureaucracies, processes and reports. Let’s say their mission would be the integration of a very widely balanced sample of climate research into a consistent, comprehensive overview; a mission without a mandate to look for evidence of any particular climate factor (for example, anthropogenic factors from burning fossil fuels). Let’s say the product of the consortium is for itself but is freely available to anyone or any government. Government isn’t the target of the product.
Question for RGB => In that scenario would you expect models would have the significance given to them by the IPCC? What reasonable role for models would you suggest in the scenario?
John

BBould
September 5, 2013 3:55 pm

Richardscourtney: If you do a Google search of the title, more results pop up and you should be able to choose one that works. Please post about this paper if you can.
Much thanks.

milodonharlani
September 5, 2013 4:02 pm

Theo Goodwin says:
September 5, 2013 at 2:56 pm
Can’t tell if you’re kidding or not, but yes, I do have first hand evidence, or first ear, from the late 1950s.
Also direct documentary evidence, from early ’50s or late ’40s DoD jargon. A November 1951 Colliers magazine is quoted on the Net as saying: “Two long-time Pentagon stand-bys are fly-speckers and nit-pickers. The first of these nouns refers to people whose sole occupation seems to be studying papers in the hope of finding flaws in the writing, rather than making any effort to improve the thought or meaning; nit-pickers are those who quarrel with trivialities of expression and meaning, but who usually end up without making concrete or justified suggestions for improvement.”
You could check at a big library or buy the Nov 3, 10, 17 or 24, 1951 issues of Colliers on Amazon or eBay.

richardscourtney
September 5, 2013 4:05 pm

RACookPE1978:
You conclude your post at September 5, 2013 at 3:19 pm asking
http://wattsupwiththat.com/2013/09/05/statistical-proof-of-the-pause-overestimated-global-warming-over-the-past-20-years/#comment-1408995

If any are not-as-bad-as-the-rest-but-not-right (within 2 std deviation at least), we should throw out the worst 20 models, modify the remaining 4 and continue to re-run them until they duplicate the past 50 years accurately. Then wait and see which of the corrected 4 is best. In the meantime, fix the bad 20 that were originally trashed.
Correct?

Obviously, rgb will make whatever answer he wants. This is my ‘two pence’.
The problem is the ‘Texas sharpshooter fallacy’.
The Texas sharpshooter fires a scatter-gun at a wall, then paints a target around the middle of the impacts on the wall, and points to the target as evidence he is a good shot.
The models which failed to make an accurate forecast need to be rejected or amended because they are known to lack forecasting skill.
But removing the models which missed the target of an accurate prediction does not – of itself – demonstrate that the remaining models have forecasting skill: the models which seem to have made an accurate forecast may only have done that by chance (removing the ‘failed’ models is ‘painting the target’ after the gun was fired).
Therefore, and importantly, the remaining models may not accurately forecast the next 20 years.
There is an infinite number of possible futures. A model must emulate the dominant mechanisms of the modelled system if it is to be capable of agreement with the future that will eventuate. And each model is unique (e.g. each incorporates a unique value of climate sensitivity). Therefore, at most only one of them emulates the Earth’s climate system.
Hence, the outputs of the models cannot be averaged because average wrong is wrong.
Furthermore, there is no reason to suppose a model can forecast if it cannot hindcast, but an ability to hindcast does not indicate an ability to forecast. This is because there are many ways a model can be ‘tuned’ to match the past, and none of those ways may make the model capable of an accurate forecast.
Therefore, a model has no demonstrated forecast skill until it has made a series of successful forecasts.
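[Editor’s note: the sharpshooter effect described above can be shown with a toy simulation under obviously artificial assumptions — the “models” here are pure noise with no physics and no persistence — so chance agreement with the past is the only way any of them can match it.]

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy illustration only: 100 "models" that are pure noise, knowing nothing
# about the system they pretend to predict.
n_models, n_past, n_future = 100, 40, 40
models = rng.normal(size=(n_models, n_past + n_future))
truth = rng.normal(size=n_past + n_future)

# "Paint the target": keep the 5 models that best match the past segment.
past_err = np.mean((models[:, :n_past] - truth[:n_past]) ** 2, axis=1)
best = np.argsort(past_err)[:5]

# Their future error is no better than anyone else's: the past agreement
# was pure chance and carries no forecasting skill.
fut_err = np.mean((models[:, n_past:] - truth[n_past:]) ** 2, axis=1)
print(f"future MSE, selected 'best' models: {fut_err[best].mean():.2f}")
print(f"future MSE, all models:             {fut_err.mean():.2f}")
```

Both printed numbers come out close to the theoretical no-skill error (2.0 for the difference of two unit-variance noise series): selection on the past buys nothing about the future when the match was accidental.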
Richard

richardscourtney
September 5, 2013 4:09 pm

BBould:
re your suggestion to me at September 5, 2013 at 3:55 pm.
Yes, having read the abstract I really do want to read that paper. I will do the search in the morning. It is now past midnight here. And, of course, I will reply to you when I have read it.
Richard

richardscourtney
September 5, 2013 4:11 pm

BBould:
What is the title and who is the author, please?
Richard

milodonharlani
September 5, 2013 4:11 pm

I see that issues of Collier’s are online:
http://www.unz.org/Pub/Colliers-1951nov03

milodonharlani
September 5, 2013 4:14 pm

Wordola’s first recorded use is 1954:
http://www.wordola.com/wusage/nitpicking/f1950-t1959.html

September 5, 2013 4:30 pm

A physics-based equation, using only one external forcing, calculates average global temperature anomalies since before 1900 with R2 = 0.9. The equation is at http://climatechange90.blogspot.com/2013/05/natural-climate-change-has-been.html . Everything not explicitly considered must find room in that unexplained 10%.

BBould
September 5, 2013 4:35 pm

Richardscourtney: Michael Strevens, ‘The Bayesian Treatment of Auxiliary Hypotheses’, British Journal for the Philosophy of Science, 52, 515–537, 2001. Copyright British Society for the Philosophy of Science.

BBould
September 5, 2013 4:35 pm

Theo Goodwin: Thank you very much!

Editor
September 5, 2013 5:47 pm

milodonharlani says:
September 5, 2013 at 12:02 pm
> PS: I eschewed inserting the term nit-wit into the above copy.
Ah. I thought there was something witless about that previous comment.

Theo Goodwin
September 5, 2013 7:38 pm

milodonharlani says:
September 5, 2013 at 4:02 pm
Thanks. You are a class act.

Gunga Din
September 5, 2013 7:58 pm

Ric Werme says:
September 5, 2013 at 5:47 pm

milodonharlani says:
September 5, 2013 at 12:02 pm
> PS: I eschewed inserting the term nit-wit into the above copy.

Ah. I thought there was something witless about that previous comment.

=====================================================================
Perhaps nitlamps were invented to help nitwits see the light?

Brian H
September 6, 2013 12:29 am

“hiatus”, the fall-back defense? Like lukewarmism, agreeing cedes the unspoken assumptions, which are comprehensively false.

Richard Barraclough
September 6, 2013 1:15 am

Good to see a little etymological sparring in amongst the science.
Now, if only we could all distinguish between “its” and “it’s”……..

Reply to  Richard Barraclough
September 6, 2013 6:44 am

@Richard Barraclough – knowing the difference between those 2 won me an IT contract at the State Library. 😉

richardscourtney
September 6, 2013 2:59 am

BBould:
I have now downloaded the paper
Strevens M, ‘The Bayesian Treatment of Auxiliary Hypotheses’, British Journal for the Philosophy of Science, 52, 515–537, 2001
At September 5, 2013 at 11:29 am you suggested to me

it may be why the models have not been falsified.

I have made a cursory study of the paper and will continue to give it much more thought. However, I am writing now to say that I do not think the paper is relevant to the discussion in this thread.
Firstly, I was surprised that I was unaware of a paper published 12 years ago which had the importance you suspected. My initial impression is that it does not have that importance.
Secondly, as a general rule, the importance of a paper is inversely related to its length. This paper is 42 pages long. My first reading of the paper suggests that it obeys that general rule.
The purpose of the paper seems to be to express a personal reaction of Michael Strevens to the work of Newstein. I do not know what if any personal or professional interactions Strevens has had with her, but he makes some personal remarks; e.g.

Newstein, a brilliant but controversial scientist, has asserted both that h is true and that e will be observed. You do not know Newstein’s reasons for either assertion, but if one of her claims turns out to be correct, that will greatly increase your confidence that Newstein is putting her brilliance to good use and thus your confidence that the other claim will also turn out to be correct.

Section 2.4, page 12
The subject of the paper is an attempt to quantify to what degree evidence refutes a theory.
In the early twentieth century, Pierre Duhem cogently demonstrated that a scientific hypothesis is not directly refuted by evidence. This is because the evidence also represents additional hypotheses concerning how the evidence was produced and observed.
Duhem’s argument is plain when stated; e.g. if it is assumed a long-jumper broke the world record, then the measurement to assess that assumption assumes the tape measure was accurate.
The Quine-Duhem thesis expands on that seminal work of Duhem.
There is always more than one assumption concerning the evidence (e.g. in the long-jump illustration, in addition to assumptions about the tape measure there are assumptions about how it was used). And there is a central hypothesis (e.g. the long-jump measurement provided a correct indication). In essence, the Quine-Duhem thesis says there is no way to determine how an individual assumption affects the importance of the indication provided by the evidence.
Hence, it cannot be known to what degree a piece of evidence refutes a theory because the acceptance of the evidence is adoption of unquantified assumptions.
This, of course, is undeniably true and it affords a get-out to pseudoscientists. Indeed, it has been used by climastrologists (e.g. unmeasured heat must be in deep ocean where it cannot be measured). As you imply, it could also be used as a get-out to falsification of climate models (i.e. the models are right so the evidence must be wrong).
Avoidance of such get-outs requires clear recognition of what is – and what is not – being assessed. An example of this need for clarity is stated by my post in this thread at September 5, 2013 at 3:20 am
http://wattsupwiththat.com/2013/09/05/statistical-proof-of-the-pause-overestimated-global-warming-over-the-past-20-years/#comment-1408432
That post is directly pertinent to the subject of Strevens’ paper because it argues that the uncertainties in the data are a separate issue from whether the climate models emulate the data.
Strevens’ paper claims it is possible to assign individual assessments to the assumptions included in a piece of evidence. In his Introduction on page 1 he writes

I will present a kind of posterior objectivist argument: that on the correct Bayesian understanding of the Quine-Duhem problem, Bayesian conditionalization provides a way to assess the impact on a hypothesis h of the falsification of h_a that behaves in certain objectively desirable ways, whatever the values of the prior probabilities.
I will argue that the standard Bayesian solution to the Quine-Duhem problem is incorrect (section 2.4).
I then show, in section 2.5, that given the standard, incorrect Bayesian solution to the Quine-Duhem problem, no posterior objectivist argument of the sort I intend to provide would be possible.

Those are bold claims which the paper fails to fulfill.
This failure seems to be because those claims are not the true purpose of the paper which says in Section 2.4, page 14

A Bayesian might reply that, in the scenarios sketched by Dorling and others, there are no Newstein effects. More generally, the probabilities have been chosen in such a way that δ_c is zero, so that the entire probability change can be attributed to δ_qd. But how is one to evaluate this claim?

Indeed, a Bayesian would reply that. And would not see a need to refute Newstein.
In a peer review of the paper I would discuss the purported refutation, but that does not seem to be needed here. That is because, as the paper admits, the refutation is pointless. It admits in Section 5, page 26

The Quine-Duhem problem is, in many ways, the central problem concerning the role of auxiliary hypotheses in science. One might hope, then, that a Bayesian solution to the Quine-Duhem problem would provide answers to many other questions involving auxiliary hypotheses. My solution cannot be directly employed in a Bayesian treatment of other problems in confirmation theory, however, because it provides a formula for what I call a partial posterior probability rather than for the posterior probability itself.

In other words, Strevens admits his analysis only affords a solution to one limited type of assessment and is not generally applicable.
I hope this brief and cursory reply is sufficient answer to your request.
Richard

Sleepalot
September 6, 2013 3:16 am

@ Kadaka: I applaud you for doing the experiment, however …
In your first experiment you used enough energy to boil the water [1], yet only warmed it 3C. Yes, you falsified the proposition, but you rather proved the point – imo.
[1] if you’d put your roughly 0.5kg of water in a 1600 Watt kettle for 5 minutes it assuredly would’ve boiled.

richardscourtney
September 6, 2013 5:19 am

Dan Pangburn:
I see nobody has answered your question in your post at September 5, 2013 at 4:30 pm which says in total

A physics-based equation, using only one external forcing, calculates average global temperature anomalies since before 1900 with R2 = 0.9. The equation is at http://climatechange90.blogspot.com/2013/05/natural-climate-change-has-been.html . Everything not explicitly considered must find room in that unexplained 10%.

The model is merely an exercise in curve fitting. As the link says

The word equation is: anomaly = ocean oscillation effect + solar effect – thermal radiation effect + CO2 effect + offset.

This matches the data because the ‘effects’ are tuned to obtain a fit with the anomaly.
Hence, the model demonstrates that those ‘effects’ can be made to match the anomaly, but it does not demonstrate there are not other variables which may be similarly tuned to obtain a match with the anomaly.
The model matches the form of the anomaly. But, importantly, it only explains the opinion of its constructor: it does NOT explain anything about climate behaviour. Therefore, it does not have a residual of “10%” of climate behaviour which is unexplained.
The model – as every model – represents the understanding of its constructor. But the model has no demonstrated predictive skill and, in that sense, it is similar to the GCMs.
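[Editor’s note: the curve-fitting point can be illustrated with a minimal sketch. The series and the polynomial below are arbitrary choices for demonstration; nothing here uses the equation from the link or any real anomaly data. A sufficiently flexible fit can be tuned to match a series closely in-sample while having no skill out of sample.]

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "anomaly" series, illustration only: a noisy random walk with no
# connection to any real temperature record.
n = 100
x = np.linspace(-1.0, 1.0, n)
y = rng.normal(size=n).cumsum()

# Tune a ten-coefficient polynomial to the first 70 points ("the past").
coef = np.polyfit(x[:70], y[:70], deg=9)
fit = np.polyval(coef, x)

def r2(obs, pred):
    """Coefficient of determination."""
    return 1.0 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)

# The tuned fit matches the past closely, yet collapses on the held-out
# points: it demonstrates flexibility, not understanding.
r2_in = r2(y[:70], fit[:70])
r2_out = r2(y[70:], fit[70:])
print(f"R^2, tuned period:    {r2_in:.2f}")
print(f"R^2, held-out period: {r2_out:.2f}")
```

The in-sample R² is high while the out-of-sample R² goes negative, which is exactly the distinction between matching the form of a record and having demonstrated predictive skill.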
Richard

kadaka (KD Knoebel)
September 6, 2013 6:14 am

Sleepalot said on September 6, 2013 at 3:16 am:

@ Kadaka: I applaud you for doing the experiment, however …
In your first experiment you used enough energy to boil the water [1], yet only warmed it 3C. Yes, you falsified the proposition, but you rather proved the point – imo.
[1] if you’d put your roughly 0.5kg of water in a 1600 Watt kettle for 5 minutes it assuredly would’ve boiled.

If all of the air molecules exiting the heat gun impacted the water, and all energy gained from passage through the heat gun was transferred to the water resulting in the temperature of the air that bounced off the water surface being no greater than room temperature, you might have a point.
Except mere hot air blowing on a surface is far less efficient than the direct heating of an electric kettle, where in some designs the heating element is immersed in the water. Not all of that energy is transferred, and not all of the air molecules hit the surface. As the heat gun was agitating the water surface, there was likely energy lost as latent heat of vaporization, to a degree far in excess of that of an electric kettle. Etc.
So, since a process that is many times more efficient could have delivered enough energy to boil the water in that time, and the process used that was far less efficient only warmed the water a few degrees, what is shown is… A more efficient heating method could have heated the water faster. And that’s about it for your comparison.
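The kettle footnote is simple heat arithmetic. A minimal sketch, assuming roughly 0.5 kg of water starting near 20 °C and taking the specific heat of water as about 4186 J/(kg·K):

```python
def joules_to_reach_boiling(mass_kg, start_c, c_water=4186.0):
    """Sensible heat needed to raise water from start_c to 100 C
    (ignores evaporation losses along the way)."""
    return mass_kg * c_water * (100.0 - start_c)

needed = joules_to_reach_boiling(0.5, 20.0)   # 0.5 * 4186 * 80 = 167,440 J
delivered = 1600.0 * 5 * 60                   # 1600 W for 5 minutes = 480,000 J
print(delivered > needed)                     # True: nearly 3x the required energy
```

Which is the nub of the comparison: the kettle couples nearly all of that energy into the water, while the heat gun demonstrably does not.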

September 6, 2013 6:18 am

richardscourtney says: September 5, 2013 at 1:03 am
Friends:
The paper is reported to say
It is worth noting that the observed trend over this period — not significantly different from zero — suggests a temporary ‘hiatus’ in global warming.
NO! That is an unjustifiable assumption tantamount to a lie.
Peer review should have required that it be corrected to say something like:
It is worth noting that the observed trend over this period — not significantly different from zero — indicates a cessation of global warming. It remains to be seen when and if warming will resume or will be replaced by cooling.
Richard
________
Hello Richard,
I agree with your above assessment. Furthermore:
In several recent papers, we are witnessing an undignified scramble by the warmist establishment to spin the story one more time. It is just more warmist nonsense, espoused by people who have ABSOLUTELY NO PREDICTIVE TRACK RECORD. I suggest that one’s predictive track record is perhaps the only objective measure of scientific competence.
In 2002, we wrote with confidence:
“Climate science does not support the theory of catastrophic human-made global warming – the alleged warming crisis does not exist.”
http://www.apegga.org/Members/Publications/peggs/WEB11_02/kyoto_pt.htm
The above statement was based on strong evidence available at that time that the Sensitivity of Earth Temperature to increased atmospheric CO2 was not significant and was vastly over-estimated by the climate models cited by the IPCC.
The term “temporary warming hiatus” implies that warming will resume. I submit that it will not, and Earth is entering a natural cooling period. I wrote this global cooling prediction in an article, also published in 2002.
The above global cooling prediction was based on strong evidence available at that time that global warming and cooling cycles were primarily natural in origin and Earth was nearing the end of a natural warming cycle and about to enter a natural cooling cycle. These natural cycles are somewhat irregular and the timing of our prediction (cooling to start by 2020-2030) may be a bit late – global cooling may have already begun, although we will likely only know this with certainty in hindsight.
We do know that SC24 is a dud and similar periods of solar inactivity (e.g. the Dalton and Maunder Minimums) have coincided with severe global cooling and major population declines due to famine in Northern countries. I suggest that IF this imminent global cooling is severe, and we are woefully unprepared due to global warming nonsense, the price society pays for our lack of preparedness will be much more grave.
Warmist nonsense has resulted in the squandering of over a trillion dollars of scarce global resources, mostly on inefficient and ineffective “green energy” schemes.
In 2002 we wrote with confidence:
“The ultimate agenda of pro-Kyoto advocates is to eliminate fossil fuels, but this would result in a catastrophic shortfall in global energy supply – the wasteful, inefficient energy solutions proposed by Kyoto advocates simply cannot replace fossil fuels.”
The policy makers of Europe, Ontario and California could have benefitted from this advice – instead, they severely damaged their economies by foolishly adopting worthless green energy schemes and are now having to reverse these decisions, due to soaring energy prices.
Another point – the satellite temperature record suggests a probable warming bias in the surface temperature record of about 0.2 C since 1979, or about 0.07 C per decade, so one should regard the alleged surface temperature warming trends as being of questionable accuracy.
The global warming camp has much to answer for. They have promoted false alarm and have profited from it. They have squandered significant global resources. They have caused us to focus our attentions on a non-crisis – global warming – and thus have caused us to ignore a much greater potential threat, probable imminent global cooling. They have acted like imbecilic thugs, and have caused several eminent scientists to be dismissed from their academic positions.
At a minimum, I suggest that these thuggish university dismissals should be reversed without delay, with suitable apologies.
Foolish “green energy” schemes and the lavish subsidies that make them attractive should cease immediately.
I also suggest that serious study of probable global cooling and its possible mitigation, if it is severe, should be commenced without delay.
Regards, Allan

richardscourtney
September 6, 2013 6:35 am

Allan MacRae:
Thankyou for your post addressed to me at September 6, 2013 at 6:18 am
http://wattsupwiththat.com/2013/09/05/statistical-proof-of-the-pause-overestimated-global-warming-over-the-past-20-years/#comment-1409424
It says

I suggest that one’s predictive track record is perhaps the only objective measure of scientific competence.

Hmmmm.
Well, if you are talking about the taking of empirical measurements then, no, I don’t see how that can be true.
But if you are talking about theoretical modeling then your assertion must be true. And it goes to the nub of this thread.
Richard

September 6, 2013 6:44 am

Global warming climatology is notable for the absence of the statistical populations underlying its models. The absence of these populations wounds this discipline. Casualties from this wound include probability theory, information theory, mathematical statistics and logic.
In their paper, Fyfe et al. show how conclusions may be drawn from a global temperature time series despite this seemingly insurmountable barrier. One makes a bunch of assumptions and buries these assumptions!

richardscourtney
September 6, 2013 6:57 am

Friends:
Please resist temptation to answer the post from Terry Oldberg at September 6, 2013 at 6:44 am.
You know he is wrong.
I know he is wrong.
And he knows he is wrong because on previous IPCC threads he has been unable to define “the statistical populations” he claims are “absent”.
Any attempt to engage with him is like entering Alice’s rabbit hole. And it completely destroys a thread.
If anybody doubts the need for my request I suggest that – as a recent example – they peruse the recent thread at
http://wattsupwiththat.com/2013/08/31/wuwt-hot-sheet-for-saturday-august-31st-2013/
Richard

September 6, 2013 7:00 am

John F. Hultquist says: September 5, 2013 at 2:03 pm
“No one — however smart, however well-educated, however experienced — is the suppository of all wisdom.”
– Tony Abbott
Disagree: I respectfully submit that the IPCC is the suppository of all wisdom.

September 6, 2013 7:10 am

A pondering about the pause:
During La Nina/La Nada conditions, when there are fewer clouds overhead but more wind, the ocean surface is roughened up, which leads to a less warm surface due to top layers mixing as well as warmer top water being shoved away. That’s not what interests me about the pause. What is of interest is the amount of warming that happens below the surface due to SWIR penetration under these conditions. If the skies are not under “clear sky” conditions during these recharge periods, we should see less warming of the water below the surface. Eventually, the needle goes to the positive side of the ENSO dial (i.e. El Nado or El Modoki), and the surface calms down to the point that this now less-warmed water again sits on top. If these conditions continue, we should see a stable pause in subsequent land temperatures. However, if the swing back to La Nina/La Nada gets even less defined with more clouds, and the oceans get less and less recharged due to equatorial cloud cover, we could even see a stepping-down process in subsequent land temperatures.
So then the question is, what data do we have on subsurface recharging warming during non El Nino conditions over this time period?

September 6, 2013 7:24 am

Maybe we need more definition and descriptive names for these ENSO periods, such as: La Nina, La Modiki, La Nada, Neutral, El Nado, El Modiki, and El Nino. It seems to me that the goodies in the pause could be found in the waters of La Modiki, La Nada and Neutral.

BBould
September 6, 2013 7:30 am

Richardscourtney: Thanks for taking the time to explain the paper I brought up; it’s truly appreciated. This is the reason I started looking into it, a post (not addressed to me) from realclimate: “[Read up on Quine and the issue of auxiliary hypotheses. In practice, all theories are ‘wrong’ (as they are imperfect models of reality), and all tests involve multiple hypotheses. Judging which one (or more) are falsified by a mismatch is non-trivial. I have no problem agreeing that mismatches should be addressed, but wait for the post. – gavin]”
Hopefully this will help explain my interest.

BBould
September 6, 2013 7:33 am

Pamela Gray: Thanks, you made me think of another question. Does anyone study how much energy the ocean loses at night? I know my swimming pool warms and cools much more slowly than the surrounding air, but it’s always much cooler at dawn.

rgbatduke
September 6, 2013 7:45 am

This might be good work but it is not part-and-parcel of Quine’s work. Seems to me that it just takes Quine’s logic and adds to it. To those who pursued probabilities for various hypotheses, he remarked that he had no interest in colored marbles in an urn.
Sacrilege! Polya would be turning in his grave! Taleb, on the other hand, might not (partly because “he’s not dead yet!” :-). As his character “Joe the cab driver” (IIRC) in “The Black Swan” might say of an analysis of the data above, “It’s a mug’s game”. If you flip a two-sided coin 20 times and get heads every time, only an idiot would apply naive probability theory with the assumption of an unbiased coin and claim that the probability of the next flip being heads is 0.5. A Bayesian, however firmly they might have believed the coin was unbiased initially, would systematically adjust the prior estimate of 0.5 until the maximum likelihood of the outcome coincides with the data, and at this point would be deeply suspicious that the coin actually has two heads, that the coin is a magical coin, that the coin is so amazingly weighted and carefully flipped that it has p_head -> 0.999999, or that it’s all a horrible dream and there is no real coin. A mug’s game.
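The coin-flip updating described above is the textbook Beta-Binomial calculation. A minimal sketch (the uniform Beta(1, 1) prior is an assumption chosen for illustration):

```python
def beta_posterior_mean(heads, tails, a=1.0, b=1.0):
    """Posterior mean of the head probability under a Beta(a, b) prior,
    after observing the given counts (conjugate Beta-Binomial update)."""
    return (a + heads) / (a + b + heads + tails)

print(beta_posterior_mean(0, 0))    # prior mean: 0.5, the 'unbiased coin' belief
print(beta_posterior_mean(20, 0))   # after 20 straight heads: 21/22, about 0.95
```

Twenty heads in a row drag even a firm ‘fair coin’ prior most of the way to ‘two-headed coin’.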
At this point, I’ll simply amplify what I said above in two ways. First, one can, with an enormous amount of effort, attempt an actual statistical analysis of an (essentially meaningless) composite hypothesis, but nothing of the sort has been attempted for CMIP5, in part because doing so would be spectacularly difficult: right up there with the difficulty of the problem that the GCMs are attempting to solve (which is already one of the most difficult computational problems humans have attempted). The difficulty arises because the theories are highly multivariate and have an abundance of assumptions. Every assumption in every model is subject to Bayes’ theorem as a Bayesian prior! That is, when one assigns a specific functional form to the radiation profile of the atmosphere at various CO_2 levels, one is making such an assumption: since we cannot precisely compute this profile and are forced to use one of several approximations (see e.g. Petty’s book), we have to statistically weight the probability that those approximations are correct and downgrade the certainty of our results (our eventual error estimate) accordingly.
To put it more formally, the assertion is that
If all the assumptions made in constructing the computation are correct, then the model predicts thus and such. However, the assumptions are not certain, and the best estimate of the probability that the model prediction is correct is strictly decreased according to their uncertainty. This can be summarized in the aggregate assumption that “the internal, highly nonlinear, dynamically chaotic differential equations solved by the computer code are correct and sufficiently insensitive to the range of possible error in the Bayesian priors that the output is meaningful in all dimensions” (because the code doesn’t just predict temperature; it predicts lots of other things about the future climate as well). Analyzing this precisely for a single theory is enormously difficult, which is why most of the models resort to Monte Carlo to attempt to measure it rather than predict it theoretically. But there are further assumptions built into e.g. the ranges explored by the Monte Carlo itself (more Bayesian priors), into the selection of input variables (hard to “Monte Carlo” the omission of a variable that is in fact important), and into the granularity and geometry selected (again, difficult to Monte Carlo, as the codes may not be written to be length-scale renormalizable in an unbiased way), and one cannot escape the fundamental assumption “this code will correctly predict the multidimensional future in a consistent way and within a useful precision” no matter what you do.
That is the basis for the ultimate null hypothesis, per model. In the end, the model produces an ensemble of results that supposedly span the range of model uncertainty given the priors, stated and unstated, that went into its construction. If reality fails to lie well within that range in any significant dimension/projection, the model should be considered suspect (weak failure) or should overtly fail a hypothesis test, depending on how badly reality fails to lie within that range.
Imagine attempting to extend this process collectively for all the models in CMIP5! How can one rigorously assess whether approximation A or approximation B for, e.g., the radiative properties of CO_2 in the atmosphere is most probably correct, when the answer could be that both are adequate as the basis of a correct theory if everything else is done correctly, or that neither will work because a correct (predictive) theory requires an exact treatment of CO_2’s radiative properties? And then, of course, there are the rest of the greenhouse gases, the non-greenhouse atmosphere, water vapor, clouds, the ocean, aerosols, soot and other particulates, the extent and effect of the biosphere — it is difficult even to count the underlying assumptions that are built into each model, and not all of them are in all of the models.
So how can one frame the null hypothesis for CMIP5? “Somewhere in the collection of contributing GCMs is a model that is a reliable predictor of the actual climate”, so that we can then assess a probability of getting the current result if that is true? No, that won’t work. The implicit null hypothesis in the figures above, and the one used by the IPCC in the assessment reports, is that “The mean of the collection of models in CMIP5 is a reliable predictor of the actual climate, and the standard deviation of the distribution of results is a measure of the probability of the actual climate arising from the common initial condition given this correct computation of its time evolution”. Which is nonsense: unsupported by statistical theory, indeed (as I argue above) unsupportable by statistical theory. At the end of the day, all that the figure above really demonstrates is that the GCMs are very likely not independent and unbiased in their underlying assumptions, because they do produce a creditable Gaussian (with the wrong mean; but seriously, this is enormously unlikely in a single projective variable obtained from a collection of supposedly independent models that otherwise significantly differ in their predictions of e.g. rainfall).
IMO the one conclusion that is immediately justified by the distribution of CMIP5 results is that the GCMs are enormously incestuous, sharing whole blocks of common assumptions, and that at least one of those common assumptions is badly incorrect and completely unexplored by the Monte Carlo perturbations of initial conditions in the individual models. If one performed a similar study of the projective distribution of results in other dimensions, one might even gain some insight into just which shared assumptions are most suspect, but that would require a systematic deconstruction of all of the models and code and some sort of gross partitioning of the shared and different features — an awesomely complex task.
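The shared-assumption argument can be demonstrated with a toy ensemble (no relation to the actual CMIP5 codes; every number here is invented): each ‘model’ reports the truth plus one common bias plus its own independent noise, and averaging many of them shrinks the noise but leaves the bias untouched:

```python
import random

def ensemble_mean_error(n_models, shared_bias, noise_sd, truth=0.0, seed=42):
    """Error of the ensemble mean when every toy model shares one biased
    assumption but has its own independent noise."""
    rng = random.Random(seed)
    outputs = [truth + shared_bias + rng.gauss(0.0, noise_sd) for _ in range(n_models)]
    return sum(outputs) / n_models - truth

# With no shared bias, 117 'simulations' average out to nearly the truth...
print(ensemble_mean_error(117, shared_bias=0.0, noise_sd=0.05))   # near zero
# ...but a common biased assumption survives the averaging intact.
print(ensemble_mean_error(117, shared_bias=0.15, noise_sd=0.05))  # near the 0.15 bias
```

The narrow spread of the averaged ensemble says nothing about the shared bias, which is exactly the point about incestuous models.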
The second amplification is a simple observation that I’ll make on the process of highly multivariate predictive modeling itself (wherein I’m moderately expert). There are two basic kinds of multivariate predictive models. An a priori model assumes that the relevant theory is correctly known and attempts to actually compute the result using that theory, using tools like Monte Carlo to assess the uncertainty in the computed outcome where one can (as noted above, one cannot assess the uncertainty linked to some of the prior assumptions implicit in the implementation by Monte Carlo or any other objective way as there is no “assumption space” to sample and nonlinear chaotic dynamics can amplify even small errors into huge end stage differences (see e.g. the “stiffness” of a system of coupled ODEs and chaos theory, although the amplification can easily be significant even for well-behaved models).
In order to set up the Monte Carlo, one has to assign values and uncertainty ranges to the many variable parameters that the model relies upon. This is typically done by training the model — using it to compute a known sequence of outcomes from a known initialization and tweaking things until there is a good correspondence between the actual data and the model-produced data. The tweaking process typically at least “can” provide a fair amount of information about the sensitivity of the model results to these assumptions and hence give one a reasonable knowledge of the expected range of errors predicting the future. One then applies the model to a trial set of data that (if one is wise) one has reserved from the training data to see if the model continues to work. This second stage “validates” the model within the prior assumption “the training and trial set are independent and span the set of important features in the underlying a priori assumed known dynamics”. Finally, of course, the validation process in science never ends.
It doesn’t matter a whit if your model perfectly captures the training data and nails the trial data square on the money if, the first time you compare it to new trial data from the real world, it immediately goes to hell well outside the probable range of outcomes you expected. If you are prospecting for oil with a predictive model, it doesn’t matter if your code can predict previously drilled wells 95% of the time if you only get one oil strike in 100 drilling attempts the first time you use it to direct oil exploration efforts. You are fired and go broke no matter how fervently you argue that your code is good and you are just unlucky. Ditto predicting the stock market, or predicting pretty much anything of value. Science is even more merciless than commerce in this regard, everywhere but climate science! Classical physics was “validated” by experiment after experiment in pretty good agreement for a century after its discovery, but then an entire class of experiments could not be explained by classical a priori models. By “could not be explained”, I mean specifically that even if one built a hundred classical models, each slightly different, to e.g. try to predict electronic spectra or the outcome of an electron diffraction experiment, the mean of all of those distinct a priori models would never converge to, or in the end have meaningful statistical overlap with, the actual accumulating experimental data. They in fact did not, and a lot of effort was put into trying!
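The failure mode described above, perfect on training data but lost on new data, can be sketched with a deliberately over-parameterized polynomial fit (a generic illustration, not a claim about any particular GCM):

```python
import numpy as np

def fit_and_score(degree, x_train, y_train, x_test, y_test):
    """Fit a polynomial of the given degree on the training data and
    return (train RMS error, test RMS error)."""
    coeffs = np.polyfit(x_train, y_train, degree)
    rms = lambda x, y: float(np.sqrt(np.mean((np.polyval(coeffs, x) - y) ** 2)))
    return rms(x_train, y_train), rms(x_test, y_test)

rng = np.random.default_rng(0)
truth = lambda x: np.sin(2 * np.pi * x)
x_train = np.linspace(0.0, 1.0, 10)
x_test = np.linspace(0.05, 0.95, 10)          # 'new trial data' from the same world
y_train = truth(x_train) + rng.normal(0.0, 0.1, x_train.size)
y_test = truth(x_test) + rng.normal(0.0, 0.1, x_test.size)

# Degree 9 through 10 points: the model 'nails' its training data exactly.
train_err, test_err = fit_and_score(9, x_train, y_train, x_test, y_test)
print(train_err, test_err)  # interpolates the training points; errs on the new ones
```

The in-sample fit is essentially perfect by construction; only the out-of-sample comparison reveals that the model has no skill.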
The problem, of course, was that a common shared assumption in the “independent” models was incorrect. Further, it was an assumption that one could never “sample” with mere Monte Carlo or any sort of presumed spanning of a space of relevant assumptions, because it was one of the a priori assumed known aspects of the computation that was incorrect, even though (at the time) it was supported by an enormous body of evidence involving objects as large as molecules or small clumps of molecules on up. We had to throw classical physics itself under the bus in order to make progress.
You’d think that we would learn from this sort of paradigm-shifting historical result not to repeat this sort of error, and in the general physics community I think the lesson has mostly been learned, as this sort of process occurs all the time in the delicate interplay between experimental physics and theoretical physics. In some sense we expect new experiments to overturn both great and small aspects of our theoretical understanding, which is why people continue to study e.g. neutrinos and look for Higgs bosons, because even the Standard Model in physics is not now and will never be settled science, at best it will continue to be consistent with observations made so far. A single (confirmed) transluminal neutrino or new heavy particle or darkon can ruin your whole theoretical day, and then it is back to the drawing board to try, try again.
In the meantime, this example shows the incredible stupidity of claiming that the centroid of the projection of a single variable from a collection of distinct a priori models with numerous shared assumptions, many of which cannot be discretely tested or simulated, has any sort of statistically relevant connection to reality. Each model, one at a time, is subject to the usual process of falsification that all good science is based on. Collectively they do not have more weight; they have less. By stupidly including obviously failed models in the average, you without question pull the average away from the true expected behavior.
rgb

Gunga Din
September 6, 2013 7:48 am

Richard Barraclough says:
September 6, 2013 at 1:15 am
Good to see a little etymological sparring in amongst the science.
Now, if only we could all distinguish between “its” and “it’s”……..

=======================================================================
I’m getting better at it. Someone here (maybe you?) gave a tip a while back to keep them straight.
If it’s the possessive, treat it like “his” or “hers”, no apostrophe.

richardscourtney
September 6, 2013 7:50 am

BBould:
Thankyou for the acknowledgement in your post addressed to me at September 6, 2013 at 7:30 am.
I am grateful that you brought the paper to my attention because I was not aware of it despite its having been published so long ago (i.e. in 2001). And that lack of awareness is not surprising considering the serious flaws the paper contains and the limited – to the degree of being almost useless – conclusion it reaches. However, if RC and the like are intending to use that paper as an excuse for model failure then they really, really must be desperate!
In the light of why you say you raised the paper I now consider the trouble I had obtaining the paper was well worth it. If anybody attempts to excuse model failure by resurrecting that paper from obscurity then I can now refute the laughable attempt.
Thankyou.
Richard

richardscourtney
September 6, 2013 8:03 am

rgbatduke:
Thankyou for your brilliant post at September 6, 2013 at 7:45 am
http://wattsupwiththat.com/2013/09/05/statistical-proof-of-the-pause-overestimated-global-warming-over-the-past-20-years/#comment-1409484
I commend everyone interested in the subject of this thread to read, study and inwardly digest it.
And I ask you to please amend it into a form for submission to Anth0ny for him to have it as a WUWT article.
Richard

September 6, 2013 8:14 am

rgbatduke (Sept. 6, 2013):
To your list of shortcomings in the methodology of global warming research, you could have added that the general circulation models are insusceptible to validation because the events in the underlying statistical populations do not exist.

September 6, 2013 8:25 am

In model research, the question is: does the model adequately simulate the workings of the underlying statistical population? The null hypothesis would therefore be: there is no statistical difference between the model results and observations. Logically, it is thus susceptible to validation.

Reply to  Pamela Gray
September 6, 2013 11:27 am

Pamela Gray:
Your understanding of the meaning of “validation” is identical to mine. The populations underlying the general circulation models do not exist; thus, these models are insusceptible to being validated.
In the paper entitled “Spinning the Climate,” the long-time IPCC expert reviewer Vincent Gray reports that he once complained to IPCC management that the models were insusceptible to being validated, yet the IPCC assessment reports were claiming they were validated. In tacit admission of Vincent’s claim, IPCC management established the policy of changing the word “validated” to the similar-sounding word “evaluated.” “Evaluation” is a process that can be conducted in lieu of the non-existent statistical population. Confused by the similarity of the sounds made by the two words, many people continued to assume the models were validated.
To dupe people into thinking that similar-sounding words with differing meanings are synonyms is an oft-used technique on the part of the IPCC and affiliated climatologists. When words with differing meanings are treated as synonyms, each word in the word pair becomes polysemic (has more than one meaning). When either word in such a word pair is used in making an argument, and this word changes meaning in the midst of the argument, the argument is an example of an “equivocation.” By logical rule, one cannot draw a proper conclusion from an equivocation. To draw an IMPROPER conclusion is the equivocation fallacy. IPCC-affiliated climatologists use the equivocation fallacy extensively in leading dupes to false or unproved conclusions (http://wmbriggs.com/blog/?p=7923).

September 6, 2013 8:30 am

Solid surfaces lose heat more rapidly; water loses heat more slowly. However, because Earth is more of a water planet than a land planet, it is an interesting question. My hunch is that heat belched up from the oceans becomes our land temperatures, which at night send that heat up and outa here! Especially under clear-sky night conditions (strong radiative cooling).

September 6, 2013 9:23 am

richardscourtney says: September 6, 2013 at 6:35 am
Hello Richard,
To be clear, we are talking about one’s predictive track record based on modeling.
Specifically, the GCMs cited by the IPCC greatly over-estimate the sensitivity of Earth’s climate to atmospheric CO2, and under-estimate the role of natural climate variability. This was obvious a decade ago from the inability of these models to hindcast the global cooling period that occurred from ~1945 to 1975, until they fabricated false aerosol data to force their models to conform. As a result of these fatal flaws, these “IPCC GCMs” have grossly over-predicted Earth’s temperature and have demonstrated NO PREDICTIVE SKILL – this is their dismal “predictive track record”.
The IPCC wholeheartedly endorsed this global warming alarmism and so did much of the climate science establishment. Anyone who disagreed was ridiculed as a “denier”, and due to the extremist position of the global warming camp, some leading academics were dismissed from their universities, some received death threats, and some suffered actual violence. The imbecilic, dishonest and thuggish behaviour of the global warming camp was further revealed in the Climategate emails.
Our conceptual model is based on very different input assumptions from the IPCC GCMs. We assumed, based on substantial evidence that was available a decade ago, that climate sensitivity to increased atmospheric CO2 is insignificant, and that natural variability was the primary characteristic of Earth’s climate. We further assumed, based on credible evidence, that solar variability was a significant driver of natural climate variability. Therefore, we wrote in 2002 that there was no global warming crisis, and the lack of warming for the past 10-15 years demonstrates this conclusion to be plausible.
We further wrote in 2002 that global cooling would start by 2020-2030, and it remains to be seen whether this will prove correct or not – but warming has ceased for a significant time, and I suggest that global temperatures are at a plateau and are about to decline. We did not predict the severity of this global cooling trend, but if the solar-driver hypothesis holds, then cooling could be severe. This we do not know, but we do know from history that global cooling is a much greater threat to humanity than (alleged) global warming.
Regards, Allan

richardscourtney
September 6, 2013 9:39 am

Allan MacRae:
re your post at September 6, 2013 at 9:23 am.
Allan, you begin your post by saying to me, “To be clear …”.
To be clear, yes, I agree.
Richard

Gunga Din
September 6, 2013 10:48 am

Terry Oldberg says:
September 6, 2013 at 8:14 am
rgbatduke (Sept. 6, 2013):
To your list of shortcomings in the methodology of global warming research, you could have added that the general circulation models are insusceptible to validation because the events in the underlying statistical populations do not exist.

====================================================================
Mr. layman here. To me it sounds like you just said, “The models can’t be wrong because the models say they are right.”
If that is not what you meant would you please explain in layman’s terms?
(Feel free to insult me if you wish as long as you explain.)

richardscourtney
September 6, 2013 11:01 am

Gunga Din:
re your post at September 6, 2013 at 10:48 am.
Can you see that disc of light behind you?
You have entered Alice’s rabbit hole and that disc is where you entered.
It is light from the outside. Enjoy it while you can. You may never see it again.
Richard

Aphan
Reply to  richardscourtney
September 6, 2013 12:41 pm

richardscourtney:
“Can you see that disc of light behind you?
You have entered Alice’s rabbit hole and that disc is where you entered.
It is light from the outside. Enjoy it while you can. You may never see it again.”
You’re killing me here. Smart AND clever AND humble? I feel a science crush coming on….

Gunga Din
September 6, 2013 11:16 am

http://wattsupwiththat.com/2013/09/05/statistical-proof-of-the-pause-overestimated-global-warming-over-the-past-20-years/#comment-1409624
==============================================================
As long as it’s brighter than a “nitlamp” I think I can find my way out. 😎

September 6, 2013 11:58 am

Gunga Din (Sept 6 at 10:48):
Thank you for giving me the opportunity to clarify. I did not mean to say “The models can’t be wrong because the models say they are right.” I did mean to say that the models are insusceptible to being validated. The significance is that the method by which the models were created was not the scientific method of investigation. A consequence is that many IPCC conclusions, including the conclusion that global warming is man-made, must be discarded. The previous sentence should not be taken to mean that we know the warming is not man-made.
The widespread view that the models were created by the scientific method is a product of successful use of the deceptive argument known as the “equivocation fallacy” on the part of the IPCC and affiliated climatologists. An equivocation fallacy is a conclusion that appears to be true but that is false or unproved. For details, please see the peer-reviewed article at http://wmbriggs.com/blog/?p=7923 .

Gunga Din
September 6, 2013 12:33 pm

Terry Oldberg
http://wattsupwiththat.com/2013/09/05/statistical-proof-of-the-pause-overestimated-global-warming-over-the-past-20-years/#comment-1409676
==================================================================
Thank you.
“Equivocation fallacy” sounds very similar to “bait and switch”.
(Guess I didn’t need the nitlamp after all.)

September 6, 2013 1:01 pm

kadaka (KD Knoebel) says:
September 5, 2013 at 2:32 am
If you have a one meter squared body of water, how much would a 1600W (1.6 kilowatt) hair dryer heat the body of water over 60 seconds, 60 minutes, from the surface?

September 6, 2013 1:32 pm

Rich – The equation is physics-based. The physics is the first law of thermodynamics, conservation of energy. This is discussed more completely starting on page 12, Anomaly Calculation (an engineering analysis), in an early paper made public 4/10/10 at http://climaterealists.com/attachments/database/2010/corroborationofnaturalclimatechange.pdf . This shows an early version of the equation which has since been refined.
The equation contains only one external forcing, the time-integral of sunspot numbers which serves as an excellent proxy for average global temperature. The mechanism has been attributed to influence of change to low altitude clouds, average cloud altitude, cloud area, and even location of cloud ‘bands’ as modulated by the jet stream. I expect it will eventually be found to be some combination of these. The high sensitivity of average global temperature to tiny changes in clouds is calculated at http://lowaltitudeclouds.blogspot.com . It is not necessary to know the mechanism to calculate the proxy factor.
Determining the value of a single proxy factor is not ‘curve fitting’.
Graphs that show (qualitatively, because proxy factors are not applied) the correlation between the time-integral of sunspot numbers and average global temperature can be seen at http://hockeyschtick.blogspot.com/2010/01/blog-post_23.html or at http://climaterealists.com/attachments/ftp/Verification%20Dan%20P.pdf (this shows an earlier version of the equation; HadCRUT4 data was not used).
The only hypothesis that was made is that average global temperature is proportional to the time-integral of the sunspot numbers. The rest is arithmetic. The coefficient of determination, R2 = 0.9, demonstrates that the hypothesis was correct.
The past predictive skill of the equation is easily demonstrated. Simply determine the coefficients at any time in the past using data up to that time and then use the equation, thus calibrated, to calculate the present temperature. For example, the predicted anomaly trend value for 2012 (no CO2 change effect) using calibration through 2005 (actual sunspot numbers through 2012) is 0.3888 K. When calibrated using measurements through 2012 the calculated value is 0.3967 K; a difference of only 0.008 K.
The future predictive skill, after 2012 to 2020, depends on the accuracy of predicting the sunspot number trend for the remainder of solar cycle 24 and the assumption that the net effective ocean oscillation will continue approximately as it has since before 1900.
This is an equation that calculates average global temperature. It is not a model, especially not a climate model…or a weather model. An early version of it, made public in 2010, predicted a downtrend from about 2005.
Part of the problem in trying to predict measured temperatures is that the measurements have a random uncertainty with standard deviation of approximately ±0.1 K so only trends of measurements are meaningful for comparison with calculations.
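The calibrate-then-predict skill check described above can be sketched generically. This is a minimal illustration on synthetic data with a linear stand-in model; the actual equation, its coefficients, and the real data are in the linked paper and are not reproduced here:

```python
import numpy as np

def fit_and_predict(years, anomalies, cutoff_year, predict_year):
    """Calibrate a simple linear trend using only data up to cutoff_year,
    then extrapolate the fitted trend to predict_year."""
    mask = years <= cutoff_year
    slope, intercept = np.polyfit(years[mask], anomalies[mask], 1)
    return slope * predict_year + intercept

# Synthetic anomaly series: a gentle trend plus measurement noise
rng = np.random.default_rng(0)
years = np.arange(1895, 2013)
anomalies = 0.005 * (years - 1895) + rng.normal(0.0, 0.05, years.size)

# Predict 2012 using calibration through 2005 vs. through 2012
pred_2005_cal = fit_and_predict(years, anomalies, cutoff_year=2005, predict_year=2012)
pred_full_cal = fit_and_predict(years, anomalies, cutoff_year=2012, predict_year=2012)
print(abs(pred_2005_cal - pred_full_cal))  # small when the fit is stable
```

A small difference between the two calibrations is necessary but not sufficient evidence of predictive skill; it only shows the fit is not dominated by the last few years of data.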

rgbatduke
September 6, 2013 1:50 pm

To your list of shortcomings in the methodology of global warming research, you could have added that the general circulation models are insusceptible to validation because the events in the underlying statistical populations do not exist.
I could have waxed poetic considerably longer, for example pointing out the recently published comparison of four GCMs to a toy problem that is precisely specified and known and that should have a unique answer. All four got different answers. The probability that any of those answers/models is correct is correspondingly strictly less than 25% and falling fast even if we do NOT know what the correct answer is (the best one could say is that one of the four models got it right and the others got it wrong, but of course all four could have gotten it wrong as well, hence strictly less than). This example alone is almost sufficient to demonstrate a lack of “convergence” in any sort of “GC model space”, although 4 is too small a number to be convincing.
I could also have ranted a bit about the stupidity of training and validating hypothesized global warming models using data obtained from a single segment of climate measurements during which the climate was monotonically warming, which may be what you are trying to say here (sometimes I have difficulty understanding you, but I think that sometimes I actually agree with what you say :-). When training e.g. Neural Network binary classification models, it is often recommended that one use a training set with a balanced number of hits and misses, yesses and noes, because if you have an actual population that is (say) 90% noes and train with it, the network quickly learns that it can achieve 90% accuracy (which is none too shabby, believe me) by always answering no!
Of course this makes the model useless for discriminating the actual yesses and noes in the population outside of the training/trial set, but hey, the model is damn accurate! And of course the solution is to build a good discriminator first and correct it with Bayes theorem afterwards, or use the net to create an ordinal list of probable yes-hood and empirically pursue it to an optimum payoff.
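The imbalanced-training-set point is easy to demonstrate numerically: on a population that is 90% noes, a degenerate classifier that always answers “no” scores about 90% accuracy while discriminating nothing. A toy illustration, not any particular network:

```python
import random

random.seed(1)

# Population: roughly 90% "no" (0), 10% "yes" (1)
labels = [1 if random.random() < 0.1 else 0 for _ in range(10_000)]

# The degenerate "model": always answer no
predictions = [0] * len(labels)

accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
true_positives = sum(1 for p, y in zip(predictions, labels) if p == 1 and y == 1)

print(f"accuracy = {accuracy:.1%}")        # roughly 90%
print(f"yesses found = {true_positives}")  # 0: useless as a discriminator
```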
GCMs appear to have nearly universally made this error. Hindcasting the prior ups and downs in the climate record is not in them, to the extent that we even have accurate data to use for a hindcast validation. By making CO_2 the one significant control knob at the expense of natural variations that are clearly visible in the climate record and that are not predicted by the GCMs as far as I know, certainly not over multicentury time scales, all the models have to do is predict monotonic warming and they will capture all of the training/trial data, and there are LOTS of ways to write, tune, or initialize a model to have monotonic behavior without even trying. The other symptoms of failure (getting storms, floods and drought, SLR, ice melt, and many other things wrong) were ignored, or perhaps they expected that future tweaks would fix this while retaining the monotonic behavior that the creators of the models all expected to find and doubtless built into the models in many ways. Even variables that might have been important (for example, solar state) were nearly constant across the training/trial interval and hence held to be irrelevant and rejected from the models. Now that many of those omitted variables (ENSO, the PDO, solar state) are radically changing, now that the physical science basis for the inclusion and functional behavior of other variables like clouds and soot is being challenged, is it really surprising that the monotonic models, all trained and validated on the same monotonic interval and insensitive to all of these possibly important drivers, continue to show monotonic behavior while the real climate does not?
If the training set for a tunable model does not span the space of possible behaviors of the system being modeled, of course you’re going to be at serious risk of ending up with egg on your face, and with sufficiently nonlinear systems you will never have sufficient data to use as a training set. Nassim Nicholas Taleb’s book The Black Swan is a veritable polemic against the stupidity of believing otherwise and betting your life or fortune on it. Here we are just betting the lives of millions and the fortunes of the entire world on the plausibility that the GCMs built in a strictly bull market can survive the advent of the bear, or are bear-proof, or prove that bears have irrevocably evolved into bulls and will never be seen again. This bear is extinct. It is an ex-bear.
Until, of course, it sits up and bites you in the proverbial ass.
So yeah, Terry, I actually agree. One of many troubling aspects of GCMs is that they have assumptions built into them supported by NO body of data or observation or even any particularly believable theory. They have assumptions that contradict or are neutral to the existing observational data, such as “the PDO can safely be ignored”, or “the 1997-1998 warming that is almost all of the warming observed over the training interval was all due to an improbable ENSO event, not CO_2 per se”, or “solar variability is nearly irrelevant to the climate”. And every one of them is an implicit Bayesian prior, and to the extent that the assumptions are not certain, they weaken the probable reliability of the predictions generated by the models that incorporate them, even by omission.
rgb

rgbatduke
September 6, 2013 2:00 pm

If you have a one meter squared body of water, how much would a 1600W (1.6 kilowatt) hair dryer heat the body of water over 60 seconds, 60 minutes, from the surface?
Well, let’s see, that’s one metric ton (1000 kg) of water. Its specific heat is 4 joules per gram degree centigrade. 1000 kg is a million grams. To raise it one degree requires 4 million joules. If you dumped ALL 1600 W into the water, prevented the water from cooling or heating (adiabatically isolated it), it would take 42 minutes to raise it by a degree. If you tried heating it with warm blowing air from a hair drier on the TOP SURFACE, however, you would probably NEVER warm the body of water by a degree. I say “probably” because the wind from the hair drier (plus its heat) would encourage surface evaporation. Depending on the strength of the wind and how it is applied, it might COOL the water due to latent heat of evaporation, or the heat provided by the hair drier might be sufficient to replace it by a bit. However, even in the latter case, since water will cheerfully stratify, all you’d end up doing is warming the top layer of water until latent heat DID balance the hair drier’s contribution to the water (probably at most a few degrees) and it would then take a VERY long time for the heat to propagate to the bottom of the cubic meter, assuming that that bottom is adiabatically insulated. Days. Maybe longer. And it would as noted probably not heat without bound — it would just shift from one equilibrium temperature at the surface to another slightly warmer one.
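The adiabatic arithmetic above is easy to check (using the rounded 4 J per gram per degree specific heat from the comment; the true value is closer to 4.18):

```python
# Time for a 1600 W source to warm 1 cubic metre of water by 1 degree C,
# assuming every joule goes into the water (adiabatic, well mixed)
mass_grams = 1_000_000        # one metric ton of water
specific_heat = 4.0           # J per gram per degree C (rounded)
power_watts = 1600.0          # the hair dryer

joules_needed = mass_grams * specific_heat * 1.0   # for a 1 degree rise
minutes = joules_needed / power_watts / 60.0
print(round(minutes))  # 42, matching the estimate above
```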
rgb

rgbatduke
September 6, 2013 2:09 pm

Your understanding of the meaning of “validation” is identical to mine. The populations underlying the general circulation models do not exist; thus, these models are insusceptible to being validated.
No scientific model can be verified in the strong sense. All scientific models can be falsified in the strong sense. So what is the point? We could have validated any given GCM in the weak sense by observing that it is “still” predicting global climate reasonably accurately (outside its training/trial interval; inside it, accuracy is not surprising). No interval of observing that this is true is sufficient to verify the model in the strong sense (so that we believe that it can never be falsified, the data proves the model). But plenty of models, including GCMs, could be validated in the weak sense up to the present.
It’s just that they (mostly) aren’t. They are either strongly falsified or left in limbo, not definitively (probably) correct or incorrect, so far.
I do not understand your point about the populations underlying the GCMs, after all. You’ll have to explain in non-rabbit-hole English, with examples, if you want me to understand. Sorry.
rgb

richardscourtney
September 6, 2013 2:35 pm

Dan Pangburn:
re your post addressed to me at September 6, 2013 at 1:32 pm
http://wattsupwiththat.com/2013/09/05/statistical-proof-of-the-pause-overestimated-global-warming-over-the-past-20-years/#comment-1409791
in response to my answer to you at September 6, 2013 at 5:19 am
http://wattsupwiththat.com/2013/09/05/statistical-proof-of-the-pause-overestimated-global-warming-over-the-past-20-years/#comment-1409387
Sorry, but the model IS a curve fitting exercise. I remind that the link says

The word equation is: anomaly = ocean oscillation effect + solar effect – thermal radiation effect + CO2 effect + offset.

The link is
http://climatechange90.blogspot.com/2013/05/natural-climate-change-has-been.html
and the mathematical representation of that equation is there (if I knew how to copy it to here then I would).
If the model were not a curve fitting exercise then there would be accepted definitions of
ocean oscillation effect,
solar effect,
thermal radiation effect,
CO2 effect,
and the offset to be applied.
There are no such agreed definitions. The parameters are each compiled to fit the curve.
As you say, they do not disagree with known physics. But other curve fitting exercises could, too. And they would also ‘wiggle the elephant’s trunk’.
This is not to say the model is wrong. But there is no reason to think it is right. I explain this in my post which you have answered.
Sorry, but that is the way it is.
Richard

Gary Pearse
September 6, 2013 2:48 pm

rgbatduke says:
September 6, 2013 at 2:09 pm
On top of all that, the GCM, largely right or largely wrong, cannot even be trained over any interval because the world’s temp record keepers have an algorithm that keeps changing the record. Probably the GCMs in existence were “trained” over HadCRUT2 or 3 and now we have HadCRUT4, for example. Man, a large team has to come in and re-correct the temperature records going back to the raw data. If it’s dangerous global warming we are trying to quantify, I contend that there is little need for adjustments, even if there is some reasonable case for them, if we are facing runaway warming and seas rising metres. Correcting here or there by 0.2–0.4 (I call it the thumbtack method: stick the tack in at about 1945 and rotate counterclockwise half a degree) won’t even matter if we are going to have unbearable heat rise. We haven’t even got our feet wet, and GISS was calling for the West Side Highway to be under water before now; it’s about 10 feet above the water in Manhattan at the present time. The GCMs are easy – we can just throw them out.

September 6, 2013 5:11 pm

Rich – The constants and variables in the math equation are defined just after the math equation. I’ll connect the terms in the word equation with the math equation and try to expand on them a bit more.
ocean oscillation effect = (A,y) “There is some average surface temperature oscillation that accounts for all of the oceans considered together of which the named oscillations are participants.” Page 1, 3rd paragraph from bottom.
solar effect = B/17 * summation of sunspot numbers from 1895 to the calculation year. This accounts for the energy gained by the planet above or below break-even and expresses it as temperature change.
thermal radiation effect = B/17 * summation of 43.97*(T(i)/286.8)^4 from 1895 to the calculation year. This accounts for the energy radiated by the planet above or below break-even and expresses it as temperature change.
CO2 effect = C/17 * summation of ln(CO2 level in the calculation year / CO2 level in 1895) from 1895 to the calculation year
Offset to be applied = D (see the paper)
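Reading those term definitions literally, the structure of the word equation might be sketched as follows. The coefficients A–D and the sine period/phase of the ocean-oscillation term below are placeholders, not the published values; the refined equation is in the linked paper:

```python
import math

def anomaly(year, sunspots, temps, co2,
            A=0.35, B=0.003, C=0.3, D=-0.4, period=64.0, phase=1937.0):
    """Sketch of: anomaly = ocean oscillation effect + solar effect
    - thermal radiation effect + CO2 effect + offset.
    Inputs are dicts keyed by year from 1895 onward; all coefficients
    here are hypothetical placeholders."""
    years = range(1895, year + 1)
    ocean = A * math.sin(2.0 * math.pi * (year - phase) / period)
    solar = (B / 17.0) * sum(sunspots[y] for y in years)
    radiation = (B / 17.0) * sum(43.97 * (temps[y] / 286.8) ** 4 for y in years)
    co2_effect = (C / 17.0) * sum(math.log(co2[y] / co2[1895]) for y in years)
    return ocean + solar - radiation + co2_effect + D
```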
“…agreed definitions” These are the definitions of the terms in the equation. What matters is the results of the equation. The results match the down-up-down-up (too soon to tell) pattern of reported average global temperatures (which have sd ≈ ±0.1 K). The whole point is that these are not for anyone else to ‘agree’ on. I don’t know of anyone else who has thought to look at the time-integral of sunspots.
I’m not sure what you mean by ‘parameters’. The coefficients are ‘tuned’ (tediously) to maximize R2 but, except for the proxy factor, they can be estimated fairly closely by a look at anomaly measurements.
“But other curve fitting exercises could, too.” I don’t think so. Here is the challenge. Fit the measured anomalies back to 1895 with R2=0.9. Approximate the accepted average global temperature trend back to 1610. Use only one external forcing.
I think the equation is right because it does all those things and also gave a good prediction of 2012 measurements based on data through 2005. I have no interest in an ‘is not’ ‘is too’ argument. The equation and graph with prediction are made public and waiting for future measurements.

September 6, 2013 7:51 pm

http://wattsupwiththat.com/2013/09/05/statistical-proof-of-the-pause-overestimated-global-warming-over-the-past-20-years/#comment-1409063
Dan Pangburn says: September 5, 2013 at 4:30 pm
“A physics-based equation, using only one external forcing, calculates average global temperature anomalies since before 1900 with R2 = 0.9. The equation is at http://climatechange90.blogspot.com/2013/05/natural-climate-change-has-been.html
Everything not explicitly considered must find room in that unexplained 10%.”
_____________
Thank you Dan,
This was interesting. You say “About 41.8% of reported average global temperature change results from natural ocean surface temperature oscillation and 58.2% results from change in the rate that the planet radiates energy to outer space, as calculated using a proxy, which is the time-integral of sunspot numbers.” So Solar is your “one external forcing”.
You used Hadcrut4 Surface Temperature record in this analysis.
I suggest that this Surface Temperature record probably exhibits a significant warming bias – my rough estimate for Hadcrut3 was about 0.07C per decade, at least back to ~1979 and possibly much further.
How would your analysis change if you were to decrease your surface temperature record by 0.07C/decade from about 1945 to present, and particularly how would this change the inferred impact of increased atmospheric CO2 and other parameters in your equation?
If you want to email me, you can contact me through my website at http://www.OilSandsExpert.com
Thank you, Allan

nevket240
September 6, 2013 9:08 pm

http://www.smh.com.au/environment/climate-change/rising-ocean-acidity-may-spur-climate-action-20130907-2tbe7.html
As an avowed Climate Cycler and denier of Climate Goring, I cannot understand how useless the media have been in executing their responsibility to journalism; instead they have been nothing better than glorified storytellers. As per this ‘article’ the pH has moved to 8.1 from the 8.2 of pre-industrial levels. OH?? Really?? What a drastic change. I am saddened by this massive shift and ask for all Greens to avoid electrical power and carbon-based products immediately. NOW!!!
regards

September 6, 2013 9:33 pm

Among the probability theory specialists present here
(e.g. Richard, Alan, Dan, Dr Brown),
I have an interesting problem for you.
I took a random sample of 47 weather stations, carefully selected to be suitably globally representative.
I analysed all daily data, determining the change in temperature noted over time.
http://blogs.24.com/henryp/2013/02/21/henrys-pool-tables-on-global-warmingcooling/
I observed that the change in the speed of warming/cooling can be set out against time giving binomials with high correlation, >0.95. In the case of the drop in the speed of maximum temp, the correlation was >0.995.
Unfortunately, the binomial fit would show tremendous cooling coming up in the future… I therefore came up with the sine wave best fit for the drop in the speed of maximum temp.
http://blogs.24.com/henryp/2012/10/02/best-sine-wave-fit-for-the-drop-in-global-maximum-temperatures/
Now back in 1985 William Arnold reported a connection between sunspots and planet alignment.
Observe from my a-c curves (which can be determined and easily estimated):
1) change of sign: (from warming to cooling and vice versa)
1904, 1950, 1995, 2039
2) maximum speed of cooling or warming = turning points
1927, 1972, 2016
Then I put the dates of the various positions of Uranus and Saturn next to it:
1) we had/have Saturn synodical with Uranus (i.e. in line with each other)
1897, 1942, 1988, 2032
2) we had complete 180 degrees opposition between Saturn and Uranus
1919, 1965, 2009,
In all 7 of my own results & projections, there is an exact 7 or 8 year delay before “the push/pull” occurs that switches the dynamo inside the sun, changing the sign or direction of the warming/cooling! Conceivably the gravitational pull of these two planets has some special lopsided character, causing the actual switch. Perhaps Uranus’ apparent sideward motion (inclination of equator by 98 degrees) works like a push-pull trigger. Either way, there is a clear correlation. Other synodical cycles of planets probably have some interference as well, either delaying or extending the normal cycle time a little bit. So it appears William Arnold’s report was right after all… (“On the Special Theory of Order”, 1985).
http://www.cyclesresearchinstitute.org/cycles-astronomy/arnold_theory_order.pdf
My reasoning now is that the probability of there not being a relationship of the alignment of the planets Uranus and Saturn with the speed of incoming energy, is only 1 / 7 to the power 7
Am I right or am I wrong?

kadaka (KD Knoebel)
September 6, 2013 10:48 pm

From HenryP on September 6, 2013 at 9:33 pm:

I took a random sample of 47 weather stations, carefully selected to be suitably globally representative.

They were a carefully selected random sample.
Please provide what you think is the meaning of “random”. You keep using that word. I do not think it means what you think it means.

richardscourtney
September 7, 2013 2:29 am

Dan Pangburn:
Thank you for your post addressed to me at September 6, 2013 at 5:11 pm.
http://wattsupwiththat.com/2013/09/05/statistical-proof-of-the-pause-overestimated-global-warming-over-the-past-20-years/#comment-1409960
Unfortunately, I have nothing more to add because I have explained my view in my previous two replies to you.
As I explained, the model you promote is a curve fitting exercise.
I write to try to help you understand the problem with the model.
I ask you to consider the post to you from Allan MacRae at September 6, 2013 at 7:51 pm
http://wattsupwiththat.com/2013/09/05/statistical-proof-of-the-pause-overestimated-global-warming-over-the-past-20-years/#comment-1410046
He says

You used Hadcrut4 Surface Temperature record in this analysis.
I suggest that this Surface Temperature record probably exhibits a significant warming bias – my rough estimate for Hadcrut3 was about 0.07C per decade, at least back to ~1979 and possibly much further.

I add that the other global temperature data sets vary, too. This is GISS
http://jonova.s3.amazonaws.com/graphs/giss/hansen-giss-1940-1980.gif
Does the model only work for Hadcrut4?
If so, then it will not work soon because the Hadcrut4 data are altered most months.
Does the model work for Hadcrut4, Hadcrut3 and GISS which are different?
If so, then – as I said – it is a curve fitting exercise which provides no information.
Please note that the transient nature of the global temperature data sets is why in my above post at
http://wattsupwiththat.com/2013/09/05/statistical-proof-of-the-pause-overestimated-global-warming-over-the-past-20-years/#comment-1408432
I argued that there are two separate issues when considering performance of numerical climate models, and these are
(a) the data
and
(b) comparison of model results with the data.
Curve fitting deliberately combines those issues and, therefore, it is not possible to assess one by using the other.
Richard

richardscourtney
September 7, 2013 2:41 am

Henry P:
re your question to me and others.
Sorry, but I cannot provide an answer to your question until you have provided the clarification requested of you by kadaka (KD Knoebel) at September 6, 2013 at 10:48 pm
http://wattsupwiththat.com/2013/09/05/statistical-proof-of-the-pause-overestimated-global-warming-over-the-past-20-years/#comment-1410115
I have provided a reply to Dan Pangburn but – for some mysterious reason – it (and another reply I attempted to someone else on another thread) is trapped in moderation.
Richard

richardscourtney
September 7, 2013 3:40 am

nevket240:
re your post at September 6, 2013 at 9:08 pm
http://wattsupwiththat.com/2013/09/05/statistical-proof-of-the-pause-overestimated-global-warming-over-the-past-20-years/#comment-1410078
You say concerning so-called ‘ocean acidification’

As per this ‘article’ the pH has moved to 8.1 from the 8.2 of pre-industrial levels

The pH of the ocean surface layer varies so much in space and in time that there is no possibility of such a small average change having been detected.
However, if that pH change has happened then it would induce an argument concerning which is cause and which is effect.
A change of only 0.1 in the average pH of the ocean surface layer would alter the equilibrium between atmospheric CO2 and oceanic CO2 concentrations to induce a rise in atmospheric CO2 which would be greater than the claimed rise from 280 ppmv to ~400 ppmv since the industrial revolution. Such a pH change could be (but probably is not) a result of variation in sulphur emission from volcanoes beneath the sea, followed by the sulphur taking centuries before the thermohaline circulation conveys it to the ocean surface layer.
The subject of the carbon cycle is interesting but not pertinent to this thread. I have answered your post so it has not been ignored, so there is no reason to pursue the matter here.
If you want more info. on the carbon cycle then I suggest you use the WUWT Search facility for
Salby
then read the threads which that provides.
Richard

kadaka (KD Knoebel)
September 7, 2013 6:36 am

RMB said on September 7, 2013 at 5:58 am:

I would argue with your proposal. Radiation enters water but physical heat does not.

If I drop a pebble into water, there are ripples in the water.
The object doesn’t have to enter the water. I can skip a stone across a pond, and everywhere the stone touches the water there will be ripples.
The ripples are evidence of the transfer of kinetic energy.
The thermal energy of a gaseous molecule is basically just kinetic energy.
So picture a pebble as small as a molecule that hits the water, whether it enters or just bounces off the surface. It can transfer kinetic energy to the water, which is transferring physical heat.
Thus it is shown that air that is warmer can transfer physical heat to water that is cooler.
Thus you are wrong.

RMB
Reply to  kadaka (KD Knoebel)
September 7, 2013 8:46 am

“it can transfer kinetic energy to the water”. I thought so too, but when I tried to heat water through the surface using a heat gun, that’s not the result I got. At 450 °C the heat should quickly boil the water, but the water remains cold, including the point on the surface where the heat is being directed. The rejection of heat is very convincing. If you want to heat water through the surface, the only way to do it is to apply the heat source through a metal floating object. The floating object kills the surface tension and heat will flow. I don’t pretend to know exactly why the heat is so convincingly blocked, but my guess is that we just don’t know enough about the properties of surface tension; after all, not many people fire heat guns at buckets of water.

Aphan
Reply to  RMB
September 7, 2013 12:32 pm

RMB-
Just some thoughts.
Getting a set amount of water to “boil” takes more than just a heat source that is 430 degrees. Boiling water is the result of convection and conduction, and only results when ALL of the water in a given container reaches the boiling point.
Unless you replicate ALL of the conditions that impact the Ocean and its temperatures in your home experiment, you haven’t proven anything about the Ocean’s heat/energy absorption.
For example-
The saline/salt and nutrient content in Ocean water is different than tap water. This makes the way it conducts and radiates heat different than tap water.
The mineral content of ocean water also affects its surface tension, as do its movements.
Surface tension DECLINES when temperatures INCREASE.
Boiling water in a pan from the bottom introduces not just a heat source to the water, but the conductivity of that heat through metal, and the convection cycle of hot water and air on the bottom of the pan rising quickly to the top, overturning, and bringing the cooler water down to the bottom to be heated quickly.
Unless you introduce a way for the warm water at the surface of your container to be forced to circulate to the lower depths, so that the cold water can then come to the surface and be heated, all of the water will never reach the boiling point at the same time.
And those are just a few differences that I can think of off the top of my head.
Thermal energy DOES enter the Ocean, from the Sun above AND from the Earth’s radiation below and around it. This radiation causes the molecules in the water to vibrate, which then release that energy as heat. But heat RISES, so that energy/heat remains and circulates in the top layers of the ocean, and does not “sink” to the bottom or hide etc. Warm water dragged to the depths of the oceans by currents interacts with HUGE, much larger amounts of much colder water AND pressure, and when it does, it cools, thus releasing the additional energy/heat, which then rises to the surface over time.

rgbatduke
September 7, 2013 7:22 am

My reasoning now is that the probability of there not being a relationship of the alignment of the planets Uranus and Saturn with the speed of incoming energy, is only 1 / 7 to the power 7
Am I right or am I wrong?

You are wrong, and until you learn what post hoc ergo propter hoc means, and understand the difference between curve fitting numerology and science, you will never, ever correct your mistake. We’ve had this discussion before.
You might try reading books on this or something — there is too much to teach you easily online and you haven’t demonstrated any eagerness to learn, as you are too enamored of your own ideas to listen to any others. Also, you’re apparently a half-dozen college level math courses short of having what you need to really understand your mistakes. I will try just one time and then quit.
Fitting any small segment of data with any combination of functions and then using those functions to extrapolate outside of the fit region is a process fraught with peril. Interpolation — filling in between the data points — has some basis if there is reason to believe that the function being fit is smooth on the granularity of the data (and can lead to well-known errors even then if your assumption turns out to be wrong). Extrapolation not only fails, but often one KNOWS that it will fail. If one tries to fit, e.g., a polynomial to a smooth curve, there is a theorem (the Weierstrass approximation theorem) that one can always do so within systematically reducing bounds. Indeed, there is a constitutive relation — Taylor series in calculus — that can accomplish such a fit either directly or piecewise, up to a point. But the Taylor series contains within it the prediction of its own FAILURE if you attempt to extrapolate outside of a certain radius of convergence or (more generally) the data range used to fit a known function. The higher order, neglected terms come back to haunt you by eventually increasing without bound unless the function being fit IS a finite polynomial.
The exact same thing happens if you use other bases (a basis in this context is a spanning set of functions that represent unit vectors in an infinite dimensional linear vector space that contains all arbitrary smooth functions, as you would know if you’d taken a university level linear algebra class or a class on functional analysis or ordinary differential equations) or mixtures of bases, such as a few polynomial terms plus harmonic functions (either of which can be turned into an orthonormal basis on any fixed interval or with some effort on the entire real line). You can fit something quite beautifully by accident in some finite region, but there is no reason at all a priori to think that the fit can be extrapolated!
You should look up Koutsoyiannis’ lovely hydrology paper that I’ve posted a dozen or so times on various threads addressing this point. The first page of his paper is the best illustration of this point I’ve ever seen, as he shows three successive blow-ups of an actual data set that at first looks constant, then like it is linear, then exponential, then like a harmonic function, and beyond that it could turn out that ALL of this is noise on a function that really is linear, or anything else. Think of a polynomial fit as always having an infinite number of terms with unconstrained coefficients waiting to jump out and snare you as soon as you get outside of the fit region.
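The fit-then-extrapolate failure is easy to reproduce: fit a low-order polynomial to a short segment of a smooth periodic signal and evaluate it outside the fit window. A generic illustration of the point, not tied to any particular climate series:

```python
import numpy as np

# "Data": a smooth periodic signal sampled only on a short segment
x_fit = np.linspace(0.0, 2.0, 50)
y_fit = np.sin(x_fit)

# A degree-5 polynomial fits the segment almost perfectly...
coeffs = np.polyfit(x_fit, y_fit, 5)
inside_err = np.max(np.abs(np.polyval(coeffs, x_fit) - y_fit))

# ...but extrapolated to x = 10 it bears no resemblance to sin(x)
outside_err = abs(np.polyval(coeffs, 10.0) - np.sin(10.0))
print(inside_err, outside_err)  # tiny inside the window, enormous outside
```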
The point of this is that what you are trying is actually even less justified than the GCMs. At least they actually incorporate a priori believed-to-be-known physics, reasons for the functional forms they try to apply and compute with. What you are doing is also known to be numerically unstable under extrapolation. And then there is post hoc ergo propter hoc, a.k.a. correlation is not causality.
I doubt that any of this will make the slightest dent in your armor, but we have to try, we have to try.
rgb

BBould