Statistical proof of 'the pause' – Overestimated global warming over the past 20 years

Commentary from Nature Climate Change, by John C. Fyfe, Nathan P. Gillett, & Francis W. Zwiers

Recent observed global warming is significantly less than that simulated by climate models. This difference might be explained by some combination of errors in external forcing, model response and internal climate variability.

Global mean surface temperature over the past 20 years (1993–2012) rose at a rate of 0.14 ± 0.06 °C per decade (95% confidence interval)1. This rate of warming is significantly slower than that simulated by the climate models participating in Phase 5 of the Coupled Model Intercomparison Project (CMIP5). To illustrate this, we considered trends in global mean surface temperature computed from 117 simulations of the climate by 37 CMIP5 models (see Supplementary Information).

These models generally simulate natural variability — including that associated with the El Niño–Southern Oscillation and explosive volcanic eruptions — as well as estimate the combined response of climate to changes in greenhouse gas concentrations, aerosol abundance (of sulphate, black carbon and organic carbon, for example), ozone concentrations (tropospheric and stratospheric), land use (for example, deforestation) and solar variability. By averaging simulated temperatures only at locations where corresponding observations exist, we find an average simulated rise in global mean surface temperature of 0.30 ± 0.02 °C per decade (using 95% confidence intervals on the model average). The observed rate of warming given above is less than half of this simulated rate, and only a few simulations provide warming trends within the range of observational uncertainty (Fig. 1a).
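
As an illustration of how a trend of this form is computed, here is a minimal Python sketch (an editor's addition, not the paper's code): it fits an ordinary-least-squares line to a synthetic 20-year annual series and reports the slope with an approximate 95% confidence interval. The series and its numbers are invented; the paper's estimate uses the HadCRUT4 reconstructions.

import numpy as np
from scipy import stats

years = np.arange(1993, 2013)                    # 20 annual values, 1993-2012
rng = np.random.default_rng(7)
# synthetic anomalies: assumed underlying rate plus year-to-year noise
temps = 0.014 * (years - 1993) + rng.normal(0.0, 0.08, years.size)

res = stats.linregress(years, temps)             # ordinary least squares
half_width = 1.96 * res.stderr                   # approximate 95% interval on the slope
print(f"trend: {10 * res.slope:.2f} ± {10 * half_width:.2f} °C per decade")

(For 18 degrees of freedom a t-quantile of about 2.1, rather than 1.96, would be slightly more careful; the form of the estimate is the point here.)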


Figure 1 | Trends in global mean surface temperature. a, 1993–2012. b, 1998–2012. Histograms of observed trends (red hatching) are from 100 reconstructions of the HadCRUT4 dataset1. Histograms of model trends (grey bars) are based on 117 simulations of the models, and black curves are smoothed versions of the model trends. The ranges of observed trends reflect observational uncertainty, whereas the ranges of model trends reflect forcing uncertainty, as well as differences in individual model responses to external forcings and uncertainty arising from internal climate variability.

The inconsistency between observed and simulated global warming is even more striking for temperature trends computed over the past fifteen years (1998–2012). For this period, the observed trend of 0.05 ± 0.08 °C per decade is more than four times smaller than the average simulated trend of 0.21 ± 0.03 °C per decade (Fig. 1b). It is worth noting that the observed trend over this period — not significantly different from zero — suggests a temporary ‘hiatus’ in global warming. The divergence between observed and CMIP5-simulated global warming begins in the early 1990s, as can be seen when comparing observed and simulated running trends from 1970–2012 (Fig. 2a and 2b for 20-year and 15-year running trends, respectively). The evidence, therefore, indicates that the current generation of climate models (when run as a group, with the CMIP5 prescribed forcings) do not reproduce the observed global warming over the past 20 years, or the slowdown in global warming over the past fifteen years.

This interpretation is supported by statistical tests of the null hypothesis that the observed and model mean trends are equal, assuming that either: (1) the models are exchangeable with each other (that is, the ‘truth plus error’ view); or (2) the models are exchangeable with each other and with the observations (see Supplementary Information).
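
A minimal sketch of what a test of type (2) can look like, assuming a simple rank-based formulation (an editor's illustration with invented numbers, not the paper's actual method): if the observation were exchangeable with the 117 model runs, its rank among the simulated trends would be uniformly distributed, and a rank in the extreme tail rejects the null.

import numpy as np

rng = np.random.default_rng(42)
# invented stand-in for the 117 CMIP5 trends (°C per decade)
model_trends = rng.normal(0.30, 0.08, size=117)
observed = 0.14                                  # the 1993-2012 HadCRUT4 trend

# under the null, the observation's rank among the runs is uniform on 0..117
rank = int((model_trends < observed).sum())      # runs cooler than the observation
p_two_sided = 2.0 * min(rank + 1, 117 - rank + 1) / 118.0
print(f"runs cooler than observed: {rank} of 117, two-sided p ≈ {p_two_sided:.3f}")

With the observed trend sitting in the far lower tail of the simulated distribution, a test of this form returns a small p-value.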

Brief: http://www.pacificclimate.org/sites/default/files/publications/pcic_science_brief_FGZ.pdf

Paper at NCC: http://www.nature.com/nclimate/journal/v3/n9/full/nclimate1972.html?WT.ec_id=NCLIMATE-201309

Supplementary Information (241 KB) CMIP5 Models
348 Comments
justsomeguy31167
September 5, 2013 12:07 am

Make this sticky? This is huge because of the journal and the authors. Maybe real scientists are seeing so much evidence against AGW that some will tell the truth.

The Ghost Of Big Jim Cooley
September 5, 2013 12:08 am

Does anyone know of any PRO-AGW websites that are commenting on the shift toward natural variability and the lack of warming? Or are they all turning a blind eye to it?

braddles
September 5, 2013 12:25 am

While the CMIP5 models may well have warming rates clustered around 0.3 degrees per decade, we shouldn’t forget that these are NOT the models that are being used to influence policy. The ones being used are much more extreme and should have been utterly discredited by now.
An example here in Australia is a CSIRO model that predicts ‘up to’ 5 degrees by 2070, almost one degree per decade. This was the figure quoted by (former) Prime Minister Gillard and used to justify the carbon tax introduced in 2012.
You can bet that President Obama does not read Nature Climate Change.
In short, the journals are comparing the milder models to the real world (and even then they are failing) while protecting from scrutiny the extreme models that are being presented to policy-makers.

Gösta Oscarsson
September 5, 2013 12:29 am

There are a few “model trends” which correctly describe the “observed trends”. Wouldn’t it be interesting to analyse in what way they differ from the rest?

RMB
September 5, 2013 12:31 am

Try heating the surface of water with a heat gun. At 450 °C the surface should quickly boil; in fact it remains cool. You cannot heat water through the surface, and that’s why they are all having a problem.

AndyG55
September 5, 2013 12:33 am

And that is compared to the highly manipulated trend created in HadCrud.
I wonder how the models perform against actual reality!

el gordo
September 5, 2013 12:51 am

‘Or are they all turning a blind eye to it?’
Deltoid is in a death spiral; the blogmasta (Tim Lambert) departed the scene months ago and slowly the place is being taken over by contrarians. It’s also under a severe DoS attack.
The old warmist faithful are simply denying the new reality. They don’t even accept the hiatus, even after I pointed out that 97% of scientists agree that it’s real.

SideShowBob
September 5, 2013 1:00 am

RMB says:
September 5, 2013 at 12:31 am
“Try heating the surface of water with a heat gun. At 450 °C the surface should quickly boil; in fact it remains cool. You cannot heat water through the surface… ”
Honestly, that is such a moronic comment I think you were sent here to intentionally bring this website into disrepute!

richardscourtney
September 5, 2013 1:03 am

Friends:
The paper is reported to say

It is worth noting that the observed trend over this period — not significantly different from zero — suggests a temporary ‘hiatus’ in global warming.

NO! That is an unjustifiable assumption tantamount to a lie.
Peer review should have required that it be corrected to say something like:
It is worth noting that the observed trend over this period — not significantly different from zero — indicates a cessation of global warming. It remains to be seen when and if warming will resume or will be replaced by cooling.
Richard

September 5, 2013 1:08 am

This ‘histogram’ is based on the actual temperatures
http://www.vukcevic.talktalk.net/CETd.htm

Rich
September 5, 2013 1:18 am

“This difference might be explained by … internal climate variability.” Surely if you’re modelling the climate you can’t say, “My model’s wrong because of internal climate variability” because that’s exactly what you’re supposed to be modelling. They keep saying this and I don’t get it.

Dr Darko Butina
September 5, 2013 1:24 am

It is amazing that all the ‘proofs’ of global warming trends are ‘validated’ by another model or misuse of statistics and NOT by thermometer. Vukcevic’s histogram is also based on the annual average and therefore not on ‘actual’ temperatures. The global temperature does not exist and it cannot be measured; not a single property of our atmosphere is global. All the properties are local, and the climate community should not ignore Essex et al (2007), Kramm-Dlugi (2001) and Butina (2012). Dr Darko Butina

Greg
September 5, 2013 1:35 am

“It is worth noting that the observed trend over this period — not significantly different from zero — suggests a temporary ‘hiatus’ in global warming. ”
What “suggests” that it is temporary?
Ah well we’re getting there slowly. No point in expecting a total and sudden 180. At least it does now seem to be polite to talk about it.

richardscourtney
September 5, 2013 1:55 am

Rich:
Your entire post at September 5, 2013 at 1:18 am says

“This difference might be explained by … internal climate variability.” Surely if you’re modelling the climate you can’t say, “My model’s wrong because of internal climate variability” because that’s exactly what you’re supposed to be modelling. They keep saying this and I don’t get it.

I will try to explain what they are saying, but please do NOT assume my attempt at explanation means I agree with the explanation because I don’t.
The models assume climate varies because of internal variability. This is “noise” around a stable condition.
The models calculate that climate varies in a determined manner in response to “forcings”.
Thus, a change to a forcing causes the climate to adjust, so a trend in a climate parameter (e.g. global temperature) occurs during the adjustment.
If these assumptions are true then
(a) at some times internal variability will add to a forced trend
and
(b) at other times internal variability will subtract from a forced trend.
Until now the modellers have assumed the effects of internal variability sum to insignificance over periods of ~15 years. But the ‘pause’ has lasted longer than that. So, internal variability must be significant to climate trends over periods of more than 15 years if the ‘pause’ is an effect of internal variability negating enhanced forcing from increased greenhouse gases (GHGs).
Unfortunately, this is a ‘double-edged sword’.
If internal variability has completely negated GHG-forced warming for roughly the past two decades, then internal variability probably doubled the warming assumed to have been GHG-forced over the previous two decades.
And that ignores the fact that warming from the LIA has been happening for centuries, so natural variability clearly does occur over much longer periods than decades (as is also indicated by ice cores). When that is acknowledged, ALL the recent global warming can be attributed to internal variability, so there is no residual warming which can be attributed to GHG forcing.
I hope this explanation is clear and helpful.
Richard
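
Richard's points (a) and (b) are easy to demonstrate numerically. The sketch below (an editor's illustration with invented parameters) adds AR(1) noise, a common simple stand-in for internal variability, to a steady forced trend of 0.2 °C per decade and prints the spread of 15-year trends: they scatter widely around the forced rate, with some windows near zero or negative.

import numpy as np

rng = np.random.default_rng(0)
n_years = 150                        # length of the synthetic record
forced = 0.02 * np.arange(n_years)   # assumed forced trend: 0.2 °C per decade

def ar1(n, phi=0.6, sigma=0.1):
    """AR(1) noise, a simple stand-in for internal variability."""
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = phi * x[t - 1] + rng.normal(0.0, sigma)
    return x

temps = forced + ar1(n_years)

# every overlapping 15-year trend, converted to °C per decade
window = 15
trends = [10 * np.polyfit(np.arange(window), temps[i:i + window], 1)[0]
          for i in range(n_years - window + 1)]
print(f"forced rate 0.20; 15-year trends range from "
      f"{min(trends):+.2f} to {max(trends):+.2f} °C per decade")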

Crispin in Waterloo
September 5, 2013 1:58 am

“It remains to be seen when and if warming will resume or will be replaced by cooling. ”
Right on Richard. That is exactly how it should be phrased. There are multiple indications that it will be cooling. You are also correct that a style editor should have picked that up even if the reviewers did not. Adopting the term ‘hiatus’ was to allow wiggle room for doom-laden forecasters to maintain the story that the heating will come back with more vigour after the ‘pause’.
‘Pause’ implies that the tape will roll when ‘Play’ is pressed again.
By someone.
Or something.
Or not.

Ken Hall
September 5, 2013 1:59 am

Rich (1:18am). You are correct. They should honestly say, my model is wrong. I do not particularly care why it is wrong, as that is for the coders and theoreticians to figure out to try to create a better model. All I care about is the policies which are being implemented, which are hurting millions of families and starving them of energy and money because those models are wrong. I want the politicians to recognise that the models are wrong and to change policy and to throw the warmists out of work and to stop basing dangerously expensive policies on unproven theories backed by fearmongering.

Gail Combs
September 5, 2013 2:07 am

richardscourtney says: @ September 5, 2013 at 1:03 am
Richard, if they had modified their statement to say “It remains to be seen when and if warming will resume or will be replaced by cooling,” the paper would never have made it out of Pal-Review… Errr, Peer-Review. Heck, they could have had something similar in the original submission and it got scrubbed.
What I find most intriguing is the admission:

The evidence, therefore, indicates that the current generation of climate models (when run as a group, with the CMIP5 prescribed forcings) do not reproduce the observed global warming over the past 20 years, or the slowdown in global warming over the past fifteen years.

So it is not just the last fifteen years; it is the last twenty years that the models “do not reproduce”.
EPIC FAIL! Now can we all go home and forget this nightmare?

Sleepalot
September 5, 2013 2:15 am

“Global mean surface temperature over the past 20 years (1993–2012) rose at a rate of 0.14 ± 0.06 °C per decade (95% confidence interval)1. ”
Bullshit.

Cheshirered
September 5, 2013 2:25 am

richardscourtney says:
September 5, 2013 at 1:03 am
Very good point. Another subtle way of presenting The Cause in a favourable light, i.e. this is only temporary and warming WILL start again soon, hence we cannot let up ‘tackling climate change’.
Translation: keep the funding flowing.

kadaka (KD Knoebel)
September 5, 2013 2:32 am

RMB spouted off on September 5, 2013 at 12:31 am:

Try heating the surface of water with a heat gun. At 450 °C the surface should quickly boil; in fact it remains cool. You cannot heat water through the surface, and that’s why they are all having a problem.

This is standard “Sky Dragon Slayer” stuff you’re spewing, but, what the heck, tried it for myself.
Proposed: A heat gun applied to the surface of water cannot heat the water.
Experiment setup:
1 bowl of unknown plastic, semi-flexible, no recycling symbol indicating plastic type, 2 1/3 cups (US measure) capacity. Approximate dimensions: 5 1/2″ inside diameter top with 1/2″ wide rim, 2″ effective depth, circular arc curve (concave interior surface) to 2 7/8″ diameter flat bottom, with integral hollow cylindrical section base of 1/4″ height and 2 7/8″ diameter. Base design minimizes heat transfer with surface underneath. Usually used for cold to warm contents (ice cream to oatmeal) but not boiling hot items.
1 Master Forge Wireless Thermometer #0023557, originally purchased at Lowes, consists of display-less transmitting base with probe and receiving hand unit which displays temperature, set for °F. Normally used for grilling/roasting. Has timer count-up and count-down functions displaying minutes and seconds. Used for temperature readings and timing.
2 cups (US measure) room temperature tap water, from well.
1 Conair 1600W hair dryer, 125VAC, Model 064, used as heat gun.
Procedure:
Water in bowl, thermometer probe in water. Initial reading 74°F (no decimal), room temperature. Bowl resting on white porcelain-coated metal surface (stove top) at 74°F per probe, room temperature.
Heat gun on high, held by hand, outlet aimed at water surface of bowl, approximately 8 inches away at 45° from horizontal, aimed at center of surface. Water surface was notably agitated by the air flow, small quantity of water lost over edge of bowl.
Results in CSV format:
Time,Temperature
min:sec,°F
0:00,74
0:30,74
1:00,75
1:30,76
2:00,76
2:30,77
3:00,77
3:30,78
4:00,78
4:30,78
5:00,79
Discussion: Output of heat gun was applied to surface of water. Temperature of water increased.
Conclusion: A heat gun applied to the surface of water can heat the water. The proposition is falsified.
I tried it, showed to myself you were wrong. How should I have done the experiment so it will yield the result you are certain must happen?
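
A quick least-squares fit to kadaka's logged readings (an editor's addition, not part of the original comment) puts a number on the result: the water warmed at roughly 1 °F per minute under the heat gun.

import numpy as np

minutes = np.arange(0.0, 5.5, 0.5)               # readings at 0:00 through 5:00
temps_f = np.array([74, 74, 75, 76, 76, 77, 77, 78, 78, 78, 79])

slope, intercept = np.polyfit(minutes, temps_f, 1)
print(f"least-squares warming rate: {slope:.2f} °F per minute")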

Gail Combs
September 5, 2013 2:32 am

Rich says: @ September 5, 2013 at 1:18 am
…. Surely if you’re modelling the climate you can’t say, “My model’s wrong because of internal climate variability” because that’s exactly what you’re supposed to be modelling. They keep saying this and I don’t get it.
>>>>>>>>>>>>>>>>
What they are alluding to but dare not say is “climate variability” = Chaos
FROM the WUWT article:

First of all, what is Chaos? I use the term here in its mathematical sense….
Systems of forces, equations, photons, or financial trading, can exist effectively in two states: one that is amenable to mathematics, where the future states of the systems can be easily predicted, and another where seemingly random behaviour occurs.
This second state is what we will call chaos. It can happen occasionally in many systems….
There are, however, systems where chaos is not rare, but is the norm. One of these, you will have guessed, is the weather….
So, what does it mean to say that a system can behave seemingly randomly? Surely if a system starts to behave randomly the laws of cause and effect are broken?
Chaotic systems are not entirely unpredictable, as something truly random would be. They exhibit diminishing predictability as they move forward in time, and this diminishment is caused by greater and greater computational requirements to calculate the next set of predictions. Computing requirements to make predictions of chaotic systems grow exponentially, and so in practice, with finite resources, prediction accuracy will drop off rapidly the further you try to predict into the future. Chaos doesn’t murder cause and effect; it just wounds it!….

In other words, this study shows that climate is a Chaotic System (DUH!) and therefore “prediction accuracy will drop off rapidly the further you try to predict into the future.” However, when the whole scam (and your grant money) is dependent on computer models ‘predicting’ catastrophic warming (Oh my, we must act NOW!), the last thing you are going to announce is that you have figured out the system is chaotic and therefore all that money for all those computers and models has been wasted.
Why the heck do you think there has been such a big fight over whether or not the IPCC makes ‘Predictions’ or ‘Projections’?
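
The phenomenon Gail describes has a standard textbook demonstration. The sketch below (an editor's illustration) integrates the classic Lorenz system twice from initial states differing by one part in a billion; the two runs track each other for a while and then diverge completely, which is exactly the “diminishing predictability” the quoted passage describes.

import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz equations (illustration only)."""
    x, y, z = state
    return state + dt * np.array([sigma * (y - x),
                                  x * (rho - z) - y,
                                  x * y - beta * z])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-9, 0.0, 0.0])   # initial states differ by one part in a billion
for step in range(5001):
    if step % 1000 == 0:
        print(f"t = {step * 0.01:4.1f}   separation = {np.linalg.norm(a - b):.2e}")
    a, b = lorenz_step(a), lorenz_step(b)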

Laurie
September 5, 2013 2:34 am

John C. Fyfe, Nathan P. Gillett, & Francis W. Zwiers
I tried to find CVs of these authors and found nothing. Also, I’m ignorant of “Nature Climate Change”. Can someone provide information please?

richardscourtney
September 5, 2013 2:34 am

Gail Combs:
re your post addressed to me at September 5, 2013 at 2:07 am
http://wattsupwiththat.com/2013/09/05/statistical-proof-of-the-pause-overestimated-global-warming-over-the-past-20-years/#comment-1408393
I agree with both your points, except that your second point is even stronger than you express.
Actually, the true but unstated finding is that the models do not work for any length of time.
This is implicit because of the LIA issue I mention in my explanation for Rich at September 5, 2013 at 1:55 am
http://wattsupwiththat.com/2013/09/05/statistical-proof-of-the-pause-overestimated-global-warming-over-the-past-20-years/#comment-1408386
And it is why I said to him

I will try to explain what they are saying, but please do NOT assume my attempt at explanation means I agree with the explanation because I don’t.

Richard

richard verney
September 5, 2013 2:36 am

“Global mean surface temperature over the past 20 years (1993–2012) rose at a rate of 0.14 ± 0.06 °C per decade (95% confidence interval”
//////////////////////////
The fact is that during this 20-year period, the rise in temperature has not been linear (even if one applies some light smoothing to account for year-to-year variability).
All, or almost all, of the rise in temperature these past 20 years has been associated with a one-off isolated event, namely the Super El Niño of 1998. Given the uncertainty of 0.14 ± 0.06 °C per decade, we cannot be certain that all the rise in temperature is due to this ENSO event, but certainly the vast majority is. When this is taken into account, it is clear that the models are further off target than even this paper suggests.
As regards the hiatus, of course it is only temporary. Sooner or later, it is inevitable that temperatures will begin to change. But as Richard observes, we do not know in which direction that change will take place.
One further point on the pause: if the CO2 warming theory is sound, it becomes ever more difficult for there to be a pause in circumstances of elevated CO2 levels. It would be easier for there to be a, say, 15-year pause (i.e., when natural variability counteracts the warming effect of CO2) when CO2 levels are in the range of say 310 to 335 ppm. It is more difficult when CO2 levels are in the range of 380 to 400 ppm. It will be even more difficult should CO2 levels reach say 420 ppm.
The higher the level of CO2, the greater the CO2 forcing. We are told (and, of course, this is a new development not mentioned in previous IPCC reports) that model runs do sometimes project lengthy pauses in the rise of temperature. However, we are not told at what level of CO2 these pauses in the model projections occur. Has any model shown a 17-or-so-year pause with CO2 levels in the range of 380 to 400 ppm (and rising)?
I find it difficult to conceive how any model could project a lengthy pause when built on the assumption that CO2 is the dominant temperature driver and has dominion over natural variability. Of course, they could contain a random number generator to input, from time to time, negative forcings from natural variability, and another random number generator to input negative forcings from volcanoes, and it is possible that these randomly generated negative forcings coincide to produce a pause; but this would only be short-lived, since the negative forcing claimed for volcanoes is only short-lived. Ditto if they included a random generator to additionally throw La Niña into the mix.
Finally, this type of study is precisely the type of study which the IPCC itself should have conducted right from the early days when auditing the efficacy of its models and their projections. A report such as this should be included in AR5, irrespective of this paper.

Berényi Péter
September 5, 2013 2:44 am

The Ghost Of Big Jim Cooley says:
September 5, 2013 at 12:08 am
Does anyone know of any PRO-AGW websites that are commenting on the shift toward natural variability and the lack of warming? Or are they all turning a blind eye to it?

Note replies to comment #2, #6 & #11 by Dr. Gavin A. Schmidt under Unforced variations: Sept. 2013 at the RealClimate blog (Climate science from climate scientists).
1. Promises a future post on Fyfe & al. 2013 (as soon as it comes out will have to be addressed here)
2. Says that conflating model-observation mismatch to a contradiction “is a huge (and unjustified) leap” (whatever that’s supposed to mean)
3. Repeats old mantra “all theories are ‘wrong’ (as they are imperfect models of reality)” (therefore proving them wrong is not an issue, right?)
4. “Judging which one (or more) are falsified by a mismatch is non-trivial.”
5. Has “no problem agreeing that mismatches should be addressed”
6. Is a strong proponent of incorrect, but “useful” theories.
There you go.

Laurie
September 5, 2013 2:52 am

Nevermind 😉 I found what I was looking for concerning the authors. Is “Nature Climate Change” associated with the “Nature” journal?

Gail Combs
September 5, 2013 2:56 am

Sleepalot says: @ September 5, 2013 at 2:15 am
“Global mean surface temperature over the past 20 years (1993–2012) rose at a rate of 0.14 ± 0.06 °C per decade (95% confidence interval)1. ”
Bullshit.
>>>>>>>>>>>>>>>>>>
My thoughts exactly.

Australian temperature records shoddy, inaccurate, unreliable. Surprise!
….The audit shows 20-30% of all the measurements back then [before 1972] were rounded or possibly truncated. Even modern electronic equipment was at times, so faulty and unmonitored that one station rounded all the readings for nearly 10 years! These sloppy errors may have created an artificial warming trend. The BOM are issuing pronouncements of trends to two decimal places like this one in the BOM’s Annual Climate Summary 2011 of “0.52 °C above average” yet relying on patchy data that did not meet its own compliance standards around half the time. It’s doubtful they can justify one decimal place, let alone two….
It was the sharp eye of Chris Gillham who noticed the first long string of continuous whole numbers in a site record…. The audit team were astonished at how common the problem was. Ian Hill and Ed Thurstan developed software to search the mountain of data and discovered that while temperatures of .0 degrees ought to have been 10% of all the measurements, some 20–30% of the entire BOM database was recorded as whole numbers, or “.0”….

Anthony and his team of volunteers found problems with the US system. Since these two systems would be considered ‘Top of the Line’ the rest of the surface station data can only be a lot worse. A.J. Strata goes into an analysis of error in the temperature data based on information gleaned from the Climategate e-mails HERE.

kadaka (KD Knoebel)
September 5, 2013 2:56 am

Laurie said on September 5, 2013 at 2:34 am:

John C. Fyfe, Nathan P. Gillett, & Francis W. Zwiers
I tried to find CVs of these authors and found nothing. Also, I’m ignorant of “Nature Climate Change”. Can someone provide information please?

Sometime after Climategate, the previously well-respected journal Nature, while still somewhat respected, decided to divest itself of “climate science” and created the special Nature Climate Change journal, with the expected press release that this was done to highlight the global importance of the issue, give it the attention it is due, yada yada.
To search for published scholarly works, and from them discover the resumes of their writers, use Google Scholar: http://scholar.google.com/
The first name shows up as “JC Fyfe”. Looks like there’s two of them, one does biomedical. The other does climate science, here’s an example that was done for the American Meteorological Society (AMS):

Extratropical Southern Hemisphere Cyclones: Harbingers of Climate Change?
John C. Fyfe

Canadian Centre for Climate Modelling and Analysis, Meteorological Service of Canada, Victoria, British Columbia, Canada

Try Google Scholar for that and the other names.

johnmarshall
September 5, 2013 2:58 am

Or perhaps it is because the models have CO2 as an agent of warming when it cannot do this.

gnomish
September 5, 2013 2:59 am

kadaka
repeat the experiment with an infrared heater.
you may be able to heat the water slightly but you will find that the top layer absorbs most of the radiation and produces vapor in response – then the vapor absorbs the radiation and carries it off.
some numbers on that would make an interesting comparison

Nick Boyce
September 5, 2013 3:05 am

At the risk of repeating myself, in view of the admitted uncertainties in the global surface air temperature record, it is not at all clear how much, if any, global warming has taken place at the surface of the earth since about 1880.
http://lidskialf.blogspot.co.uk/p/global-warming-is-hoax-2.html

richardscourtney
September 5, 2013 3:20 am

Nick Boyce:
Your post at September 5, 2013 at 3:05 am says in total

At the risk of repeating myself, in view of the admitted uncertainties in the global surface air temperature record, it is not at all clear how much, if any, global warming has taken place at the surface of the earth since about 1880.
http://lidskialf.blogspot.co.uk/p/global-warming-is-hoax-2.html

Yes, I know. Indeed, I have been hammering the point in many places for many years; see e.g.
http://www.publications.parliament.uk/pa/cm200910/cmselect/cmsctech/memo/climatedata/uc0102.htm
So, I care about the inability to determine global temperature at least as much as you do.
However, that is NOT relevant to the discussion in this thread.
The climate models attempt to emulate climate change as indicated by global temperature (whatever that metric means). But the paper being discussed reports that the models fail in that attempt.
This failure is important because all IPCC predictions and projections are based on outputs of the climate models. Therefore, if the models do not emulate climate change – and the paper reports that they don’t – then everything the IPCC says is wrong so needs to be ignored.

Discussion of the failings of global temperature determination would disrupt the thread from its important subject. It should be avoided however much you, I or anyone else cares about the travesty which is determination of global temperature.
Richard

Gail Combs
September 5, 2013 3:45 am

richardscourtney says: @ September 5, 2013 at 2:34 am
…I agree both your points except that your second point is even stronger than you express.
Actually the true but unstated finding is that the models do not work for any length of time.
This is implicit because of the LIA issue I mention….
>>>>>>>>>>>>>>>
In the light of the geologic past the whole edifice crumbles. This study talks of “Global mean surface temperature over the past 20 years (1993–2012) rose at a rate of 0.14 ± 0.06 °C per decade”
That is a STABLE Climate and we should thank God for it.
From NOAA:

…Two different types of climate changes, called Heinrich and Dansgaard-Oeschger events, occurred repeatedly throughout most of this time. Dansgaard-Oeschger (D-O) events were first reported in Greenland ice cores by scientists Willi Dansgaard and Hans Oeschger. Each of the 25 observed D-O events consist of an abrupt warming to near-interglacial conditions that occurred in a matter of decades, and was followed by a gradual cooling….
http://www.ncdc.noaa.gov/paleo/abrupt/data3.html

(Note: this is not talking centuries but decades.)
How much of a warming?

Were Dansgaard-Oeschger events forced by the Sun?
Abstract
Large-amplitude (10–15 Kelvin), millennial-duration warm events, the Dansgaard-Oeschger (DO) events, repeatedly occurred during ice ages. Several hypotheses were proposed to explain the recurrence pattern of these events….

Not only were these drastic changes in temperature, but we still do not know what caused them.
These abrupt warmings also occur during interglacials.
Again from NOAA.

A Pervasive 1470-Year Climate Cycle in North Atlantic Glacials and Interglacials: A Product of Internal or External Forcing?
Gerard C. Bond (Lamont-Doherty Earth Observatory…
New evidence from deep sea piston cores in the eastern and western subpolar North Atlantic suggests that regional climate underwent rapid sub-Milankovitch variability, not only during the last glaciation, as has been previously documented on a global scale, but also during the present interglacial (Holocene) and the previous interglacial (stage 5e). The evidence consists of recurring shifts in lithic grain concentrations, lithic grain petrology and percentages of foraminiferal species. Amplitudes of this cycle during interglacials are much smaller than during glacials, typically by a factor of 2 to 3 in temperature and by more than one order of magnitude in amounts of ice rafted debris…
Three features are especially noteworthy in our records. First, we find a persistent quasi-periodic cycle with a mean pacing of 1470 years in both glacials and interglacials, demonstrating that climate on that time scale oscillated independently of ice volumes….
The origin of the 1470-year cycle is far from clear. Its persistence across glacial- interglacial boundaries is evidence that it cannot have been produced by any internal process involving ice-sheet instabilities. On the other hand, the cycle pacing is close to the overturning time of the ocean, raising the possibility that it arises from an internal oscillation within the ocean’s circulation. External processes, such as solar forcing and harmonics of the orbital periodicities cannot be ruled out, but are, at least presently, difficult to test.
http://www.ncdc.noaa.gov/paleo/chapconf/bond_abs.html

Even at a factor of 2 to 3 smaller (of the 10–15 Kelvin amplitude), that still gives roughly a 3 to 7 Kelvin change “in a matter of decades”, a far cry from the ‘Catastrophic’ 0.14 ± 0.06 °C per decade the Warmists are bleating on about.

steveta_uk
September 5, 2013 3:46 am

kadaka, you need to repeat this with an incandescent light heat source, and a dark base to the bowl, to verify that light cannot possibly heat water, as per another of the S*y Dr*gon rants.

Gail Combs
September 5, 2013 3:56 am

gnomish says: @ September 5, 2013 at 2:59 am
… repeat the experiment with an infrared heater.
you may be able to heat the water slightly but you will find that the top layer absorbs most of the radiation and produces vapor in response – then the vapor absorbs the radiation and carries it off.
some numbers on that would make an interesting comparison.
>>>>>>>>>>>>>>>>>>
Agreed.
A heat gun will ‘froth’ the water causing disruption of the surface boundary layer. (That type of disturbance is one of the arguments used by warmists to say the oceans absorb heat from CO2.)

richardscourtney
September 5, 2013 4:04 am

Gail Combs:
re your post at September 5, 2013 at 3:45 am
http://wattsupwiththat.com/2013/09/05/statistical-proof-of-the-pause-overestimated-global-warming-over-the-past-20-years/#comment-1408442
I fear we may be straying too far from the important subject of this thread. However, without meaning to start a side-track in the thread, I write to stress the importance of your point for the benefit of others.
The D-O Events indicate that the Earth has two stable conditions (i.e. glacial and interglacial). Transition between them consists of rapid ‘flickers’ between the two states until the climate stabilises in one of them.
This is consistent with the climate system being chaotic and having (at least) two strange attractors.
If that indication is correct then the fundamental assumption used in the climate models is wrong. The models assume climate change is driven by forcings.

However, the climate system has varying thermal input and varying temperature during each year so it is never in equilibrium. And, therefore, it oscillates (e.g. global temperature rises and falls by 3.8°C during each year).
If the chaotic climate system is constantly seeking its nearest strange attractor while constantly experiencing a changing equilibrium then ‘forcing’ is not relevant to climate change.
Richard

September 5, 2013 4:30 am

Gail Combs says:
September 5, 2013 at 2:07 am
So it is not just the last fifteen years it is the last twenty year that the models “do not reproduce”

An interesting and accurate description. I would like to see the push back from the warmists if the reality of your statement is shown to them.

KevinM
September 5, 2013 4:33 am

117 simulations
114 high
3 on target
+ 0 low.
—————————
Groupthink

Scottish Sceptic
September 5, 2013 4:38 am

This is what happens when you try to model 1/f noise believing it to be a signal.
As the IPCC kindly showed in their graph of the frequency distribution of the temperature record, it is 1/f noise. Like “normal” noise the value at any time is random, but unlike normal noise, in 1/f noise there is a high correlation between successive time points.
So e.g. if it becomes “hot” … it stays hot (for a while). If it is cold … it stays cold, and if there is a trend … it tends to stick around.
In other words, to the naive academic who wants to mine data for their next paper, it is full of quirks that can be said to be “something” but which are all just random noise.
The only reason they got away with it so long is that the climate takes so long to change that their bogus claims of finding “something” took years to be tossed in the trash.
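
This claim is easy to demonstrate. The sketch below (an editor's illustration) synthesises a trendless 1/f series by spectral shaping and prints the least-squares slope in several windows; the slopes come out persistently non-zero and of varying sign, exactly the spurious “somethings” described above.

import numpy as np

rng = np.random.default_rng(1)
n = 1024
freqs = np.fft.rfftfreq(n)
freqs[0] = 1.0                                   # avoid dividing by zero at DC
spectrum = rng.normal(size=freqs.size) + 1j * rng.normal(size=freqs.size)
spectrum /= np.sqrt(freqs)                       # power ~ 1/f, so amplitude ~ 1/sqrt(f)
series = np.fft.irfft(spectrum, n)
series -= series.mean()                          # a trendless series by construction

# least-squares slopes over several non-overlapping 100-point windows
for i in range(0, n - 100, 200):
    slope = np.polyfit(np.arange(100), series[i:i + 100], 1)[0]
    print(f"window starting at {i:4d}: slope = {slope:+.5f} per step")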

Steven Hill from Ky (the welfare state)
September 5, 2013 4:50 am

So, when this is all over, who’s going to take Gore to court for all the damages he has caused? I’d say that 200 million could pay off some coal miners and pay refunds for the elevated electric bills people have been paying. I want to see people like Gore punished for all the lies he has been spewing. Take that Peace Prize away from him. It’s about time to take this country back.

Scottish Sceptic
September 5, 2013 5:07 am

Rich says: Surely if you’re modelling the climate you can’t say, “My model’s wrong because of internal climate variability” because that’s exactly what you’re supposed to be modelling.
The difference between what these academics do and what a professional engineer would do is simple. Suppose the model is M(t) and the climate variability function is V(t), written multiplicatively as E(t) = 1 + V(t).
Then the academic view of the world is that
Temperature = M(t) × E(t)
whereas the engineer would see this as:
Temperature = V(t) + M(t)
In essence there is almost no difference between these two, but the assumptions on which they are based lead to a very big difference. The academic, in their cosy out-of-touch world which only cares about curve fitting to data, doesn’t need to worry about being sued if the “bridge falls down”. So they can assume that the model is right, dismiss that awkward thing called “natural variability”, ignore the errors in E(t), and with a wave of their hand magically assume they “averaged out” (1/f noise doesn’t average out). In contrast, the engineer (who deals with the real world, where people die if they are not right) would start from the premise that nothing was known for sure unless or until they were confident they knew how big M(t)’s contribution was. This is in our culture: “expect the unexpected” … expect natural variability. Engineers are trained to be cautious in real-world situations (not ivory towers and grant applications) and are drilled in the true meaning of “confidence” (models that don’t fail, bridges that don’t collapse, weather forecasts that aren’t disastrously wrong); we want models which attain the engineer’s meaning of “confidence”, and everything else is “natural variability”.
For the academic, “confidence” is only a paper exercise showing that the curve fitted.
This leads to two very different viewpoints:
Academic: Temperature = M(t) … global temperature is the model, and confidence = “it fitted”.
Engineer: Temperature = V(t) … global temperature cannot be modelled unless or until we are sure there is a model that works, and confidence is your credibility at getting it right first time.

Claude Harvey
September 5, 2013 5:07 am

Note the “spin” in the linked Pacific Climate article summarizing the paper:
“Over long time scales, global climate models successfully simulate changes in a variety of climate variables, including the global mean surface temperature since 1900. However, over shorter time scales the match between models and observations may be weaker.”
Translation: “We’re still all going to burn up and die if we don’t drown first!”

Editor
September 5, 2013 5:13 am

RMB says:
September 5, 2013 at 12:31 am

Try heating the surface of water with a heat gun. At 450 °C the surface should quickly boil; in fact it remains cool. You cannot heat water through the surface, and that’s why they are all having a problem.

Yes, but you can heat water with a stream of air with a dewpoint higher than the temperature of the water, and possibly with a wet-bulb temperature greater than the water temperature.
Hint: if you see fog forming over an ocean, you can be pretty confident that relatively warm, moist air is advecting over the water surface and that moisture is condensing on the surface. That releases heat that warms the water, and wave action mixes it downward.
The wet-bulb temperature is the temperature that an air mass can bring water to by conduction and evaporation. The reason the heat gun doesn’t work well is that the hot, dry air evaporates the water surface.

Steve Keohane
September 5, 2013 5:15 am

Gail Combs says:September 5, 2013 at 3:56 am
gnomish says: @ September 5, 2013 at 2:59 am
… repeat the experiment with an infrared heater.
you may be able to heat the water slightly but you will find that the top layer absorbs most of the radiation and produces vapor in response – then the vapor absorbs the radiation and carries it off.
some numbers on that would make an interesting comparison.
>>>>>>>>>>>>>>>>>>
Agreed.
A heat gun will ‘froth’ the water causing disruption of the surface boundary layer. (That type of disturbance is one of the arguments used by warmists to say the oceans absorb heat from CO2.)

Gail, isn’t the ‘normal’ state of the ocean surface ‘frothed’, due to wave and wind action?

Geoff Sherrington
September 5, 2013 5:16 am

1. When temperature anomalies are used, is the temperature of the reference period (which is subtracted from the reading to give the anomaly) also adjusted when the rest of the data are adjusted?
2. When it is stated that Earth is recovering from the Little Ice Age by getting warmer, where is the source of more heat and is it a long-term source (like a warmed ocean portion releasing heat) or is it a quick-changing source, like a radiation imbalance in the atmosphere?
I think it is weak to argue that the Earth is recovering from an LIA unless a mechanism is given, one that is consistent with measurements.
For those who query the actual temperature change in the last 20 years, do try the UAH or RSS satellite record. Note, however, that there is no compelling argument that temperatures taken from a Stevenson screen 2.5 m above the surface of the Earth should be the same as (not offset from) those from a satellite measuring microwaves from a thickness of oxygen some distance above the Earth.

Bruce Cobb
September 5, 2013 5:16 am

Except no one’s claiming that there has been a “pause” for 20 years. A rise in temperature at the rate of 0.14 ± 0.06 °C per decade sure doesn’t sound like a “pause”, although it could be termed a slowdown. And there it is. By cherry-picking the last 20 years, instead of the last, say, 17 years, they can claim a “slowdown”. It’s a way of back-pedaling, and thus keeping their precious CO2-centric models alive for at least a while longer.

david eisenstadt
September 5, 2013 5:22 am

only one complaint…
“For this period, the observed trend of 0.05 ± 0.08 °C per decade is more than four
times smaller than the average simulated trend of 0.21 ± 0.03 °C per decade (Fig. 1b).”
it’s kinda pissant, but… the rate is one fourth that of the average simulated trend…
you can’t be four times smaller than anything… once you get to one time smaller, you’re at zero.
just saying it ’cause it’s true.

bit chilly
September 5, 2013 5:25 am

uk banks were made to pay back customers for mis-sold policies. i trust the government will be paying us all back the 15% green energy tax we are currently paying, and the inflated vehicle tax for vehicles producing higher amounts of co2, along with the funding diverted from important research into cancer etc?
is there any organised, concerted effort in the US or the UK to petition government with the now constant stream of information falsifying the cAGW hypothesis? if not, it is time it was organised by ordinary citizens.
in the UK a petition with 100,000 signatories must be discussed in parliament. is there such a petition active at the moment?

Editor
September 5, 2013 5:26 am

gnomish says:
September 5, 2013 at 2:59 am

kadaka
repeat the experiment with an infrared heater.
you may be able to heat the water slightly but you will find that the top layer absorbs most of the radiation and produces vapor in response – then the vapor absorbs the radiation and carries it off.
some numbers on that would make an interesting comparison

That’s a completely different experiment.
What I expect will happen is that evaporation will occur and raise the dew point and wet bulb temperature of the air in the room (kitchen, a stove top was in use). We can ignore the wet bulb temperature as there is little wind. As the dew point goes above the water temperature, the water will begin to warm and conduction will transport heat downward.
A completely different experiment.

September 5, 2013 5:30 am

richardscourtney says:
September 5, 2013 at 1:03 am
——————————————————
It’s no lie. Voodoo priests that truly serve the tribal chief can divine a future that serves his interest, no matter what they have said in the past.

Editor
September 5, 2013 5:32 am

steveta_uk says:
September 5, 2013 at 3:46 am

kadaka, you need to repeat this with an incandescant light heat source, and a dark base to the bowl, to verify that light cannot possibly heat water, as per another of the S*y Dr*gon rants.

This is also a completely different experiment. Light energy that isn’t absorbed at the surface will warm the bottom of the bowl and then heat the water. (In the ocean some will be absorbed by water and stuff in it.) Of course, there’s the claim that visible light doesn’t heat objects, only infrared does that, probably the most blatantly idiotic claim.

September 5, 2013 5:35 am

A good place to start the numbers from: 1993, at the deepest part of the temperature downturn from the Pinatubo eruption. One gets the maximum warming trend starting there.
To be fair, the authors then go on to remove the volcanic and ENSO signals and find less warming of course. Then they note the temp trends are similar to the AMO cycles.
At least the climate scientists are no longer ignoring the difference between the models and the observations.

Snowlover123
September 5, 2013 5:38 am

A lot of you are noting and criticizing the paper for calling the hiatus “temporary.” But I would just like to point out that this is a huge step for some climate scientists: to acknowledge that the data do in fact show that the rate of warming over the last 15 years is not statistically different from zero. We’re making baby steps. At first there was vehement denial that such a pause existed, and many who acknowledged it were chastised. Now we are getting “mainstream” confirmation, which IMO is huge. This is also considering that the 1990s saw some pretty quick rates of warming. Even including that period, the models still grossly overestimate temperature rise.

Chuck L
September 5, 2013 5:39 am

Rich says:
September 5, 2013 at 1:18 am
“This difference might be explained by … internal climate variability.” Surely if you’re modeling the climate you can’t say, “My model’s wrong because of internal climate variability” because that’s exactly what you’re supposed to be modeling. They keep saying this and I don’t get it.
RICH, I do not think the modelers and their enablers are capable of admitting that “maybe the models are wrong” because
a. They want the money to keep flowing
b. The models have become articles of faith, rather than tools for exploring the science

kadaka (KD Knoebel)
September 5, 2013 5:49 am

gnomish said on September 5, 2013 at 2:59 am:

kadaka
repeat the experiment with an infrared heater.
you may be able to heat the water slightly but you will find that the top layer absorbs most of the radiation and produces vapor in response – then the vapor absorbs the radiation and carries it off.
some numbers on that would make an interesting comparison

Proposed: Thermal radiation from an infrared heater applied to the surface of water cannot heat the water.
Experiment setup:
1 bowl of unknown plastic, semi-flexible, no recycling symbol indicating plastic type, 2 1/3 cups (US measure) capacity. Approximate dimensions: 5 1/2″ inside diameter top with 1/2″ wide rim, 2″ effective depth, circular arc curve (concave interior surface) to 2 7/8″ diameter flat bottom, with integral hollow cylindrical section base of 1/4″ height and 2 7/8″ diameter. Base design minimizes heat transfer with surface underneath. Usually used for cold to warm contents (ice cream to oatmeal) but not boiling hot items.
1 Master Forge Wireless Thermometer #0023557, originally purchased at Lowes, consists of display-less transmitting base with probe and receiving hand unit which displays temperature, set for °F. Normally used for grilling/roasting. Has timer count-up and count-down functions displaying minutes and seconds. Used for temperature readings and timing.
2 cups (US measure) room temperature tap water, from well.
1 Sears Kenmore 30″ Electric Free Standing Range, 240VAC, Model # 911.93411000, broiler element used as infrared heater.
Procedure:
Water in bowl, thermometer probe in water. Initial reading 75°F (no decimal), room temperature. Bowl placed inside oven chamber. Bowl resting at center of factory-original steel grid oven rack with water surface at approximately 8″ from broiler element.
Broiler element was turned on, door was left ajar to minimize heating of the bowl and water by heated air in the chamber. Water surface was still.
Results in CSV format:
Time,Temperature
min:sec,°F
0:00,75
0:30,75
1:00,75
1:30,75
2:00,75
2:30,75
3:00,75
3:30,75
4:00,75
4:30,76
5:00,77
5:30,77
6:00,78
6:30,78
7:00,79
7:30,80
8:00,80
8:30,81
9:00,83
9:30,84
10:00,85
Experiment was terminated due to concern over notable acceleration of rate of warming. After shutting off the infrared heater and examining the chamber with the goal of removing the bowl, it was determined the bowl rim had begun thermal-based deformation. Containment of the water had not been lost. After partial cool-down the bowl with water was removed from the chamber. It was observed that the integral base did not have any deformation marks from the individual rods of the steel rack, indicating the steel rack was cooler than the bowl rim.
Except for the rim, there is no noticeable deformation of the bowl. As the rim deformed and solidified into a flexible state apparently unchanged from before, the material is identified as a thermoplastic plastic, not a thermoset plastic.
There was no noticeable production of steam or any other form of water vapor.
Discussion: Output of infrared heater was applied to surface of water. Chamber was not preheated. Temperature of water increased after an apparent warm-up period. It was already known that excessively prolonging the experiment would lead to likely catastrophic containment failure, thus it was planned for it to be terminated upon possible signs of possible container deformation, and it was.
It is clear the water was warmed. But it is also clear the plastic of the bowl absorbed the emissions of the infrared heater, as the plastic nearest to the heater that was not able to effectively use the water as a thermal sink did deform.
As it cannot be determined how much of the heating of the water was due to direct absorption by the water of infrared heater emissions and how much of the heating was from the heating of the bowl due to the emissions, it is evident the bowl used was not made of the proper material for use with this heat source.
Conclusion: Due to deficiencies in experimental apparatus, the proposition has been neither confirmed nor falsified.
Additional: A proper design for this experiment would use a container that will be unaffected by the expected temperatures and that will not directly absorb the emissions of the infrared heater, which would indicate a metal like stainless steel, that will not allow the water to absorb ambient chamber heat, which would not indicate metal.
The recommendation would therefore be for a stainless steel bowl (or similar container) that sits on a base of an insulating and non-heat retaining material such as that used for lightweight fire bricks (examples) or a supporting mat of a material like Kaowool (examples). The insulating material would have to cover the exterior of the bowl up to at least the water level. Ideally the insulation would go to the rim with the water level up to the rim, but a relatively small amount of container above the insulation and the water surface would yield a negligible difference.
Without such a setup to control the confounding possibility of the container heating the water, it is unlikely any meaningful conclusion can be drawn from the attempted heating of water by an infrared heater.
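
As with the first run, a least-squares fit to the logged readings (an editor's addition, not part of the original comment) quantifies the “notable acceleration of rate of warming” that prompted the early termination: the rate over the last five minutes is several times the rate over the first five.

import numpy as np

minutes = np.arange(0.0, 10.5, 0.5)              # readings at 0:00 through 10:00
temps_f = np.array([75, 75, 75, 75, 75, 75, 75, 75, 75, 76, 77,
                    77, 78, 78, 79, 80, 80, 81, 83, 84, 85])

first = np.polyfit(minutes[:11], temps_f[:11], 1)[0]   # 0:00 to 5:00
last = np.polyfit(minutes[10:], temps_f[10:], 1)[0]    # 5:00 to 10:00
print(f"first five minutes: {first:.2f} °F/min; last five minutes: {last:.2f} °F/min")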

September 5, 2013 6:15 am

Interestingly enough, my data confirm a trend of about 0.15 °C per decade warming on both maxima and mean temperatures. (Note that my tables are laid out in degrees C per annum.)
http://blogs.24.com/henryp/2013/02/21/henrys-pool-tables-on-global-warmingcooling/
However, a problem turns up if we look at the warming from 2000 onwards.
Stop worrying about global warming; start worrying about global cooling.
http://blogs.24.com/henryp/2013/04/29/the-climate-is-changing/

Steven Hill from Ky (the welfare state)
September 5, 2013 6:20 am

1st it was Hansen’s Ice Age, followed by the boiling planet. Can we get him some Prozac? In fact, Obama needs some as well. It’s treason what these people have done to our nation. All the weather channel talks about is climate change this and that. Tornadoes are down, hurricanes are down, and the insurance companies are cleaning up on Gore’s constant lying. Wake up people, nobody can even get close to what the earth is going to do next…….dah!!!!!

Steven Hill from Ky (the welfare state)
September 5, 2013 6:21 am

Man is nothing more than an ant in a tiny corner of the universe….that’s it, nothing more, nothing less.

September 5, 2013 6:25 am

Steven Hill says
Man is nothing more than an ant in a tiny corner of the universe….that’s it, nothing more, nothing less.
henry@steven
Where is your faith?
http://blogs.24.com/henryp/2013/03/01/where-is-your-faith/

Rich
September 5, 2013 6:34 am

richardscourtney: Thank you for trying to make that clear. Can I summarize it as, “There’s more noise in the system than we assumed”? If so, aren’t we just back with Lorenz’s discovery that chaotic systems produce output that looks like noise? If that’s the case then it’s the noise that has to be modelled not condensed into “error bars”. (I do know it’s not you I’m arguing with. Thanks for your efforts to explain the climate modellers’ thinking).

Bruce Cobb
September 5, 2013 6:34 am

It’s as though they mean to say “stopped”, but somehow it comes out as “slowdown”. Probably something to do with knowing on which side their bread is buttered.

Nick Boyce
September 5, 2013 6:35 am

Reply to Richard Courtney
richardscourtney says:
September 5, 2013 at 3:20 am
You say that my comment is both irrelevant and disruptive in this thread. I don’t see how it can be both, although it might be one or the other. As it happens, my comment seems to have passed by without disrupting the discussion. So that leaves its irrelevancy. Part of the main article in question makes the following claim.
“Global mean surface temperature over the past 20 years (1993–2012) rose at a rate of 0.14 ± 0.06 °C per decade (95% confidence interval)1.”
I’m no expert in 95% confidence intervals as used by statisticians, but even so I’m 100% confident that the ±0.06 °C margin of error mentioned is ludicrously small. I’m just an old twerp who only became interested in global warming etc upon becoming an old-age pensioner, so what do I know(?), except that these margins of error are of capital importance when estimating mean global surface temperatures. If I have a bee in my bonnet, it’s that these margins of error are always ridiculously small, and I’ll not apologise for that.

Julian Flood
September 5, 2013 6:36 am

Steve Keohane says:
quote
Gail, isn’t the ‘normal’ state of the ocean surface ‘frothed’, due to wave and wind action?
unquote
While we’re doing experiments:
Do various heating-from-above experiments with water that has been rigorously cleaned and the same water that has been polluted with a mix of light oil and surfactant.
Difficult to simulate wave action though, as the bowl won’t be big enough, but we can observe from nature that the mix suppresses waves. I wonder what happens to heating when the surface frothing is suppressed?
JF

Gene Selkov
September 5, 2013 6:36 am

Steven Hill from Ky (the welfare state) says:
> So, when this is all over, who’s going to take Gore to court for all the damages he has caused?
I hope somebody properly skilled tries that, but I am sceptical of the outcome. How can you punish somebody for delivering something that was so universally acclaimed? With ecstatic audiences screaming for more? Can’t charge one for rape if it was consensual, I’m afraid.
The real damages were caused by Gore and nearly half the population of the planet. Can we sue them all?

September 5, 2013 6:45 am

“Commentary from Nature Climate Change, by John C. Fyfe, Nathan P. Gillett, & Francis W. Zwiers
 
Recent observed global warming is significantly less than that simulated by climate models.
–This difference might be explained by some combination of errors in:
—- external forcing,
—–model response and
—–internal climate variability.
 
Global mean surface temperature over the past 20 years (1993–2012) rose at a rate of 0.14 ± 0.06 °C per decade (95% confidence interval)
—1. This rate of warming is significantly slower than that simulated by the climate models participating in Phase 5 of the Coupled Model Intercomparison Project (CMIP5).
–To illustrate this, we considered trends in global mean surface temperature computed from 117 simulations of the climate by 37 CMIP5 models

 
These models generally simulate natural variability — including
—-that associated with the El Niño–Southern Oscillation and
—-explosive volcanic eruptions
—– as well as estimate the combined response of climate to changes in greenhouse gas
concentrations,
—-aerosol abundance (of sulphate, black carbon and organic carbon,
——for example), ozone concentrations (tropospheric and stratospheric),
——-land use (for example, deforestation) and solar variability.
By averaging simulated temperatures only at locations where corresponding observations exist, we find an average simulated rise in global mean surface temperature of 0.30 ± 0.02 °C
per decade (using 95% confidence intervals on the model average). The
observed rate of warming given above is less than half of this simulated rate, and
only a few simulations provide warming trends within the range of observational
uncertainty…
 
…For this period, the observed trend of 0.05 ± 0.08 °C per decade is more than four
times smaller than the average simulated trend of 0.21 ± 0.03 °C per decade (Fig. 1b).
 
It is worth noting that the observed trend over this period — not significantly
different from zero — suggests a temporary ‘hiatus’ in global warming.
 
The divergence between observed and CMIP5-simulated global warming begins in the
early 1990s,
as can be seen when comparing observed and simulated running trends
from 1970–2012 (Fig. 2a and 2b for 20-year and 15-year running trends, respectively).
The evidence, therefore, indicates that the current generation of climate models
(when run as a group, with the CMIP5 prescribed forcings) do not reproduce
the observed global warming over the past 20 years, or the slowdown in global
warming over the past fifteen years.
 
This interpretation is supported by statistical tests of the null hypothesis that the
observed and model mean trends are equal

 
Worthless. Waffle and weasel words that appear scientific, but are not.
 
Begin with the divergence starting in the early 1990s, which is when the models began running. The models were wrong from the get-go and should have been tested, qualified and certified before ever being used to run simulations.
 
With uncertified, unqualified models that show no accuracy against observations, these folks then have the nerve to claim that the models “…generally simulate natural variability…”. The operative word is “generally”, meaning in their opinion, not by verified testing.
 
“…By averaging simulated temperatures only at locations where corresponding observations exist…” Is this a statistically valid method? Do the errors from the other locations carry through? This phrase looks like ‘cherry pick’ in capitals, and they still can’t get what they want.
 
This averaging comes after running the models 117 times. Why 117 runs? Why not 125, or 300, or 10? Such an odd number, 117; it smells like…
 
They have what can only be termed massive observational evidence against the models, and in their final sentences they slip in the word ‘suggests’ and then the phrase ‘temporary hiatus in global warming’, setting ‘hiatus’ in quotation marks for emphasis.
 
Only absolute faith in the unproven theory of anthropogenic global warming can underlie that word “temporary”, as it certainly isn’t in the evidence. Instead, the authors should have declared the models useless until corrected and independently certified. They should also be seriously considering whether anthropogenic contributions to warming can truly be identified outside of natural variability. I agree with richardscourtney about how this phrase is correctly described (a lie), but differ slightly on what the authors should have said.

MikeN
September 5, 2013 6:46 am

I’m confused by the numbers here. If the 15-year trend in the models is 0.21 °C per decade, while the 20-year trend is 0.30 °C per decade, then that would mean the models calculated 0.315 °C of warming for the 15 years and 0.60 °C for the 20 years, and thus 0.285 °C of warming crammed into the five years from 1993 to 1998 (about 0.57 °C per decade).
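(A back-of-envelope check of that arithmetic, treating each trend as if total change were simply trend times period length; the trends are least-squares fits, so this decomposition is only approximate.)

```python
# Implied 1993-1998 change from the two model-mean trends quoted above.
# Assumes total change ~= trend x period length (only approximate for fits).
trend_20yr = 0.30                      # deg C per decade, 1993-2012
trend_15yr = 0.21                      # deg C per decade, 1998-2012

warming_20yr = trend_20yr * 2.0        # ~0.60 deg C over 20 years
warming_15yr = trend_15yr * 1.5        # ~0.315 deg C over 15 years

implied = warming_20yr - warming_15yr  # change left over for 1993-1998
print(f"implied 1993-1998 change: {implied:.3f} C "
      f"({implied / 0.5:.2f} C per decade)")
```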

JJ
September 5, 2013 6:47 am

Listen to this crap:
It is worth noting that the observed trend over this period — not significantly
different from zero — suggests a temporary ‘hiatus’ in global warming.

The observed trend does not suggest that the cessation of warming is temporary. That’s a lie. And the use of ‘hiatus’ makes the lie redundant.
The evidence, therefore, indicates that the current generation of climate models
(when run as a group, with the CMIP5 prescribed forcings) do not reproduce
the observed global warming over the past 20 years, or the slowdown in global
warming over the past fifteen years.

“Do not reproduce the observed global warming”? WTF kind of stilted sentence construction is that? And ‘slowdown in global warming’? It didn’t slow down. It stopped. These sorts of linguistic tricks to hide the truth and imply lies are propaganda techniques, not honest scientific communication.
Stated plainly, the evidence indicates that the current climate models grossly exaggerated the observed warming over the last 20 years, and predicted even greater warming still, when in fact there was none at all for the past 15 years. This, therefore, demonstrates that the models’ predictions were bad, and have become even worse. The observed trend over this period suggests that anything that these models predict for the future is absolute bullshit.

richardscourtney
September 5, 2013 6:56 am

Geoff Sherrington:
In your post at September 5, 2013 at 5:16 am
http://wattsupwiththat.com/2013/09/05/statistical-proof-of-the-pause-overestimated-global-warming-over-the-past-20-years/#comment-1408491
you say

I think it is weak to argue that the Earth is recovering from an LIA unless a mechanism is given, one that is consistent with measurements.

It seems you don’t understand this thing called ‘science’.
The first stage of a scientific investigation is to admit you don’t understand an observed effect.
After that you can start the process of determining what you don’t understand.
And that process is prevented by pretending that
(a) the effect doesn’t exist because it is not understood
or
(b) that you understand the effect when you don’t.
I wonder where you obtained your mistaken and anti-science idea that an effect should be ignored unless its mechanism is understood. Perhaps from climastrologists?
In reality it is a STRENGTH (n.b. not “weak”) to acknowledge what is observed but not understood because that can lead to understanding “which is consistent with measurements”. Indeed, if an understanding is not “consistent with measurements” then it is not a true understanding.
And that is what this thread is all about. The modellers built their climate models to represent their understandings of climate mechanisms. If their understandings were correct then the models would behave as the climate does. The fact that the climate models provide indications which are NOT “consistent with measurements” indicates that the understanding of climate mechanisms of the modellers is wrong (or, at least, the way they have modeled that understanding is in error).
Richard

Gail Combs
September 5, 2013 7:02 am

richardscourtney says: @ September 5, 2013 at 4:04 am
I fear we may be straying too far from the important subject of this thread….
>>>>>>>>>>>>>>>>>
I think it is all related, since the inability of the models to perform as advertised is because they completely miss the boat on how the climate actually works.

ferd berple
September 5, 2013 7:05 am

richardscourtney says:
September 5, 2013 at 1:55 am
Until now the modellers have assumed effects of internal variability sum to insignificance over periods of ~15 years.
====================
That really is the crux of the problem: the assumption that natural variability is simply noise around a mean, and thus will average out to zero over short periods of time. Chaos tells us something entirely different.
Chaos tells us that averages are an illusion of your sample period. As you increase the sample period, longer-term attractors come to dominate, changing the long-term average without any change in the forcings.
This is completely overlooked in the climate models, which assume that any long-term change can only be a result of a change in the forcings.
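(ferd berple’s point about sample periods can be illustrated with the classic Lorenz-63 toy system; a sketch, not a climate model, and the step size and run length are arbitrary choices. The parameters, the “forcings”, never change, yet the running average of a coordinate depends on how long you sample.)

```python
import numpy as np

# Lorenz-63: fully deterministic and chaotic; nothing external ever changes.
def step(state, dt=0.005, s=10.0, r=28.0, b=8.0 / 3.0):
    x, y, z = state
    return np.array([x + dt * s * (y - x),
                     y + dt * (x * (r - z) - y),
                     z + dt * (x * y - b * z)])

state = np.array([1.0, 1.0, 1.0])
xs = np.empty(400_000)
for i in range(xs.size):               # crude forward-Euler integration
    state = step(state)
    xs[i] = state[0]

# The "climate average" of x depends on the sample period you happen to use.
for n in (2_000, 20_000, 200_000, 400_000):
    print(f"mean of x over first {n:>7} steps: {xs[:n].mean():+.3f}")
```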

Scott
September 5, 2013 7:09 am

We sometimes use a 150-gallon metal cattle trough heated with a propane weedburner as a makeshift jacuzzi at our cabin in the woods. The first time we tried to heat it in the winter, we blasted the weedburner at the side of the trough for hours and, to our amazement, it barely heated the water at all. We wised up, placed the trough over a shallow trench in the sand, blasted it lengthwise across the bottom, and it nicely heated up to temperature in 45 minutes. I suspect if we attempted to blast the weedburner at the water’s surface we’d still be waiting for the water to heat up.
I concluded that a large volume of water is best heated from the bottom.

Jonnya99
September 5, 2013 7:13 am

I am pleasantly astounded at how quickly discussion of the ‘pause’ has passed from heresy to mainstream. Now all someone has to do is publish the ultimate taboo: natural variability can push temperatures up as well as down.
I am also hugely enjoying KD Knoebel’s rather off-topic but superbly dry experimental reports. There is some ground-breaking determination of the properties of plastics going on right before our eyes: “… the material is identified as a thermoplastic plastic, not a thermoset plastic.” Insightful. I’m sure the Slayers are learning a lot, if they can keep up.

Gail Combs
September 5, 2013 7:14 am

Steve Keohane says: @ September 5, 2013 at 5:15 am
Gail, isn’t the ‘normal’ state of the ocean surface ‘frothed’, due to wave and wind action?
>>>>>>>>>>>>>>>>
It varies. The Horse Latitudes (between 30 and 35 degrees, north and south) were called that because of all the dead horses tossed overboard when the sailing ships got stuck in a no wind situation.
That is why both experiments are of interest.

richardscourtney
September 5, 2013 7:16 am

Nick Boyce:
Your post at September 5, 2013 at 6:35 am begins

Reply to Richard Courtney
richardscourtney says:
September 5, 2013 at 3:20 am
You say that my comment is both irrelevant and disruptive in this thread. I don’t see how it can be both, although it might be one or the other.

Say what!?
The subject of this thread is far too important for semantic disputes.
This links to my post so anybody can easily read what my post said
http://wattsupwiththat.com/2013/09/05/statistical-proof-of-the-pause-overestimated-global-warming-over-the-past-20-years/#comment-1408432
Richard

Gail Combs
September 5, 2013 7:19 am

Geoff Sherrington says: @ September 5, 2013 at 5:16 am
I think it is weak to argue that the Earth is recovering from an LIA unless a mechanism is given, one that is consistent with measurements.
>>>>>>>>>>>>>>>>>>>>
See my comment above on Dansgaard-Oeschger (D-O) events. They are called Bond events during an interglacial.

September 5, 2013 7:28 am

The initial wildly exaggerated claims of climate disaster were deliberate, so that draconian restrictions on human freedom could be quickly imposed. Had that been successful, the inevitable pause could then have been credited to the freedom-killing regime imposed on the people, and used to justify making it permanent. Fortunately they failed in their efforts and exposed the big lie of AGW and ACC.

richardscourtney
September 5, 2013 7:32 am

Rich:
Thankyou for your reply to me at September 5, 2013 at 6:34 am which says in full

richardscourtney: Thank you for trying to make that clear. Can I summarize it as, “There’s more noise in the system than we assumed”? If so, aren’t we just back with Lorenz’s discovery that chaotic systems produce output that looks like noise? If that’s the case then it’s the noise that has to be modelled not condensed into “error bars”. (I do know it’s not you I’m arguing with. Thanks for your efforts to explain the climate modellers’ thinking).

As to your first question; viz.
“Can I summarize it as, “There’s more noise in the system than we assumed”?”
I answer, Yes.
But your second question is a bit more tricky. It asks,
“If so, aren’t we just back with Lorenz’s discovery that chaotic systems produce output that looks like noise?”
The answer is, possibly.
Please note that I am not avoiding your question. A full answer would contain so many “ifs” and “buts” that it would require a book. However, I addressed part of the answer in my post which supported Gail Combs and is at September 5, 2013 at 4:04 am
http://wattsupwiththat.com/2013/09/05/statistical-proof-of-the-pause-overestimated-global-warming-over-the-past-20-years/#comment-1408455
Indeed, that linked post leads to the entire issue of what is – and what is not – noise.
(Incidentally, I point out that when Gail Combs gets involved in a thread it is useful to read her posts although they are often long: she usually goes to the real crux of an issue.)
I fully recognise that this answer is inadequate and trivial, but I think it is the best I can do here. Sorry.
Richard

Bob L.
September 5, 2013 7:37 am

kadaka: I’m thoroughly enjoying your posts. If the IPCC brought 1/10th of the scientific rigor and honesty to climate issues that you invest in spur-of-the-moment experiments in your kitchen, there would be no AGW movement. Cheers!

richard verney
September 5, 2013 7:38 am

Geoff Sherrington says:
September 5, 2013 at 5:16 am
“..Note, however, that there is no compelling argument that temperatures taken from a Stevenson screen 2.5 m above the surface of the Earth should be the same as (not offset from) those from a satellite measuring microwaves from a thickness of oxygen some distance above the Earth”
///////////////////////
One would not expect the temperature measurement (ie., the absolute temperature) to be the same since as you state, they are measuring temperatures at different locations. However, one would expect the trend of their respective temperature anomalies to be the same. If not, where is the temperature increase that has been observed 2.5m above the ground going, if not upwards to where the satellite is making measurements?

Gail Combs
September 5, 2013 7:38 am

Ric Werme says: @ September 5, 2013 at 5:32 am
….Of course, there’s the claim that visible light doesn’t heat objects, only infrared does that, probably the most blatantly idiotic claim.
>>>>>>>>>>>>>>>>>>>
That claim is quickly refuted by touching a white vs. a black surface in the South; at about the same time, you can get treatment for the burns.

richard verney
September 5, 2013 7:43 am

Steven Hill from Ky (the welfare state) says:
September 5, 2013 at 6:21 am
Man is nothing more than an ant in a tiny corner of the universe….that’s it, nothing more, nothing less.
//////////////////////////////////
And ants and termites emit more CO2 than man!
Dangerous things, ants.

Gail Combs
September 5, 2013 7:47 am

Gene Selkov says: @ September 5, 2013 at 6:36 am
….The real damages were caused by Gore and nearly half the population of the planet. Can we sue them all?
>>>>>>>>>>>>>>>>>>>>>>>
Depends on whether or not you can equate it to someone yelling FIRE in a crowded theater. It’s called Reckless Endangerment and is illegal in all (US) states. You have to prove it was done intentionally, knowing that there was no such danger.

Gene Selkov
Reply to  Gail Combs
September 5, 2013 8:34 am

Gail: Thank you for reminding me of Reckless Endangerment. I hoped something like that would apply. Now I recall there were efforts made at one time to trap the persons triggering fire alarms:
http://blog.modernmechanix.com/fire-box-traps-pranksters/

TomRude
September 5, 2013 7:49 am

Got to love it: “This difference might be explained by some combination of errors in external forcing, model response and internal climate variability.”
Yeah Gillett and Co… simply put, your pal AGW science is hardly settled.

richard verney
September 5, 2013 7:50 am

All of those discussing warming the oceans by heat from above are overlooking that the temperature of the air above the open oceans is about the same as that of the ocean below.
It is rare for there to be as much as 1 °C difference (usually far less), so it is nothing like a hot hair dryer or a hot IR lamp over a bowl or bucket of water.

Steven Mosher
September 5, 2013 7:53 am

Gösta Oscarsson says:
September 5, 2013 at 12:29 am
There are a few “model trends” which correctly describe “observed trends”. Wouldn’t it be interesting to analyse in what way they differ from the rest?
####################
Yes, that’s what some of us are doing. Contrary to popular belief, “the models” are not falsified.
The vast majority overestimate the warming and need correction. The question is: are those that match observations any better when you look at additional metrics and additional time periods? Or can you learn something from those that do match observations to correct those that don’t?
If you are interested in looking at model “failures” with a mind toward improving our understanding, then this is what you do. If you are interested in preserving the IPCC storyline, then you ignore the failures; and if you are just interested in opposing the IPCC storyline, then you just ignore the fact that some do better and you argue that the whole lot are bad.
So in between the triumphalism of “the models are falsified” and the blind allegiance to the IPCC storyline, there is work to do.

BrianR
September 5, 2013 7:53 am

How could the error range for the modeled data be a third of that for the observational data? That just seems counterintuitive to me.
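(One plausible arithmetic answer, assuming the ±0.02 °C per decade quoted in the commentary is a 95% confidence interval on the *mean* of the 117 runs, which its wording suggests: the standard error of a mean shrinks with the square root of the sample size, so the interval on the model average can be much narrower than the spread of individual runs, or than the single observed record. A sketch with an assumed inter-run scatter:)

```python
import math

n = 117                              # number of CMIP5 simulations
run_sd = 0.11                        # ASSUMED scatter of individual run trends
se_mean = run_sd / math.sqrt(n)      # standard error of the ensemble mean
print(f"95% CI on the ensemble mean: +/- {1.96 * se_mean:.3f} C/decade")
# -> about +/- 0.02, even though individual runs spread over ~ +/- 0.2
```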

Ian L. McQueen
September 5, 2013 7:58 am

david eisenstadt wrote about the incorrect phrase “is more than four times smaller than…”. David, you stole my thunder.
I see this kind of error frequently, and was prepared to comment here. I wrote to Scientific American some time ago about their (mis)use of the phrase and then saw it repeated several months later, so they obviously did not pay attention.
As you point out, if anything becomes one time smaller, it disappears.
IanM

milodonharlani
September 5, 2013 8:01 am

Gene Selkov says:
September 5, 2013 at 6:36 am
Steven Hill from Ky (the welfare state) says:
Re suing Gore:

Theo Goodwin
September 5, 2013 8:01 am

The posts above show that people who post at WUWT have achieved a degree of clarity about the differences between the views of modelers and skeptics that does not exist elsewhere. Richard S Courtney deserves a large portion of the credit for this. I want to emphasize just a point or two and I am confident that Richard will correct my errors.
1. What modelers mean by “internal variability” has nothing to do with what everyone else understands as natural variability. Take ENSO as an example. For modelers, ENSO is not a natural regularity that exists in the world apart from their models; at least, it is not worthy of scientific (empirical) investigation as a natural regularity in the world. Rather, it is a range of temperatures that sometimes runs higher and sometimes runs lower and is treated as noise. Modelers assume that these temperatures will sum to zero over long periods of time. They have no interest in attempting to predict the range of temperatures or lengths of periods. In effect, ENSO is noise for modelers. Given these assumptions, it is clear that the natural regularity cannot serve in any fashion as a bound on models. That is, a natural regularity in the real world cannot serve as a bound on models.
2. Obviously, the way that modelers think about ENSO is the way that they think about anything that a skeptic might recognize as a natural regularity that is worthy of scientific investigation in its own right and that serves as a bound on models. Modelers think of clouds the same way that they think of ENSO. They admit that the models do not handle clouds well and maybe not at all. But this admission does not really matter to them. If they could model clouds well they would treat them as noise; that is, they would assume that cloud behavior averages to zero over longer periods of time and amounts to noise. Consequently, no modeler has professional motivation to create a model that ingeniously captures cloud behavior. (Clouds are an especially touchy topic for them because changes in albedo directly limit incoming radiation. However, if you are assuming that it all sums to zero then there is no problem.)
3. Modelers care only for “the signal.” The signal, in practical terms for modelers, is the amount of change in global average temperature that can be assigned to CO2. Theoretically, the signal should include all GHGs but modelers focus on CO2. So, what are modelers trying to accomplish? They are trying to show that some part of global temperature change can be attributed to CO2. Is that science?
4. Modelers’ greatest nightmare is a lack of increase in global average temperature. If there is no increase then there is no signal of CO2 forcing. If there is no signal for a lengthy period then that fact counts, even for modelers, as evidence that their models are wrong. The length of that period cannot be calculated. Why?
5. The length of the period cannot be calculated because models embody only “internal variability” and not natural variability. Recall that internal variability is noise (a minimal simulation after point 6 illustrates the difference this assumption makes). If all representations of natural regularities, such as ENSO, must sum to zero over long periods of time, then models cannot provide an account of changes to temperature that are caused by natural variability. In other words, modelers assume that there is not some independently existing world that can bound their models.
6. The only hope for modelers is to drop their assumption that ENSO and similar natural regularities are noise. Modelers must treat ENSO as a natural phenomenon that is worthy of empirical investigation in its own right and do the same for all other natural regularities. They must require that their models are bounded by natural regularities. Modelers must drop the assumption that the temperature numbers generated by ENSO must sum to zero over a long period of time. Once they can model all or most natural regularities then they will have a background of climate change against which a signal for an external forcing such as CO2 will have meaning.
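(A minimal simulation of the contrast drawn in points 1 and 5, with invented numbers: if “internal variability” really were independent noise, 15-year means would hug zero; a persistent process, used here as a crude stand-in for something like ENSO or slow ocean modes, wanders far more over the same windows.)

```python
import numpy as np

rng = np.random.default_rng(0)
years = 1500

# The modelers' assumption: variability as independent noise around a mean.
white = rng.normal(0.0, 0.1, years)

# A persistent AR(1) process (phi = 0.97): a crude stand-in for a natural
# regularity such as ENSO or slow ocean modes. Same innovation amplitude.
ar1 = np.zeros(years)
for t in range(1, years):
    ar1[t] = 0.97 * ar1[t - 1] + rng.normal(0.0, 0.1)

# How far do 15-year means stray from zero in each case?
w15 = white.reshape(-1, 15).mean(axis=1)
a15 = ar1.reshape(-1, 15).mean(axis=1)
print(f"white noise: spread (sd) of 15-yr means = {w15.std():.3f}")
print(f"AR(1) 0.97 : spread (sd) of 15-yr means = {a15.std():.3f}")
```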

Steven Mosher
September 5, 2013 8:09 am

“Anthony and his team of volunteers found problems with the US system. Since these two systems would be considered ‘Top of the Line’ the rest of the surface station data can only be a lot worse.”
Actually there is little evidence that the US system is “top of the line”.
In terms of long-term consistency the US system is plagued by several changes that almost no other country has gone through, the most notable being the TOBS change.
There are only a couple of other countries that have had to make TOBS adjustments, and in no case is the adjustment in other countries as pervasive as it is in the US.
On the evidence one could argue that while the US has a very dense network of stations, the homogeneity of that network and the adjustments required put it more toward the BOTTOM of the station pile than the Top of the line.
Of course that can also be answered objectively by looking at the number of break points that US systems generate as opposed to the rest of the world.
I’ll leave it at this: there is no evidence that the US system is top of the line. There is more evidence that it has problems that other networks don’t have; for example, you have to TOBS-adjust the data. And finally, there is an objective way of telling how “top of the line” a network is. I suppose when I get some time I could take a look at that. But for now I think folks would be wise to suspend judgement (it’s not settled science) about the quality of the US network as opposed to others. Could be. Could not be.

Gunga Din
September 5, 2013 8:09 am

Speaking of climate models, I made this comment some time ago.
http://wattsupwiththat.com/2012/05/12/tisdale-an-unsent-memo-to-james-hansen/#comment-985181

Gunga Din says:
May 14, 2012 at 1:21 pm

joeldshore says:
May 13, 2012 at 6:10 pm
Gunga Din: The point is that there is a very specific reason involving the type of mathematical problem it is as to why weather forecasts diverge from reality. And, the same does not apply to predicting the future climate in response to changes in forcings. It does not mean such predictions are easy or not without significant uncertainties, but the uncertainties are of a different and less severe type than you face in the weather case.
As for me, I would rather hedge my bets on the idea that most of the scientists are right than make a bet that most of the scientists are wrong and a very few scientists plus lots of the ideologues at Heartland and other think-tanks are right…But, then, that is because I trust the scientific process more than I trust right-wing ideological extremism to provide the best scientific information.

=========================================================
What will the price of tea in China be each year for the next 100 years? If Chinese farmers plant less tea, will the replacement crop use more or less CO2? What values would represent those variables? Does salt water sequester or release more or less CO2 than freshwater? If the icecaps melt and increase the volume of saltwater, what effect will that have year by year on CO2? If nations build more dams for drinking water and hydropower, how will that impact CO2? What about the loss of dry land? What values do you give to those variables? If a tree falls in the woods allowing more growth on the forest floor, do the ground plants have a greater or lesser impact on CO2? How many trees will fall in the next 100 years? Values, please. Will the UK continue to pour milk down the drain? How much milk do other countries pour down the drain? What if they pour it on the ground instead? Does it make a difference if we’re talking cow milk or goat milk? Does putting scraps of cheese down the garbage disposal have a greater or lesser impact than putting it in the trash or composting it? Will Iran try to nuke Israel? Pakistan India? India Pakistan? North Korea South Korea? In the next 100 years what other nations might obtain nukes and launch? Your formula will need values. How many volcanoes will erupt? How large will those eruptions be? How many new ones will develop and erupt? Undersea vents? What effect will they all have year by year? We need numbers for all these things. Will the predicted “extreme weather” events kill many people? What impact will the erasure of those carbon footprints have year by year? Of course there’s this little thing called the Sun and its variability. Year by year numbers, please. If a butterfly flaps its wings in China, will forcings cause a tornado in Kansas? Of course, the formula all these numbers are plugged into will have to accurately reflect each one’s impact on all of the other values and numbers mentioned so far plus lots, lots more. That amounts to lots and lots and lots of circular references. (And of course the single most important question, will Gilligan get off the island before the next Super Moon? Sorry. 😎)
There have been many short range and long range climate predictions made over the years. Some of them are 10, 20 and 30 years down range now from when the trigger was pulled. How many have been on target? How many are way off target?
Bet your own money on them if you want, not mine or my kids’ or their kids’ or their kids’, etc.

richardscourtney
September 5, 2013 8:09 am

Steven Mosher:
At September 5, 2013 at 7:53 am
http://wattsupwiththat.com/2013/09/05/statistical-proof-of-the-pause-overestimated-global-warming-over-the-past-20-years/#comment-1408576
you assert

Contrary to popular belief “the models” are not falsified.

Oh, dear! NO!
It seems I need to post the following yet again on WUWT.
None of the models – not one of them – could match the change in mean global temperature over the past century if it did not utilise a unique value of assumed cooling from aerosols. So, inputting actual values of the cooling effect (such as the determination by Penner et al.
http://www.pnas.org/content/early/2011/07/25/1018526108.full.pdf?with-ds=yes )
would make every climate model provide a mismatch of the global warming it hindcasts and the observed global warming for the twentieth century.
This mismatch would occur because all the global climate models and energy balance models are known to provide indications which are based on
1.
the assumed degree of forcings resulting from human activity that produce warming
and
2.
the assumed degree of anthropogenic aerosol cooling input to each model as a ‘fiddle factor’ to obtain agreement between past average global temperature and the model’s indications of average global temperature.
More than a decade ago I published a peer-reviewed paper that showed the UK’s Hadley Centre general circulation model (GCM) could not model climate and only obtained agreement between past average global temperature and the model’s indications of average global temperature by forcing the agreement with an input of assumed anthropogenic aerosol cooling.
The input of assumed anthropogenic aerosol cooling is needed because the model ‘ran hot’; i.e. it showed an amount and a rate of global warming which was greater than was observed over the twentieth century. This failure of the model was compensated by the input of assumed anthropogenic aerosol cooling.
And my paper demonstrated that the assumption of aerosol effects being responsible for the model’s failure was incorrect.
(ref. Courtney RS An assessment of validation experiments conducted on computer models of global climate using the general circulation model of the UK’s Hadley Centre Energy & Environment, Volume 10, Number 5, pp. 491-502, September 1999).
More recently, in 2007, Kiehl published a paper that assessed 9 GCMs and two energy balance models.
(ref. Kiehl JT, Twentieth century climate model response and climate sensitivity. GRL vol. 34, L22710, doi:10.1029/2007GL031383, 2007).
Kiehl found the same as my paper except that each model he assessed used a different aerosol ‘fix’ from every other model. This is because they all ‘run hot’ but they each ‘run hot’ to a different degree.
He says in his paper:

One curious aspect of this result is that it is also well known [Houghton et al., 2001] that the same models that agree in simulating the anomaly in surface air temperature differ significantly in their predicted climate sensitivity. The cited range in climate sensitivity from a wide collection of models is usually 1.5 to 4.5 deg C for a doubling of CO2, where most global climate models used for climate change studies vary by at least a factor of two in equilibrium sensitivity.
The question is:
if climate models differ by a factor of 2 to 3 in their climate sensitivity, how can they all simulate the global temperature record with a reasonable degree of accuracy?
Kerr [2007] and S. E. Schwartz et al. (Quantifying climate change–too rosy a picture?, available at
http://www.nature.com/reports/climatechange, 2007)
recently pointed out the importance of understanding the answer to this question. Indeed, Kerr [2007] referred to the present work and the current paper provides the “widely circulated analysis” referred to by Kerr [2007]. This report investigates the most probable explanation for such an agreement. It uses published results from a wide variety of model simulations to understand this apparent paradox between model climate responses for the 20th century, but diverse climate model sensitivity.


And, importantly, Kiehl’s paper says:

These results explain to a large degree why models with such diverse climate sensitivities can all simulate the global anomaly in surface temperature. The magnitude of applied anthropogenic total forcing compensates for the model sensitivity.

And the “magnitude of applied anthropogenic total forcing” is fixed in each model by the input value of aerosol forcing.
Thanks to Bill Illis, Kiehl’s Figure 2 can be seen at
http://img36.imageshack.us/img36/8167/kiehl2007figure2.png
Please note that the Figure is for 9 GCMs and 2 energy balance models, and its title is:

Figure 2. Total anthropogenic forcing (Wm2) versus aerosol forcing (Wm2) from nine fully coupled climate models and two energy balance models used to simulate the 20th century.

It shows that
(a) each model uses a different value for “Total anthropogenic forcing” that is in the range 0.80 W m⁻² to 2.02 W m⁻²
but
(b) each model is forced to agree with the rate of past warming by using a different value for “Aerosol forcing” that is in the range −1.42 W m⁻² to −0.60 W m⁻².
In other words, the models use values of “Total anthropogenic forcing” that differ by a factor of more than 2.5, and they are ‘adjusted’ by using values of assumed “Aerosol forcing” that differ by a factor of 2.4.
So, each climate model emulates a different climate system. Hence, at most only one of them emulates the climate system of the real Earth because there is only one Earth. And the fact that they each ‘run hot’ unless fiddled by use of a completely arbitrary ‘aerosol cooling’ strongly suggests that none of them emulates the climate system of the real Earth.
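(A toy energy-balance illustration of Kiehl’s compensation, with invented but range-consistent numbers: pair a low sensitivity with weak aerosol cooling, or a high sensitivity with strong aerosol cooling, and both “models” reproduce the same 20th-century warming while emulating different climate systems.)

```python
# Toy arithmetic only: dT = sensitivity * (GHG forcing + aerosol forcing).
# The numbers below are illustrative, not taken from any particular model.
target_dT = 0.7               # deg C, rough 20th-century warming
ghg = 2.4                     # W m^-2, assumed greenhouse-gas forcing

for sens in (0.3, 0.6):       # deg C per W m^-2; a factor of 2 apart
    aerosol = target_dT / sens - ghg      # the tuned 'fiddle factor'
    total = ghg + aerosol                 # total anthropogenic forcing
    print(f"sensitivity {sens:.1f}: aerosol {aerosol:+.2f} W m^-2, "
          f"total {total:.2f} W m^-2 -> dT = {sens * total:.2f} C")
```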
Richard

Gene Selkov
September 5, 2013 8:14 am

ferd berple says:
> that really is the crux of the problem. the assumption that natural variability is simply noise around a mean and thus will average out to zero over short periods of time.
This assumption is taught at school but is almost never tested. There is something profoundly counter-intuitive in the way averages are assessed today. I would have allowed some slack a hundred or two hundred years ago, when all measurements were tedious, time-consuming, and difficult to track, so actual data had to be replaced with the central limit theorem.
There is no such hurdle today, in most cases. Many different types of measurements can be automated and the question of whether they converge or not, and how they vary (chaotically or not), can be resolved in straightforward ways. Instead, everybody still uses estimators, often preferring those that hide the nature of variability.

Jim G
September 5, 2013 8:18 am

I guess Niels Bohr was right when he said, “Prediction is very difficult, especially about the future.” And Yogi Berra said, “It’s tough to make predictions, especially about the future”; very similar. A philosopher and a scientist in agreement.

richardscourtney
September 5, 2013 8:26 am

Theo Goodwin:
Thankyou for your obviously flattering mentions of me (I wish they were true) in your post at
September 5, 2013 at 8:01 am.
You ask me to comment on points in that post with which I disagree. I have several knit-picking points which do not deserve mention, but there is one clarification which needs to be made.
The models parametrise effects of clouds because clouds are too small for them to be included in the large grid sizes of models. Hence, if clouds were understood (they are not) then their effects could only be included as estimates and averages (i.e. guesses).
Also, I have made a post which refutes the climate models on much more fundamental grounds than yours but – for some reason – it is in moderation.
Richard
PS Before some pedant jumps in saying “knit-picking” should be “nit-picking” because nits are insects, I used the correct spelling. Knit-picking was a fastidious task in Lancashire weaving mills. Small knots (called “knits”) occurred and reduced the value of cloth. For the best quality cloth these knits had to be detected, picked apart and the cloth repaired by hand. It was a detailed activity which was pointless for most cloth and was only conducted when the very best cloth was required.

milodonharlani
September 5, 2013 8:26 am

ENSO variability during the Little Ice Age & the “Medieval Climate Anomaly”, as the MWP is now politically correctly called:
http://repositories.lib.utexas.edu/handle/2152/19622
Climate scientists are only now getting around to addressing the question of natural variability that should have preceded any finding of an “unprecedented human fingerprint”.

Jean Parisot
September 5, 2013 8:27 am

“See my comment above on Dansgaard-Oeschger (D-O) events. They are called Bond events during an interglacial.”
What we really need is a tool or decision matrix that attempts to identify the start of one of these D-O or Bond events. All of the effort invested in trying to measure, explain, and manage the change in slope of the global temperature trend isn’t important in comparison to the need for a tool to detect these events as soon as possible. I’ve been impressed with how modern agriculture in the US and Canada responded to this year’s cooling change, but a global event will take more time.
We know they happen regularly and we know the magnitude; that seems a bit more important than a tiny warming trend, regardless of the cause.

davidmhoffer
September 5, 2013 8:34 am

If Bart said that 2+2 was 3 and Sally said it was 5, would we conclude that “on average” they’d been taught good math skills?
The notion of averaging the output of different models and then comparing them to observations is ludicrous unto itself.

Theo Goodwin
September 5, 2013 8:35 am

richardscourtney says:
September 5, 2013 at 8:26 am
Thanks for your clarification. I look forward to your post. I did not mean to flatter you. You are a tireless and gifted explainer. That is not flattery. (Oh, it occurs to me I can offer a bit of advice. Beware the trolls lest they distract you.)

David S
September 5, 2013 8:38 am

How many times have we seen new evidence that AGW is baloney? Many times of course. And yet the government continues its claim that AGW is a huge problem that must be dealt with. So here’s the problem: We live in something similar to George Orwell’s 1984. Reality and truth no longer matter. The correct answer is whatever the government says it is, reality notwithstanding. Anyone who disagrees gets electric shocks until he does agree. Ok they haven’t started the electric shocks yet but the skeptics are labeled “deniers” and some folks suggest they be sent to re-education camps.

Theo Goodwin
September 5, 2013 8:42 am

Gene Selkov says:
September 5, 2013 at 8:14 am
I know. I want to ask them if they have not heard of computers. On the other hand, it is no surprise that data management is the weakest link in their computing chain. Setting aside the question of their basic assumptions for the moment.

Theo Goodwin
September 5, 2013 8:45 am

Jean Parisot says:
September 5, 2013 at 8:27 am
Yes, work on the matrix is very important. However, to do that you must have reasonably good historical data and you must believe that nature exists outside your computer. Alarmists have trouble with both ideas.

JJ
September 5, 2013 8:50 am

ferd berple says:
that really is the crux of the problem. the assumption that natural variability is simply noise around a mean.

That, and the assumption that natural variability started in 1998…

John Whitman
September 5, 2013 9:07 am

Commentary from Nature Climate Change, by John C. Fyfe, Nathan P. Gillett, & Francis W. Zwiers said,
“For this period, the observed trend of 0.05 ± 0.08 °C per decade is more than four times smaller than the average simulated trend of 0.21 ± 0.03 °C per decade (Fig. 1b). It is worth noting that the observed trend over this period — not significantly different from zero — suggests a temporary ‘hiatus’ in global warming. The divergence between observed and CMIP5-simulated global warming begins in the early 1990s, as can be seen when comparing observed and simulated running trends from 1970–2012 (Fig. 2a and 2b for 20-year and 15-year running trends, respectively). The evidence, therefore, indicates that the current generation of climate models (when run as a group, with the CMIP5 prescribed forcings) do not reproduce the observed global warming over the past 20 years, or the slowdown in global warming over the past fifteen years.”

– – – – – – – –
Why use the term ‘global warming’ in that passage when the dispassionate and/or indifferent term would be something like ‘temperature changes’? If one constructs that passage with ‘temperature changes’ as the context instead of ‘global warming’, then the passage would be clearer science communication, with minimal implied presumption of things like ‘global warming’.
For me, the repeated use of ‘global warming’ (4 times in just that passage alone) is the essence of hidden flawed premises. Where there are hidden flawed premises one expects circumstantial conclusions at best and at worst convenient conclusions.
A good strategy is to now stop playing the already gamed GW game. To do so one needs to disallow the biased terminology that predetermines a general context and spin of the outcomes.
Better to use a different set of terms which are scientifically dispassionate and non-spun.
John

James Strom
September 5, 2013 9:08 am

richard verney says:
September 5, 2013 at 2:36 am
Bingo! A “pause” in warming is compatible with theory, but a pause coincident with a substantial increase in the main forcing factor is much less so.

John F. Hultquist
September 5, 2013 9:08 am

Gail Combs says:
September 5, 2013 at 7:14 am
>>>>>>>>>>>>>>>>
“It varies. The Horse Latitudes (between 30 and 35 degrees, north and south) were called that because of all the dead horses tossed overboard when the sailing ships got stuck in a no wind situation.

This is one of at least 3 explanations for the term “Horse Latitudes” and not my favorite.
First – Spanish ships perhaps carried many horses over the years, but these sailors were not inclined to sail into the subtropical high pressure zones, as they knew they were there, and a Spaniard would be disinclined to throw horses overboard; some of the crew might go over first. Also, why is the zone named in English and not Spanish? [Paintings exist of unloading horses where there were no docks by forcing them off the deck and into the ocean and then leading them on to land. The difference was important to the horse.]
Next – the English ships carried “napped” crew members.
http://www.worldwidewords.org/topicalwords/tw-nap1.htm
Some of these were taken from pubs where they owed a bar bill, paid off by the ship’s crew-gathering agents. The new “sailor” did not earn wages until this “dead horse” payment was recovered by the ship’s purse. About 2–3 weeks out from England the “dead horse” was paid off and the sailor would begin to be paid. Paying off the dead horse was cause for celebration, so an effigy of a horse (a straw horse) would be hoisted over the water and cut loose to drift in the sea. Songs were sung – shanties. This one is well known – The Dead Horse Shanty:
http://shanty.rendance.org/lyrics/showlyric.php/horse
—–
Another explanation for the term “horse latitudes” comes from the phrase “to horse” in the sense of “to push” or “to pull” something that doesn’t want to go. Sails without wind would present such an occasion and might induce a crew to try to pull (by rowing) a ship out of a calm area. This explanation requires that one believe the English sailors were unaware of the STHP zones and frequently found themselves therein. Thus would begin an argument about whether the Spanish or the English were the better sailors. Don’t go there. But it would explain the use of English words for the phrase.
There is also the confusion between the “doldrums” and the horse latitudes.
Day after day, day after day,
We stuck, nor breath nor motion;
As idle as a painted ship
Upon a painted ocean.

See the Rime of the Ancient Mariner by the English poet Samuel Taylor Coleridge – in the lines above speaking of the equatorial area doldrums and not the STHP “horse latitudes.”

david eisenstadt
September 5, 2013 9:10 am

Steven Mosher says:
September 5, 2013 at 7:53 am
…if you are just interested in opposing the IPCC storyline, then you just ignore the fact that some do better and you argue that the whole lot are bad.
so…which models do you feel do the better job?

Theo Goodwin
September 5, 2013 9:17 am

david eisenstadt says:
September 5, 2013 at 9:10 am
Steven Mosher says:
September 5, 2013 at 7:53 am
…if you are just interested in opposing the IPCC storyline, then you just ignore the fact that some do better and you argue that the whole lot are bad.
“so…which models do you feel do the better job?”
Interesting question because it might elicit an interesting answer. But, as you know, all the models are based on the same circular reasoning. What is the probability that a worthless model will produce a curve that seems to match reality?

Theo Goodwin
September 5, 2013 9:24 am

ferd berple says:
September 5, 2013 at 7:05 am
Very well said. The “radiation-only theory” used by all Alarmists is purely deterministic. No chaos there, no attractors. Worse, it is simply unwilling to posit the existence of natural regularities that affect temperatures. It is not bounded by reality.

richardscourtney
September 5, 2013 9:28 am

davidmhoffer:
I am disappointed that there have been no congratulations for your excellent post at September 5, 2013 at 8:34 am which says in total

If Bart said that 2+2 was 3 and Sally said it was 5, would we conclude that “on average” they’d been taught good math skills?
The notion of averaging the output of different models and then comparing them to observations is ludicrous unto itself.

Perhaps this will help people to understand your profound point.
Average wrong is wrong.
Richard

Frank K.
September 5, 2013 9:28 am

“The point is that there is a very specific reason involving the type of mathematical problem it is as to why weather forecasts diverge from reality. And, the same does not apply to predicting the future climate in response to changes in forcings. It does not mean such predictions are easy or not without significant uncertainties, but the uncertainties are of a different and less severe type than you face in the weather case.”
No they are NOT, but we’ve been through this before [sigh]…
* Climate models are highly non-linear, coupled sets of differential equations, with associated boundary and initial conditions which are, for many variables, poorly known.
* Climate models are NOT boundary value problems but initial value problems, and are prone to numerical instabilities and error after running for many time steps. To squash these errors, modelers introduce unphysical smoothing and other numerical tricks.
* There are NO guaranteed solutions to these equations, numerically or otherwise. The models as formulated may even be ill-posed, though that is often difficult to assess due to the very poor documentation provided by the developers in some cases (the most prominent of which is NASA GISS and their awful “Model E”).
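(Frank K.’s instability point in miniature, a deliberately crude sketch and not a climate model: forward Euler on even the simplest decay equation blows up after many time steps once the step size exceeds the stability limit, which is one reason long-running discretised models resort to damping and smoothing.)

```python
# Forward (explicit) Euler on dy/dt = -lam * y; the exact solution decays.
# Stability requires dt < 2/lam. Beyond that, the error GROWS every step,
# the flavour of instability that long-running discretised models fight.
lam = 10.0
for dt in (0.05, 0.19, 0.21, 0.30):
    y = 1.0
    for _ in range(200):                  # many time steps, as in a long run
        y = y + dt * (-lam * y)
    status = "stable" if abs(y) < 1.0 else "BLOWS UP"
    print(f"dt = {dt:.2f}: y after 200 steps = {y: .3e}  ({status})")
```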

Gail Combs
September 5, 2013 9:31 am

David S says: @ September 5, 2013 at 8:38 am
…. Ok they haven’t started the electric shocks yet but the skeptics are labeled “deniers” and some folks suggest they be sent to re-education camps.
>>>>>>>>>>>>>>>>>>
NAH, they will just send in a SWAT team to scare you.

David Ball
September 5, 2013 9:35 am

This is a much better experiment:

Richard Barraclough
September 5, 2013 9:40 am

Knit-picking ??? Unlike climate science, in language, consensus is all-important. Nitpicking, with no hyphen, is the accepted word.

James Strom
September 5, 2013 9:42 am

kadaka (KD Knoebel) says:
September 5, 2013 at 5:49 am
gnomish said on September 5, 2013 at 2:59 am:
kadaka, your experiment will not make your desired point unless a) your container has no bottom and b) your “water” has no contaminants–just like the ocean.

September 5, 2013 9:52 am

@Steven Mosher 8:09 am

On the evidence one could argue that while the US has a very dense network of stations, the homogeneity of that network and the adjustments required put it more toward the BOTTOM of the station pile than the Top of the line.
Of course that can also be answered objectively by looking at the number of break points that US systems generate as opposed to the rest of the world.

A counter-hypothesis is that the Berkeley Earth scalpel runs amok with high-density data, because the homogeneity of the network is an invalid assumption.
What appears to me to be minimally discussed is the wholesale decimation and counterfeiting of low-frequency information happening within the BEST process. Dec. 13, 2012 (Circular Logic….)
—-
The [AGU Dec 2012] poster does NOT assuage my concerns. It reinforces that I have not misunderstood the BEST process. “Results” amounts to comparing two untrustworthy methods with similar assumptions against each other. ….
The Rohde 2013 paper uses synthetic, error-free data. The scalpel is not mentioned. My concern is the use of the scalpel on real, error-riddled data.
Jan 21, 2013 5:58pm (Berkeley Earth finally….)
A class of events called Recalibration…. A property of this “recalibration class” is that there is a slow buildup of instrument drift, then a quick, discontinuous offset to restore calibration [the scalpel cuts at what appear to be discontinuities]. Not only are instrument drift and climate signal inseparable, we have multiplied the drift in the overall record by discarding the correcting recalibration at the discontinuities. The scalpel is discarding the recalibrations, keeping the drift. Jan 23, 2013 11:30 am (ibid)
The denser the network, the more likely the scalpel will make the cut at recalibration events because most of the neighbors are not recalibrated at the same time.
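(The recalibration worry can be made concrete with a toy series; this sketches the commenter’s hypothesis, not the actual Berkeley Earth algorithm. An instrument drifts upward and is periodically recalibrated back to truth; a naive cut-at-the-jumps, rejoin-by-matching-endpoints step then keeps the drift and discards the corrections.)

```python
import numpy as np

rng = np.random.default_rng(1)
n, seg = 200, 40                       # 200 months, recalibrate every 40
drift = 0.02                           # deg per month of instrument drift
raw = np.zeros(n)                      # true climate signal: flat zero
for start in range(0, n, seg):         # sawtooth: drift up, snap back at
    raw[start:start + seg] += drift * np.arange(seg)   # each recalibration
raw += rng.normal(0, 0.05, n)          # measurement noise

# Naive "scalpel": cut at the big jumps, then re-join the segments so each
# continues from where the previous ended (discarding the recalibration).
joined = raw.copy()
for start in range(seg, n, seg):
    joined[start:] += joined[start - 1] - joined[start]

slope_raw = np.polyfit(np.arange(n), raw, 1)[0] * 120     # per decade
slope_join = np.polyfit(np.arange(n), joined, 1)[0] * 120
print(f"trend of raw (recalibrated) series: {slope_raw:+.2f} per decade")
print(f"trend after cut-and-rejoin        : {slope_join:+.2f} per decade")
```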

Pamela Gray
September 5, 2013 9:53 am

Steven Mosher
Simple steps of research: The null hypothesis is that there is no significant difference between models and observations. Define the measure: in this case, the degree to which models are discrepant from observations. Run the experimental models and take measures of discrepancy between models and observations of temperature and CO2. Look at the results. Dang. We must reject the null hypothesis and accept that there is a significant discrepancy. The models are falsified in that they do not reflect observed temperature measures in the face of observed increasing CO2. The next phase should be to examine the whys and do more work on the models, possibly even rejecting or severely trimming the case for CO2.
There is no reason to go all adolescent angsty over the term “falsified”. If really good experimenters did that we would not have a lightbulb that works. And let me add, one that works a %$#*& of a lot better than the “is the light on I can’t tell” twisty ones.
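(Pamela Gray’s recipe, sketched with the summary numbers published for 1998–2012 and a plain two-sided normal test; the paper’s actual test is more careful, e.g. about autocorrelation, so this is illustrative only.)

```python
import math

# Published summary numbers (1998-2012): observed trend 0.05 +/- 0.08,
# model-mean trend 0.21 +/- 0.03 (both quoted as 95% intervals).
# Convert the CI half-widths to standard errors and test mean equality.
obs, obs_ci = 0.05, 0.08
mod, mod_ci = 0.21, 0.03

se_obs = obs_ci / 1.96
se_mod = mod_ci / 1.96
z = (mod - obs) / math.hypot(se_obs, se_mod)
p = math.erfc(z / math.sqrt(2))        # two-sided normal p-value
print(f"z = {z:.2f}, two-sided p = {p:.1e}")   # p << 0.05: reject equality
```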

richardscourtney
September 5, 2013 9:56 am

Richard Barraclough:
Thanks for your post at September 5, 2013 at 9:40 am.
I enjoyed that.
Richard

Pamela Gray
September 5, 2013 9:58 am

Or, the modeled average is what you expect but never get. ROTFLMAO!

Dr. Norman Page
September 5, 2013 10:00 am

If they had plotted the SST data (which is the best metric for climate change) from 2003, when the warming peaked, they would see the current cooling trend; but that would be a step too far for Nature. For an estimate of the coming cooling see
http://climatesense-norpag.blogspot.com/2013/07/skillful-so-far-thirty-year-climate.html

wobble
September 5, 2013 10:03 am

I always thought the CAGW modelers were claiming that their failed predictions were simply a matter of variability.
I would “correctly” model winnings from a coin-toss game by predicting that I make nothing on each toss. Obviously, this isn’t going to happen. I’m obviously going to win some and lose some. In the long term, I should win/lose nothing, but in the near term I might win or lose quite a bit. For example, after 15 tosses, it’s possible that I’ve won 10 and lost only 5, for net winnings of 5. This would erroneously suggest a trend of 0.33 wins per toss.
Likewise, I thought the modelers were claiming that their models are correct and that time will eventually prove this.
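(wobble’s coin-toss analogy, simulated: in a fair game, an apparent “trend” of +5 or better over 15 tosses is not rare at all.)

```python
import numpy as np

rng = np.random.default_rng(3)
games = 100_000
tosses = rng.choice([-1, 1], size=(games, 15))   # fair: -1 loss, +1 win
net = tosses.sum(axis=1)

# How often does a fair game look like a ~0.33 wins-per-toss "trend"
# (net +5 or more over 15 tosses), as in the example above?
print(f"P(net >= +5 in 15 tosses) = {(net >= 5).mean():.1%}")
```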

September 5, 2013 10:05 am

davidmhoffer says:
September 5, 2013 at 8:34 am
If Bart said that 2+2 was 3 and Sally said it was 5, would we conclude that “on average” they’d been taught good math skills?
The notion of averaging the output of different models and then comparing them to observations is ludicrous unto itself.

==================================================================
Depends who you ask.
http://www.foxnews.com/us/2013/08/30/new-age-education-fuzzy-math-and-less-fiction/

September 5, 2013 10:13 am

@richardscourtney at 9:56 am
Re: Richard Barraclough: at 9:40 am. Knit-picking vs. Nitpicking
I think Knit-picking is the better term.
Pull loose threads of logic and data and see what comes unraveled.

September 5, 2013 10:29 am

At least Dr. Norman Page got it right
http://wattsupwiththat.com/2013/09/05/statistical-proof-of-the-pause-overestimated-global-warming-over-the-past-20-years/#comment-1408688
henry says
but there are only a few of us
http://blogs.24.com/henryp/2013/04/29/the-climate-is-changing/
who really know what is coming:
HOW CAN WE STOP THIS GLOBAL COOLING?
It looks like all the media and the whole world still believe that somehow global warming will soon be back on track again. Clearly, as shown, this is just wishful thinking. All current results show that global cooling will continue. As pointed out earlier, those who think that we can put more carbon dioxide in the air to stop the cooling are just not being realistic. There really is no hard evidence supporting the notion that (more) CO2 is causing any (more) warming of the planet whatsoever. On the same issue, there are those who argue that it is better to be safe than sorry; but, really, as things are looking now, they are also beginning to stand in the way of progress. Those still pointing to melting ice and glaciers as “proof” that it is (still) warming, and not cooling, should remember that there is a lag from energy-in to energy-out. Counting back 88 years, i.e. 2013 − 88, we are in 1925.
Now look at some eye witness reports of the ice back then?
http://wattsupwiththat.com/2008/03/16/you-ask-i-provide-november-2nd-1922-arctic-ocean-getting-warm-seals-vanish-and-icebergs-melt/
Sounds familiar? Back then, in 1922, they had seen that the arctic ice melt was due to the warmer Gulf Stream waters. However, by 1950 all that same “lost” ice had frozen back. I therefore predict that all lost arctic ice will also come back, from 2020–2035, as also happened from 1935–1950. Antarctic ice is already increasing.
To those actively involved in trying to suppress the temperature results as they are available on-line from official sources, I say: Let fools stay fools if they want to be. They can fiddle with the data to save their jobs, but people still having to shovel snow in late spring will soon begin to doubt the data… Check the worry in my eyes when they censor me. Under normal circumstances I would have let things rest there and just be happy to know the truth for myself. Indeed, I let things lie a bit. However, chances are that humanity will fall into the pit of global cooling, and I would later blame myself for not having done enough to try to safeguard food production for 7 billion people and counting.
It really was very cold in the 1940s…. The Dust Bowl drought of 1932–1939 was one of the worst environmental disasters of the Twentieth Century anywhere in the world. Three million people left their farms on the Great Plains during the drought, and half a million migrated to other states, almost all to the West. http://www.ldeo.columbia.edu/res/div/ocp/drought/dust_storms.shtml
I find that as we move back up from the deep end of the 88-year sine wave, there will be a standstill in the speed of cooling at the bottom of the wave, and therefore, naturally, there will also be a lull in the pressure difference at >40 latitude, where the Dust Bowl drought took place, meaning: no wind and no weather (read: rain). However, one would apparently note this from an earlier change in the direction of the wind, as was the case in Joseph’s time. According to my calculations, this will start around 2020 or 2021… i.e. 1927 = 2016 (projected, by myself and the planets…); add 5 years and we are in 2021.
Danger from global cooling is documented and provable. It looks like we have only ca. 7 “fat” years left……
WHAT MUST WE DO?
1) We urgently need to develop and encourage more agriculture at lower latitudes, like in Africa and/or South America. This is where we can expect to find warmth and more rain during a global cooling period.
2) We need to tell the farmers living at the higher latitudes (>40) who already suffered poor crops due to the cold and/or due to the droughts that things are not going to get better there for the next few decades. It will only get worse as time goes by.
3) We also have to provide more protection against more precipitation in certain places at lower latitudes (FLOODS!).

milodonharlani
September 5, 2013 10:38 am

Richard Barraclough says:
September 5, 2013 at 9:40 am
Knit-picking ??? Unlike climate science, in language, consensus is all-important. Nitpicking, with no hyphen, is the accepted word.
————————
Accepted because it’s the actual word, referring to the eggs of lice. “Knit-picking” is bogus folk etymology with no historical basis whatsoever. There is however a form of knitting called picking.

Bruce Cobb
September 5, 2013 10:53 am

@milodon, yes. It helps to know the entomology of a word.

milodonharlani
September 5, 2013 10:55 am

Bruce Cobb says:
September 5, 2013 at 10:53 am
Ouch!

Tad
September 5, 2013 11:01 am

I feel that these types of analyses aren’t appropriate, because mean temperature does not follow a linear process over time. It’s some combination of orbital movements, weather patterns, ocean circulation, perhaps volcanic activity, and a bit due to mankind’s activities of one sort or another. That said, the author is using the alarmists’ own methods against them, and I guess it’s fine for that.

September 5, 2013 11:08 am

David Ball says: September 5, 2013 at 9:35 am
……………….
An excellent experiment.
Heat absorbed by the world’s oceans is moved around by the major currents, most notably by the Gulf Stream and its extension in the North Atlantic, the Kuroshio-Oyashio current system in the North Pacific, and the equatorial currents in the Central Pacific.
In order to influence global climate, the heat transport of these major currents (the currents’ velocity, volume or both) has to change, assuming relatively steady solar input.
One could speculate about the causes of such changes, of either global or local proportions.
It is somewhat odd to think that a local cause is the primary factor, but from the data I have looked at, that appears to be the case, as listed here:
AMO – Far North Atlantic Tectonics
PDO – Kamchatka – Aleutian Archipelago Tectonics
ENSO (SOI) – Central Pacific Tectonics
http://www.vukcevic.talktalk.net/TDs.htm

jbird
September 5, 2013 11:09 am

@Jonnya99: “I am pleasantly astounded at how quickly discussion of the ‘pause’ has passed from heresy to mainstream. Now all someone has to do is publish the ultimate taboo: natural variability can push temperatures up as well as down.”
Good observation, although there are still a few who claim that there is no actual pause. The AGW faithful will cling to the idea for as long as they can without addressing the obvious questions:
(1) Was natural variability addressed in the models? If not, why not?
(2) If the models cannot accurately address natural variability are they reliable at all?
(3) If the pause is caused by natural variability then (as you note) can it “push temperatures up as well as down?”
My guess is that the MSM will quietly let this issue die by simply publishing less and less about it in the coming months. Funding for continued “research” and for anti-fossil fuel, environmental “advocacy” will dry up. Both of these things are happening now.

kellyhaughton
September 5, 2013 11:14 am

@TheoGoodwin Thank you for your description.
Question: Do the models essentially assume that all temperature variability is caused by CO2 levels plus noise?
If so and the correct model is that there are a multitude of factors (sun, oceans, clouds, volcanos…..) affecting temperatures, then we could observe the results we have observed.
When the alarmists say “the models are based on the laws of physics”, how can they make that claim and leave out the “forcings” from the sun, the oceans, clouds, etc.?

BBould
September 5, 2013 11:29 am

Richardscourtney: The Bayesian Treatment of
Auxiliary Hypotheses – This paper examines the standard Bayesian solution to the Quine-Duhem
problem, the problem of distributing blame between a theory and its auxiliary
hypotheses in the aftermath of a failed prediction.
http://joelvelasco.net/teaching/3865/strevens%20auxhyp.pdf
This is a bit over my head, and I may have missed the mark (it may not be relevant), but it may be why the models have not been falsified.

M Courtney
September 5, 2013 11:53 am

Nits or Knits will tie you in knots or not… if you remember the purpose of words.
That is, they communicate.
And they carry several forms of knowledge;
-The literal meaning (we all know what words mean)
-The emotional content (for which remembrances of things past are important)
-The tone (for which a confrontational change of rhythm may be important)
-The beat (for which word and sound length matter and it helps readability)
-Probably more…
So nits or knots or gnats are nuts.
What matters is the ease of conveying your message as persuasively, or as entertainingly, as possible.

September 5, 2013 11:55 am

@kellyhaughton at 11:14 am
When the alarmists say “the models are based on the laws of physics”, how can they make that claim
They are based upon laws of physics. Just not ALL known laws of physics.
Isaac Newton modeled the flight of cannonballs using his laws of motion….. neglecting air resistance. Cannoneers weren’t impressed.

milodonharlani
September 5, 2013 12:00 pm

M Courtney says:
September 5, 2013 at 11:53 am
I have to agree that in context “knit” is more entertaining than “nit” at communicating the same message, although Mann does like to compare skeptics to pine bark beetle larvae.

milodonharlani
September 5, 2013 12:02 pm

PS: I eschewed inserting the term nit-wit into the above copy.

M Courtney
September 5, 2013 12:05 pm

milodonharlani says at September 5, 2013 at 12:02 pm… Prudent call. My father does like a fight if any is offered. Or even if it just seems to be.
You know, it’s fun – in a way.

September 5, 2013 12:06 pm

Dr Darko Butina says:
September 5, 2013 at 1:24 am
Vukcevic’s histogram is also based on the annual average and therefore not on ‘actual’ temperatures.
Hi Darko
The only actual temperature I take seriously is my own, when it goes to 38C or above.
Three or four months ago I saw your contribution on this blog, but the greater part of it is far above my expertise, which is, to put it mildly, usually less than superficial, regardless of the subject under discussion.
All the best.

September 5, 2013 12:09 pm

kadaka: repeat your IR experiment with an unwaxed paper cup of water. You can boil water in a paper cup on hot coals without burning the cup. Every Boy Scout knows that trick. I leave it to the experts whether the heat transfer to the water is by conduction through the container, transfer from the heated air, radiative transfer, or a combination of the three. In any event enough heat is transferred from the paper to the water to prevent the paper from burning.

milodonharlani
September 5, 2013 12:10 pm

M Courtney says:
September 5, 2013 at 12:05 pm
milodonharlani says at September 5, 2013 at 12:02 pm… Prudent call. My father does like a fight if any is offered. Or even if it just seems to be.
You know, it’s fun – in a way.
—————————–
I was thinking of Little Mikey Mann, not the distinguished Senior Courtney.

September 5, 2013 12:18 pm

@ Bruce Cobb says: September 5, 2013 at 10:53 am
*GROAN* – Bad pun! 😉

Theo Goodwin
September 5, 2013 12:26 pm

kellyhaughton says:
September 5, 2013 at 11:14 am
In practical terms, they are looking for forcing from CO2 and treating everything else as noise. They would dispute my claim. They would say that they are aware of the “forcings and feedbacks calculation” that must be done. However, they will get nowhere on that calculation. As explained in my first post above, cloud behavior is a natural regularity and their handling of cloud variation in their models will be subject to the same circularity as their handling of ENSO. They will treat cloud variation as summing to zero over the long run. The proof is in the pudding. How many models are trumpeting their skill at reproducing cloud behavior and, among them, how many are trumpeting their novel conclusions showing that cloud behavior is an important negative feedback (that cloud behavior seriously lowers the effects of CO2)?

Theo Goodwin
September 5, 2013 12:27 pm

M Courtney says:
September 5, 2013 at 12:05 pm
Please suggest to him that his time is better spent explaining the failings of models and other parts of CAGW.

Theo Goodwin
September 5, 2013 12:33 pm

kellyhaughton says:
September 5, 2013 at 11:14 am
“When the alarmists say “the models are based on the laws of physics”, how can they make that claim and leave out the “forcings” from the sun, the oceans, clouds, etc.?”
The only physics that they consider is the physics of radiation among Sun, Earth, and GHGs. They have no place for an experimental physics of natural regularities. They do not cover the physics of ENSO, AMO, you name it except to treat them as numerical indexes that will sum to zero. Several Alarmists have published articles arguing that the AMO and ENSO must sum to zero and cannot influence climate.

M Courtney
September 5, 2013 12:34 pm

Theo Goodwin says at September 5, 2013 at 12:27 pm…
I agree entirely and my work email records my conversation with my father on much the same theme (well, in the particular) even though I am paid to have other priorities at that time.
But I am not my father’s minder. He is his own man – don’t ask me to be responsible for focussing him, please (pretty please).

Theo Goodwin
September 5, 2013 12:36 pm

milodonharlani says:
September 5, 2013 at 10:38 am
I was there when ‘knit’ became ‘nit’. The invention of the nit was a very Sixties thing.

richardscourtney
September 5, 2013 12:40 pm

BBould:
Thank you for your post addressed to me at September 5, 2013 at 11:29 am, which says in total:

Richardscourtney: The Bayesian Treatment of
Auxiliary Hypotheses – This paper examines the standard Bayesian solution to the Quine-Duhem
problem, the problem of distributing blame between a theory and its auxiliary
hypotheses in the aftermath of a failed prediction.
http://joelvelasco.net/teaching/3865/strevens%20auxhyp.pdf
This is a bit over my head and I may have missed the mark and it is not relevant, but it may be why the models have not been falsified.

Firstly, for some reason my computer locks up when I try to download that link. So, at the moment I cannot answer your specific question.
Do you have another link or a reference so I can access the paper another way?
For the moment, I draw your attention to a recent excellent post from Robert Brown on another thread. It starts by (rightly) chastising me for failing to caveat the ‘all other things being equal’ fallacy, but if you get past that, he deals with the oversimplification of models. It is here:
http://wattsupwiththat.com/2013/09/03/another-paper-blames-enso-for-global-warming-pause-calling-it-a-major-control-knob-governing-earths-temperature/#comment-1406638
Richard

richardscourtney
September 5, 2013 12:47 pm

Friends:
It seems that some want to address me through my son.
That is not reasonable. Do any of you have a son who agrees with you, and would you want one?
Please talk to him about his views and to me about mine. Otherwise he and I may lose the fun of our arguments with each other 🙂
Richard

September 5, 2013 12:54 pm

richardscourtney says at September 5, 2013 at 12:47 pm…
Obviously, I disagree.

Theo Goodwin
September 5, 2013 1:07 pm

richardscourtney says:
September 5, 2013 at 12:47 pm
How could I have forgotten that most fundamental point? My bad. Never again will I address you through your son.

Theo Goodwin
September 5, 2013 1:09 pm

M Courtney says:
September 5, 2013 at 12:34 pm
You are correct. Please pardon me.

M Courtney
September 5, 2013 1:18 pm

Theo Goodwin. No Worries, Sir.
But I really do agree with you when you imply that my father should focus more on the real issues rather than smashing everyone who is wrong on the internet.
He would get to bed earlier.

BBould
September 5, 2013 1:20 pm

Richardscourtney: Thanks for the link. I’m sure this is obvious to you, but it’s a PDF file and needs Adobe Acrobat Reader to download. Other than that, I can’t understand why you can’t access the link, as it works fine on all my computers except tablets.

milodonharlani
September 5, 2013 1:26 pm

Theo Goodwin says:
September 5, 2013 at 12:36 pm
At least in the US, “nitpicking” has been in the language since the 1950s, if not before.

richardscourtney
September 5, 2013 1:44 pm

BBould:
Repeated attempts to download the file have each locked up my computer, so I have had to restart it.
As you suspected, I do have Adobe Acrobat, and it loads before the problem arises. I notice from the header that the paper is 42 pages, so I am wondering if the problem has something to do with the large file size.
If you cannot provide a reference for me so I can try to access it elsewhere, perhaps – as a start – you can copy its abstract to here so I can at least understand what you are asking about?
Sorry to be a nuisance about this.
Richard

rgbatduke
September 5, 2013 1:46 pm

I’ve said this on other threads, but it is especially true on this one. CMIP5 is an aggregation of independent models. There is no possible null hypothesis for such an aggregate, nor is the implied use of ordinary statistics in the analysis above axiomatically supportable.
GCMs are not independent, identically distributed samples.
Consequently, the central limit theorem has absolutely no defensible application to the mean or variance obtained for a single projective parameter extracted from an ensemble of GCMs. This is equally true in both directions. One cannot reject “CMIP5” per se or any of the participating models on the grounds of a hypothesis test based on an assumed normal distribution and the error function used to obtain a p-value, nor can one assert that the mean of this projective distribution and its variance enable any statement about how “likely” it is to have any given temperature in the underlying distribution.
What one can do is take each model in the collection, where each model typically produces a spread of outcomes for a Monte Carlo random perturbation of the initial conditions and parameters, analyze the mean of that spread and its statistical moments and properties, and compare the result to observation because in this case the Monte Carlo perturbation is indeed selection of random iid samples from a known distribution and hence both the central limit theorem applies and — given a Monte Carlo generated statistical envelope of model results — one doesn’t really need it. One can assess the p-value directly by comparing the actual trajectory to the ensemble of model generated trajectories even if the latter is not Gaussian.
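A minimal sketch of that per-model comparison, in Python; the ensemble spread, size and numbers below are invented stand-ins, not real CMIP5 output:

import numpy as np

rng = np.random.default_rng(42)

# Hypothetical: 1000 Monte Carlo runs of ONE model (perturbed initial
# conditions and parameters), each yielding a 1993-2012 trend in C/decade.
model_trends = rng.normal(loc=0.30, scale=0.08, size=1000)
observed_trend = 0.14  # the observed trend quoted in the head post, C/decade

# Empirical two-sided p-value: the fraction of the model's own ensemble
# lying at least as far from the ensemble mean as the observation does.
# No Gaussian assumption is needed; we compare against the ensemble itself.
center = model_trends.mean()
p_value = np.mean(np.abs(model_trends - center) >= abs(observed_trend - center))

print(f"ensemble mean trend: {center:.3f} C/decade")
print(f"empirical p-value:   {p_value:.3f}")
# A value below e.g. 0.05 would reject this one model at the 5% level.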
It is this last step that is never done. Based on the spaghetti snarl of GCM results that I’ve seen in e.g. AR4 or AR5 for specific models compared to the actual temperature, most of them would individually fail a correctly implemented hypothesis test when compared to the data, where now there is a meaningful null hypothesis and p-value for each model separately (the hypothesis that the model itself is correct, and the contingent probability of producing the observed result). Indeed, a lot of them would fail with a very high degree of confidence (very small p-values, well below an e.g. 0.05 cutoff).
If those models were removed from CMIP5, one would (at a guess, since I do not have access to the actual distribution of trajectories for all of the contributing models and have to generalize from the samples I’ve seen) give up pretty much all of the models to the right of the primary peak, throwing them into the garbage can (especially the secondary and tertiary peaks, as the distribution isn’t even cleanly unimodal in figure b). I’m guessing that while one could not actively fail all of the ones in between that cutoff and reality, a lot of the remaining ones would have systematically poor p-values: low enough to ensure that they aren’t all right for sure, without necessarily being able to tag specific ones as wrong.
Even this analysis is faulty, because executing the strategy above is a form of data dredging, and the criterion for passing the hypothesis test has to be strengthened accordingly: with so many opportunities to pass, it isn’t surprising that some models seem to pass as a statistical accident even though they truly fail. This would knock off a few more. I’m guessing that by the time one was done with this process, one would have a much, much smaller set of models that survived the “I guess I need to reasonably agree with empirical observation, don’t I?” cut, and the model mean would be much, much closer to reality (and still irrelevant).
At that point, though, one could examine the moderately successful (or at least, “close”) models to see what features they have in common and to help craft the next generation of models. It would also be an excellent time to re-assess the input variables (thinking about omitted variables in particular) and consider backwards (Bayesian) inference of parameters like the sensitivity, finding a value of the sensitivity that (for example) makes the centroid of the Monte Carlo distribution agree with the observed record. At least, it would be an excellent time to do this sort of thing if the GCM owners were doing science instead of performing political theater. And some of them are! Climate sensitivity is in free-fall for precisely that reason, because the clearly evident warming bias in almost all the models has driven a lot of honest (and formerly honestly mistaken) researchers back to the drawing board to determine the highest the sensitivity could reasonably be expected to be without making the p-value TOO small, a form of Bayesian analysis that perhaps overweights the former prior. This IMO will only mean that they have to move it again in the future (sigh) but that is their call. That’s what the future is for, anyway — to validate the better and falsify the worser.
rgb

RACookPE1978
Editor
September 5, 2013 1:46 pm

kadaka (KD Knoebel) says:
September 5, 2013 at 5:49 am (replying to )
gnomish said on September 5, 2013 at 2:59 am:

Proposed: Thermal radiation from an infrared heater applied to the surface of water cannot heat the water.

Referencing the problems in the experimental setup above (melting bowl, little effect, difficulty in setup and measurements).
Try it again, but with a much larger area of water exposed to the heat relative to the total volume of water. That is, use a steel baking pan much larger than the IR heater area. That way, the IR heats the water under the heater, but the sides of the aluminum or steel pan are far away from the edges of the IR heater. If the water is as high as possible in the pan, then the pan edges will be both further away (increasing r^2 losses) and have less area exposed to the IR radiation coming from the center of the IR heater.

RMB
Reply to  RACookPE1978
September 7, 2013 5:58 am

I would argue with your proposal. Radiation enters water but physical heat does not.

John F. Hultquist
September 5, 2013 2:03 pm

M Courtney says:
September 5, 2013 at 11:53 am
“-The literal meaning (we all know what words mean)”

Not so. Explaining, I think, the recent gaffe of Tony Abbott –
“No one — however smart, however well-educated, however experienced — is the suppository of all wisdom.” Mr Abbott appeared to mix up the word “suppository” with the word repository.
http://www.independent.co.uk/news/world/australasia/australian-election-tony-abbott-hits-bum-note-with-suppository-of-all-wisdom-gaffe-8757527.html
Now I return you to your regular programming.

Aphan
September 5, 2013 2:04 pm

richardscourtney –
I’ve been thinking for a long time now that those of us who are not satisfied with the science on climate change need to do some simple, strategic “marketing” to better represent ourselves. For example, we do not deny that the climate changes; we’re the ones who usually point that out. We do not deny that it warms when it actually warms, nor do we deny that it cools when it cools, etc. But we’ve allowed the “other side” to define us for so long that even the general public accepts their definitions of us. It has to stop. But we have to have other things to fill that void. Definitions and statements that reflect the truth. Things that are simple and solid and consistent and completely irrefutable that we just keep saying and saying and saying until their definitions of us lose all traction.
With that said, I LOVED something you said earlier:
“The modellers built their climate models to represent their understandings of climate mechanisms. If their understandings were correct then the models would behave as the climate does. The fact that the climate models provide indications which are NOT “consistent with measurements” indicates that the understanding of climate mechanisms of the modellers is wrong (or, at least, the way they have modeled that understanding is in error).”
I wanted to ask you if I could use that statement, over and over and over again? (I’ll credit you every time if you wish.) If we, as those who are “skeptical about what is being called climate science”, just state something like this over and over again, we drive home several truths all at once. If we could cause people in general to start asking themselves… and then others…:
“Is this study done with models?” or
“Why are scientists using models they know are wrong?” or
“Do the scientists even know their models are wrong?” or
“So how much of what we’re being told is based on EVIDENCE and FACT, and how much is based on flawed models?”
…we’d be creating a whole world full of skeptics and critical thinkers. Imagine…..:)

September 5, 2013 2:22 pm

John F. Hultquist says:
September 5, 2013 at 2:03 pm
M Courtney says:
September 5, 2013 at 11:53 am
“-The literal meaning (we all know what words mean)”
Not so. Explaining, I think, the recent gaffe of Tony Abbott –
“No one — however smart, however well-educated, however experienced — is the suppository of all wisdom.” Mr Abbott appeared to mix up the word “suppository” with the word repository.
http://www.independent.co.uk/news/world/australasia/australian-election-tony-abbott-hits-bum-note-with-suppository-of-all-wisdom-gaffe-8757527.html
Now I return you to your regular programming.

==========================================================================
Well, I can think of a few for whom “suppository” would be the word to describe the source of some of their bits of “wisdom”.

richardscourtney
September 5, 2013 2:25 pm

Aphan:
re your question to me at September 5, 2013 at 2:04 pm.
Yes, of course you can use it if you want to. I would not have written it if I did not want people to read it.
And feel free to adopt it as your own if you want. I take great pleasure when I see phrases I invented or first applied long ago. And the pleasure is greatest when people tell me it is something I should hear (I smile inside and say nothing).
Richard

Aphan
Reply to  richardscourtney
September 5, 2013 2:59 pm

richardscourtney-
“And the pleasure is greatest when people tell me it is something I should hear (I smile inside and say nothing).”
LOL! You scamp you! I think that at least once, you should respond along the lines of “Why…that is a truly brilliant point!” If you do it within earshot of your son, you can tell him I put you up to it. 🙂

BBould
September 5, 2013 2:39 pm

Richardscourtney: Here is the abstract.
This paper examines the standard Bayesian solution to the Quine-Duhem
problem, the problem of distributing blame between a theory and its auxiliary
hypotheses in the aftermath of a failed prediction. The standard
solution, I argue, begs the question against those who claim that the problem
has no solution. I then provide an alternative Bayesian solution that
is not question-begging and that turns out to have some interesting and
desirable properties not possessed by the standard solution. This solution
opens the way to a satisfying treatment of a problem concerning ad hoc
auxiliary hypotheses.

richardscourtney
September 5, 2013 2:42 pm

BBould:
In light of my difficulty downloading the file to comment on a paper as you requested, I write to say I now think you probably have an answer to the question you are really asking.
I think that answer is probably provided by considering the information in my post at September 5, 2013 at 8:09 am
http://wattsupwiththat.com/2013/09/05/statistical-proof-of-the-pause-overestimated-global-warming-over-the-past-20-years/#comment-1408595
together with the information in the short post from davidmhoffer at September 5, 2013 at 8:34 am
http://wattsupwiththat.com/2013/09/05/statistical-proof-of-the-pause-overestimated-global-warming-over-the-past-20-years/#comment-1408620
and the information in the long post from rgbatduke at September 5, 2013 at 1:46 pm
http://wattsupwiththat.com/2013/09/05/statistical-proof-of-the-pause-overestimated-global-warming-over-the-past-20-years/#comment-1408910
I especially commend study of the long post from Prof Brown, aka rgbatduke (see, I told you his comments are good).
Richard

Steve Keohane
September 5, 2013 2:49 pm

Gail Combs says: September 5, 2013 at 7:14 am
Thanks for the reminder about the Horse Latitudes, hadn’t heard of them in, I guess, decades.

richardscourtney
September 5, 2013 2:50 pm

BBould:
Thank you for your post addressed to me at September 5, 2013 at 2:39 pm. It came in while I was writing my post to you at September 5, 2013 at 2:42 pm.
OK. I see why you want me to read the paper: its abstract claims

The standard solution, I argue, begs the question against those who claim that the problem has no solution.

Well, “those who claim that the problem has no solution” certainly includes me and I think includes Prof Brown, so I really do need to find a way to get at that paper.
Richard

Theo Goodwin
September 5, 2013 2:53 pm

BBould says:
September 5, 2013 at 2:39 pm
Richardscourtney: Here is the abstract.
“This paper examines the standard Bayesian solution to the Quine-Duhem
problem, the problem of distributing blame between a theory and its auxiliary
hypotheses in the aftermath of a failed prediction.”
Quine created the Duhem-Quine thesis but would have no patience for Bayesians. The Duhem-Quine thesis does not reference auxiliary hypotheses. It applies to all the hypotheses in the theory and to the evidence for the theory.
“The standard
solution, I argue, begs the question against those who claim that the problem
has no solution. I then provide an alternative Bayesian solution that
is not question-begging and that turns out to have some interesting and
desirable properties not possessed by the standard solution. This solution
opens the way to a satisfying treatment of a problem concerning ad hoc
auxiliary hypotheses.”
This might be good work but it is not part-and-parcel of Quine’s work. Seems to me that it just takes Quine’s logic and adds to it. To those who pursued probabilities for various hypotheses, he remarked that he had no interest in colored marbles in an urn.

Theo Goodwin
September 5, 2013 2:56 pm

milodonharlani says:
September 5, 2013 at 1:26 pm
Do you have first hand evidence? I do not trust dictionaries on the matter of street language.
If you are right, I still claim that the nit was a creation of the Sixties. Do you remember nits?

Theo Goodwin
September 5, 2013 3:01 pm

rgbatduke says:
September 5, 2013 at 1:46 pm
Thanks so much for this very important work. Some among mainstream climate scientists who are usually willing to take skeptics seriously do not understand what you have just explained.

Theo Goodwin
September 5, 2013 3:05 pm

M Courtney says:
September 5, 2013 at 1:18 pm
When Lincoln referred to “the better angels of our nature” he was thinking of the first hour after a long, good night’s sleep.

RACookPE1978
Editor
September 5, 2013 3:19 pm

rgbatduke says:
September 5, 2013 at 1:46 pm

I’ve said this on other threads, but it is especially true on this one. CMIP5 is an aggregation of independent models. There is no possible null hypothesis for such an aggregate, nor is the implied use of ordinary statistics in the analysis above axiomatically supportable.
GCMs are not independent, identically distributed samples.

OK. So check me here, and see if I have interpreted what you wrote correctly.
You CANNOT add all of the GCM outputs together and “average” them for each year, because they are not independently measured properties subject to statistical theory. That is, if they really were accurate computer models of an accurately modeled physical process, EVERY run with the same model parameters would be identical: a calculator does NOT add 2+2+2 = 6 differently every time. Further, this future temperature vs CO2 growth is NOT a biologically statistical value like tree height and trunk diameter: you CANNOT measure a lot of them and get a “more accurate” average diameter or height, because these are not models based on “average growth rates per ton of fertilizer or per thousand gallons of water”, right?
However, since model inputs DO vary statistically because of their internal Monte Carlo calculators, it IS a valid comparison to run each model separately several thousand times, then compare THAT “average” model output to itself to see if it is putting out random or valid predictions. (Ignore Oldberg for the rest of this, OK?) At this point, one should compare the 24-odd average model runs against the real world (no volcanoes since 1993, known aerosol changes, known ENSO and PDO changes, known Arctic and Antarctic polar ice cover changes) and see which are most accurate.
If any are not-as-bad-as-the-rest-but-not-right (within 2 standard deviations at least), we should throw out the worst 20 models, modify the remaining 4, and continue to re-run them until they duplicate the past 50 years accurately. Then wait and see which of the corrected 4 is best. In the meantime, fix the bad 20 that were originally trashed.
Correct?

Gail Combs
September 5, 2013 3:24 pm

John F. Hultquist says: @ September 5, 2013 at 9:08 am
….This is one of at least 3 explanations for the term “Horse Latitudes”
>>>>>>>>>>>>>>>>>>>
I always figured they would eat the beasts, not throw them overboard…
Isn’t the rewriting of history great? (Just don’t tell all those kids sweating through their history finals.)

jorgekafkazar
September 5, 2013 3:49 pm

Theo Goodwin says: “Very well said. The “radiation-only theory,” used by all Alarmists is purely deterministic….”
And yet these same Alarmists claim there is such a thing as thermal inertia in a system driven by instantaneous radiative transfer.

John Whitman
September 5, 2013 3:52 pm

rgbatduke on September 5, 2013 at 1:46 pm
I’ve said this on other threads, but it is especially true on this one. CMIP5 is an aggregation of independent models. There is no possible null hypothesis for such an aggregate, nor is the implied use of ordinary statistics in the analysis above axiomatically supportable.
GCMs are not independent, identically distributed samples.
[. . .]

– – – – – – – –
rgbatduke,
Your comment was helpful. Thanks.
Considering your whole comment, what if a large voluntary group of academic institutions, independent of government, decided to start an evaluation of climate, and decided to do it without relying on the current body of IPCC bureaucracies, processes and reports? Let’s say their mission would be the integration of a very widely balanced sample of climate research into a consistent, comprehensive overview; a mission without a mandate to look for evidence of any particular climate factor (for example, anthropogenic warming from burning fossil fuels). Let’s say the product of the consortium is for itself but is freely available to anyone or any government. Government isn’t the target of the product.
Question for RGB => In that scenario, would you expect models to have the significance given to them by the IPCC? What reasonable role for models would you suggest in the scenario?
John

BBould
September 5, 2013 3:55 pm

Richardscourtney: If you do a Google search of the title, more copies pop up, and you should be able to choose one that works. Please post about this paper if you can.
Much thanks.

milodonharlani
September 5, 2013 4:02 pm

Theo Goodwin says:
September 5, 2013 at 2:56 pm
Can’t tell if you’re kidding or not, but yes, I do have first hand evidence, or first ear, from the late 1950s.
There is also direct documentary evidence, from early ’50s or late ’40s DoD jargon. A November 1951 Colliers magazine is quoted on the Net as saying: “Two long-time Pentagon stand-bys are fly-speckers and nit-pickers. The first of these nouns refers to people whose sole occupation seems to be studying papers in the hope of finding flaws in the writing, rather than making any effort to improve the thought or meaning; nit-pickers are those who quarrel with trivialities of expression and meaning, but who usually end up without making concrete or justified suggestions for improvement.”
You could check at a big library or buy the Nov 3, 10, 17 or 24, 1951 issues of Colliers on Amazon or eBay.

richardscourtney
September 5, 2013 4:05 pm

RACookPE1978:
You conclude your post at September 5, 2013 at 3:19 pm asking
http://wattsupwiththat.com/2013/09/05/statistical-proof-of-the-pause-overestimated-global-warming-over-the-past-20-years/#comment-1408995

If any are not-as-bad-as-the-rest-but-not-right (within 2 std deviation at least), we should throw out the worst 20 models, modify the remaining 4 and continue to re-run them until they duplicate the past 50 years accurately. Then wait and see which of the corrected 4 is best. In the meantime, fix the bad 20 that were originally trashed.
Correct?

Obviously, rgb will make whatever answer he wants. This is my ‘two pence’.
The problem is the ‘Texas sharpshooter fallacy’.
The Texas sharpshooter fires a scatter-gun at a wall, then paints a target around the middle of the impacts on the wall, and points to the target as evidence he is a good shot.
The models which failed to make an accurate forecast need to be rejected or amended because they are known to lack forecasting skill.
But removing the models which missed the target of an accurate prediction does not – of itself – demonstrate that the remaining models have forecasting skill: the models which seem to have made an accurate forecast may only have done that by chance (removing the ‘failed’ models is ‘painting the target’ after the gun was fired).
Therefore, and importantly, the remaining models may not accurately forecast the next 20 years.
There is an infinite number of possible futures. A model must emulate the dominant mechanisms of the modelled system if it is to be capable of agreement with the future that will eventuate. And each model is unique (e.g. each incorporates a unique value of climate sensitivity). Therefore, at most only one of them emulates the Earth’s climate system.
Hence, the outputs of the models cannot be averaged because average wrong is wrong.
Furthermore, there is no reason to suppose a model can forecast if it cannot hindcast, but an ability to hindcast does not indicate an ability to forecast. This is because there are many ways a model can be ‘tuned’ to match the past, and none of those ways may make the model capable of an accurate forecast.
Therefore, a model has no demonstrated forecast skill until it has made a series of successful forecasts.
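As a toy numerical illustration of the ‘painting the target’ point (a minimal sketch in Python; the 24 fake models, the trend numbers and the selection rule are all invented):

import numpy as np

rng = np.random.default_rng(0)
trials = 10_000
sel_err = np.empty(trials)
all_err = np.empty(trials)

for i in range(trials):
    # 24 fake "models": each is just a fixed random trend, no physics at all.
    models = rng.normal(0.1, 0.1, size=24)
    # Past and future "true" trends are drawn independently, so in this toy
    # world matching the past carries zero information about the future.
    past, future = rng.normal(0.1, 0.1, size=2)
    # Paint the target after firing: keep the 4 models closest to the past.
    best = np.argsort(np.abs(models - past))[:4]
    sel_err[i] = np.abs(models[best] - future).mean()
    all_err[i] = np.abs(models - future).mean()

print(f"future error of the 4 best hindcasters: {sel_err.mean():.3f}")
print(f"future error of all 24 models:          {all_err.mean():.3f}")
# The two numbers come out essentially equal: surviving the hindcast cut,
# by itself, demonstrates no forecast skill.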
Richard

richardscourtney
September 5, 2013 4:09 pm

BBould:
re your suggestion to me at September 5, 2013 at 3:55 pm.
Yes, having read the abstract I really do want to read that paper. I will do the search in the morning. It is now past midnight here. And, of course, I will reply to you when I have read it.
Richard

richardscourtney
September 5, 2013 4:11 pm

BBould:
What is the title and who is the author, please?
Richard

milodonharlani
September 5, 2013 4:11 pm

I see that issues of Collier’s are online:
http://www.unz.org/Pub/Colliers-1951nov03

milodonharlani
September 5, 2013 4:14 pm

Wordola’s first recorded use is 1954:
http://www.wordola.com/wusage/nitpicking/f1950-t1959.html

Dan Pangburn
September 5, 2013 4:30 pm

A physics-based equation, using only one external forcing, calculates average global temperature anomalies since before 1900 with R2 = 0.9. The equation is at http://climatechange90.blogspot.com/2013/05/natural-climate-change-has-been.html . Everything not explicitly considered must find room in that unexplained 10%.

BBould
September 5, 2013 4:35 pm

Richardscourtney: The Bayesian Treatment of
Auxiliary Hypotheses
Michael Strevens
British Journal for the Philosophy of Science, 52, 515–537, 2001
Copyright British Society for the Philosophy of Science

BBould
September 5, 2013 4:35 pm

Theo Goodwin: Thank you very much!

Ric Werme
Editor
September 5, 2013 5:47 pm

milodonharlani says:
September 5, 2013 at 12:02 pm
> PS: I eschewed inserting the term nit-wit into the above copy.
Ah. I thought there was something witless about that previous comment.

Theo Goodwin
September 5, 2013 7:38 pm

milodonharlani says:
September 5, 2013 at 4:02 pm
Thanks. You are a class act.

September 5, 2013 7:58 pm

Ric Werme says:
September 5, 2013 at 5:47 pm

milodonharlani says:
September 5, 2013 at 12:02 pm
> PS: I eschewed inserting the term nit-wit into the above copy.

Ah. I thought there was something witless about that previous comment.

=====================================================================
Perhaps nitlamps were invented to help nitwits see the light?

Brian H
September 6, 2013 12:29 am

“hiatus”, the fall-back defense? Like luke-warmism, agreeing cedes the unspoken assumptions, which are comprehensively false.

Richard Barraclough
September 6, 2013 1:15 am

Good to see a little etymological sparring in amongst the science.
Now, if only we could all distinguish between “its” and “it’s”…

Reply to  Richard Barraclough
September 6, 2013 6:44 am

Barraclough – knowing the difference between those 2 won me an IT contract at the State Library. 😉

richardscourtney
September 6, 2013 2:59 am

BBould:
I have now downloaded the paper
Strevens M, ‘The Bayesian Treatment of Auxiliary Hypotheses’, British Journal for the Philosophy of Science, 52, 515–537, 2001
At September 5, 2013 at 11:29 am you suggested to me

it may be why the models have not been falsified.

I have made a cursory study of the paper and will continue to give it much more thought. However, I am writing now to say that I do not think the paper is relevant to the discussion in this thread.
Firstly, I was surprised that I was unaware of a paper published 12 years ago which had the importance you suspected. My initial impression is that it does not have that importance.
Secondly, as a general rule, the importance of a paper is inversely related to its length. This paper is 42 pages long. My first reading of the paper suggests that it obeys that general rule.
The purpose of the paper seems to be to express a personal reaction of Michael Strevens to the work of Newstein. I do not know what, if any, personal or professional interactions Strevens has had with her, but he makes some personal remarks; e.g.

Newstein, a brilliant but controversial scientist, has asserted both that h is true and that e will be observed. You do not know Newstein’s reasons for either assertion, but if one of her claims turns out to be correct, that will greatly increase your confidence that Newstein is putting her brilliance to good use and thus your confidence that the other claim will also turn out to be correct.

Section 2.4, page 12
The subject of the paper is an attempt to quantify to what degree evidence refutes a theory.
Early in the twentieth century, Pierre Duhem cogently demonstrated that a scientific hypothesis is not directly refuted by evidence. This is because the evidence also represents additional hypotheses concerning how the evidence was produced and observed.
Duhem’s argument is plain when stated; e.g. if it is assumed that a long-jumper broke the world record, then the measurement made to assess that assumption assumes the tape measure was accurate.
The Quine-Duhem thesis expands on that seminal work of Duhem.
There is always more than one assumption concerning the evidence (e.g. in the long-jump illustration, in addition to assumptions about the tape measure there are assumptions about how it was used). And there is a central hypothesis (e.g. the long-jump measurement provided a correct indication). In essence, the Quine-Duhem thesis says there is no way to determine how an individual assumption affects the importance of the indication provided by the evidence.
Hence, it cannot be known to what degree a piece of evidence refutes a theory because the acceptance of the evidence is adoption of unquantified assumptions.
This, of course, is undeniably true and it affords a get-out to pseudoscientists. Indeed, it has been used by climastrologists (e.g. unmeasured heat must be in deep ocean where it cannot be measured). As you imply, it could also be used as a get-out to falsification of climate models (i.e. the models are right so the evidence must be wrong).
Avoidance of such get-outs requires clear recognition of what is – and what is not – being assessed. An example of this need for clarity is stated by my post in this thread at September 5, 2013 at 3:20 am
http://wattsupwiththat.com/2013/09/05/statistical-proof-of-the-pause-overestimated-global-warming-over-the-past-20-years/#comment-1408432
That post is directly pertinent to the subject of Strevens’ paper because it argues that the uncertainties in the data are a separate issue from whether the climate models emulate the data.
Strevens’ paper claims it is possible to assign individual assessments to the assumptions included in a piece of evidence. In his Introduction on page 1 he writes

I will present a kind of posterior objectivist argument: that on the correct Bayesian understanding of the Quine-Duhem problem, Bayesian conditionalization provides a way to assess the impact on a hypothesis h of the falsification of ha that behaves in certain objectively desirable ways, whatever the values of the prior probabilities.
I will argue that the standard Bayesian solution to the Quine-Duhem problem is incorrect (section 2.4).
I then show, in section 2.5, that given the standard, incorrect Bayesian solution to the Quine-Duhem problem, no posterior objectivist argument of the sort I intend to provide would be possible.

Those are bold claims which the paper fails to fulfill.
This failure seems to be because those claims are not the true purpose of the paper, which says in Section 2.4, page 14:

A Bayesian might reply that, in the scenarios sketched by Dorling and others, there are no Newstein effects. More generally, the probabilities have been chosen in such a way that δc is zero, so that the entire probability change can be attributed to δqd. But how is one to evaluate this claim?

Indeed, a Bayesian would reply that. And would not see a need to refute Newstein.
In a peer review of the paper I would discuss the purported refutation, but that does not seem to be needed here. That is because, as the paper admits, the refutation is pointless. It admits in Section 5, page 26

The Quine-Duhem problem is, in many ways, the central problem concerning the role of auxiliary hypotheses in science. One might hope, then, that a Bayesian solution to the Quine-Duhem problem would provide answers to many other questions involving auxiliary hypotheses. My solution cannot be directly employed in a Bayesian treatment of other problems in confirmation theory, however, because it provides a formula for what I call a partial posterior probability rather than for the posterior probability itself.

In other words, Strevens admits his analysis only affords a solution to one limited type of assessment and is not generally applicable.
I hope this brief and cursory reply is sufficient answer to your request.
Richard

Sleepalot
September 6, 2013 3:16 am

@ Kadaka: I applaud you for doing the experiment, however …
In your first experiment you used enough energy to boil the water [1], yet only warmed it 3 C. Yes, you falsified the proposition, but you rather proved the point, imo.
[1] If you’d put your roughly 0.5 kg of water in a 1600 W kettle for 5 minutes, it assuredly would have boiled.
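The footnote’s arithmetic checks out on the back of an envelope (a minimal sketch assuming a start near 20 C and ignoring losses):

mass = 0.5          # kg of water
c_p = 4186          # J/(kg K), specific heat of water
delta_T = 100 - 20  # K, from an assumed ~20 C start to boiling

needed = mass * c_p * delta_T  # energy to reach 100 C
supplied = 1600 * 5 * 60       # J delivered by 1600 W over 5 minutes

print(f"energy needed:   {needed / 1000:.0f} kJ")    # about 167 kJ
print(f"energy supplied: {supplied / 1000:.0f} kJ")  # 480 kJ
# Nearly three times the required energy, so the kettle claim holds with a
# wide margin even allowing for real-world losses.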

richardscourtney
September 6, 2013 5:19 am

Dan Pangburn:
I see nobody has answered your question in your post at September 5, 2013 at 4:30 pm which says in total

A physics-based equation, using only one external forcing, calculates average global temperature anomalies since before 1900 with R2 = 0.9. The equation is at http://climatechange90.blogspot.com/2013/05/natural-climate-change-has-been.html . Everything not explicitly considered must find room in that unexplained 10%.

The model is merely an exercise in curve fitting. As the link says

The word equation is: anomaly = ocean oscillation effect + solar effect – thermal radiation effect + CO2 effect + offset.

This matches the data because the ‘effects’ are tuned to obtain a fit with the anomaly.
Hence, the model demonstrates that those ‘effects’ can be made to match the anomaly, but it does not demonstrate there are not other variables which may be similarly tuned to obtain a match with the anomaly.
The model matches the form of the anomaly. But, importantly, it only explains the opinion of its constructor: it does NOT explain anything about climate behaviour. Therefore, it does not have a residual of “10%” of climate behaviour which is unexplained.
The model – as every model – represents the understanding of its constructor. But the model has no demonstrated predictive skill and, in that sense, it is similar to the GCMs.
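As a minimal sketch of how easily tuned ‘effects’ can fit a series that contains no such effects at all (Python; every regressor and number here is invented for illustration):

import numpy as np

rng = np.random.default_rng(1)
n = 120  # roughly a century of annual anomalies

# A pure random walk standing in for the anomaly series: by construction it
# contains no oscillation, solar, radiation or CO2 'effects' whatsoever.
anomaly = np.cumsum(rng.normal(0.0, 0.05, size=n))

# A few smooth, tunable regressors of the same general shape as the
# 'ocean oscillation + solar + CO2 + offset' terms.
t = np.arange(n)
X = np.column_stack([
    np.ones(n),                  # offset
    t,                           # slow CO2-like ramp
    np.sin(2 * np.pi * t / 60),  # multidecadal 'oscillation'
    np.cos(2 * np.pi * t / 60),
    np.sin(2 * np.pi * t / 11),  # solar-cycle-like term
])

coef, *_ = np.linalg.lstsq(X, anomaly, rcond=None)
fit = X @ coef
ss_res = np.sum((anomaly - fit) ** 2)
ss_tot = np.sum((anomaly - anomaly.mean()) ** 2)
print(f"R^2 = {1 - ss_res / ss_tot:.2f}")
# Typically large, despite the series containing no real 'effects': a high
# R^2 from tuned components demonstrates fit, not explanation.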
Richard

kadaka (KD Knoebel)
September 6, 2013 6:14 am

Sleepalot said on September 6, 2013 at 3:16 am:

@ Kadaka: I applaud you for doing the experiment, however …
In your first experiment you used enough energy to boil the water [1], yet only warmed it 3C. Yes, you falsified the proposition, but you rather proved the point – imo.
[1] if you’d put your roughly 0.5kg of water in a 1600 Watt kettle for 5 minutes it assuredly would’ve boiled.

If all of the air molecules exiting the heat gun impacted the water, and all energy gained from passage through the heat gun was transferred to the water, resulting in the temperature of the air that bounced off the water surface being no greater than room temperature, you might have a point.
Except mere hot air blowing on a surface is far less efficient than the direct heating of an electric kettle, where with some designs the heating element is immersed in the water. Not all that energy was transferred, and not all the air molecules hit the surface. As the heat gun was agitating the water surface, there was likely energy lost as latent heat of vaporization, to a degree far in excess of that of an electric kettle. Etc.
So, since a process that is many times more efficient could have delivered enough energy to boil the water in that time, and the process used that was far less efficient only warmed the water a few degrees, what is shown is… A more efficient heating method could have heated the water faster. And that’s about it for your comparison.
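For scale, a rough implied-efficiency estimate, borrowing the footnote’s 1600 W and 5 minutes as stand-ins for the heat gun’s actual draw and duration (which were not given):

absorbed = 0.5 * 4186 * 3  # J retained by 0.5 kg of water warming 3 C (~6.3 kJ)
supplied = 1600 * 5 * 60   # J drawn at 1600 W over 5 minutes (~480 kJ)
print(f"fraction retained: {absorbed / supplied:.1%}")  # on the order of 1%
# Consistent with the point above: blowing hot air across a surface is a very
# inefficient way to heat water compared with an immersed element.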

Allan MacRae
September 6, 2013 6:18 am

richardscourtney says: September 5, 2013 at 1:03 am
Friends:
The paper is reported to say
It is worth noting that the observed trend over this period — not significantly different from zero — suggests a temporary ‘hiatus’ in global warming.
NO! That is an unjustifiable assumption tantamount to a lie.
Peer review should have required that it be corrected to say something like:
It is worth noting that the observed trend over this period — not significantly different from zero — indicates a cessation of global warming. It remains to be seen when and if warming will resume or will be replaced by cooling.
Richard
________
Hello Richard,
I agree with your above assessment. Furthermore:
In several recent papers, we are witnessing an undignified scramble by the warmist establishment to spin the story one more time. It is just more warmist nonsense, espoused by people who have ABSOLUTELY NO PREDICTIVE TRACK RECORD. I suggest that one’s predictive track record is perhaps the only objective measure of scientific competence.
In 2002, we wrote with confidence:
“Climate science does not support the theory of catastrophic human-made global warming – the alleged warming crisis does not exist.”
http://www.apegga.org/Members/Publications/peggs/WEB11_02/kyoto_pt.htm
The above statement was based on strong evidence available at that time that the Sensitivity of Earth Temperature to increased atmospheric CO2 was not significant and was vastly over-estimated by the climate models cited by the IPCC.
The term “temporary warming hiatus” implies that warming will resume. I submit that it will not, and Earth is entering a natural cooling period. I wrote this global cooling prediction in an article, also published in 2002.
The above global cooling prediction was based on strong evidence available at that time that global warming and cooling cycles were primarily natural in origin and Earth was nearing the end of a natural warming cycle and about to enter a natural cooling cycle. These natural cycles are somewhat irregular and the timing of our prediction (cooling to start by 2020-2030) may be a bit late – global cooling may have already begun, although we will likely only know this with certainty in hindsight.
We do know that SC24 is a dud, and similar periods of solar inactivity (e.g. the Dalton and Maunder Minimums) have coincided with severe global cooling and major population declines due to famine in Northern countries. I suggest that IF this imminent global cooling is severe, and we are woefully unprepared due to global warming nonsense, the price society pays for our lack of preparedness will be far graver still.
Warmist nonsense has resulted in the squandering of over a trillion dollars of scarce global resources, mostly on inefficient and ineffective “green energy” schemes.
In 2002 we wrote with confidence:
“The ultimate agenda of pro-Kyoto advocates is to eliminate fossil fuels, but this would result in a catastrophic shortfall in global energy supply – the wasteful, inefficient energy solutions proposed by Kyoto advocates simply cannot replace fossil fuels.”
The policy makers of Europe, Ontario and California could have benefitted from this advice – instead, they severely damaged their economies by foolishly adopting worthless green energy schemes and are now having to reverse these decisions, due to soaring energy prices.
Another point – the satellite temperature record suggests a probable warming bias in the surface temperature record of about 0.2 C since 1979, or about 0.07 C per decade, so one should regard the alleged surface temperature warming trends as being of questionable accuracy.
The global warming camp has much to answer for. They have promoted false alarm and have profited from it. They have squandered significant global resources. They have caused us to focus our attentions on a non-crisis – global warming – and thus have caused us to ignore a much greater potential threat, probable imminent global cooling. They have acted like imbecilic thugs, and have caused several eminent scientists to be dismissed from their academic positions.
At a minimum, I suggest that these thuggish university dismissals should be reversed without delay, with suitable apologies.
Foolish “green energy” schemes and the lavish subsidies that make them attractive should cease immediately.
I also suggest that serious study of probable global cooling and its possible mitigation, if it is severe, should be commenced without delay.
Regards, Allan

richardscourtney
September 6, 2013 6:35 am

Allan MacRae:
Thank you for your post addressed to me at September 6, 2013 at 6:18 am
http://wattsupwiththat.com/2013/09/05/statistical-proof-of-the-pause-overestimated-global-warming-over-the-past-20-years/#comment-1409424
It says

I suggest that one’s predictive track record is perhaps the only objective measure of scientific competence.

Hmmmm.
Well, if you are talking about the taking of empirical measurements then, no, I don’t see how that can be true.
But if you are talking about theoretical modeling then your assertion must be true. And it goes to the nub of this thread.
Richard

Terry Oldberg
September 6, 2013 6:44 am

Global warming climatology is notable for the absence of the statistical populations underlying its models. The absence of these populations wounds this discipline. Casualties from this wound include probability theory, information theory, mathematical statistics and logic.
In their paper, Fyfe et al show how conclusions may be drawn from a global temperature time series despite this seemingly insurmountable barrier. One makes a bunch of assumptions and buries these assumptions!

richardscourtney
September 6, 2013 6:57 am

Friends:
Please resist the temptation to answer the post from Terry Oldberg at September 6, 2013 at 6:44 am.
You know he is wrong.
I know he is wrong.
And he knows he is wrong because on previous IPCC threads he has been unable to define “the statistical populations” he claims are “absent”.
Any attempt to engage with him is like entering Alice’s rabbit hole. And it completely destroys a thread.
If anybody doubts the need for my request I suggest that – as a recent example – they peruse the recent thread at
http://wattsupwiththat.com/2013/08/31/wuwt-hot-sheet-for-saturday-august-31st-2013/
Richard

September 6, 2013 7:00 am

John F. Hultquist says: September 5, 2013 at 2:03 pm
“No one — however smart, however well-educated, however experienced — is the suppository of all wisdom.”
– Tony Abbott
Disagree: I respectfully submit that the IPCC is the suppository of all wisdom.

Pamela Gray
September 6, 2013 7:10 am

A pondering about the pause:
During La Nina/La Nada conditions, when there are fewer clouds overhead but more wind, the ocean surface is roughened up, which leads to a less warm surface due to the top layers mixing as well as warmer top water being shoved away. That’s not what interests me about the pause. What is of interest is the amount of warming that happens below the surface due to SWIR penetration under these conditions. If the skies are not under “clear sky” conditions during these recharge periods, we should see less warming of the water below the surface. Eventually, the needle goes to the positive side of the ENSO dial (i.e. El Nado or El Modoki), and the surface calms down to the point that this now less-warmed water again sits on top. If these conditions continue, we should see a stable pause in subsequent land temperatures. However, if the swing back to La Nina/La Nada gets even less defined, with more clouds, and the oceans get less and less recharged due to equatorial cloud cover, we could even see a stepping-down in subsequent land temperatures.
So then the question is, what data do we have on subsurface recharge warming during non-El Nino conditions over this time period?

Pamela Gray
September 6, 2013 7:24 am

Maybe we need more definition and descriptive names for these ENSO periods, such as: La Nina, La Modiki, La Nada, Neutral, El Nado, El Modiki, and El Nino. It seems to me that the goodies in the pause could be found in the waters of La Modiki, La Nada and Neutral.

BBould
September 6, 2013 7:30 am

Richardscourtney: Thanks for taking the time to explain the paper I brought up; it’s truly appreciated. This is the reason I started looking into it: a post (not addressed to me) from RealClimate – “Read up on Quine and the issue of auxiliary hypotheses. In practice, all theories are ‘wrong’ (as they are imperfect models of reality), and all tests involve multiple hypotheses. Judging which one (or more) are falsified by a mismatch is non-trivial. I have no problem agreeing that mismatches should be addressed, but wait for the post. – gavin”
Hopefully this will help explain my interest.

BBould
September 6, 2013 7:33 am

Pamela Gray: Thanks, you made me think of another question. Does anyone study how much energy the ocean loses at night? I know my swimming pool warms and cools much more slowly than the surrounding air, but it’s always much cooler at dawn.

rgbatduke
September 6, 2013 7:45 am

This might be good work but it is not part-and-parcel of Quine’s work. Seems to me that it just takes Quine’s logic and adds to it. To those who pursued probabilities for various hypotheses, he remarked that he had no interest in colored marbles in an urn.
Sacrilege! Polya would be turning in his grave! Taleb, on the other hand, might not (partly because “he’s not dead yet!” :-). As his character Joe the cab driver (IIRC) in “The Black Swan” might say of an analysis like the one above, “It’s a mug’s game”. If you flip a two-sided coin 20 times and get heads every time, only an idiot would apply naive probability theory with the assumption of an unbiased coin and claim that the probability of the next flip being heads is 0.5. A Bayesian, however firmly they might have believed the coin unbiased initially, would systematically adjust the prior estimate of 0.5 until the maximum likelihood of the outcome coincides with the data, and at this point would be deeply suspicious that the coin actually has two heads, that the coin is a magical coin, that the coin is so amazingly weighted and carefully flipped that it has p_head -> 0.999999, or that it’s all a horrible dream and there is no real coin. A mug’s game.
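A minimal sketch of that updating, assuming a conjugate Beta prior on p(heads); the Beta(50, 50) prior strength is an illustrative choice:

# Beta-Binomial conjugate updating for the coin example.
a, b = 50, 50          # Beta(50, 50): a firm prior belief that the coin is fair
heads, flips = 20, 20  # the observed run: 20 heads in 20 flips

a_post = a + heads            # conjugacy: Beta prior + binomial data
b_post = b + (flips - heads)  # gives a Beta posterior
posterior_mean = a_post / (a_post + b_post)  # P(next flip is heads)

print(f"prior mean:     {a / (a + b):.3f}")     # 0.500
print(f"posterior mean: {posterior_mean:.3f}")  # ~0.583, already drifting
# Even a firm 'fair coin' prior moves toward heads after 20 straight heads;
# with a weak Beta(1, 1) prior the posterior mean would be 21/22, about 0.955.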
At this point, I’ll simply amplify what I said above in two ways. First, one can, with an enormous amount of effort, attempt an actual statistical analysis of an (essentially meaningless) composite hypothesis, but nothing of the sort has been attempted for CMIP5, in part because doing so would be spectacularly difficult — right up there with the difficulty of the problem that the GCMs are attempting to solve (which is already one of the most difficult computational problems humans have attempted to solve). The difficulty arises because the theories are highly multivariate and have an abundance of assumptions. Every assumption in every model is subject to Bayes’ theorem as a Bayesian prior! That is, when one assigns a specific functional form to the radiation profile of the atmosphere at various CO_2 levels, one is — since we cannot precisely compute this — forced to use one of several approximations (see e.g. Petty’s book), and we have to statistically weight the probability that those approximations are correct and downgrade the certainty of our results (our eventual error estimate) accordingly.
To put it more formally, the assertion is that
If all the assumptions made in constructing the computation are correct, then the model predicts thus and such. However, the assumptions are not certain, and the best estimate of the probability that the model prediction is correct is strictly decreased according to their uncertainty. This can be summarized in the aggregate assumption that “the internal, highly nonlinear, dynamically chaotic differential equations solved by the computer code are correct and sufficiently insensitive to the range of possible error in the Bayesian priors that the output is meaningful in all dimensions” (because the code doesn’t just predict temperature, it predicts lots of other things about the future climate as well). Analyzing this precisely for a single theory is enormously difficult, which is why most of the models resort to Monte Carlo to attempt to measure it instead of theoretically predicting it. But there are further assumptions built into e.g. the ranges explored by the Monte Carlo itself (more Bayesian priors), into the selection of input variables itself (it is hard to “Monte Carlo” the omission of a variable that in fact is important), into the granularity and geometry selected (again, difficult to Monte Carlo, as the codes may not be written to be length-scale renormalizable in an unbiased way), and one cannot escape the fundamental assumption “this code will correctly predict the multidimensional future in a consistent way and within a useful precision” no matter what you do.
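To see how stacked priors erode confidence, a toy Python illustration (the per-assumption probability 0.95 and the counts are invented for the cartoon, not estimates for any actual GCM, and real assumptions are not independent):

# If a model rests on n independent assumptions, each believed correct with
# probability p, the chance that all of them hold shrinks geometrically.
def joint_confidence(p, n):
    return p ** n

for n in (5, 10, 20):
    print(n, round(joint_confidence(0.95, n), 3))  # 0.774, 0.599, 0.358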
That is the basis for the ultimate null hypothesis, per model. In the end, the model produces an ensemble of results that supposedly span the range of model uncertainty given the priors stated and unstated that went into its construction. If reality fails to lie well within that range in any significant dimension/projection, the model should be considered suspect (weak failure) or overtly fail a hypothesis test depending on how badly reality fails to live within that range.
Imagine attempting to extend this process collectively for all the models in CMIP5! How can one rigorously assess whether approximation A or approximation B for e.g. the radiative properties of CO_2 in the atmosphere is most probably correct, when the answer could be that both are adequate as the basis of a correct theory if everything else is done correctly, or that neither of them will work because a correct (predictive) theory requires an exact treatment of CO_2’s radiative properties? And then, of course, there are the rest of the greenhouse gases, the non-greenhouse atmosphere, water vapor, clouds, the ocean, aerosols, soot and other particulates, the extent and effect of the biosphere — it is difficult to even count the underlying assumptions that are built into each model, and not all of them are in all of the models.
So how can one frame the null hypothesis for CMIP5? “Somewhere in the collection of contributing GCMs is a model that is a reliable predictor of the actual climate”, so that we can then assess a probability of getting the current result if that is true? No, that won’t work. The implicit null hypothesis in the figures above, and used by the IPCC in the assessment reports, is that “The mean of the collection of models in CMIP5 is a reliable predictor of the actual climate, and the standard deviation of the distribution of results is a measure of the probability of the actual climate arising from the common initial condition given this correct computation of its time evolution”. Which is nonsense: unsupported by statistical theory, indeed (as I argue above) unsupportable by statistical theory. At the end of the day, all that the figure above really demonstrates is that the GCMs are very likely not independent and unbiased in their underlying assumptions, because they do produce a creditable Gaussian (with the wrong mean; but seriously, such agreement is enormously unlikely in a single projective variable obtained from a collection of supposedly independent models that otherwise significantly differ in their predictions of e.g. rainfall).
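To make that implicit test concrete, a sketch with synthetic numbers (the 0.21/0.10 mean and spread are invented, loosely patterned on Fig. 1b, not the actual CMIP5 trend set):

# Treat 117 simulated 15-year trends as a Gaussian "ensemble" and ask how far
# the observed trend sits from the ensemble mean, in ensemble standard deviations.
import random, statistics

random.seed(1)
model_trends = [random.gauss(0.21, 0.10) for _ in range(117)]
observed = 0.05  # deg C per decade, 1998-2012

mu = statistics.mean(model_trends)
sigma = statistics.stdev(model_trends)
print(round((observed - mu) / sigma, 2))  # the z-score the implicit "test" relies on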
IMO the one conclusion that is immediately justified by the distribution of CMIP5 results is that the GCMs are enormously incestuous, sharing whole blocks of common assumptions, and that at least one of those common assumptions is badly incorrect and completely unexplored by the Monte Carlo perturbations of initial conditions in the individual models. If one performed a similar study of the projective distribution of results in other dimensions, one might even gain some insight into just what shared assumptions are most suspect, but that would require a systematic deconstruction of all of the models and code and some sort of gross partitioning of the shared and different features — an awesomely complex task.
The second amplification is a simple observation that I’ll make on the process of highly multivariate predictive modeling itself (wherein I’m moderately expert). There are two basic kinds of multivariate predictive models. An a priori model assumes that the relevant theory is correctly known and attempts to actually compute the result using that theory, using tools like Monte Carlo to assess the uncertainty in the computed outcome where one can. (As noted above, one cannot assess the uncertainty linked to some of the prior assumptions implicit in the implementation by Monte Carlo or any other objective way, as there is no “assumption space” to sample, and nonlinear chaotic dynamics can amplify even small errors into huge end-stage differences; see e.g. the “stiffness” of systems of coupled ODEs and chaos theory, although the amplification can easily be significant even for well-behaved models.)
In order to set up the Monte Carlo, one has to assign values and uncertainty ranges to the many variable parameters that the model relies upon. This is typically done by training the model — using it to compute a known sequence of outcomes from a known initialization and tweaking things until there is a good correspondence between the actual data and the model-produced data. The tweaking process typically at least “can” provide a fair amount of information about the sensitivity of the model results to these assumptions, and hence gives one a reasonable knowledge of the expected range of errors in predicting the future. One then applies the model to a trial set of data that (if one is wise) one has reserved from the training data, to see if the model continues to work. This second stage “validates” the model within the prior assumption that “the training and trial sets are independent and span the set of important features in the underlying, a priori assumed known, dynamics”. Finally, of course, the validation process in science never ends.
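Schematically, the training/trial discipline looks like this (a toy one-parameter linear model with synthetic data, standing in for the real thing):

# Tune a model on a reserved training set, then measure error on held-out trial data.
import random

random.seed(0)
data = [(x, 2.0 * x + random.gauss(0.0, 0.5)) for x in range(100)]
train, trial = data[:70], data[70:]  # reserve 30% as the trial set

# least-squares slope through the origin, fit on the training set only
slope = sum(x * y for x, y in train) / sum(x * x for x, _ in train)

trial_mse = sum((y - slope * x) ** 2 for x, y in trial) / len(trial)
print(round(slope, 3), round(trial_mse, 3))  # the model still "works" on trial data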
It doesn’t matter a whit if your model perfectly captures the training data and nails the trial data square on the money if, the first time you compare it to new trial data from the real world, it immediately goes to hell, well outside of the probable range of outcomes you expected. If you are prospecting for oil with a predictive model, it doesn’t matter if your code can predict previously drilled wells 95% of the time if you only get one oil strike in 100 drilling attempts the first time you use it to direct oil exploration efforts. You are fired and go broke no matter how fervently you argue that your code is good and you are just unlucky. Ditto predicting the stock market, predicting pretty much anything of value. Science is even more merciless than commerce in this regard, everywhere but climate science! Classical physics was “validated” by experiment after experiment in pretty good agreement for a century after its discovery, but then an entire class of experiments could not be explained by classical a priori models. By “could not be explained”, I mean specifically that even if one built a hundred classical models, each slightly different, to e.g. try to predict electronic spectra or the outcome of an electron diffraction experiment, the mean of all of those distinct a priori models would never converge to, or in the end have meaningful statistical overlap with, the actual accumulating experimental data. They in fact did not, and a lot of effort was put into trying!
The problem, of course, was that a common shared assumption in the “independent” models was incorrect. Further, it was an assumption that one could never “sample” with mere Monte Carlo or any sort of presumed spanning of a space of relevant assumptions, because it was one of the a priori assumed known aspects of the computation that was incorrect, even though (at the time) it was supported by an enormous body of evidence involving objects as large as molecules or small clumps of molecules on up. We had to throw classical physics itself under the bus in order to make progress.
You’d think that we would learn from this sort of paradigm-shifting historical result not to repeat this sort of error, and in the general physics community I think the lesson has mostly been learned, as this sort of process occurs all the time in the delicate interplay between experimental physics and theoretical physics. In some sense we expect new experiments to overturn both great and small aspects of our theoretical understanding, which is why people continue to study e.g. neutrinos and look for Higgs bosons: even the Standard Model in physics is not now and will never be settled science; at best it will continue to be consistent with observations made so far. A single (confirmed) superluminal neutrino or new heavy particle or darkon can ruin your whole theoretical day, and then it is back to the drawing board to try, try again.
In the meantime, this example shows the incredible stupidity of claiming that the centroid of the projection of a single variable from a collection of distinct a priori models with numerous shared assumptions, many of which cannot be discretely tested or simulated, has any sort of statistically relevant connection to reality. Each model, one at a time, is subject to the usual process of falsification that all good science is based on. Collectively they do not have more weight; they have less. By stupidly including obviously failed models in the average, you without question pull the average away from the true expected behavior.
rgb

September 6, 2013 7:48 am

Richard Barraclough says:
September 6, 2013 at 1:15 am
Good to see a little etymological sparring in amongst the science.
Now, if only we could all distinguish between “its” and “it’s”……..

=======================================================================
I’m getting better at it. Someone here (maybe you?) gave a tip a while back to keep them straight.
If it’s the possessive, treat it like “his” or “hers”, no apostrophe.

richardscourtney
September 6, 2013 7:50 am

BBould:
Thank you for the acknowledgement in your post addressed to me at September 6, 2013 at 7:30 am.
I am grateful that you brought the paper to my attention because I was not aware of it despite its having been published so long ago (i.e. in 2001). And that lack of awareness is not surprising considering the serious flaws the paper contains and the limited – to the degree of being almost useless – conclusion it reaches. However, if RC and the like are intending to use that paper as an excuse for model failure then they really, really must be desperate!
In the light of why you say you raised the paper, I now consider the trouble I had obtaining it was well worth it. If anybody attempts to excuse model failure by resurrecting that paper from obscurity then I can now refute the laughable attempt.
Thank you.
Richard

richardscourtney
September 6, 2013 8:03 am

rgbatduke:
Thank you for your brilliant post at September 6, 2013 at 7:45 am
http://wattsupwiththat.com/2013/09/05/statistical-proof-of-the-pause-overestimated-global-warming-over-the-past-20-years/#comment-1409484
I commend everyone interested in the subject of this thread to read, study and inwardly digest it.
And I ask you to please amend it into a form for submission to Anth0ny for him to have it as a WUWT article.
Richard

Terry Oldberg
September 6, 2013 8:14 am

rgbatduke (Sept. 6, 2013):
To your list of shortcomings in the methodology of global warming research, you could have added that the general circulation models are insusceptible to validation because the events in the underlying statistical populations do not exist.

Pamela Gray
September 6, 2013 8:25 am

In model research, the question is: does the model adequately simulate the workings of the underlying statistical population? The null hypothesis would therefore be: there is no statistical difference between the model results and the observations. Logically, it is thus susceptible to validation.

Terry Oldberg
Reply to  Pamela Gray
September 6, 2013 11:27 am

Pamela Gray:
Your understanding of the meaning of “validation” is identical to mine. The populations underlying the general circulation models do not exist; thus, these models are insusceptible to being validated.
In the paper entitled “Spinning the Climate,” the long-time IPCC expert reviewer Vincent Gray reports that he once complained to IPCC management that the models were insusceptible to being validated, yet the IPCC assessment reports were claiming they were validated. In tacit admission of Vincent’s claim, IPCC management established the policy of changing the word “validated” to the similar-sounding word “evaluated”; “evaluation” is a process that can be conducted in lieu of the non-existent statistical population. Confused by the similarity of the sounds made by the two words, many people continued to assume the models were validated.
To dupe people into thinking that similar-sounding words with differing meanings are synonyms is an oft-used technique on the part of the IPCC and affiliated climatologists. When words with differing meanings are treated as synonyms, each word in the word-pair is polysemic (has more than one meaning). When either word in such a word-pair is used in making an argument and changes meaning in the midst of this argument, the argument is an example of an “equivocation.” By logical rule, one cannot draw a proper conclusion from an equivocation. To draw an IMPROPER conclusion is the equivocation fallacy. IPCC-affiliated climatologists use the equivocation fallacy extensively in leading dupes to false or unproved conclusions ( http://wmbriggs.com/blog/?p=7923 ).

Pamela Gray
September 6, 2013 8:30 am

Solid surfaces lose heat more rapidly; water loses heat more slowly. However, because Earth is more of a water planet than a land planet, it is an interesting question. My hunch is that heat belched up from the oceans becomes our land temperatures, which at night send that heat up and outa here! Especially under clear-sky night conditions (strong radiative cooling).

Allan MacRae
September 6, 2013 9:23 am

richardscourtney says: September 6, 2013 at 6:35 am
Hello Richard,
To be clear, we are talking about one’s predictive track based on modeling.
Specifically, the GCMs cited by the IPCC greatly over-estimate the sensitivity of Earth’s climate to atmospheric CO2, and under-estimate the role of natural climate variability. This was obvious a decade ago from the inability of these models to hindcast the global cooling period that occurred from ~1945 to 1975, until they fabricated false aerosol data to force their models to conform. As a result of these fatal flaws, these “IPCC GCMs” have grossly over-predicted Earth’s temperature and have demonstrated NO PREDICTIVE SKILL – this is their dismal “predictive track record”.
The IPCC wholeheartedly endorsed this global warming alarmism and so did much of the climate science establishment. Anyone who disagreed was ridiculed as a “denier”, and due to the extremist position of the global warming camp, some leading academics were dismissed from their universities, some received death threats, and some suffered actual violence. The imbecilic, dishonest and thuggish behaviour of the global warming camp was further revealed in the Climategate emails.
Our conceptual model is based on very different input assumptions from the IPCC GCMs. We assumed, based on substantial evidence that was available a decade ago, that climate sensitivity to increased atmospheric CO2 is insignificant, and that natural variability was the primary characteristic of Earth’s climate. We further assumed, based on credible evidence, that solar variability was a significant driver of natural climate variability. Therefore, we wrote in 2002 that there was no global warming crisis, and the lack of warming for the past 10-15 years demonstrates this conclusion to be plausible.
We further wrote in 2002 that global cooling would start by 2020-2030, and it remains to be seen whether this will prove correct or not – but warming has ceased for a significant time, and I suggest that global temperatures are at a plateau and are about to decline. We did not predict the severity of this global cooling trend, but if the solar-driver hypothesis holds, then cooling could be severe. This we do not know, but we do know from history that global cooling is a much greater threat to humanity than (alleged) global warming.
Regards, Allan

richardscourtney
September 6, 2013 9:39 am

Allan MacRae:
re your post at September 6, 2013 at 9:23 am.
Allan, you begin your post by saying to me, “To be clear …”.
To be clear, yes, I agree.
Richard

Gunga Din
September 6, 2013 10:48 am

Terry Oldberg says:
September 6, 2013 at 8:14 am
rgbatduke (Sept. 6, 2013):
To your list of shortcomings in the methodology of global warming research, you could have added that the general circulation models are insusceptible to validation because the events in the underlying statistical populations do not exist.

====================================================================
Mr. layman here. To me it sounds like you just said, “The models can’t be wrong because the models say they are right.”
If that is not what you meant would you please explain in layman’s terms?
(Feel free to insult me if you wish as long as you explain.)

richardscourtney
September 6, 2013 11:01 am

Gunga Din:
re your post at September 6, 2013 at 10:48 am.
Can you see that disc of light behind you?
You have entered Alice’s rabbit hole and that disc is where you entered.
It is light from the outside. Enjoy it while you can. You may never see it again.
Richard

Aphan
Reply to  richardscourtney
September 6, 2013 12:41 pm

richardscourtney:
“Can you see that disc of light behind you?
You have entered Alice’s rabbit hole and that disc is where you entered.
It is light from the outside. Enjoy it while you can. You may never see it again.”
You’re killing me here. Smart AND clever AND humble? I feel a science crush coming on….

Gunga Din
September 6, 2013 11:16 am

http://wattsupwiththat.com/2013/09/05/statistical-proof-of-the-pause-overestimated-global-warming-over-the-past-20-years/#comment-1409624
==============================================================
As long as it’s brighter than a “nitlamp” I think I can find my way out. 😎

Terry Oldberg
September 6, 2013 11:58 am

Gunga Din (Sept 6 at 10:48):
Thank you for giving me the opportunity to clarify. I did not mean to say “The models can’t be wrong because the models say they are right.” I did mean to say that the models are insusceptible to being validated. This has the significance that the method by which models were created was not the scientific method of investigation. A consequence is that many IPCC conclusions, including the conclusion that global warming is man-made, must be discarded. The previous sentence should not be taken to mean that we know the warming is not man-made.
The widespread view that the models were created by the scientific method is a product of successful use of the deceptive argument known as the “equivocation fallacy” on the part of the IPCC and affiliated climatologists. An equivocation fallacy is a conclusion that appears to be true but that is false or unproved. For details, please see the peer-reviewed article at http://wmbriggs.com/blog/?p=7923 .

Gunga Din
September 6, 2013 12:33 pm

Terry Oldberg
http://wattsupwiththat.com/2013/09/05/statistical-proof-of-the-pause-overestimated-global-warming-over-the-past-20-years/#comment-1409676
==================================================================
Thank you.
“Equivocation fallacy” sounds very similar to “bait and switch”.
(Guess I didn’t need the nitlamp after all.)

September 6, 2013 1:01 pm

kadaka (KD Knoebel) says:
September 5, 2013 at 2:32 am
If you have a one meter squared body of water, how much would a 1600W (1.6 kilowatt) hair dryer heat the body of water over 60 seconds, 60 minutes, from the surface?

Dan Pangburn
September 6, 2013 1:32 pm

Rich – The equation is physics-based. The physics is the first law of thermodynamics, conservation of energy. This is discussed more completely starting on page 12, Anomaly Calculation (an engineering analysis), in an early paper made public 4/10/10 at http://climaterealists.com/attachments/database/2010/corroborationofnaturalclimatechange.pdf . This shows an early version of the equation which has since been refined.
The equation contains only one external forcing: the time-integral of sunspot numbers, which serves as an excellent proxy for average global temperature. The mechanism has been attributed to the influence of changes to low-altitude clouds, average cloud altitude, cloud area, and even the location of cloud ‘bands’ as modulated by the jet stream. I expect it will eventually be found to be some combination of these. The high sensitivity of average global temperature to tiny changes in clouds is calculated at http://lowaltitudeclouds.blogspot.com . It is not necessary to know the mechanism to calculate the proxy factor.
Determining the value of a single proxy factor is not ‘curve fitting’.
Graphs that show (qualitatively, because proxy factors are not applied) the correlation between the time-integral of sunspot numbers and average global temperature can be seen at http://hockeyschtick.blogspot.com/2010/01/blog-post_23.html or at http://climaterealists.com/attachments/ftp/Verification%20Dan%20P.pdf (this shows an earlier version of the equation; HadCRUT4 data was not used).
The only hypothesis that was made is that average global temperature is proportional to the time-integral of sunspot numbers. The rest is arithmetic. The coefficient of determination, R2 = 0.9, demonstrates that the hypothesis was correct.
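For reference, R2 here is the standard coefficient of determination; a generic computation looks like this (the two series are synthetic, not the actual anomalies or the fitted model):

def r_squared(observed, modeled):
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - m) ** 2 for o, m in zip(observed, modeled))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

obs = [0.10, 0.15, 0.30, 0.25, 0.40]
mod = [0.12, 0.14, 0.27, 0.28, 0.38]
print(round(r_squared(obs, mod), 3))  # ~0.953 for these made-up series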
The past predictive skill of the equation is easily demonstrated. Simply determine the coefficients at any time in the past using data up to that time and then use the equation, thus calibrated, to calculate the present temperature. For example, the predicted anomaly trend value for 2012 (no CO2 change effect) using calibration through 2005 (actual sunspot numbers through 2012) is 0.3888 K. When calibrated using measurements through 2012 the calculated value is 0.3967 K; a difference of only 0.008 K.
The future predictive skill, after 2012 to 2020, depends on the accuracy of predicting the sunspot number trend for the remainder of solar cycle 24 and the assumption that the net effective ocean oscillation will continue approximately as it has since before 1900.
This is an equation that calculates average global temperature. It is not a model, especially not a climate model…or a weather model. An early version of it, made public in 2010, predicted a downtrend from about 2005.
Part of the problem in trying to predict measured temperatures is that the measurements have a random uncertainty with standard deviation of approximately ±0.1 K so only trends of measurements are meaningful for comparison with calculations.

rgbatduke
September 6, 2013 1:50 pm

To your list of shortcomings in the methodology of global warming research, you could have added that the general circulation models are insusceptible to validation because the events in the underlying statistical populations do not exist.
I could have waxed poetic considerably longer, for example pointing out the recently published comparison of four GCMs to a toy problem that is precisely specified and known and that should have a unique answer. All four got different answers. The probability that any of those answers/models is correct is correspondingly strictly less than 25% and falling fast even if we do NOT know what the correct answer is (the best one could say is that one of the four models got it right and the others got it wrong, but of course all four could have gotten it wrong as well, hence strictly less than). This example alone is almost sufficient to demonstrate a lack of “convergence” in any sort of “GC model space”, although 4 is too small a number to be convincing.
I could also have ranted a bit about the stupidity of training and validating hypothesized global warming models using data obtained from a single segment of climate measurements when the climate was monotonically warming, which may be what you are trying to say here (sometimes I have difficulty understanding you, but I think that sometimes I actually agree with what you say :-). When training e.g. Neural Network binary classification models, it is often recommended that one use a training set with a balanced number of hits and misses, yesses and noes, because if you have an actual population that is (say) 90% noes and train with it, the network quickly learns that it can achieve 90% accuracy (which is none too shabby, believe me) by always answering no!
Of course this makes the model useless for discriminating the actual yesses and noes in the population outside of the training/trial set, but hey, the model is damn accurate! And of course the solution is to build a good discriminator first and correct it with Bayes’ theorem afterwards, or use the net to create an ordinal list of probable yes-hood and empirically pursue it to an optimum payoff.
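The imbalance trap in miniature (synthetic labels, no actual network needed):

# On a 90% "no" population, a classifier that always answers "no" scores
# 90% accuracy while having zero skill at finding the yesses.
labels = ["no"] * 90 + ["yes"] * 10

predictions = ["no"] * len(labels)  # the lazy network's strategy
accuracy = sum(p == t for p, t in zip(predictions, labels)) / len(labels)
yes_found = sum(p == "yes" for p in predictions)

print(accuracy)   # 0.9, looks respectable
print(yes_found)  # 0, it never identifies a single "yes"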
GCMs appear to have nearly universally made this error. Hindcasting the prior ups and downs in the climate record is not in them, to the extent that we even have accurate data to use for a hindcast validation. By making CO_2 the one significant control knob at the expense of natural variations that are clearly visible in the climate record and that are not predicted by the GCMs as far as I know (certainly not over multicentury time scales), all the models have to do is predict monotonic warming and they will capture all of the training/trial data, and there are LOTS of ways to write, tune, and initialize a model to have monotonic behavior without even trying. The other symptoms of failure — getting storms, floods and drought, SLR, ice melt, and many other things wrong — were ignored, or perhaps it was expected that future tweaks would fix this while retaining the monotonic behavior that the creators of the models all expected to find and doubtless built into the models in many ways. Even variables that might have been important — for example, solar state — were nearly constant across the training/trial interval and hence held to be irrelevant and rejected from the models. Now that many of those omitted variables — ENSO, the PDO, solar state — are radically changing, and now that the physical science basis for the inclusion and functional behavior of other variables like clouds and soot is being challenged, can it really be surprising that the monotonic models that were all trained and validated on the same monotonic interval, insensitive to all of these possibly important drivers, continue to show monotonic behavior while the real climate does not?
If the training set for a tunable model does not span the space of possible behaviors of the system being modeled, of course you’re going to be at serious risk of ending up with egg on your face, and with sufficiently nonlinear systems you will never have sufficient data to use as a training set. Nassim Nicholas Taleb’s book The Black Swan is a veritable polemic against the stupidity of believing otherwise and betting your life or fortune on it. Here we are just betting the lives of millions and the fortunes of the entire world on the plausibility that the GCMs built in a strictly bull market can survive the advent of the bear, or are bear-proof, or prove that bears have irrevocably evolved into bulls and will never be seen again. This bear is extinct. It is an ex-bear.
Until, of course, it sits up and bites you in the proverbial ass.
So yeah, Terry, I actually agree. One of many troubling aspects of GCMs is that they have assumptions built into them supported by NO body of data or observation or even any particularly believable theory. They have assumptions that contradict or are neutral to the existing observational data, such as “the PDO can safely be ignored”, or “the 1997-1998 warming that is almost all of the warming observed over the training interval was all due to an improbable ENSO event, not CO_2 per se”, or “solar variability is nearly irrelevant to the climate”. And every one of them is an implicit Bayesian prior, and to the extent that the assumptions are not certain, they weaken the probable reliability of the predictions generated by the models that incorporate them, even by omission.
rgb

rgbatduke
September 6, 2013 2:00 pm

If you have a one meter squared body of water, how much would a 1600W (1.6 kilowatt) hair dryer heat the body of water over 60 seconds, 60 minutes, from the surface?
Well, let’s see: taking that to be a cubic meter, that’s one metric ton (1000 kg) of water. Its specific heat is about 4 joules per gram per degree centigrade. 1000 kg is a million grams, so to raise it one degree requires 4 million joules. If you dumped ALL 1600 W into the water and prevented the water from otherwise cooling or heating (adiabatically isolated it), it would take roughly 42 minutes to raise it by a degree. If you tried heating it with warm blowing air from a hair drier on the TOP SURFACE, however, you would probably NEVER warm the body of water by a degree. I say “probably” because the wind from the hair drier (plus its heat) would encourage surface evaporation. Depending on the strength of the wind and how it is applied, it might COOL the water due to the latent heat of evaporation, or the heat provided by the hair drier might be sufficient to replace it and a bit more. However, even in the latter case, since water will cheerfully stratify, all you’d end up doing is warming the top layer of water until latent heat DID balance the hair drier’s contribution (probably at most a few degrees), and it would then take a VERY long time for the heat to propagate to the bottom of the cubic meter, assuming that that bottom is adiabatically insulated. Days. Maybe longer. And, as noted, it would probably not heat without bound — it would just shift from one equilibrium temperature at the surface to another slightly warmer one.
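The arithmetic, as a check (idealized: all 1600 W delivered to the water with no losses, the opposite of the real situation described above, and using 4.18 rather than the rounded 4 J per gram per degree):

mass_g = 1_000_000   # 1 cubic meter of water ~ 1000 kg = 1e6 g
c = 4.18             # specific heat of water, J per gram per deg C
power_w = 1600.0     # hair drier rating, J per second

joules_per_degree = mass_g * c             # ~4.2 million J per deg C
minutes = joules_per_degree / power_w / 60
print(round(minutes, 1))                   # ~43.5 min, close to the ~42 above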
rgb

rgbatduke
September 6, 2013 2:09 pm

Your understanding of the meaning of “validation” is identical to mine. The populations underlying the general circulation models do not exist; thus, these models are insusceptible to being validated.
No scientific model can be verified in the strong sense. All scientific models can be falsified in the strong sense. So what is the point? We could have validated any given GCM in the weak sense by observing that it is “still” predicting global climate reasonably accurately (outside its training/trial interval, within which success is not surprising). No interval of observing that this is true is sufficient to verify the model in the strong sense (so that we believe that it can never be falsified, the data proving the model). But plenty of models, including GCMs, could be validated in the weak sense up to the present.
It’s just that they (mostly) aren’t. They are either strongly falsified or left in limbo, not definitively (probably) correct or incorrect, so far.
I do not understand your point about the populations underlying the GCMs, after all. You’ll have to explain in non-rabbit-hole English, with examples, if you want me to understand. Sorry.
rgb

richardscourtney
September 6, 2013 2:35 pm

Dan Pangburn:
re your post addressed to me at September 6, 2013 at 1:32 pm
http://wattsupwiththat.com/2013/09/05/statistical-proof-of-the-pause-overestimated-global-warming-over-the-past-20-years/#comment-1409791
in response to my answer to you at September 6, 2013 at 5:19 am
http://wattsupwiththat.com/2013/09/05/statistical-proof-of-the-pause-overestimated-global-warming-over-the-past-20-years/#comment-1409387
Sorry, but the model IS a curve fitting exercise. I remind you that the link says

The word equation is: anomaly = ocean oscillation effect + solar effect – thermal radiation effect + CO2 effect + offset.

The link is
http://climatechange90.blogspot.com/2013/05/natural-climate-change-has-been.html
and the mathematical representation of that equation is there (if I knew how to copy it to here then I would).
If the model were not a curve fitting exercise then there would be accepted definitions of
ocean oscillation effect,
solar effect,
thermal radiation effect,
CO2 effect,
and the offset to be applied.
There are no such agreed definitions. The parameters are each compiled to fit the curve.
As you say, they do not disagree with known physics. But other curve fitting exercises could, too. And they would also ‘wiggle the elephant’s trunk’.
This is not to say the model is wrong. But there is no reason to think it is right. I explain this in my post which you have answered.
Sorry, but that is the way it is.
Richard

Gary Pearse
September 6, 2013 2:48 pm

rgbatduke says:
September 6, 2013 at 2:09 pm
On top of all that, the GCMs, largely right or largely wrong, cannot even be trained over any interval, because the world’s temperature record keepers have an algorithm that keeps changing the record. Probably the GCMs in existence were “trained” over HadCRUT2 or HadCRUT3, and now we have HadCRUT4, for example. Man, a large team has to come in and re-correct the temperature records going back to the raw data. If it’s dangerous global warming we are trying to quantify, I contend that there is little need for adjustments, even if there is some reasonable case for them, if we are to be facing runaway warming and seas rising metres. Correcting here or there by 0.2–0.4 (I call it the thumbtack method: stick the tack in at about 1945 and rotate counter-clockwise half a degree) won’t even matter if we are going to have unbearable heat rise. We haven’t even got our feet wet, and GISS was calling for the West Side Highway to be under water before now, and it’s about 10 feet above the water in Manhattan at the present time. The GCMs are easy: we can just throw them out.

Dan Pangburn
September 6, 2013 5:11 pm

Rich – The constants and variables in the math equation are defined just after the math equation. I’ll connect the terms in the word equation with the math equation and try to expand on them a bit more.
ocean oscillation effect = (A,y) “There is some average surface temperature oscillation that accounts for all of the oceans considered together of which the named oscillations are participants.” Page 1, 3rd paragraph from bottom.
solar effect = B/17 * summation of sunspot numbers from 1895 to the calculation year. This accounts for the energy gained by the planet above or below break-even and expresses it as temperature change.
thermal radiation effect = B/17 * summation of 43.97*(T(i)/286.8)^4 from 1895 to the calculation year. This accounts for the energy radiated by the planet above or below break-even and expresses it as temperature change.
CO2 effect = C/17 * summation of ln(CO2 level in the calculation year / CO2 level in 1895), from 1895 to the calculation year
Offset to be applied = D (see the paper)
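As a caricature of the solar and radiation terms above: holding T fixed at the 286.8 K reference, the radiated term reduces to the constant 43.97, and the solar-minus-radiation piece becomes a running sum of sunspot numbers above that break-even level. A hedged sketch (the sunspot series and the coefficient B are invented, NOT the fitted values):

ssn = [45, 60, 80, 100, 70, 40, 20, 30, 55, 90]  # hypothetical annual sunspot numbers
breakeven = 43.97                                 # break-even level from the term above
B = 0.003                                         # hypothetical coefficient

running, proxy = 0.0, []
for s in ssn:
    running += s - breakeven        # time-integral of sunspots above break-even
    proxy.append(B / 17 * running)  # scaled as in the B/17 terms above
print([round(p, 4) for p in proxy])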
“…agreed definitions”: these are the definitions of the terms in the equation. What matters is the results of the equation. The results match the down-up-down-up-too-soon-to-tell of reported average global temperatures (which have sd ≈ ±0.1 K). The whole point is that these are not for anyone else to ‘agree’ on. I don’t know of anyone else who has thought to look at the time-integral of sunspots.
I’m not sure what you mean by ‘parameters’. The coefficients are ‘tuned’ (tediously) to maximize R2 but, except for the proxy factor, they can be estimated fairly closely by a look at anomaly measurements.
“But other curve fitting exercises could, too.” I don’t think so. Here is the challenge: fit the measured anomalies back to 1895 with R2 = 0.9. Approximate the accepted average global temperature trend back to 1610. Use only one external forcing.
I think the equation is right because it does all those things and also gave a good prediction of 2012 measurements based on data through 2005. I have no interest in an ‘is not’ ‘is too’ argument. The equation and graph with prediction are made public and waiting for future measurements.

Allan MacRae
September 6, 2013 7:51 pm

http://wattsupwiththat.com/2013/09/05/statistical-proof-of-the-pause-overestimated-global-warming-over-the-past-20-years/#comment-1409063
Dan Pangburn says: September 5, 2013 at 4:30 pm
“A physics-based equation, using only one external forcing, calculates average global temperature anomalies since before 1900 with R2 = 0.9. The equation is at http://climatechange90.blogspot.com/2013/05/natural-climate-change-has-been.html
Everything not explicitly considered must find room in that unexplained 10%.”
_____________
Thank you Dan,
This was interesting. You say “About 41.8% of reported average global temperature change results from natural ocean surface temperature oscillation and 58.2% results from change in the rate that the planet radiates energy to outer space, as calculated using a proxy, which is the time-integral of sunspot numbers.” So Solar is your “one external forcing”.
You used Hadcrut4 Surface Temperature record in this analysis.
I suggest that this Surface Temperature record probably exhibits a significant warming bias – my rough estimate for Hadcrut3 was about 0.07C per decade, at least back to ~1979 and possibly much further.
How would your analysis change if you were to decrease your surface temperature record by 0.07C/decade from about 1945 to present, and particularly how would this change the inferred impact of increased atmospheric CO2 and other parameters in your equation?
If you want to email me, you can contact me through my website at http://www.OilSandsExpert.com
Thank you, Allan

nevket240
September 6, 2013 9:08 pm

http://www.smh.com.au/environment/climate-change/rising-ocean-acidity-may-spur-climate-action-20130907-2tbe7.html
As an avowed Climate Cycler and denier of Climate Goring, I cannot understand how useless the media have been in executing their responsibility to journalism; instead they have been nothing better than glorified story tellers. As per this ‘article’ the pH has moved to 8.1 from the 8.2 of pre-industrial levels. OH?? Really?? What a drastic change. I am saddened by this massive shift and ask all Greens to avoid electrical power and carbon-based products immediately. NOW!!!
regards

HenryP
September 6, 2013 9:33 pm

Among the probability theory specialists present here,
e.g. Richard, Allan, Dan, Dr. Brown,
I do have an interesting problem for you.
I took a random sample of 47 weather stations, carefully selected to be suitably globally representative.
I analysed all daily data, determining the change in temperature noted over time.
http://blogs.24.com/henryp/2013/02/21/henrys-pool-tables-on-global-warmingcooling/
I observed that the change in the speed of warming/cooling can be set out against time, giving binomials with high correlation, >0.95. In the case of the drop in the speed of maximum temperatures, the correlation was >0.995.
Unfortunately, the binomial fit would show tremendous cooling coming up in the future… I therefore came up with the sine wave best fit for the drop in the speed of maximum temp.
http://blogs.24.com/henryp/2012/10/02/best-sine-wave-fit-for-the-drop-in-global-maximum-temperatures/
Now back in 1985 William Arnold reported a connection between sunspots and planet alignment.
Observe from my a-c curves (can be determined and easily estimated)
1) change of sign: (from warming to cooling and vice versa)
1904, 1950, 1995, 2039
2) maximum speed of cooling or warming = turning points
1927, 1972, 2016
Then I put the dates of the various positions of Uranus and Saturn next to it:
1) we had/have Saturn synodical with Uranus (i.e. in line with each other)
1897, 1942, 1988, 2032
2) we had complete 180 degrees opposition between Saturn and Uranus
1919, 1965, 2009,
In all 7 of my own results & projections, there is an exact 7 or 8 year delay before “the push/pull” occurs that switches the dynamo inside the sun, changing the sign or direction of the warming/cooling….!!!! Conceivably the gravitational pull of these two planets has some special lopsided character, causing the actual switch. Perhaps Uranus’ apparent sideward motion (inclination of its equator by 98 degrees) works like a push-pull trigger. Either way, there is a clear correlation. Other synodical cycles of planets probably have some interference as well, either delaying or extending the normal cycle time a little bit. So it appears William Arnold’s report was right after all… (“On the Special Theory of Order”, 1985).
http://www.cyclesresearchinstitute.org/cycles-astronomy/arnold_theory_order.pdf
My reasoning now is that the probability of there being no relationship between the alignment of the planets Uranus and Saturn and the speed of incoming energy is only 1/7 to the power 7.
Am I right or am I wrong?

kadaka (KD Knoebel)
September 6, 2013 10:48 pm

From HenryP on September 6, 2013 at 9:33 pm:

I took a random sample of 47 weather stations, carefully selected to be suitably globally representative.

They were a carefully selected random sample.
Please provide what you think is the meaning of “random”. You keep using that word. I do not think it means what you think it means.

richardscourtney
September 7, 2013 2:29 am

Dan Pangburn:
Thank you for your post addressed to me at September 6, 2013 at 5:11 pm.
http://wattsupwiththat.com/2013/09/05/statistical-proof-of-the-pause-overestimated-global-warming-over-the-past-20-years/#comment-1409960
Unfortunately, I have nothing more to add because I have explained my view in my previous two replies to you.
As I explained, the model you promote is a curve fitting exercise.
I write to try to help you understand the problem with the model.
I ask you to consider the post to you from Allan MacRae at September 6, 2013 at 7:51 pm
http://wattsupwiththat.com/2013/09/05/statistical-proof-of-the-pause-overestimated-global-warming-over-the-past-20-years/#comment-1410046
He says

You used Hadcrut4 Surface Temperature record in this analysis.
I suggest that this Surface Temperature record probably exhibits a significant warming bias – my rough estimate for Hadcrut3 was about 0.07C per decade, at least back to ~1979 and possibly much further.

I add that the other global temperature data sets vary, too. This is GISS
http://jonova.s3.amazonaws.com/graphs/giss/hansen-giss-1940-1980.gif
Does the model only work for Hadcrut4?
If so, then it will not work soon because the Hadcrut4 data are altered most months.
Does the model work for Hadcrut4, Hadcrut3 and GISS which are different?
If so, then – as I said – it is a curve fitting exercise which provides no information.
Please note that the transient nature of the global temperature data sets is why in my above post at
http://wattsupwiththat.com/2013/09/05/statistical-proof-of-the-pause-overestimated-global-warming-over-the-past-20-years/#comment-1408432
I argued that there are two separate issues when considering performance of numerical climate models, and these are
(a) the data
and
(b) comparison of model results with the data.
Curve fitting deliberately combines those issues and, therefore, it is not possible to assess one by using the other.
Richard

richardscourtney
September 7, 2013 2:41 am

Henry P:
re your question to me and others.
Sorry, but I cannot provide an answer to your question until you have provided the clarification requested of you by kadaka (KD Knoebel) at September 6, 2013 at 10:48 pm
http://wattsupwiththat.com/2013/09/05/statistical-proof-of-the-pause-overestimated-global-warming-over-the-past-20-years/#comment-1410115
I have provided a reply to Dan Pangburn but – for some mysterious reason – it (and another reply I attempted to someone else on another thread) is trapped in moderation.
Richard

richardscourtney
September 7, 2013 3:40 am

nevket240:
re your post at September 6, 2013 at 9:08 pm
http://wattsupwiththat.com/2013/09/05/statistical-proof-of-the-pause-overestimated-global-warming-over-the-past-20-years/#comment-1410078
You say concerning so-called ‘ocean acidification’

As per this ‘article’ the pH has moved to 8.1 from the 8.2 of pre-industrial levels

The pH of the ocean surface layer varies so much in space and in time that there is no possibility of such a small average change having been detected.
However, if that pH change has happened then it would induce an argument concerning which is cause and which is effect.
A change of only 0.1 in the average pH of the ocean surface layer would alter the equilibrium between atmospheric CO2 and oceanic CO2 concentrations enough to induce a rise in atmospheric CO2 greater than the claimed rise from 280 ppmv to ~400 ppmv since the industrial revolution. Such a pH change could be (but probably is not) a result of variation in sulphur emissions from volcanoes beneath the sea, with the sulphur taking centuries before the thermohaline circulation conveys it to the ocean surface layer.
The subject of the carbon cycle is interesting but not pertinent to this thread. I have answered your post so it has not been ignored, so there is no reason to pursue the matter here.
If you want more info. on the carbon cycle then I suggest you use the WUWT Search facility for
Salby
then read the threads which that provides.
Richard

kadaka (KD Knoebel)
September 7, 2013 6:36 am

RMB said on September 7, 2013 at 5:58 am:

I would argue with your proposal. Radiation enters water but physical heat does not.

If I drop a pebble into water, there are ripples in the water.
The object doesn’t have to enter the water. I can skip a stone across a pond, and everywhere the stone touches the water there will be ripples.
The ripples are evidence of the transfer of kinetic energy.
The thermal energy of a gaseous molecule is basically just kinetic energy.
So picture a pebble as small as a molecule that hits the water, whether it enters or just bounces off the surface. It can transfer kinetic energy to the water, which is transferring physical heat.
Thus it is shown that air that is warmer can transfer physical heat to water that is cooler.
Thus you are wrong.
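For a sense of scale, the textbook form of that transfer is q = h * A * dT; a rough sketch with loosely assumed numbers (the coefficient h is an order-of-magnitude guess for free convection of air, not a measured value for any real pond):

h = 10.0    # W per m^2 per K, order of magnitude for free convection in air
area = 1.0  # m^2 of water surface
dT = 5.0    # air 5 K warmer than the water surface

q = h * area * dT
print(q)  # 50.0 W flowing into the water: small, but not "blocked"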

RMB
Reply to  kadaka (KD Knoebel)
September 7, 2013 8:46 am

“it can transfer kinetic energy to the water”. I thought so too, but when I tried to heat water through the surface using a heat gun, that’s not the result I got. At 450 °C the heat should quickly boil the water, but the water remains cold, including the point on the surface where the heat is being directed. The rejection of heat is very convincing. If you want to heat water through the surface, the only way to do it is to apply the heat source through a metal floating object. The floating object kills the surface tension and heat will flow. I don’t pretend to know exactly why the heat is so convincingly blocked, but my guess is that we just don’t know enough about the properties of surface tension; after all, not many people fire heat guns at buckets of water.

Aphan
Reply to  RMB
September 7, 2013 12:32 pm

RMB-
Just some thoughts.
Getting a set amount of water to “boil” takes more than just a heat source that is 450 degrees. Boiling water is the result of convection and conduction, and only results when ALL of the water in a given container reaches the boiling point.
Unless you replicate ALL of the conditions that affect the Ocean and its temperatures in your home experiment, you haven’t proven anything about the Ocean’s heat/energy absorption.
For example-
The saline/salt and nutrient content of ocean water is different from that of tap water. This makes the way it conducts and radiates heat different from tap water.
The mineral content of ocean water also affects its surface tension, as do its movements.
Surface tension DECLINES when temperatures INCREASE.
Boiling water in a pan from the bottom introduces not just a heat source to the water, but the conductivity of that heat through metal, and the convection cycle of hot water and air on the bottom of the pan rising quickly to the top, overturning, and bringing the cooler water down to the bottom to be heated quickly.
Unless you introduce a way for the warm water at the surface of your container to be forced to circulate to the lower depths, so that the cold water can then come to the surface and be heated, all of the water will never reach the boiling point at the same time.
And those are just a few differences that I can think of off the top of my head.
Thermal energy DOES enter the Ocean, from the Sun above AND from the Earth’s radiation below and around it. This radiation causes the molecules in the water to vibrate, which then release that energy as heat. But heat RISES, so that energy/heat remains and circulates in the top layers of the ocean, and does not “sink” to the bottom or hide, etc. Warm water dragged to the depths of the oceans by currents interacts with HUGE, much larger amounts of much colder water, AND pressure, and when it does, it cools, thus releasing the additional energy/heat, which then rises to the surface over time.

rgbatduke
September 7, 2013 7:22 am

My reasoning now is that the probability of there being no relationship between the alignment of the planets Uranus and Saturn and the speed of incoming energy is only 1/7 to the power 7.
Am I right or am I wrong?

You are wrong, and until you learn what post hoc ergo propter hoc means, and understand the difference between curve fitting numerology and science, you will never, ever correct your mistake. We’ve had this discussion before.
You might try reading books on this or something — there is too much to teach you easily online, and you haven’t demonstrated any eagerness to learn, as you are too enamored of your own ideas to listen to any others. Also, you’re apparently a half-dozen college-level math courses short of having what you need to really understand your mistakes. I will try just one time and then quit.
Fitting any small segment of data with any combination of functions and then using those functions to extrapolate outside of the fit region is a process fraught with peril. Interpolation — filling in between the data points — has some basis if there is reason to believe that the function being fit is smooth on the granularity of the data (and can lead to well-known errors even then if your assumption turns out to be wrong). Extrapolation not only fails, but often one KNOWS that it will fail. If one tries to fit, e.g., a polynomial to a smooth curve, there is a theorem (the Weierstrass approximation theorem) that says one can always do so within systematically reducing bounds. Indeed, there is a constitutive relation — Taylor series in calculus — that can accomplish such a fit either directly or piecewise, up to a point. But the Taylor series contains within it the prediction of its own FAILURE if you attempt to extrapolate outside of a certain radius of convergence or (more generally) the data range used to fit a known function. The higher-order, neglected terms come back to haunt you by eventually increasing without bound unless the function being fit IS a finite polynomial.
The exact same thing happens if you use other bases (a basis in this context is a spanning set of functions that represent unit vectors in an infinite dimensional linear vector space that contains all arbitrary smooth functions, as you would know if you’d taken a university level linear algebra class or a class on functional analysis or ordinary differential equations) or mixtures of bases, such as a few polynomial terms plus harmonic functions (either of which can be turned into an orthonormal basis on any fixed interval or with some effort on the entire real line). You can fit something quite beautifully by accident in some finite region, but there is no reason at all a priori to think that the fit can be extrapolated!
You should look up Koutsoyiannis’ lovely hydrology paper that I’ve posted a dozen or so times on various threads addressing this point. The first page of his paper is the best illustration of this point I’ve ever seen, as he shows three successive blow-ups of an actual data set that at first looks constant, then linear, then exponential, then like a harmonic function; and beyond that it could turn out that ALL of this is noise on a function that really is linear, or anything else. Think of a polynomial fit as always having an infinite number of terms with unconstrained coefficients waiting to jump out and snare you as soon as you get outside of the fit region.
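The trap is easy to demonstrate numerically; a minimal sketch (fitting a cubic to sin(x) on a short window, then stepping outside it; numpy assumed available):

# Inside the fit region the cubic is fine; outside, the neglected terms
# "jump out and snare you".
import numpy as np

xs = np.linspace(0.0, 3.0, 31)
coeffs = np.polyfit(xs, np.sin(xs), 3)  # least-squares cubic, fit region only

for x in (1.5, 3.0, 6.0, 10.0):
    print(x, round(float(np.polyval(coeffs, x)), 3), round(np.sin(x), 3))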
The point of this is that what you are trying is actually even less justified than the GCMs. At least they actually incorporate a priori believed-to-be-known physics, reasons for the functional forms they try to apply and compute with. What you are doing is also known to be numerically unstable under extrapolation. And then there is post hoc ergo propter hoc, a.k.a. correlation is not causality.
I doubt that any of this will make the slightest dent in your armor, but we have to try, we have to try.
rgb

BBould
September 7, 2013 7:28 am

kadaka (KD Knoebel): Then why does water get colder at night, even when the air temp is significantly warmer? I don’t know the answer, but I would suspect that the kinetic energy you speak of must be very, very weak, and that most of the warming of water is by IR?

rgbatduke
September 7, 2013 7:32 am

I would argue with your proposal. Radiation enters water but physical heat does not.
Every glass of cold water that has ever warmed to room temperature disagrees with you.
Every pot placed on a stove and brought to a boil disagrees with you.
rgb

RMB
Reply to  rgbatduke
September 7, 2013 8:20 am

You can heat water through the sides of a glass, and you can heat water through the bottom of a pot, but you cannot heat water through its surface, because surface tension blocks the heat. Try heating water through the surface using a heat gun. At 450 °C the water should quickly boil; instead it remains cold. To heat water through the surface it is necessary to float something on the surface, like a pan or metal dish, to cancel the surface tension; then heat will flow. Exactly why the heat is so convincingly rejected is not something I claim to understand, but I suspect that the scientific community has seriously underestimated the properties of surface tension. Not many people fire heat guns at buckets of water, but Trenberth’s missing heat made me curious, so I tried heating water from above. I recommend it.

Aphan
Reply to  rgbatduke
September 7, 2013 2:40 pm

RMB,
“Heat is not a property or component or constituent of a body itself. Heat refers only to a process of transfer of energy.”
Radiative/infrared energy (from the Sun) penetrates the water’s surface right through the surface tension and causes the molecules in the water to excite and then release energy as heat. The energy transfer, and thus heat, occurs in the water past the surface tension.
Hot air, blown by a heat gun, gets blocked by the surface tension of the water. The “air heating process” takes place within the gun, and most of the energy that air could transfer to the water is lost in the battle with the surface tension. What energy is left is not enough to penetrate the surface and then excite/heat the water molecules and cause drastic heating.
Try heating your glass/container of water with an infrared heater instead of hot air and see what happens.

RMB
Reply to  Aphan
September 14, 2013 7:58 am

You are a man after my own heart. You and I seem to be the only people on the planet who understand that physical heat will not penetrate the surface, surface tension blocks it; only radiation penetrates surface tension. The climate is therefore a locked box involving only the ocean and radiation; mankind’s shenanigans don’t count. AGW is a complete nonsense.

RMB
Reply to  Aphan
September 14, 2013 8:34 am

I’ve already replied but I think it went to the wrong address after the update, so here goes again. You and I must be the only people on the planet who understand that surface tension blocks physical heat but allows radiation to pass through. It may be simplistic to say this, but the climate’s behaviour is controlled by the sun’s radiation and only that penetrating and energizing the ocean. The climate is a locked box and mankind is only along for the ride. AGW is a complete nonsense.

richardscourtney
September 7, 2013 7:57 am

rgbatduke:
re your post at September 7, 2013 at 7:22 am.
Yes, you are right and nobody could rationally dispute your arguments.
I write to explain why I have refused to provide a proper answer to Henry P but – instead – I am supporting kadaka (KD Knoebel) in his request for clarification from Henry P.
Your arguments were successful in explaining reality to Terry Oldberg when my arguments failed.
However, with respect, I suggest that in this case your arguments are likely to fail, and the method initiated by kadaka (KD Knoebel) is more likely to be successful.
As you say to Henry P,

You are wrong, and until you learn what post hoc ergo propter hoc means, and understand the difference between curve fitting numerology and science, you will never, ever correct your mistake. We’ve had this discussion before.

{emphasis added: RSC}
Similarly, I failed to explain matters to Terry Oldberg despite my many attempts over many months. You did it in a few days. Horses for courses.
In this case, I don’t think Henry P will consider the fundamental theoretical objections to his work which you present. This is because he fails to recognise the assumptions he has made and, therefore, he cannot question them (how can anybody question what they fail to recognise exists?). Hence, your arguments flow over him like water off a duck’s back.
kadaka (KD Knoebel) has asked Henry P to explain a basic flaw in his understanding. I have ‘piled in’ to support that because if Henry P does try to explain that error then he may start to question his assumptions. Therefore, I will refuse to answer the assertions of Henry P until he explains what he means by “random”, and I will keep pressing him to explain.
As you say to Henry P

I doubt that any of this will make the slightest dent in your armor, but we have to try, we have to try.

I agree, in concern for him and for others who may want to learn, we have to try.
Richard

BBould
September 7, 2013 8:32 am

RMB: I watch my swimming pool get heated every day by the sun and cool down at night. I also know that the local lake is warmer in the summer than in the winter, because people swim in it during summer and never in the winter, or almost never. Why? I will take answers from anyone.

RMB
Reply to  BBould
September 18, 2013 8:45 am

Get yourself a heat gun and a bucket of water and try heating the water.

BBould
September 7, 2013 8:49 am

RMB: The Sun heats water just fine it would seem.

RMB
Reply to  BBould
September 15, 2013 8:48 am

You are absolutely right, but only by radiation, not by a transfer of physical heat. That’s why all those scientists are looking for heat that they are sure should be there.

September 7, 2013 10:14 am

Henry@Kadaka/Knoebel & Richard
I will try to explain my sampling procedure.
1 I took a random sample of weather stations that had daily data
2 I made sure the sample was globally representative (most data sets aren’t!!!) …
a) balanced by latitude (longitude does not matter, as we are looking at average yearly temps., which includes the effect of seasonal shifts)
b) balanced 70/30 in or at sea/ inland
c) all continents included (unfortunately I could not get reliable daily data going back 38 years from Antarctica, so there always is this question mark about that, knowing that you never can get a “perfect” sample)
d) I made a special provision for months with missing data (not to put in a long-term average, as usual in stats, but rather to take the average of that particular month in the preceding year and the year after)
e) I did not look only at means (average daily temp.) like all the other data sets, but also at maxima and minima… …
3) I determined at all stations the average change in temp. per annum from the average temperature recorded, over the period indicated.
4) the end results on the bottom of the first table (on maximum temperatures),
http://blogs.24.com/henryp/2013/02/21/henrys-pool-tables-on-global-warmingcooling/
clearly showed a drop in the speed of warming that started around 38 years ago, and continued to drop in every other period I looked at…
5) I did a linear fit on those 4 results for the drop in the speed of global maximum temps,
ended up with y = 0.0018x − 0.0314, with r² = 0.96
At that stage I was sure to know that I had hooked a fish:
I was at least 95% sure (max) temperatures were falling. I had wanted to take at least 50 samples but decided this would not be necessary with such high correlation.
6) On the same maxima data, a polynomial fit of 2nd order, i.e. parabolic, gave me
y = −0.000049x² + 0.004267x − 0.056745
r² = 0.995
That is very high, showing a natural relationship, like the trajectory of somebody throwing a ball…
7) projection backward on the above parabolic fit (5 years) showed a turn
happening around 40 years ago. Dr. Brown is right in saying that you have to be careful with forward and backward projection, but you can do this with such high correlation (0.995)
8) ergo: the final curve must be a sine wave fit, with another curve happening, somewhere on the bottom…
Now, what is not to understand about this?
@Dr. Brown
If you do not understand the basics of sampling techniques, statistics and (justified) curve fitting, and probability theory I cannot help you, either.
I hope this clarifies.
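
One way to see why the critics are unimpressed by an r² of 0.995 from four points: a parabola has three free parameters, so it nearly interpolates any four values, signal or no signal. A minimal sketch (pure noise, made-up setup) shows how easily high R² arises in that situation.

import numpy as np

rng = np.random.default_rng(1)

# Four "trend" values, as in the four periods above (here: pure noise).
x = np.array([1.0, 2.0, 3.0, 4.0])

for trial in range(5):
    y = rng.normal(size=4)                 # no signal at all, by construction
    coeffs = np.polyfit(x, y, deg=2)       # parabola: 3 free parameters
    resid = y - np.polyval(coeffs, x)
    r2 = 1.0 - resid.var() / y.var()
    print(f"trial {trial}: R^2 = {r2:.3f}")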

Reply to  HenryP
September 7, 2013 10:35 am

HenryP:
Your methodology sounds as though there are no events. If so, that’s a mistake.

September 7, 2013 10:28 am

Surface tension prevents water from being heated from above?
Then it would seem the most efficient attic insulation would be a “plate” holding a thin layer of water with perhaps a layer with a vacuum beneath it.
Anyone who wants to is free to develop this idea further with the caveat that if they make $1,000,000 from it, please pay off my mortgage.

September 7, 2013 10:29 am

(I’m still waiting for my little check from Big Oil.)

richardscourtney
September 7, 2013 10:32 am

HenryP:
Thank you for your post at September 7, 2013 at 10:14 am
http://wattsupwiththat.com/2013/09/05/statistical-proof-of-the-pause-overestimated-global-warming-over-the-past-20-years/#comment-1410485
Sorry, but your long answer did not “help” because it failed to answer the question put to you by kadaka (KD Knoebel); viz.

Please provide what you think is the meaning of “random”.

Your answer says

1 I took a random sample of weather stations that had daily data
2 I made sure the sample was globally representative (most data sets aren’t!!!) ……

Let me spell out how your answer completely ignores the question.
We understand that you think you “took a random sample of weather stations that had daily data”, but we are asking you the following.
1.
How did you define “a random sample”?
2.
What procedure did you adopt to obtain that “random sample”?
3.
Importantly, how could your sample be “random” when you “made sure the sample was globally representative”?
I hope the question is now clear.
Richard

September 7, 2013 10:47 am


Sorry, yes, if that was not clear enough,
in this respect random means any place on earth, with a weather station with complete or almost complete daily data, subject to the given sampling procedure, as stated in 2)
a)
b)
c)

September 7, 2013 10:55 am

henry@all
let us not forget that my original problem was that up to now (it seems) only William Arnold and I can predict global warming and global cooling periods by looking simply at the alignment of the planets Saturn and Uranus.
Is there nobody else in this world who has seen this?

richardscourtney
September 7, 2013 11:41 am

HenryP:
At September 7, 2013 at 10:55 am you ask

let us not forget that my original problem was that up to now (it seems) only William Arnold and I can predict global warming and global cooling periods by looking simply at the alignment of the planets Saturn and Uranus.
Is there nobody else in this world who has seen this?

I answer:
no, nobody can. When you are able to explain what you mean by “a random sample” then there is reason to think you cannot.
So, can you please provide specific answers to the questions I itemised as 1 to 3?
To save you needing to find them, I copy them here.

1.
How did you define “a random sample”?
2.
What procedure did you adopt to obtain that “random sample”?
3.
Importantly, how could your sample be “random” when you “made sure the sample was globally representative”?

Answers would be of the form
Question 1 followed by Answer 1.
Question 2 followed by Answer 2.
Question 3 followed by Answer 3.
Richard

richardscourtney
September 7, 2013 11:42 am

Ouch!
Done it again, I really miss the preview function.
I intended to write
When you are NOT able …
Sorry.
Richard

September 7, 2013 12:03 pm

@ Richard,
maybe you missed my earlier reply on what I regard as random, here
http://wattsupwiththat.com/2013/09/05/statistical-proof-of-the-pause-overestimated-global-warming-over-the-past-20-years/#comment-1410521
if I say any place on earth it means any place on earth:
that could be miles apart or hundreds of miles apart
provided that in the end I ended up with a more or less balanced sample,
by latitude (i.e. NH lat + SH lat. = ca. zero) and 70/30 @sea/onland
If you scroll down here:
http://blogs.24.com/henryp/2012/10/02/best-sine-wave-fit-for-the-drop-in-global-maximum-temperatures/
you will find another graph of a single place on earth with good daily (maxima) data going back to 1940, proving (to me) that each place on earth is on its own sine wave, depending on its composition TOA.
These are the kind of double checks I make that prove that the projections I made before from the sample were true and can be relied upon….
Dr. Brown’s dismissal of my results cannot be relied upon.

richardscourtney
September 7, 2013 12:51 pm

HenryP:
At September 7, 2013 at 12:03 pm you say to me
maybe you missed my earlier reply on what I regard as random, here
No, I have seen it and read it.
Henry, I know you find this hard to understand, but I am trying to help you.
I really am.
Please remember that I am a sceptic. If you can convince me then you will know how to convince others.
So, please try to answer the specific questions in the way I have asked.
If you cannot, then what does that tell you?
And if you can then you will have told me something.
Richard

September 7, 2013 1:35 pm


clearly, in your order
1) I have explained the sampling procedure and technique to you in complete detail in previous posts
2) e.g. here you can see the original data for New York Kennedy airport
http://www.tutiempo.net/clima/New_York_Kennedy_International_Airport/744860.htm
Note that in this particular example you will have to go into the individual months’ data for 2002 and 2005 to see which months are missing (or have only partial data) and apply the correction as explained earlier in my sampling technique 2)d).
Once you have the whole set of data complete (for a weather station), you can do the linear trending for the periods as indicated in my tables. Do you know how to do that?
3) the problem with other data sets is that they are not globally representative, so I made sure mine is. That would not interfere whatsoever with the randomness of taking a sample of a weather station anywhere in the world.
hope that helps

richardscourtney
September 7, 2013 1:44 pm

HenryP:
Your post at September 7, 2013 at 1:35 pm says to me

1) I have explained the sampling procedure and – technique to you in complete details in previous posts

You have not, and if you had, then you could have copied and pasted it here.
Henry, I am sure you think you have answered the questions, but you have not.
I repeat what I said to you before.
Please try to answer the specific questions in the way I have asked.
If you cannot, then what does that tell you?
Richard

September 7, 2013 1:44 pm

Sorry, I am going to sleep now.
Perhaps WUWT is spending too much time on the articles written by drs,
e.g.
http://wattsupwiththat.com/2013/09/07/new-paper-says-no-evidence-of-planetary-influence-on-solar-activity/
whilst it should be giving more time and attention to hands-on scientists
I am stunned to find that, besides William Arnold, apparently I am currently the only one who has found the link between the planets and the warming and cooling periods.

September 7, 2013 4:38 pm

Al, Rich – The ‘measured’ anomalies are actually an attempt at a least-biased trace. I plotted the anomalies for HadCRUT3, HadCRUT4, NOAA and GISS on the same graph and they are very similar. These all use essentially the same raw temperature measurement database. Each group processes the data slightly differently from the others. Each believes their method is most accurate. To avoid bias, each anomaly trajectory is shifted (reference-temperature change only) so its average is the same as the average for HadCRUT4 over the time period for which both are given, and then the average from the available values (as-shifted if not HadCRUT4) for each year is calculated. This normalizes the set to a single trajectory, which is shown in Figure 1.
The equation works for any one or combination. The coefficients would be very nearly the same but unique for the particular combination. The projection (prediction?) would be for the same combination as used to determine the coefficients. Thus, to get a best estimate prediction, determine the coefficients for maximum R² for the best estimate history.
The first EXCEL file on this was created 25 April, 2013 so I probably used data through February (Hadley is usually over a month late).
Rich – Apparently you still do not see what was done. Application of the energy equation is described more completely in the ‘corroboration’ link in my Sept 6, 1:32 pm post. Although the equation there is a predecessor to the current one, the concept is the same and may help with the equation in the climatechange90 link.
The hockeyschtick link shows a graph that goes back to 1610. I have used the sunspot numbers back to 1610 with the equation (calibrated 1895-2012) and got a very similar graph shape (different proxy factors). You should know that it is a vanishingly small probability that a curve-fit equation to data from 1895-2012 would also fit data back to 1610. But a valid physics based equation should make a fairly good fit.
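
A rough sketch of the baseline normalization Dan Pangburn describes, using random stand-in series in place of the real HadCRUT/NOAA/GISS data (assumptions: annual values on a common set of years; the offsets and noise levels are invented).

import numpy as np

# Hypothetical stand-ins for the anomaly series described above; in practice
# these would be the published HadCRUT4, HadCRUT3, NOAA and GISS annual values.
years = np.arange(1880, 2013)
rng = np.random.default_rng(2)
base = 0.005 * (years - years[0]) + rng.normal(scale=0.1, size=years.size)
hadcrut4 = base
others = [base + offset + rng.normal(scale=0.05, size=years.size)
          for offset in (0.10, -0.05, 0.02)]   # same trace, different reference temps

# Shift each series so its mean over the common period matches HadCRUT4's,
# then average the shifted series year by year into a single trajectory.
shifted = [s - s.mean() + hadcrut4.mean() for s in others]
combined = np.mean([hadcrut4] + shifted, axis=0)
print(combined[-5:])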

richardscourtney
September 7, 2013 4:46 pm

Dan Pangburn:
In your post at September 7, 2013 at 4:38 pm you say to me

Apparently you still do not see what was done.

I do. As I have repeatedly attempted to explain to you, it is a curve fit.
Please read my explanations of why that is not informative.
As I said, the fit may be a correct model but there is no reason to think it is correct unless and until it demonstrates forecast skill.
Richard

September 7, 2013 11:00 pm

Henry
some of the problems you have are intrinsic to the procedure and collection of measurement results, which you must consider
e.g.
I found that SSN from 1895-1927 cannot be relied upon.
Another point is that thermometers were not re-calibrated before 1940
(at least I have not seen calibration certificates from before that time.)
Because of this, I reckon an error of about 0.2 or 0.3 in the anomaly in the historic record is easily possible (which means that your line could go more straight over a longer period).
I could not find any evidence that earth’s temps. are influenced by more CO2;
you can throw that factor out, as far as I am concerned.
Mean temps. are also influenced a lot by earth’s conditions (volcanic, iron core rotation, etc.).
That is why I decided to concentrate on maxima, and I looked only at data from 1974-2012.
In your case, if you looked only at data from 1950-2000, how would the projection be if you go 20 years backwards and 13 years forward?
@Dr. Brown
Clearly, you still seem to think that I was sitting here picking weather stations that in the end would give me a perfect fit. What a silly thing for me to do. How stupid must one be, to spend one’s hobby time fooling oneself that way…
I am saying that you should be able to reproduce my results, if you take another sample of 50 weather stations employing same sampling technique as I did.
Now, what a nice project for a first-year statistics class…

September 7, 2013 11:14 pm

Rich – referring to your explanations in your Sept 6, 5:19 am post.
“This matches the data because the ‘effects’ are tuned to obtain a fit with the anomaly.” Well, close: R² = 0.9. But the tuning is only on scale factors. The ‘shape’ of the ‘effects’ is determined from measurements (sunspot number), or the SB function, or the saw-tooth shape of net ocean oscillations (decided by me because it is unbiased and easy to program), or the logarithmic decline of effect of added increments of CO2.
“Hence, the model demonstrates that those ‘effects’ can be made to match the anomaly, but it does not demonstrate there are not other variables which may be similarly tuned to obtain a match with the anomaly.” I understand. But the ‘other variables’ would have to take away from the other factors already considered, in the same fraction that the ‘other variables’ add, to keep the same R². This is how considering CO2 could account for 19.8% of the 1909-2005 increase without significantly increasing R².
“The model matches the form of the anomaly.” I wouldn’t call it a model. It is just an equation. And it only matches the form after getting rationally determined coefficients.
“But, importantly, it only explains the opinion of its constructor:” Well, I decided what factors to include in the equation: ocean oscillation, sunspot numbers, SB thermal radiation, CO2 effect, offset.
“it does NOT explain anything about climate behaviour.” It does not address climate at all, but it calculates average global temperature with R² = 0.9.
“Therefore, it does not have a residual of “10%” of climate behaviour which is unexplained.” The number is obviously the difference between 100% and 90%. Seems like I read somewhere that R² = 0.9 means that 90% of the variance is explained. What would you call the unexplained 10%? Or would you just not talk about it?
It already has demonstrated forecast skill as described at my Sept 6, 1:32 pm post.
It has also demonstrated back-cast skill as described at my Sept 7, 4:38 pm post.

richardscourtney
September 8, 2013 2:43 am

HenryP:
In your post at September 7, 2013 at 11:00 pm you assert

I am saying that you should be able to reproduce my results, if you take another sample of 50 weather stations employing same sampling technique as I did.

NO!
We cannot attempt to reproduce your results because you are refusing to state the sampling technique you used.
And, at present, your claims of having selected data which you say is a “random sample” (which is it, selected or random?) induce me to conclude that your work is invalidated by your undisclosed sampling technique.
Richard

richardscourtney
September 8, 2013 3:18 am

Dan Pangburn:
Your post at September 7, 2013 at 11:14 pm demonstrates some conceptual problems which explain why you are failing to understand the problem with your model. For example; this

“The model matches the form of the anomaly.” I wouldn’t call it a model. It is just an equation. And it only matches the form after getting rationally determined coefficients.

The equation is a model. And if you don’t think it is a model then you don’t think it describes anything.
And you are assuming that the “rationally determined coefficients” are the only applicable “coefficients” but you earlier stated that they are not. You said they are adjusted to provide a good fit. In your post at
http://wattsupwiththat.com/2013/09/05/statistical-proof-of-the-pause-overestimated-global-warming-over-the-past-20-years/#comment-1411064
http://wattsupwiththat.com/2013/09/05/statistical-proof-of-the-pause-overestimated-global-warming-over-the-past-20-years/#comment-1411064
you answer my point concerning the differences between different global temperature data sets.
That answer from you says

The equation works for any one or combination {of the global temperature data sets}. The coefficients would be very nearly the same but unique for the particular combination. The projection (prediction?) would be for the same combination as used to determine the coefficients. Thus, to get a best estimate prediction, determine the coefficients for maximum R² for the best estimate history.

YES! You adjust the coefficients to obtain a fit.
As I have repeatedly told you, that is curve fitting.

Your equation has 4 coefficients and an offset which can each be adjusted individually, and you say they ARE each adjusted “for maximum R² for the best estimate history”.
With that many possible ways to adjust it, the equation could be tuned to agree with almost anything.
And you say of your model

It already has demonstrated forecast skill as described at my Sept 6, 1:32 pm post.
It has also demonstrated back-cast skill as described at my Sept 7, 4:38 pm post.

Sorry, but that is NOT true.
Your post at September 6, 2013 at 1:32 pm says

The past predictive skill of the equation is easily demonstrated. Simply determine the coefficients at any time in the past using data up to that time and then use the equation, thus calibrated, to calculate the present temperature.

That is NOT a “prediction”. It is a statement that you fitted the curve to the data, but that is not in dispute.
And in that post you also say

The future predictive skill, after 2012 to 2020, depends on the accuracy of predicting the sunspot number trend for the remainder of solar cycle 24 and the assumption that the net effective ocean oscillation will continue approximately as it has since before 1900.

That is not a prediction of the future because nobody can accurately predict sunspot number.
So, if I accept your assertions which I quote here, then I am forced to accept that your model has no predictive skill and no use. But, of course, that is true of all curve fitting exercises including yours, and that of Henry P.
Richard
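
Richard’s point that an equation with four adjustable coefficients plus an offset “could be tuned to agree with almost anything” is easy to illustrate. A sketch (all data invented) fits a trend-plus-oscillation-plus-logarithm model to a meaningless random walk by least squares, and will often report a high R².

import numpy as np

rng = np.random.default_rng(3)
t = np.arange(120, dtype=float)

# Target: a pure random walk -- no sunspots, oceans or CO2 in it at all.
walk = np.cumsum(rng.normal(scale=0.1, size=t.size))

# A model with several tunable terms (trend, "oscillation", "log-CO2", offset)
# fitted by least squares: enough adjustable coefficients will match almost
# any smooth record.
design = np.column_stack([t,
                          np.sin(2 * np.pi * t / 60.0),
                          np.cos(2 * np.pi * t / 60.0),
                          np.log1p(t),
                          np.ones_like(t)])
coef, *_ = np.linalg.lstsq(design, walk, rcond=None)
fit = design @ coef
r2 = 1.0 - np.var(walk - fit) / np.var(walk)
print(f"R^2 against a meaningless random walk: {r2:.2f}")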

September 8, 2013 4:22 am

richardscourtney says
that your work is invalidated by your undisclosed sampling technique.
henry says
funny that you keep saying this, as I have most certainly clearly stated this….
but I will copy it and paste it here for you again
1)
I took a random sample of weather stations that had daily data
In this respect random means any place on earth, with a weather station with complete or almost complete daily data, subject to the given sampling procedure decided upon and given in 2) below.
2)
I made sure the sample was globally representative (most data sets aren’t!!!) ……
that means
a) balanced by latitude (longitude does not matter, as in the end we are looking at average yearly temps. which includes the effect of seasonal shifts)
b) balanced 70/30 in or at sea/ inland
c) all continents included (unfortunately I could not get reliable daily data going back 38 years from Antarctica, so there always is this question mark about that, knowing that you never can get a “perfect” sample)
d) I made a special provision for months with missing data (not to put in a long-term average, as usual in stats, but rather to take the average of that particular month in the preceding year and the year after)
e) I did not look only at means (average daily temp.) like all the other data sets, but also at maxima and minima… …
3) I determined at all stations the average change in temp. per annum from the average temperature recorded, over the period indicated (least-squares fits)
4) the end results on the bottom of the first table (on maximum temperatures),
http://blogs.24.com/henryp/2013/02/21/henrys-pool-tables-on-global-warmingcooling/
clearly showed a drop in the speed of warming that started around 38 years ago, and continued to drop in every other period I looked at…
5) I did a linear fit on those 4 results for the drop in the speed of global maximum temps,
ended up with y = 0.0018x − 0.0314, with r² = 0.96
At that stage I was sure to know that I had hooked a fish:
I was at least 95% sure (max) temperatures were falling. I had wanted to take at least 50 samples but decided this would not be necessary with such high correlation.
6) On the same maxima data, a polynomial fit of 2nd order, i.e. parabolic, gave me
y = −0.000049x² + 0.004267x − 0.056745
r² = 0.995
That is very high, showing a natural relationship, like the trajectory of somebody throwing a ball…
7) projection backward on the above parabolic fit (5 years) showed a turn
happening around 40 years ago. Dr. Brown is right in saying that you have to be careful with forward and backward projection, but you can do this with such high correlation (0.995)
8) ergo: the final curve must be a sine wave fit, with another curve happening, somewhere on the bottom…
http://blogs.24.com/henryp/2012/10/02/best-sine-wave-fit-for-the-drop-in-global-maximum-temperatures/
Now, I simply cannot be clearer about this. The only bias might have been that I selected stations with complete or near-complete daily data. But even that in itself would not affect randomness, in my understanding of probability theory.
Either way, you could also compare my results (in the means table) with those of Dr. Spencer’s, or even those reported here, in this post, and you will find the same 0.14/decade since 1990 or 0.13/decade since 1980.
In addition, you can put the speed of temperature change in means and minima in binomials with more than 0.95 correlation. So, I do not have just 4 data points for a curve fit, I have 3 data sets with 4 data points each. They each confirm that it is cooling. And my final proposed fit for the drop in maximum temps. shows it will not stop cooling until 2039.
In my case I don’t need yours or anyone’s approval; I merely wanted WUWT to turn (y)our eyes to the planets. Obviously I was not successful.

richardscourtney
September 8, 2013 6:47 am

Henry P:
Approval has no place in science. Falsification does.
Until you state your sample procedure then there is nothing to falsify. Your work is NOT science.
Repeatedly saying you have explained a procedure which you have not explained does not ‘cut it’.
Nobody can repeat your work because nobody can repeat your sampling procedure BECAUSE YOU HAVE NOT SAID WHAT YOU DID. So your work cannot be evaluated.
By refusing to explain your work YOU are saying that your work is useless and worthless.
I have tried to help you out of this hole of your own making and you have refused to be helped.
Richard

September 8, 2013 7:28 am

quote from this post
Global mean surface temperature over the past 20 years (1993–2012) rose at a rate of 0.14 ± 0.06 °C per decade (95% confidence interval)1.
quote from henry
you could also compare my results (in the means table) with those of Dr. Spencer’s, or even those reported here, in this post, and you will find that I report the same 0.14/decade since 1990 or 0.13/decade since 1980.
(read: I have the results to back me up)
Quote from dr. Brown
What you are doing is also known to be numerically unstable to extrapolation.
(read: I have no results that can confirm this)
Quote from richardscourtney
your work is useless and worthless.
(read: I have no results that can confirm this)
@richardscourtney
So, in your opinion, from these 4 quotes, who is the person that is most likely to be correct about making a prediction on the future of temps. on earth?
go figure
do your maths
it is a simple multiple choice question
the answer is …..

BBould
September 8, 2013 7:42 am

HenryP, I believe what Richardscourtney is trying to say is this – Science (from Latin scientia, meaning “knowledge”) is a systematic enterprise that builds and organizes knowledge in the form of testable explanations and predictions about the universe. Please correct me if I’m wrong, but is it testable? Can Richardscourtney do the same experiment given the information you provided? I think he is saying NO, he can’t.

Pamela Gray
September 8, 2013 7:46 am

Henry, you used a pseudo-random sample. Therefore you need to describe it in detail. How many? Per latitude? And how do you know that longitude does not make a difference? How did you quantify that? An easier way would be to just list your stations and let other researchers determine any artifact you may have overlooked.

richardscourtney
September 8, 2013 7:54 am

BBould and Pamela Gray:
Thank you to each and both of you.
If Henry P could recognise that I am trying to help him then I think he may be more willing to understand the problem.
Richard

September 8, 2013 7:56 am

BBould says
Can Richardscourtney do the same experiment given the information you provided? I think he is saying NO, he can’t.
henry says
I am saying Yes he can and I have given him all the information. And if he (or Dr. Brown) had a whole class of students working on it, he could know for sure in a week.
Perhaps I am expecting that people who react on this post about “statistical proof” know something about stats, i.e. that they studied probability theory, distribution theory, sampling techniques, least square fitting, etc.
It looks to me that there is really no such qualified person here.

BBould
September 8, 2013 8:00 am

HenryP: No, what you are expecting is for people to take your word at face value rather than have it tested. That is what I see, as apparently do many others.

richardscourtney
September 8, 2013 8:01 am

richardscourtney:
At September 8, 2013 at 6:47 am I wrote to you saying

Nobody can repeat your work because nobody can repeat your sampling procedure BECAUSE YOU HAVE NOT SAID WHAT YOU DID. So your work cannot be evaluated.
By refusing to explain your work YOU are saying that your work is useless and worthless.

At September 8, 2013 at 7:28 am you have written

Quote from richardscourtney
your work is useless and worthless.
(read: I have no results that can confirm this)

NO! That is NOT a “Quote from richardscourtney”.
It is a misrepresentation of my words and a nasty response to my attempt to assist you.
Richard

richardscourtney
September 8, 2013 8:03 am

Ouch! Obviously, my last post was intended to be addressed to Henry P and not myself.
Sorry.
Richard

BBould
September 8, 2013 8:07 am

Richardscourtney and Dr. Brown: There is an extremely interesting discussion over at Dr. Spencer’s blog, in the comments section of “Revisiting Wood’s 1909 Greenhouse Box Experiment, Part II: First Results”, by a poster named Konrad. He puts forth that GHGs actually tend to cool the planet, and he is doing a good job with his argument; something I’m sure you both would be interested in. I don’t have any conclusions on this; I’m merely an uneducated observer.
One of Konrad’s quotes: “I can show through empirical experiment that without radiative gases our atmosphere would super heat and most of it would boil off into space. You should acknowledge that the AGW hypothesis depended on the misapplication of SB equations to a planet with liquid oceans covering most of the surface and a gaseous atmosphere in a gravity field.”
And he does have the experiments to back this claim up, which makes it even more interesting, to me anyway.

September 8, 2013 8:10 am

Pamela Gray says
And how do you know that longitude does not make a difference? How did you quantify that?
Henry says
the argument here is that earth turns every 24 hours and we are looking at average yearly temperatures, or rather the change in annual temperatures, so longitude does not matter. This is not a matter of stats but of physics, i.e. understanding that the amount of exposure time from the sun is constant so we measure only the difference in exposure actually coming from the sun.

richardscourtney
September 8, 2013 8:11 am

BBould:
re your post at September 8, 2013 at 8:07 am
Thank you for that. Can you give a link, please?
I would like to read it but will probably not get involved. Roy is good so never has need of my help.
Richard

BBould
September 8, 2013 8:13 am

Yes he is, but he isn’t taking part in it. The comments are toward the end and start with Konrad.
http://www.drroyspencer.com/2013/08/revisiting-woods-1909-greenhouse-box-experiment-part-ii-first-results/#comments

richardscourtney
September 8, 2013 8:22 am

Bbould:
Thank you for that. I need to give it more thought before concluding anything, but it is interesting.
Richard

Sleepalot
September 8, 2013 8:27 am

8) ergo: the final curve must be a sine wave fit, with another curve happening, somewhere on the bottom…
Just because your curve looks like the flight of a ball does not mean the thing you’re modelling is a ball in flight – it may be a butterfly.

Sleepalot
September 8, 2013 8:44 am

@BBould re sunshine heating water. (Note : I’m no scientist.)
I’d point out that hairdryers don’t produce much (visible) light, only IR.
You can see the bottom of your swimming pool because it reflects visible light. If you laid roofing slates on the bottom, it’d reflect less and warm more.
In lakes, you often can’t see the bottom because the water is muddy, or full of life, and whatever it is that blocks the transmission of light will have some opportunity for converting it to heat.
I remain impressed by how little heat is transferred to the water from the hairdryer.

RMB
Reply to  Sleepalot
September 13, 2013 8:51 am

I have had a similar experience when trying to heat water using a heat gun. Almost no transfer of heat into the water, and the gun operates at 450 °C. If you persevere for a while a small amount of heat will transfer, but I think that the fan forcing simulates weight and fools the surface tension into allowing in a very little heat. I’m convinced that surface tension blocks heat.

richardscourtney
September 8, 2013 8:45 am

Sleepalot:
In your post at September 8, 2013 at 8:27 am you provide a clear and succinct explanation of the reason why curve fitting is often misleading when you write

Just because your curve looks like the flight of a ball does not mean the thing you’re modelling is a ball in flight – it may be a butterfly.

I write to ask if I may use your sentence in future, please?
Richard

BBould
September 8, 2013 9:18 am

Sleepalot: I like this explanation from,
Pekka Pirilä | September 8, 2013 at 11:51 am |
Actually the temperature is lower at the skin than slightly below. The warmest layers are not deep below, but they are a little (like a few meters) below the surface, because the surface is losing energy, while the heating is done by solar SW that’s mostly absorbed a few meters deep in the ocean. There might be some local exceptions when warm moist air is brought in, but such situations are so exceptional that they have no influence on the overall picture.

September 8, 2013 9:35 am

sleepalot says
Just because your curve looks like the flight of a ball does not mean the thing you’re modelling is a ball in flight – it may be a butterfly.
henry says
well, if you had been following the discussion, and if you were able to understand, you would have noticed that initially I had us on a binomial that would have brought us into an ice age very quickly. Luckily someone (I think it was AussiDan) pointed me to the fact that it was more likely to be a sine wave.
So indeed, there was a butterfly, coming to rescue us. I subsequently found out that this butterfly comes in the form of two planets, who, in their circles around the sun, throw and catch us again, allowing global warming and global cooling periods, of 44 years each, coming from the sun. My only worry now is: what if something happens to those two planets whilst in flight around the sun?
Anyway, it seems there is currently only one person who seems to get worried about that, so you can go back to sleep safely. I hope….

September 8, 2013 9:36 am

henry@richardscourtney
I agree that it was nasty of me. I do apologize. Please do accept my apologies.

rgbatduke
September 8, 2013 9:59 am

You can heat water through the sides of a glass, you can heat water through the bottom of a pot, but you can not heat water through its surface, because surface tension blocks the heat. Try heating water through the surface using a heat gun. At 450 °C the water should quickly boil; instead it remains cold. To heat water through the surface it is necessary to float something on the surface, like a pan or metal dish, to cancel the surface tension; then heat will flow. Exactly why the heat is so convincingly rejected is not something I claim to understand, but I suspect that the scientific community have seriously underestimated the properties of surface tension. Not many people fire heat guns at buckets of water, but Trenberth’s missing heat made me curious, so I tried heating water from above. I recommend it.
Surface tension does not block heat.
However — as I noted — water is lightest where it is warmest, and when one blows warm(er) air over it you have several distinct ways to cool that can easily overwhelm the warming. For example, a warm wind will still cool your cooler skin if the skin is damp because it causes forced evaporation. This has been known since ancient times, and in India and other hot countries whole buildings were built well over a thousand years ago that cooled using this principle.
If the hot air you blow over the water is hot enough, of course, it will heat the water. Try repeating your experiment with a blowtorch. It will take a long time to heat the water because of its stratification (water is not a terribly good conductor of heat) but it will warm. If you stir the water while you are heating it, it will warm much faster, although probably not very fast if it is a large container of water.
The scientific community doesn’t underestimate the properties of surface tension; they just happen to know what they are (unlike you, I am sorry to say). For example, liquid metals such as mercury have a much, much higher surface tension than water, but they heat up just fine, because they do not evaporate as quickly and carry as much heat away as water does.
The thing you are missing, in other words, is “latent heat of evaporation”. You can, of course, read up on this on Wikipedia.
rgb
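
The magnitude of the latent-heat effect rgb invokes is easy to check with back-of-envelope numbers (standard textbook constants, rounded; the masses are invented for illustration).

# Back-of-envelope constants (approximate textbook values).
L_vap = 2.26e6   # J/kg, latent heat of vaporization of water
c_p = 4186.0     # J/(kg*K), specific heat of liquid water

m_evaporated = 0.001   # kg: evaporate just one gram off the surface
m_bucket = 1.0         # kg of water in the container

# Temperature drop of the bucket if that latent heat is drawn from the water:
dT = (m_evaporated * L_vap) / (m_bucket * c_p)
print(f"evaporating 1 g cools 1 kg of water by about {dT:.2f} K")

Evaporating a single gram removes enough energy to cool a kilogram of water by roughly half a degree, which is why forced evaporation from moving air can easily offset the heat a heat gun delivers to the surface.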

RMB
Reply to  rgbatduke
September 13, 2013 8:42 am

I take it that you have not tried to heat water from above; until you do, there is little point in us having a discussion. If you cover water you can heat it, but you can not heat uncovered water. The device that I used was a heat gun at 450 °C, hot enough I would say.

rgbatduke
September 8, 2013 10:04 am

Take a hairdryer and blow it on dry skin. The air will feel very warm and will rapidly warm the skin (even to where it can burn). Take the same hairdryer and blow it on wet skin, or worse, on a wet tee shirt laid over the skin. It might well actually feel cool until the water evaporates, and will certainly take a while to warm. That is because the MOVING AIR carries away water vapor, cooling the surface exactly the way hot soup cools when you blow on it. The fact that the air is warm merely makes it easier to knock air molecules off of the surface by providing part of the energy needed to do the knocking.
rgb

RMB
Reply to  rgbatduke
September 11, 2013 11:29 am

I would like to comment on that post. If you go one stage further and apply the heat from a heat gun to the surface of water, you will find that even though you are applying 450 °C the water will remain cold. Evaporation is not evident because there is no steam. My conclusion at this time is that surface tension blocks heat. Surface tension has properties that are not understood and AGW is a nonsense.

rgbatduke
September 8, 2013 10:13 am

Perhaps I am expecting that people who react on this post about “statistical proof” know something about stats, i.e. that they studied probability theory, distribution theory, sampling techniques, least square fitting, etc.
It looks to me that there is really no such qualified person here.

Dear Henry,
I’m your man, here. Which is why I have tried to teach you. You are utterly clueless about the entire subject if you think that the ability to fit a linear curve to a segment of data has predictive force outside of the segment being fit. Statistical proof, by the way, is an oxymoron, or at best an asymptotic state. You might learn something about Bayes theorem and conditional probability before even attempting to talk about the subject.
That’s why when you make egregious claims about “proof” that there is a causal connection between two things because you find similar Fourier components in the two, you are wildly incorrect in your assertions of a concrete probability. You clearly do not understand the nature of the term “p-value”, or how it can apply to the null hypothesis. I, on the other hand, wrote and maintain dieharder, which has an interface in R, the well-known statistics package. dieharder is basically raw hypothesis testing made manifest in the direct realm of randomness itself.
I’m just sayin’. You can claim “Everybody else in the world but me is ignorant about statistics”, but you have at least two Ph.D.s who actually know a lot about it and do research that requires the knowledge (or, in my case, have a predictive modeling start-up company on TOP of writing dieharder and spending 15 years doing importance-sampling Monte Carlo in statistical mechanics) that you are making this claim about, and if you are wise you will pay attention to what we say.
Not that I expect this, but as I said, one has to try.
rgb
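
rgb’s warning about p-values and post hoc correlations can be made concrete: two completely independent random walks will often show a “highly significant” correlation under a naive test, because the test’s independence assumptions are violated. A minimal sketch (assuming scipy is available; all data invented):

import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Two independent random walks -- no causal link by construction.
a = np.cumsum(rng.normal(size=200))
b = np.cumsum(rng.normal(size=200))

# A naive correlation test treats the points as independent and will often
# report a tiny p-value anyway ("statistical proof" of nothing).
r, p = stats.pearsonr(a, b)
print(f"r = {r:.2f}, naive p-value = {p:.1e}")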

September 8, 2013 11:05 am

@ rgb
you are insulting me again…claiming I am clueless, which makes it difficult for me to even try a conversation with you.
If you followed the thread you would know that I did challenge both you and richard to show me your own results and that challenge still stands, as far as I am concerned.
So why don’t you show me your results on global cooling /warming and impress me?
Or why would you not simply get a class of students (who just have to learn least squares fitting) to see if you cannot duplicate my results?
So far, it seems to me the logical answer to that simple multiple choice question is ….?
I would rather trust someone with results than someone with a Ph.D. and no results…
btw are you, richard brown, and richardscourtney perhaps one and the same person?
@sleepalot
regarding your comment that you left at my tables on my blog:
clearly you do not know yet what a least square fit is,
so I will tell you what it is exactly.
It is the average change from the average temperature measured over the period indicated.
This value is of course a lot lower than the temperatures indicated in the “Klima” data from tutiempo.net
Now try and find out how to do least square fitting. In Excel it is called linear trending.
It will change your life forever.
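
For readers following along: a least-squares linear trend is the slope that minimizes the squared residuals, which is what Excel’s linear trendline (or its SLOPE/LINEST functions) computes. A minimal Python sketch with made-up station data:

import numpy as np

# Annual mean temperatures for one hypothetical station (made-up numbers, deg C).
years = np.arange(2000, 2013)
temps = np.array([14.1, 14.3, 13.9, 14.5, 14.4, 14.6, 14.2,
                  14.8, 14.7, 14.9, 14.6, 15.0, 14.9])

# Least-squares linear trend: np.polyfit returns the slope (deg C per year)
# and intercept of the line minimizing the squared residuals.
slope, intercept = np.polyfit(years, temps, deg=1)
print(f"trend: {slope:.4f} deg C/annum")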

September 8, 2013 11:58 am

btw
Sea water is heated by IR radiation from the sun, followed by the re-radiation of some of that radiation in the absorptive regions of water. An interesting aspect is that oceans never get warmer than 35 degrees C. Now you tell me why.

richardscourtney
September 8, 2013 12:26 pm

HenryP:
At September 8, 2013 at 11:58 am you ask

btw
Sea water is heated by IR radiation from the sun, followed by the re-radiation of some of that radiation in the absorptive regions of water. An interesting aspect is that oceans never get warmer than 35 degrees C. Now you tell me why.

This has been known for decades. For the first of several reports of it I refer you to
V. Ramanathan & W. Collins, ‘Thermodynamic regulation of ocean warming by cirrus clouds deduced from observations of the 1987 El Niño’, Nature, 351, 27 – 32 (02 May 1991)
Abstract
Observations made during the 1987 El Niño show that in the upper range of sea surface temperatures, the greenhouse effect increases with surface temperature at a rate which exceeds the rate at which radiation is being emitted from the surface. In response to this ‘super greenhouse effect’, highly reflective cirrus clouds are produced which act like a thermostat shielding the ocean from solar radiation. The regulatory effect of these cirrus clouds may limit sea surface temperatures to less than 305 K.

As I have repeatedly said (e.g. on WUWT) the effect means that additional heating (from any source) in the tropics REDUCES sea surface temperature when 305 K is reached because the induced clouds drift to regions adjacent to the region at 305 K and thus shield the adjacent regions from the Sun, too.
Richard

September 8, 2013 1:17 pm

richardscourtney
This has been known for decades.
henry says
thanks for that explanation. I take it you accepted my apology.
It sounds very plausible to me, but there is one thing that I also noted. My pool is solar heated and, no matter what I do, I also cannot get the pool heated to above 35 °C. So here the cloud theory does not work (I think).
My theory up to now was that the rate of evaporation at the top of the water reaches a level where it keeps cooling the layer below, similar to the effect you notice when a low-boiling fluid on your skin cools your skin very badly (to the point where it can cause cold burn, in the case of some CFCs that we used to use in deodorants), until it reaches that equilibrium where the water reaches 305 K. Beyond that point, more heat in will only cause more evaporation.
[that is what I think]

richardscourtney
September 8, 2013 1:38 pm

HenryP:
Of course I accept your apology and I thank you for it. I saw no reason to mention it again and, thus, to make a big thing of it. I answered your subsequent question and I would not have taken the trouble to do that if we had fallen out.
Your evaporation theory is correct to some degree. I refer you to the explanation by rgbatduke in his posts at September 8, 2013 at 9:59 am and September 8, 2013 at 10:04 am.
However, pan studies do not indicate the limit temperature of 305 K. Also, increased evaporation implies greater atmospheric humidity, with resulting radiative warming from water vapour. Hence, Ramanathan & Collins (R&C) suggested that the increased humidity also creates the cirrus shielding.
Much debate occurred in the literature as to whether R&C were correct. Most of this debate concerned whether the 305 K limit actually exists. R&C ‘won’ that debate and the dissenters ‘abandoned the field’.
Richard

BBould
September 8, 2013 1:39 pm

HenryP: Sunshine routinely heats my pool to over 35 °C; the highest I’ve seen is 99 °F. I live in Phoenix, AZ; Gilbert, actually.

rgbatduke
September 8, 2013 1:47 pm

@Dr. Brown
If you do not understand the basics of sampling techniques, statistics and (justified) curve fitting, and probability theory I cannot help you, either.

Who, me? What is this “sampling techniques” of which you speak? Probability theory? What, exactly, is that? I mean seriously, do you say things just to hear yourself talk?
I mean, it is actually almost funny in a sad, sad way. Exactly how do you know a priori what curve one is “justified” in fitting to what function or phenomenon, without a physical model other than “Saturn makes the earth heat and cool”, which sounds a lot more like astrology than physics (because it is a lot more like astrology than physics)?
Here, try your model out on this curve, or hell, just tell me what function you are “justified” in fitting to the data:
http://upload.wikimedia.org/wikipedia/commons/c/ca/Holocene_Temperature_Variations.png
Or this one:
http://upload.wikimedia.org/wikipedia/commons/6/60/Five_Myr_Climate_Change.png
Or this one:
http://upload.wikimedia.org/wikipedia/commons/b/bb/1000_Year_Temperature_Comparison.png
(take your pick, since you are gifted with special knowledge that everybody else lacks) or even this one:
http://commons.wikimedia.org/wiki/File:Instrumental_Temperature_Record.png
Be sure to project your solution backwards and see how well it hindcasts all of these figures. Saturn, my ass. Sinusoid, ditto. Linear trend? Hah! Multiple sinusoids? Do you think you are the first person to attempt a Fourier transform of the data?
Seriously, do you even know what a Fourier transform is? Or are you just using Excel to fit a spreadsheet full of numbers you “randomly selected” over a range where the fit happens to work? Because buddy, your fit won’t work (as in extrapolate) into the past over any of the curves above. Not even close. So why, exactly, are you so 99.99999% certain that you can project them into the future, when they cannot come close to predicting the temperatures over even the last 140 years, let alone 1,000, 12,000, or 5,000,000?
rgb
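
A small illustration of why “finding” sinusoids in a short record proves little: the Fourier spectrum of a pure random walk, which has no periodic driver at all, is nonetheless dominated by a few low frequencies that look like “cycles” (all data invented).

import numpy as np

rng = np.random.default_rng(5)

# A random walk: pure integrated noise, with no periodic driver behind it.
series = np.cumsum(rng.normal(size=512))

# Its Fourier spectrum is nonetheless dominated by a few low frequencies,
# so an apparent dominant "period" appears in any short stretch of red noise.
spectrum = np.abs(np.fft.rfft(series - series.mean())) ** 2
top = np.argsort(spectrum[1:])[::-1][:3] + 1
print("dominant periods (samples):", (512 / top).round(1))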

rgbatduke
September 8, 2013 2:02 pm

So why don’t you show me your results on global cooling /warming and impress me?
My “results”? Now you’re just kidding. If you read what I fairly regularly state on WUWT, I don’t think
anybody’s “results”, no matter how stupidly or cleverly they are computed, come close to capturing the complexity of the climate. I don’t even think we’ll have the data needed to figure out the climate for decades to as long as another century. But regarding stupid vs clever, fitting arbitrary curves is stupid (and, as Richard has aptly pointed out, not science, because what is the hypothesis? How can it be falsified?)
However, I do know a fair bit of physics (in addition to being able to, on a good day, derive probability theory from first principles such as the Cox axioms following either his approach or that of Jaynes) and I assure you — there is absolutely no plausible way that I can think of that Saturn could have the slightest effect on the climate of the Earth. You do know that it is pretty far away, right? And that anything Saturn brings to the table, Jupiter brings many times more (being a lot closer and a fair bit bigger). And Jupiter is still so far away and so weak that it has essentially no measurable effect on the Earth.
You’ll have to talk to Leif (that is a threat:-) if you want to assert that Jupiter or Saturn either one or both together have any significant effect on the Sun (so it could indirectly affect the Earth). At the very least, if you want to make assertions of this sort you need to come up with a plausible physical model beyond “and then a miracle happens…”.
You will recall that I’ve looked over all of your computations before, right down to the data in your spreadsheets, and they are pure numerology, not science, and not a credible model. Hindcast the Holocene, then we’ll talk.
rgb

rgbatduke
September 8, 2013 2:12 pm

btw are you, richard brown, and richardscourtney perhaps one and the same person?
I have no idea who Richard Brown is, never heard of him. You can visit my website at Duke any time and determine whether or not you think it is plausible that Duke is participating in some sort of global plot to fool you, especially given the amount of work involved. My personal web page, BTW, gets around 12 million hits a year even without a blog — just because it provides lots of valuable resources that people all over the world access fairly regularly. So it must really be a hell of a conspiracy, huh?
As for who Richard “really” is, you’ll have to ask him. I only know him from communications on WUWT and hence do not know of his credentials (if any) and so on. For the most part, his remarks seem fairly sober and usually are backed up by references, far more so than most posters on WUWT. I have come to respect his comments, in part because he, like me, is just as likely to bop bad skeptical “science” — like yours — as he is to bop bad CAGW science. I would not speak for him, but I’d be surprised if he thinks the issue of anthropogenic global warming, catastrophic or not, is resolved either way. I certainly don’t.
rgb

Pamela Gray
September 8, 2013 2:19 pm

Henry, since you have not posted the list of stations you used, I am assuming the dog ate it?
Come on Henry. This is not hard. List the stations. A pseudo random sample must be described in detail in order to replicate the study. This is standard research 101, freshman class.

Sleepalot
September 8, 2013 6:23 pm

@ Richardscourtney: yes, happy to help. 🙂 Bumblebee is an alternative to butterfly
(flight of the bumblebee).

September 8, 2013 8:34 pm

@BBould
I take it that your system works similarly to mine, pumping the water through solar panels attached to the roof.
Increasing the pumping time should heat the water more when there is sunshine. In my case I never get the water warmer than 34-35 °C (in summer). If you say you can get it to 36 or 37 °C, then I suspect that the difference lies in the air pressure. I live in Pretoria at 1000 meters high (3000 ft). The lower the air pressure, the lower the boiling point, and the higher the evaporation rate. It is the higher evaporation rate that cools the water more. Following these simple considerations, I take it that Gilbert is, or must be, close to sea level?

September 8, 2013 8:46 pm

Pamela says
Henry, since you have not posted the list of stations you used,
@Pamela
Each station’s town and its latitude are mentioned in the tables.
http://blogs.24.com/henryp/2013/02/21/henrys-pool-tables-on-global-warmingcooling/
For example, here are the original results from JFK airport (New York)
http://www.tutiempo.net/clima/New_York_Kennedy_International_Airport/744860.htm
Note that in this particular example you will have to go into the individual months’ data for 2002 and 2005 to see which months are missing (or have only partial data) and apply the correction as explained earlier in my sampling technique 2)d).
Once you have the whole set of data complete (for a weather station), you can do the linear trending for the periods as indicated in my tables. I take it you know how to do that.
@the Richards
Please do carry on, stumbling around in the darkness. It might keep you in your jobs, to try and keep the people confused.

richardscourtney
September 9, 2013 3:34 am

HenryP:
I admit that I am losing patience with your unfounded smears, misrepresentations and insults which you seem to think are an alternative to explaining the methods you used to formulate your assertions.
The latest example of your behaviour is at September 8, 2013 at 8:46 pm
http://wattsupwiththat.com/2013/09/05/statistical-proof-of-the-pause-overestimated-global-warming-over-the-past-20-years/#comment-1411905
where you write

@the Richards
Please do carry on, stumbling around in the darkness. It might keep you in your jobs, to try and keep the people confused.

1.
I am retired and so don’t have a job or jobs.
2.
My major activity in my retirement is my work as an Accredited Methodist Preacher by means of which I hope to lead people into light and not darkness.
3.
As a preacher my primary function is to provide people with constructive doubt. That is, my purpose is to challenge people to question their beliefs and, thus, to find more profound beliefs. My decades of employment as a research scientist have given me great awareness of the value of doubt.
Indeed, I was employed as the Senior Material Scientist at the UK’s Coal Research Establishment and, therefore, I never worked at a mine (except to do emergency work during miners’ strikes), but the British Association of Colliery Management (BACM) elected me as their Vice President and confirmed me in that office in five successive elections. BACM knew my priorities, and when I retired as their Vice President they gave me a bust of John Wesley as a tribute to my work.
In this thread I have attempted to encourage you to question your own work. That is consistent both with my job when I was a research scientist and with my present activity as a preacher. Only when confusion is removed can clarity be obtained, so that progress can be made. But I have often learned that bigots cannot consider the possibility of their beliefs and understandings being wrong.
Bluster, smears and misrepresentations are a defence mechanism used by people who are unable to confront truth. So, I ask you to ponder on why you choose to use them when you are asked to explain the methods you have used to obtain your hypothesis.
Richard

September 9, 2013 4:42 am

I qualify as a “climate denier” in alarmist circles, but I’m a pure sceptic, which means although it’s obvious that warming has been “overestimated” for the last 20 years and that the rate of warming is “significantly slower,” even to the point where it’s “not significantly different from zero,” it’s still warming, not cooling.

September 9, 2013 6:25 am

RichardSCourtney says
But I have often learned that bigots cannot consider the possibility of their beliefs and understandings being wrong.
Bluster, smears and misrepresentations are a defence mechanism used by people who are unable to confront truth.
Henry says
I am learning a lot about myself on this thread here.
I am a bigot and I am clueless. I realized I am really nasty, too, but I did apologize for that….
In my last comment, which you took offence to, I simply meant to say that it does not really matter to me if people do not want to accept my results. Personally, my results freed me from feeling guilty about driving a big car (I have a big truck, and my dogs love to go with me anywhere I go; they have the dog house under my canopy). Unfortunately, for reasons I will explain, maybe it does matter now that people come to accept my results, seeing that I have now been put on a mission.
Let me share some thoughts with you. I notice that you also think that doubt is a good beginning for faith. At least we share that same idea
http://blogs.24.com/henryp/2013/03/01/where-is-your-faith/
which is a good beginning.
I challenged you to produce your own results but you have none, so your right to speak is somewhat diminished. Let me argue this from the other direction, which I have already tried, but I think you did not get it.
Let us now look at my results for means and compare them with other data sets (look at the second table here,
http://blogs.24.com/henryp/2013/02/21/henrys-pool-tables-on-global-warmingcooling/
scroll to the bottom)
We are looking at the last results on the bottom of the means table:
for the last 32 years it has been warming at a rate of 0.013 K/annum,
which is the same result as the 0.13 K/decade that Dr. Spencer recently reported for the past 33 years.
For the last 22 years the speed of warming was 0.014 K/annum,
which is the same as the 0.14 K/decade reported in this post here, above.
For the last 12 years it has been cooling at a rate of -0.017 K/annum.
Now look at the trend from 4 major data sets for the past 11 years
http://www.woodfortrees.org/plot/hadcrut4gl/from:1987/to:2014/plot/hadcrut4gl/from:2002/to:2014/trend/plot/hadcrut3gl/from:1987/to:2014/plot/hadcrut3gl/from:2002/to:2014/trend/plot/rss/from:1987/to:2014/plot/rss/from:2002/to:2014/trend/plot/hadsst2gl/from:1987/to:2014/plot/hadsst2gl/from:2002/to:2014/trend/plot/hadcrut4gl/from:1987/to:2002/trend/plot/hadcrut3gl/from:1987/to:2002/trend/plot/hadsst2gl/from:1987/to:2002/trend/plot/rss/from:1987/to:2002/trend
do you agree with me that the trend downwards is at least -0.1/decade?
So we have established that my results are correct and that they are globally representative – I believe possibly even more globally representative than the four data sets quoted above, because I was able to think things through from the beginning.
Do you agree with me that if the results in one table are comparable with three other parties’ results, then my whole data set must be correct, and that my sample was properly taken and globally representative?
So now I have put the results of the first table (maxima) in a graph, see here:
http://blogs.24.com/henryp/2012/10/02/best-sine-wave-fit-for-the-drop-in-global-maximum-temperatures/
and you can accept that they are right, at least for the blue part. The rest is indeed projection, but from a very high correlation (on a corresponding-looking binomial…).
and I say to myself: this is great. It seems I am the only one who has figured this right. But why is this? Why has nobody looked at maxima? Why do I get blocked everywhere when I show this?
In the end I said to myself in frustration: let fools stay fools if they want to be. They can fiddle with the data to save their jobs, but people still having to shovel snow in late spring will soon begin to doubt the data… Check the worry in my eyes when they censor me.
Under normal circumstances I would have let things rest there and just been happy to know the truth for myself. Indeed, I let things lie a bit. However, chances are that humanity will fall into the pit of global cooling, and I would later blame myself for not having done enough to try to safeguard food production for 7 billion people and counting.
It really was very cold in the 1940s… The Dust Bowl drought of 1932-1939 was one of the worst environmental disasters of the twentieth century anywhere in the world. Three million people left their farms on the Great Plains during the drought and half a million migrated to other states, almost all to the West. Please see here:
http://www.ldeo.columbia.edu/res/div/ocp/drought/dust_storms.shtml
I found confirmation in certain other graphs that, as we move back up from the deep end of the 88-year sine wave, there will be a standstill in the speed of cooling at the bottom of the wave, and therefore, naturally, there will also be a lull in the pressure difference at latitudes >[40], where the Dust Bowl droughts took place, meaning: no wind and no weather (read: rain). However, one would apparently note this from an earlier change in the direction of the wind, as was the case in Joseph’s time. According to my calculations, this will start around 2020 or 2021…
The danger from global cooling is documented and provable. It looks like we have only ca. 7 “fat” years left…
So, finally, I read the paper from William Arnold again.
Observe from my a-c curves:
1) change of sign: (from warming to cooling and vice versa)
1904, 1950, 1995, 2039
2) maximum speed of cooling or warming = turning points
1927, 1972, 2016
Then I put the dates of the various positions of Uranus and Saturn next to it:
1) we had/have Saturn synodical with Uranus (i.e. in line with each other)
1897, 1942, 1988, 2032
2) we had complete 180 degrees opposition between Saturn and Uranus
1919, 1965, 2009,
In all 7 of my own results & projections there is an exact 7- or 8-year delay before “the push/pull” occurs that switches the dynamo inside the sun, changing the sign!
I asked: what is the probability of this not being related to my sine wave, which was simply a proposed best fit of mine for the data that I had obtained?
Conceivably the gravitational pull of these two planets has some special lopsided character, causing the actual switch. Perhaps Uranus’ apparent sideways motion (its equator is inclined by 98 degrees) works like a push-pull trigger. Either way, there is a clear correlation. Other synodical cycles of planets probably have some interference as well, either delaying or extending the normal cycle time a little. So it appears William Arnold’s report (“On the Special Theory of Order”, 1985) was right after all…
So what is the chance of all this happening to me, and what if I did not warn the people about the horrible droughts that will be coming back to the great plains of America, 2021-2028?
Please help me to spread this message:
WHAT MUST WE DO?
1) We urgently need to develop and encourage more agriculture at lower latitudes, like in Africa and/or South America. This is where we can expect to find warmth and more rain during a global cooling period.
2) We need to tell the farmers living at the higher latitudes (>40), who have already suffered poor crops due to the cold and/or the droughts, that things are not going to get better there for the next few decades. It will only get worse as time goes by.
3) We also have to provide more protection against increased precipitation in certain places at lower latitudes (FLOODS!).
God bless you.
Henry

richardscourtney
September 9, 2013 7:44 am

Henry P:
Your own words reveal your certainty in your assertions and your complete lack of ability to substantiate those assertions. But – in common with all others who have such unsubstantiated belief in their own rightness – you say what we “MUST” do.
Well, you can do whatever you want so long as it does not harm others.
And others can, too, so they can ignore you unless and until you provide rational reasons for your assertions.
Richard

BBould
September 9, 2013 7:55 am

HenryP: No system, only natural sunlight, and it’s a larger pool, about 23,000 gallons.

BBould
September 9, 2013 7:58 am

Richardscourtney: Thanks for your bio. I’m retired as well: 20 years air traffic control and 20+ years computers and networking, though I never mixed the two.

September 9, 2013 11:33 am

richardscourtney says
http://wattsupwiththat.com/2013/09/05/statistical-proof-of-the-pause-overestimated-global-warming-over-the-past-20-years/#comment-1412241
henry says
I am deeply disappointed in your post, which clearly lacks any scientific arguments.
I wish you all the best.

richardscourtney
September 9, 2013 2:53 pm

Henry P:
re your post at September 9, 2013 at 11:33 am
In the event that you make a scientific point then I would be pleased to address it. So far, you have not. I addressed what you had said.
Richard

September 9, 2013 5:24 pm

Rich – I have been wondering why climate scientists have failed to discover what drives average global temperature. Your comments provide some indication.
We obviously have different definitions for ‘curve fitting’. To me, a ‘curve fit’ has no predictive ability. The equation has predictive ability. It does a pretty good job of calculating average global temperatures all the way back to 1610 with coefficients determined for 1895-2012. The shape of the trace is about as shown in the graph at the hockeyschtick blog referred to in my Sept 6, 1:32 pm post.
The determination of coefficients is common in engineering. That is how theoretical functions are calibrated to the real world. Lift coefficients, drag coefficients, heat transfer coefficients, etc. are arrived at this way. The equations, so calibrated, all have predictive ability and are used by engineers to design many of the products that you and I use.
Apparently your assessment of the predictive ability of the equation results from an irreconcilable difference between how most working engineers and at least some non-engineers have been trained to think.
On the other hand, if what I have observed about you is correct, we both have determined that rational CO2 change has no significant influence on climate and average global temperature is going down. I am interested in how you arrived at this determination.

richardscourtney
September 10, 2013 4:04 am

Dan Pangburn:
I am replying to your post addressed to me at September 9, 2013 at 5:24 pm.
As I understand your post, it presents two points and a question. I apologise if I have missed anything and – if I have – then I would welcome my error being pointed out. I write here to address the points and the question I have recognised.
You say

We obviously have different definitions for ‘curve fitting’.

Your plot IS a curve fit. I explained this in my post at September 8, 2013 at 3:18 am
http://wattsupwiththat.com/2013/09/05/statistical-proof-of-the-pause-overestimated-global-warming-over-the-past-20-years/#comment-1411227
I explained the matter there so see no need to repeat the matter.
However, your curve fit contains unneeded complexity and ascribes unjustifiable causes. That is mere opinion and is NOT science.
The simplest curve fit is provided by Akasofu and is being discussed in the WUWT thread at
http://wattsupwiththat.com/2013/09/09/syun-akasofus-work-provokes-journal-resignation/
His curve fit makes a falsifiable hypothesis; viz
What has happened throughout the twentieth century will continue to happen until something changes.
Your curve fit contains 5 assumptions and ascribes multiple causes. Hence, it is useless.
You say

To me, a ‘curve fit’ has no predictive ability. The equation has predictive ability. It does a pretty good job of calculating average global temperatures all the way back to 1610 with coefficients determined for 1895-2012. The shape of the trace is about as shown in the graph at the hockeyschtick blog referred to in my Sept 6, 1:32 pm post.

I agree that a curve fit has no predictive ability and, of course, neither your curve fit nor Akasofu’s has predictive ability. But the simplicity of Akasofu’s curve fit affords the possibility of discerning a change in the behaviour of the climate system. Even if one were to accept that your curve fit would also do that, the simplicity of Akasofu’s model makes it preferable to yours (Occam’s Razor).
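[Editor’s note: neither commenter puts a number on the parsimony argument, but a standard way to do so is an information criterion such as AIC, which rewards fit quality while penalizing extra fitted parameters. A minimal sketch on synthetic data (all numbers hypothetical; this is not a method either party used):]

    import numpy as np

    def aic(y, y_fit, k):
        # Akaike information criterion for a least-squares fit:
        # n*ln(RSS/n) + 2k, where k counts fitted parameters; lower is better
        n = len(y)
        rss = np.sum((y - y_fit) ** 2)
        return n * np.log(rss / n) + 2 * k

    rng = np.random.default_rng(1)
    t = np.arange(100, dtype=float)
    y = 0.01 * t + rng.normal(0, 0.1, t.size)    # synthetic anomaly: line + noise

    line = np.polyval(np.polyfit(t, y, 1), t)    # 2 parameters (Akasofu-style)
    quintic = np.polyval(np.polyfit(t, y, 5), t) # 6 parameters

    print("AIC, straight line:", round(aic(y, line, 2), 1))
    print("AIC, quintic fit  :", round(aic(y, quintic, 6), 1))
    # the quintic always has lower RSS, but AIC typically still prefers the line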
I also explained the inability of your model to make predictions in my post at September 8, 2013 at 3:18 am (which I have linked from this post) so see no need to repeat that explanation.
You ask me

On the other hand, if what I have observed about you is correct, we both have determined that rational CO2 change has no significant influence on climate and average global temperature is going down. I am interested in how you arrived at this determination.

I have explained this many times including repeatedly on WUWT. But to save you finding it I copy it to here.
Before presenting my argument, I point out I remain to be convinced that human emissions are or are not the cause – in part or in whole – of the observed recent CO2 rise. However, the cause of a rise in atmospheric CO2 concentration is not relevant to the effect on global temperature of that rise.
My view is simple and can be summarised as follows. The feedbacks in the climate system are negative and, therefore, any effect of increased CO2 will be too small to discern. This concurs with the empirically determined values of low climate sensitivity obtained by Idso, by Lindzen&Choi, etc..
In other words, the man-made global warming from man’s emissions of greenhouse gases (GHG) would be much smaller than natural fluctuations in global temperature so it would be physically impossible to detect the man-made global warming.
Of course, human activities have some effect on global temperature for several reasons. For example, cities are warmer than the land around them, so cities cause some warming. But the temperature rise from cities is too small to be detected when averaged over the entire surface of the planet, although this global warming from cities can be estimated by measuring the warming of all cities and their areas.
Similarly, the global warming from man’s GHG emissions would be too small to be detected. Indeed, because climate sensitivity is less than 1.0°C for a doubling of CO2 equivalent, it is physically impossible for the man-made global warming to be large enough to be detected. If something exists but is too small to be detected then it only has an abstract existence; it does not have a discernible existence that has effects (observation of the effects would be its detection).
I hold this view because I am an empiricist so I accept whatever is indicated by data obtained from observation of the real world.
Empirical – n.b. not model-derived – determinations indicate climate sensitivity is less than 1.0°C for a doubling of atmospheric CO2 equivalent. This is indicated by the studies of
Idso from surface measurements
http://www.warwickhughes.com/papers/Idso_CR_1998.pdf
and Lindzen & Choi from ERBE satellite data
http://www.drroyspencer.com/Lindzen-and-Choi-GRL-2009.pdf
and Gregory from balloon radiosonde data
http://www.friendsofscience.org/assets/documents/OLR&NGF_June2011.pdf
Climate sensitivity is less than 1.0°C for a doubling of atmospheric CO2 concentration and, therefore, any effect on global temperature of increase to atmospheric CO2 concentration only has an abstract existence; it does not have a discernible existence that has observable effects.
Richard

September 11, 2013 12:27 am

Rich –
This is what you missed:
You failed to recognize that the equation is not a curve fit, even though I gave a reference to a graph that went back to 1700 and another that went back to 1610, although temperatures prior to 1895 were not used for calibration. I even showed how to demonstrate its forecast skill using recent measurements.
You stated that the equation “…contains 5 assumptions…” when there are not even that many coefficients, one of which can be set to zero with no significant effect on R² and another of which is equivalent to a change of reference temperature for anomalies. This is explained in detail in the paper, which you may not even have read but certainly did not understand.
You say there are multiple causes. There are only two significant drivers of average global temperature: 1) the time-integral of sunspot numbers, and 2) natural ocean oscillations.
You repeatedly say that you “explained”. Repeating an erroneous explanation does not make it any less erroneous.
This is what you got right (or close to right):
You said “I hold this view because I am an empiricist so I accept whatever is indicated by data obtained from observation of the real world.
Empirical – n.b. not model-derived…” That is close to what I did. But I went a step farther and discovered the two main drivers of average global temperature and derived an equation that includes them and demonstrates that rational CO2 change has no significant influence.
You say “Climate sensitivity is less than 1.0°C…” I discovered that rational CO2 change has no significant effect on average global temperature and disclosed that finding in a paper made public more than 5 years ago at http://www.middlebury.net/op-ed/pangburn.html. The equation at http://climatechange90.blogspot.com/2013/05/natural-climate-change-has-been.html corroborates that finding and demonstrates that the heat added by human activity of burning fossil fuels and nuclear activity has had no significant effect on average global temperature.

Because average global temperature has begun a downtrend, and the downtrend will steepen, climate sensitivity, as warmers are fond of defining it, will continue to decline and may eventually become negative. The CO2 level continues to go up while the average global temperature doesn’t. Apparently, the separation between the rising CO2 level and the not-rising average global temperature will need to get much wider for the AGW mistake to become evident to some of the deniers of natural climate change.

September 11, 2013 10:10 am

Hello Dan,
I suggest you would face less opposition if you were to qualify your work as hypotheses rather than proven facts.
I personally find your work interesting as a hypo, and almost everything in climate science is still a hypo – this field is in its infancy – we cannot even agree on what drives what.
Is it correct that the oceanic oscillation in your equation has no net upward slope?
Also, what would happen to the CO2 factor if you removed the warming bias inherent in Hadcrut4? Assume, say, 0.05 to 0.07 °C back to about 1945.
I suggest that the broad concept of “Climate Sensitivity to CO2” does not even exist at current atmospheric concentrations, since it is clear that temperature primarily drives CO2, and there has been no net global warming in 10-20 years despite significant increases in atmospheric CO2.
We should primarily be examining “CO2 Sensitivity to Temperature” and also “increased atmospheric CO2 and its probable causes”, one significant component of which may be the combustion of fossil fuels (and/or deforestation, or primarily natural rather than human-made causes).
Regards, Allan
http://wattsupwiththat.com/2013/08/28/another-paper-blames-enso-for-the-warming-hiatus/#comment-1403597
Does the concept of “Climate Sensitivity to CO2” even exist at current atmospheric concentrations?
Please consider my statement from earlier threads that:
“Atmospheric dCO2/dt varies almost contemporaneously with global temperature T, and CO2 lags T at all measured time scales, from about 9 months in the modern data record to about 800 years in the ice core record. Is there any logical explanation for this factual observation, other than the conclusion that Temperature DOES Primarily Drive CO2, and CO2 DOES NOT Primarily Drive Temperature?”
As supporting evidence, I suggest with some confidence that the future cannot cause the past.
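[Editor’s note: the lead/lag claim is the kind of statement that can be checked mechanically. A minimal sketch of a lagged correlation between a temperature series and dCO2/dt, using synthetic series in which a 9-month lag is deliberately built in, so the script illustrates the method rather than the real-world result:]

    import numpy as np

    rng = np.random.default_rng(2)
    n, true_lag = 480, 9                    # 40 years of monthly data
    t = np.arange(n + true_lag)
    signal = np.sin(2 * np.pi * t / 48)     # toy ~4-year (ENSO-like) oscillation
    temp = signal[true_lag:] + rng.normal(0, 0.05, n)  # temperature anomaly
    dco2 = signal[:n] + rng.normal(0, 0.05, n)         # dCO2/dt, lagging T

    def corr_at_lag(lead, follow, k):
        # correlation of lead(t) with follow(t + k); k > 0 tests "lead leads by k"
        return np.corrcoef(lead[:n - k], follow[k:])[0, 1]

    best = max(range(25), key=lambda k: corr_at_lag(temp, dco2, k))
    print(f"correlation peaks with T leading dCO2/dt by {best} months")  # -> 9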

September 11, 2013 10:24 am

Macrae
I agree 100% with your last post to Dan. I determined exactly the same thing but looked at it from another corner.
what is your opinion/comment about me finding a 100% correlation (r=1)
with the timing of the planets Uranus and Saturn on the best sine wave for the drop in maximum temperatures?
http://wattsupwiththat.com/2013/09/05/statistical-proof-of-the-pause-overestimated-global-warming-over-the-past-20-years/#comment-1412171

richardscourtney
September 11, 2013 12:03 pm

Dan Pangburn:
I am writing as a courtesy to say I have read your post to me at September 11, 2013 at 12:27 am.
It makes no additional points, and I have already answered the points it does contain when you previously made them.
Saying I am wrong does not make me wrong. I like to be shown I am wrong because then I learn. You merely iterate that your misunderstandings are right so I must be wrong. That is not cogent. I would be grateful if you were to show I am wrong, but you have not done that.
However, I have repeatedly explained how and why you are wrong and you have ignored every point I have made; e.g. by refusing to agree your curve fit is a curve fit!
Richard

richardscourtney
September 11, 2013 12:14 pm

HenryP:
At September 11, 2013 at 10:24 am you ask Allan Macrae

what is your opinion/comment about me finding a 100% correlation (r=1)
with the timing of the planets Uranus and Saturn on the best sine wave for the drop in maximum temperatures?

I can answer that:
it is certain to agree with something because there are an infinite number of combinations of things it may agree with.
Why does it concur with “the timing of the planets Uranus and Saturn” and not with Jupiter which is larger and denser?
If you cannot give a cogent answer to this question then you have merely searched through an infinite number of possibilities until you found one which – by chance – fits with what you wanted.
Allan assesses known mechanisms. That is science.
You search for correlations. That is not science unless you can provide falsifiable hypotheses of how and why those correlations exist.
Richard

September 11, 2013 12:29 pm


you ask
Why does it concur with “the timing of the planets Uranus and Saturn” and not with Jupiter which is larger and denser?
It was not me who came up with this proposition originally.
Why don’t you read the whole paper by William Arnold?
http://www.cyclesresearchinstitute.org/cycles-astronomy/arnold_theory_order.pdf
Note that there are several factors, all confirming the various dates of the best fit for the drop in maxima – six in total…
http://blogs.24.com/henryp/2013/04/29/the-climate-is-changing/

richardscourtney
September 11, 2013 1:13 pm

HenryP:
At September 11, 2013 at 12:29 pm you say


you ask

Why does it concur with “the timing of the planets Uranus and Saturn” and not with Jupiter which is larger and denser?

It was not me who came up with this proposition originally.
Why don’t you read the whole paper by William Arnold?
http://www.cyclesresearchinstitute.org/cycles-astronomy/arnold_theory_order.pdf

I did not ask who made the “proposition”.
I asked why you adopted it and I asked you to justify you having adopted it.
And I am not pleased that instead of answering my request for explanation you set me homework which I have no intention of doing. You are the one making the claims so it is your responsibility to provide the justification for those claims, and I am only required to ask you to provide it. Setting me homework is not providing your explanation of what you did.
Richard

September 11, 2013 2:15 pm

Al – Thanks for the comments. I find it interesting that engineers seem to understand this stuff faster than others. Perhaps my wording has more in common with how other engineers think. I suspect that some climate scientists get mired in the minutiae trying to explain average global temperature change using meteorology.
My work started out with the energy equation and a hypothesis: that net energy gain above or below breakeven is proportional to the time-integral of sunspot numbers (and, of course, accounting for net energy loss, also above or below breakeven). There is a second hypothesis, which I didn’t even realize until lately: that, at least from 1895 through 2012, the net over-all ocean oscillation could be approximated by a saw-tooth trajectory with a period of 64 years. These hypotheses are judged valid by the high R² obtained when the trace from the resulting equation is compared to least-biased reported anomalies. I expect the ocean-oscillation approximation to eventually fade, as discussed in http://endofgw.blogspot.com/
My perception is that about 90% of average global temperature change is driven by natural ocean oscillations plus some phenomena that correlate with the sunspot number time-integral. I welcome legitimate challenge.
The oscillation part has no slope. Superimposed on the oscillation is the positive slope of the GW (which ended about a decade ago) that correlates with the sunspot-number time-integral. Part of the problem with reported annual temperature measurements is that they contain a random uncertainty with s.d. ≈ ±0.1 K. It takes about 20 years of data to get the uncertainty in the trend down to where the trend slope begins to have credibility.
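[Editor’s note: to make the structure of this two-driver model concrete, here is a minimal sketch assuming synthetic inputs. The sunspot time-integral and the 64-year sawtooth are as Dan describes them; the breakeven level, the exact sawtooth shape, and every number below are illustrative assumptions, not his published coefficients.]

    import numpy as np
    from scipy.optimize import curve_fit

    def two_driver_anomaly(years, ssn, a, b, c, breakeven=60.0, phase=1909.0):
        # driver 1: running time-integral of sunspot numbers above/below breakeven
        integral = np.cumsum(ssn - breakeven)
        # driver 2: net ocean oscillation, approximated by a zero-mean 64-year sawtooth
        saw = ((years - phase) / 64.0) % 1.0 - 0.5
        return a * integral + b * saw + c

    # synthetic 'observed' anomalies and sunspot numbers (hypothetical)
    rng = np.random.default_rng(3)
    years = np.arange(1895, 2013, dtype=float)
    ssn = 60 + 50 * np.sin(2 * np.pi * years / 11) + rng.normal(0, 5, years.size)
    obs = two_driver_anomaly(years, ssn, 5e-4, 0.1, 0.0) + rng.normal(0, 0.02, years.size)

    # calibrate the three coefficients by least squares, as one would against a real record
    popt, _ = curve_fit(lambda yr, a, b, c: two_driver_anomaly(yr, ssn, a, b, c),
                        years, obs, p0=[1e-3, 0.1, 0.0])
    fit = two_driver_anomaly(years, ssn, *popt)
    r2 = 1 - np.var(obs - fit) / np.var(obs)
    print(f"a, b, c = {popt}; R^2 = {r2:.3f}")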
Short answer on warming bias is that I expect that CO2 factor would remain insignificant. I discovered that change to noncondensing ghg had no significant effect on average global temperature about 5 years ago and made my findings public in the Middlebury paper linked in the Sept 11, 12:27 am post. I discussed this a bit more in the Sept 7, 4:38 post. An early version of the analysis at the Climaterealists link in the Sept 6, 1:32 pm post did not use HadCRUT4.
My work corroborates that ‘climate sensitivity’ is very near zero.
Climate-wise, CO2 change doesn’t matter, but the increase helps plants. Plants must now sort through 2,500 molecules to find one that can be used to make food. More CO2 means more food. That is a good thing.
ENSO & PDO & AMO, etc. all contribute to the net over-all ocean oscillation. The Argo project should eventually shed some light on this issue once the local temperature anisotropy of the oceans is sorted out.
Certainly in the paleo data temperature change drove the atmospheric CO2 level; it is a simple solubility thing. I was not aware that temperature change led CO2 change in the modern data but, since CO2 has no significant effect on climate, it wouldn’t matter if it led. According to a Woods Hole report at http://www.whoi.edu/oceanus/viewArticle.do?id=17726 there is about 50 times as much carbon in the oceans as in the atmosphere.

September 11, 2013 3:50 pm

Hello Henry,
I did look at the 1985 Arnold paper and was previously familiar with it and the work of several “Cycles Institutes”. The study of cycles has a mixed history, and the field is much less popular now than it was in the past. Some parties are inclined to be dismissive of everything that mentions cycles, whereas others want to ascribe all manner of events to cycles, and are busy force-fitting the evidence into their models.
I think there is probably validity to the PDO, although it may not follow a regular 60-year cycle, and it is both in-and-out-of phase with the Gleissberg solar cycle of about 90 years.
The first question I have about your hypo is the one raised by Richard Courtney (which had previously occurred to me independently, by Jove 🙂 ). I have seen previous cyclical work that includes the much larger Jupiter, and I wonder why it is not included in the Arnold paper. That said, I have not studied this subject in detail.
I only hold strong opinions in subjects to which I have devoted significant study, and so cannot comment further.
In general, I share your concern about imminent global cooling, although my written 2002 prediction of global cooling was based on the expert opinion of Dr. Tim Patterson. Tim’s response was based on his research into natural climate change, specifically the Gleissberg Cycle. If in fact the PDO governs, then cooling could start sooner than our prediction of 2020-2030. We did not predict the degree of global cooling, and at the time NASA (Hathaway) had predicted a robust SC24; SC24 is now apparently a dud.
I wish you success in your studies, and sincerely hope we are both wrong about imminent global cooling. If cooling is severe, society is unprepared and the consequences could be tragic. Witness the huge population die-off in Northern countries during the Maunder Minimum circa 1700.
We like to think that we are much more capable of managing disasters than we were in the past, but our civilization is also much more technically and politically complex, and such complexity is more difficult to manage effectively. Despite increased physical capability to effect change, we have been unable to effectively mitigate relatively small disasters such as the flooding of New Orleans – so how could we deal effectively with a major food or energy shortage?
Best regards, Allan

September 11, 2013 4:00 pm

Hi Dan,
Have to run, but temperature driving CO2 is certainly more than just solubility.
It is the entire carbon cycle, and it is primarily driven by photosynthesis.
Please examine the beautiful 15fps AIRS data animation of global CO2 at
http://svs.gsfc.nasa.gov/vis/a000000/a003500/a003562/carbonDioxideSequence2002_2008_at15fps.mp4
It is difficult to see the impact of humanity in this impressive display of nature’s power.
Regards, Allan

September 12, 2013 12:32 am

Allan Macrae says
…and sincerely hope we are both wrong about imminent global cooling. If cooling is severe, society is unprepared and the consequences could be tragic. Witness the huge population die-off in Northern countries during the Maunder Minimum circa 1700.
Henry says
we are not wrong, because I found too many confirmations that energy-in is going down.
The results of my plot also suggest that this reduction in energy coming in already started in 1995. (You can do any fit you like on that blue line and you still get 1995.) Also, from the look of my tables, it seems earth’s energy stores are depleted now, and average temperatures on earth will probably fall by as much as the maxima are falling now. I estimate this at about -0.3 K in the next 8 years and a further -0.2 or -0.3 K from 2020 until 2038. By that time we will be back to where we were in 1950, more or less…
There is a bit of a delay between energy-in and energy-out. I predicted that it would be about 5-7 years, and indeed we now see that cooling has started in most of the means data sets as well.
http://www.woodfortrees.org/plot/hadcrut4gl/from:1987/to:2014/plot/hadcrut4gl/from:2002/to:2014/trend/plot/hadcrut3gl/from:1987/to:2014/plot/hadcrut3gl/from:2002/to:2014/trend/plot/rss/from:1987/to:2014/plot/rss/from:2002/to:2014/trend/plot/hadsst2gl/from:1987/to:2014/plot/hadsst2gl/from:2002/to:2014/trend/plot/hadcrut4gl/from:1987/to:2002/trend/plot/hadcrut3gl/from:1987/to:2002/trend/plot/hadsst2gl/from:1987/to:2002/trend/plot/rss/from:1987/to:2002/trend
However, I suspect a lot of fiddling with the data to try and hide the decline, as my own data set suggests that we have already dropped about 0.2 and not 0.1 as the above results suggest.
The problem is not so much the drop in temps, the extra snow, etc. We will survive that.
I survived it.
The problem is the drop in pressure over the oceans – causing less rain at higher latitudes in 2020-2030. For example, to confirm this suspicion, I analysed the daily data of a weather station in Wellington, NZ, which lies at about 40° latitude, and found that rainfall in 1930-1940 was 14% lower, on average, compared with 1940-2000.
Quote on what I have said before:
As the temperature differential between the poles and the equator grows larger due to the cooling from the top, very likely something will also change on earth. Predictably, there would be a small (?) shift of cloud formation and precipitation towards the equator, on average. At the equator insolation is 684 W/m2, whereas on average it is 342 W/m2. So, if there are more clouds in and around the equator, this will amplify the cooling effect, due to less direct natural insolation of earth (clouds deflect a lot of radiation). Furthermore, in a cooling world there is likely less moisture in the air; but even assuming equal amounts of water vapour available in the air, a lesser amount of clouds and precipitation will be available for spreading to higher latitudes. So, a natural consequence of global cooling is that at the higher latitudes it will become both cooler and drier.
As the people in Alaska have noted,
http://www.adn.com/2012/07/13/2541345/its-the-coldest-july-on-record.html
http://www.alaskadispatch.com/article/20130520/97-year-old-nenana-ice-classic-sets-record-latest-breakup-river-1
the cold weather in 2012 was so bad there that they did not get much of any harvest. And it seems NOBODY is telling the farmers up there that it is not going to get any better.
That is “our” fault. We know it is coming. We must warn the world that we cannot stop it from coming.

September 12, 2013 2:20 am

Hello Henry,
A few comments:
Rivers in Western Canada (the Athabasca, the North Saskatchewan, etc.) exhibit a cyclical flow – please note that in this region it appears that warmer is drier (lower river flow) and colder is wetter. I have no opinion regarding other regions.
In the USA, it appears that the warmest years in the modern data record occurred in the 1930’s. This may be true globally as well. Hadcrut3 probably has a warming bias of about 0.2C since ~1980 and this warming bias may extend back several more decades.
I have no basis to estimate the degree of global cooling. I do believe that natural cycles like the PDO suggest imminent cooling (that may have already started), and we know that SC24 is a dud.
At a minimum, I suggest society should immediately study the global cooling issue and also invest in developing better frost-resistant crops. Cessation of fuel-from-food schemes and storage of grains would also be prudent and relatively low-cost measures for consideration.
Regards, Allan

September 12, 2013 6:31 am

Allan Macrae says
In the USA, it appears that the warmest years in the modern data record occurred in the 1930’s
…it appears that warmer is drier
Henry says
It depends on where you look.
The second question here is also: what is the relationship/function? Some places are warmer in a cooling period and cooler in a warming period – CET is a case in point. I note (with a sense of disbelief) that the average temperature (means) at the air force base in Anchorage has dropped by more than 2 degrees C since 2000 (and nobody noticed?),
whereas in Bodo, Norway, it is still warming, as is the east coast of the USA.
Why is this you ask?
The reason, of course, is that these places (that get warmer in a cooling period) happen to get more clouds/cloud formation due to the direction of the weather/winds. These clouds give a GH effect that is very real, both in winter and at night. Also, condensing water vapour releases enormous amounts of heat.
I agree with you that before 1930 we do not have a global base of temps, as nobody can even show me a re-calibration certificate for a thermometer from before that time. So yes, it is easily possible that before 1930 everything must simply shift up.
All of this does not change the facts. As far as I know, the inflow into the Hoover dam is already going down, exactly as I expected for the trend since 2000, with global cooling firming since that time. (Obviously the misinformed broadcaster tells us it is due to climate change – which is only half true, clearly implying it is “our” fault.) So, the global trend of diminishing rainfall at >[40] latitude is already happening. It is just that certain places like W-Europe and E-USA are lucky because of the weather (although wet and more snowy is not always nice, either).
The problem will be the biggest in 2020-2030 as it was in 1930-1940. Many places >[40] will have no weather… for a long time. Believe me. It will come.

September 12, 2013 7:18 am

Allan says
…others want to ascribe all manner of events to cycles, and are busy force-fitting the evidence into their models.
Henry@Alan&Richard
If you look at the second column of data below, which can be calculated/projected from energy-in (maxima), and compare it with the first column in a linear graph:

Saturn–Uranus date   Maxima turning point / sign change   Delay (years)
1851                 1859                                  8
1874                 1882                                  8
1897                 1904                                  7
1919                 1927                                  8
1942                 1950                                  8
1965                 1972                                  7
1988                 1995                                  7
2009                 2016                                  7
2032                 2039                                  7

you get
y = 0.9941x + 18.931
R² = 1
It is compelling to include it in my final report.
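[Editor’s note: the regression quoted above can be reproduced directly from the two columns of years in the table (note that the intercept, 18.931, is in years; in the original it was written with a decimal comma). A sketch, with a caveat: any two monotonically increasing lists of years offset by a near-constant 7–8-year delay will give R² ≈ 1, so the high R² by itself says little about a physical connection.]

    import numpy as np

    # col 1: Saturn–Uranus dates; col 2: maxima turning points (from the table)
    x = np.array([1851, 1874, 1897, 1919, 1942, 1965, 1988, 2009, 2032], float)
    y = np.array([1859, 1882, 1904, 1927, 1950, 1972, 1995, 2016, 2039], float)

    slope, intercept = np.polyfit(x, y, 1)
    r2 = np.corrcoef(x, y)[0, 1] ** 2
    print(f"y = {slope:.4f}x + {intercept:.3f}, R^2 = {r2:.5f}")
    # -> y = 0.9941x + 18.931, R^2 = 0.99996 (i.e. 1 to the precision quoted)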

September 12, 2013 10:56 am

Hello Henry,
Re your post at 6:31am today.
You say: “The problem will be the biggest in 2020-2030 as it was in 1930-1940.”
I do not understand your statement. I suggest that 1930-40 was probably the warmest decade in the USA and possibly the warmest on Earth (in the modern data record). Second-warmest decade was probably 2000-2010.
IF global cooling does recur, then I suggest that 2020-2030 will be a cooler decade, analogous to, but more or less severe than, the global cooling trend from ~1945 to ~1975.
Re your post at 7:18am today
Please note my statement “warmer is dryer” applied to “rivers in Western Canada” and I should further state that my previous analysis was limited to rivers that flow eastward off the continental divide, specifically the Athabasca and the North Saskatchewan. Yes, of course it depends on location – I said “I have no opinion regarding other regions”.
You say: “If you look at the second column of data…”
I cannot comment on this statement because you have apparently not provided a reference to the data file.
Regards, Allan

September 12, 2013 11:13 am

Climate models wildly overestimated global warming, study finds
FoxNews.com Sept 12, 2013.
Refers to the Nature Climate Change journal piece

[The study compared] 117 climate predictions made in the 1990s to the actual amount of warming. Out of 117 predictions, the study’s author told FoxNews.com, three were roughly accurate and 114 overestimated the amount of warming. On average, the predictions forecast two times more global warming than actually occurred.

This paper and this WUWT post should be added to the WUWT “Climate Fail Files” menu.

September 12, 2013 12:02 pm

(I hope Richard who does not want homework is reading)
Allan, you make good arguments;
allow me to counter the last one, because I think it will make you understand the importance of energy coming in.
The data file is here:
http://blogs.24.com/henryp/2013/02/21/henrys-pool-tables-on-global-warmingcooling/
(first table)
the 2nd column in my previous post gives my own results – turning points and changes of sign –
the dates 1995 (change of sign), 1972 (change of direction) and 2016 (change of direction) can be calculated from the observed data with very high confidence (R² > 0.995 on the binomial), as they lie within or just outside the measuring range 1974-2012
I subsequently realised:
quote:
Persistence of the Gleissberg 88-year solar cycle over the last ˜12,000 years: Evidence from cosmogenic isotopes
Peristykh, Alexei N.; Damon, Paul E.
Journal of Geophysical Research (Space Physics), Volume 108, Issue A1, pp. SSH 1-1, CiteID 1003, DOI 10.1029/2002JA009390
Among other longer-than-22-year periods in Fourier spectra of various solar-terrestrial records, the 88-year cycle is unique, because it can be directly linked to the cyclic activity of sunspot formation.
end quote
So Gleissberg & others knew that there must be a cyclic nature to energy coming in (maxima).
Putting the data in such a best fit, with an 88-year wavelength,
gives the other dates (projection):
http://blogs.24.com/henryp/2012/10/02/best-sine-wave-fit-for-the-drop-in-global-maximum-temperatures/
If you still don’t understand that argument, check here:
http://wattsupwiththat.com/2013/09/05/statistical-proof-of-the-pause-overestimated-global-warming-over-the-past-20-years/#comment-1412171
If you will agree with me on this, you will understand how I put everything together
http://blogs.24.com/henryp/2013/04/29/the-climate-is-changing/
I am saying: go on your own data, where we know it was reliable (1974-2012), and start from there, reconstructing the past of energy coming in (maxima). Never mind average temps on earth – that confuses; too many factors. Understand that there is a time delay because of this.
I am convinced that my experiment / sampling procedure is repeatable.
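[Editor’s note: mechanically, the “best sine wave fit” described here is a least-squares fit with the period fixed at 88 years. A minimal sketch on synthetic maxima trends; the amplitude, phase and toy data are hypothetical, with the phase chosen so the zero crossings land near the 1950/1995/2039 sign-change dates quoted earlier in the thread:]

    import numpy as np
    from scipy.optimize import curve_fit

    def gleissberg_wave(year, amp, phase, offset):
        # sine wave with the period fixed at the 88-year Gleissberg cycle
        return amp * np.sin(2 * np.pi * (year - phase) / 88.0) + offset

    # toy data: global maxima trends vs. year (hypothetical numbers)
    rng = np.random.default_rng(4)
    years = np.arange(1974, 2013, dtype=float)
    trends = gleissberg_wave(years, 0.03, 1951.0, 0.0) + rng.normal(0, 0.004, years.size)

    (amp, phase, offset), _ = curve_fit(gleissberg_wave, years, trends,
                                        p0=[0.03, 1950.0, 0.0])
    # sign changes (zero crossings) fall at phase + k*44; turning points
    # (maximum warming/cooling rates) fall at phase + 22 + k*44
    print(f"fitted phase: {phase:.1f}; next sign changes: {phase + 44:.0f}, {phase + 88:.0f}")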

September 12, 2013 3:33 pm

Hello Henry,
Sorry, but I am out of time for now. No guarantees, but if you want to email me a spreadsheet then please contact me through my website.
Regards, Allan

richardscourtney
September 12, 2013 3:41 pm

HenryP:
At September 12, 2013 at 12:02 pm you say
“(I hope Richard who does not want homework is reading)”
Of course I am. I always enjoy a joke, and somebody who cannot explain his work is a joke.
Richard

September 13, 2013 7:22 am

Henry;
Allan, the rows with data are there,
http://blogs.24.com/henryp/2013/02/21/henrys-pool-tables-on-global-warmingcooling/
You can right-click on a table and print it.
Each original row of data came from tutiempo.net, after taking the linear trends over the periods indicated.
For example, here are the original results from JFK airport (New York)
http://www.tutiempo.net/clima/New_York_Kennedy_International_Airport/744860.htm
Note that in this particular example you will have to go into the individual months’ data for 2002 and 2005 to see which months are missing (or have only partial data) and apply the correction as explained earlier in my sampling technique 2)d).
FYI, if you missed it:
2)d) says:
I made a special provision for months with missing data: not to put in a long-term average, as is usual in stats, but rather to take the average of that particular month in the preceding year and the following year.
Once you have the whole data set of a station complete, you can copy and paste it into Excel and do the linear trending (I did maxima, means and minima together).
What I am saying is that means are less reliable, not only because of earth’s own contributions from inner sources and storage places (oceans), but also because of the methods used to record readings before automatic recording began. Maxima are much more reliable, as we have only one measurement per day and we know it can be related to how much energy is coming in.
It would be difficult for anyone to mess this up, because they had thermometers that stayed stuck at the maximum.
So it should be easy for anyone to come up with a graph of maxima similar to mine.
(Problem is: NOBODY IS LOOKING AT MAXIMA. WHY NOT?)
You have to take a globally representative sample, though, to get a reasonable average for the whole globe. If you look at only one place you get a very biased result. The biggest variable causing that is the “weather”.
So now, if you grasp all of that, it follows that 1927 was the lowest point for energy coming in during the last century, and 2016 will be the next. The droughts on the great plains of America started in 1932. Hence, there was a 5-year delay before the lack of warming/cooling caused the lack of pressure over the oceans. This is consistent with my other observations, namely that there is always a delay of at least 5 years before you see “what is happening”… It has been cooling since at least 2002, though some data sets suggest 2000, as does mine.
There is also no such thing as a “pause” in warming. That is rubbish. In nature it is either cooling or warming, and it appears that the planets are acting together to keep the cooling and warming in check.
@Richardscourtney
It is typical that the people without any data sing the highest notes. You are also insulting me again. I am also a joke, now?
"I admit that I am losing patience with your unfounded smears, misrepresentations and insults"

richardscourtney
September 13, 2013 7:58 am

HenryP:
At September 13, 2013 at 7:22 am you ask me

I am also a joke, now?

I answer, yes.
I have answered your question but you have still not answered the questions I asked you concerning what you have done.
I remind you of the most recent question which you have evaded.

Why does it concur with “the timing of the planets Uranus and Saturn” and not with Jupiter which is larger and denser?

Richard

September 13, 2013 8:41 am

Richard says
I answer, yes.
Henry says
You are committing a serious sin. (Matthew 5:22: whoever calls his brother a worthless fool will be in danger of going to hell.)
I did answer your question but I cannot help it if you are too lazy to spend time on how William Arnold came to his conclusions. He did mention the other planets and their functions.
The relationship between my best fit and the conjunctions of Saturn and Uranus helps explain to me the 88-year cycle I am observing in my data, as others did (i.e. Gleissberg himself, for one).
Most likely we are also moving up or down (as far as temps are concerned) in other, longer cycles, no doubt also caused by the planets…

September 13, 2013 8:55 am

Henry@RMB
It might help if you addressed the person(s) you are speaking to.

richardscourtney
September 13, 2013 9:24 am

HenryP:
I am answering your post to me at September 13, 2013 at 8:41 am in which you don’t provide a reply to me.
Firstly, I do not intend to swap Biblical quotations with you because I consider it would be unfair if I were to enter a gunfight when my opponent is only armed with a toothbrush.
However, I could list several quotations about bearing false witness as a response to your saying to me

I did answer your question but I cannot help it if you are too lazy to spend time on how William Arnold came to his conclusions.

NO! DO TRY NOT TO BE AN IDIOT.
I did NOT ask you about William Arnold,
I did NOT ask you about William Arnold’s conclusions, and
I did NOT ask you about how William Arnold came to his conclusions.
I asked you why YOU had done what YOU did.
You have been unable and/or unwilling to say why YOU did what YOU did.
But you did tell me to read a paper by William Arnold.
So far you have only been asked to explain your sample procedure and why your choice of a relationship is not data mining.
YOU HAVE NOT ANSWERED EITHER QUESTION
And there are other dubious things in your so-called analysis, too.
Richard

September 13, 2013 10:01 am


I was reading back the words you had written to me, after which I had apologized to you. You accepted my apology.
Clearly, after you have called me a fool, you must understand that I will not address you again until you apologize to me.
As far as relationships go, we are now lost in time, so to speak.
I am just busy dusting myself off.

richardscourtney
September 13, 2013 10:17 am

Henry P:
Were you to provide answers to my questions then clearly I would owe you an apology and in that case I would be pleased to provide it.
However, if you choose to use faux offence as a pretended reason to continue your refusal to explain your methodology then everybody will see that.
Richard