IPCC Climate: A Product of Lies, Damn Lies and Statistics Built On Inadequate Data

Guest essay by Dr. Tim Ball

“If you torture the data enough, nature will always confess” – Ronald Coase.

Facts are stubborn things, but statistics are more pliable. – Anonymous.

Climatology is the study of average weather over time or in a region. It is very different than Climate Science, which is the study by specialists of individual components of the complex system that is weather. Each part is usually studied independently of the entire system, with little regard for how it interacts with or influences the larger system. The supposed link between the parts is the use of statistics. Climatology has suffered from a pronounced form of the average-versus-discrete problem since the early 1980s, when computer modelers began to dominate the science. Climatology was doomed to failure from then on, a failure only accelerated by its hijacking for a political agenda. I witnessed a good example early on at a conference in Edmonton on Prairie climate predictions and their implications for agriculture.

It was dominated by the keynote speaker, a climate modeler, Michael Schlesinger. His presentation compared five major global models and their results. He claimed that because they all showed warming they were valid. Of course they did, because they were programmed to produce that general result. The problem is that they varied enormously over vast regions; for example, one showed North America cooling while another showed it warming. The audience was looking for information adequate for planning and became agitated, especially in the question period. It peaked when someone asked about the accuracy of his warmer and drier prediction for Alberta. The answer was 50%. The questioner replied that this was useless; his Minister needed 95%. The shouting intensified.

Eventually a man threw his shoe on the stage. When the room went silent he said, “I didn’t have a towel.” We learned he had a voice box and the shoe was the only way he could get attention. He asked permission to go on stage, where he explained his qualifications and put a formula on the blackboard. He asked Schlesinger if this was the formula he used as the basis for his model of the atmosphere. Schlesinger said yes. The man then proceeded to eliminate variables one by one, asking Schlesinger if each was omitted in his work. After a few eliminations he said one was probably enough: you have no formula left, and you certainly don’t have a model. It has been that way ever since with the computer models.

Climate is an average, and in the early days averages were the only statistic determined. In most weather offices the climatologist’s job was to produce monthly and annual averages. The subject of climatology was of no interest or concern. The top people were forecasters, meteorologists trained only in the physics of the atmosphere. Even now few people know the difference between a meteorologist and a climatologist. When I sought my PhD, essentially only two centers of climatology existed: Reid Bryson’s center in Wisconsin and Hubert Lamb’s Climatic Research Unit (CRU) at East Anglia. Lamb set up there because the national weather office wasn’t interested in climatology. People ridiculed my PhD for being done in the Geography Department at the University of London, but other university departments weren’t doing such work. Geography accommodated it because of its chorologic objectives (the study of the causal relationships between geographic phenomena in a region).

Disraeli’s admonition of lies, damn lies and statistics was exemplified by the work of the IPCC and its supporters. I realized years ago that the more sophisticated the statistical technique, the more likely the data were inadequate. In climate the data were inadequate from the start, as Lamb pointed out when he formed the CRU. He wrote in his autobiography, “…it was clear that the first and greatest need was to establish the facts of the past record of the natural climate in times before any side effects of human activities could well be important.” It is even worse today. Proof of the inadequacy is the increasing use of ever more bizarre statistical techniques. Now they invent data, as in parameterization, and use the output of one statistical contrivance or model as real data in another model.

The climate debate cannot be separated from environmental politics. Global warming became the central theme of the claim, promoted by the Club of Rome, that humans are destroying the planet. Their book, Limits to Growth, did two major things, both removing understanding and creating a false sense of authority and accuracy. First was the simplistic application of statistics beyond an average, in the form of straight-line trend analysis; second, predictions were given awesome but unjustified status as the output of computer models. They wanted to show we were heading for disaster and selected the statistics and process to that end. This became the method and philosophy of the IPCC. Initially we had climate averages. Then in the 1970s, with the cooling from 1940, trends became the fashion. Of course, the cooling trend did not last and was replaced in the 1980s by an equally simplistic warming trend. Now they are trying to ignore another cooling trend.

One problem developed with switching from averages to trends. People trying to reconstruct historic averages needed a period in the modern record for comparison. The 30-year Normal was created with 30 chosen because it is a statistically significant sample, n, in any population N. The first one was the period 1931-1960, because it was believed to have the best instrumental data sets. They keep changing the 30-year period, which only adds to the confusion. It is also problematic because the number of stations has declined significantly. How valid are the studies done using earlier “Normal periods”?

Unfortunately, people started using the Normal for the wrong purposes. Now it is treated as the overall average weather, when it is only the average weather for a particular 30-year period. It is actually inappropriate for climate because most changes occur over longer periods.

But there is another simple statistical measure they effectively ignore. People, like farmers, who use climate data in their work know that one of the most important statistics is variation. Climatology was aware of this decades ago as it became aware of changing variability, especially of mid-latitude weather, with changes in the upper-level winds. It was what Lamb was working on and what Leroux continued.

Now, as the global trend swings from warming to cooling, these winds have switched from zonal to meridional flow, causing dramatic increases in the variability of temperature and precipitation. The IPCC, cursed with the tunnel vision of political objectives and limited by their terms of reference, did not accommodate natural variability. They can only claim, incorrectly, that the change is proof of their failed projections.

Edward Wegman in his analysis of the “hockey stick” issue for the Barton Congressional committee identified a bigger problem in climate science when he wrote:

“We know that there is no evidence that Dr. Mann or any of the authors in paleoclimatology studies have had significant interactions with mainstream statisticians.”

This identifies the problem that has long plagued the use of statistics, especially in the Social Sciences: their use without knowledge or understanding.

Many used a book referred to as SPSS (it is still available), the acronym for the Statistical Package for the Social Sciences. I know of people simply plugging in numbers and getting totally irrelevant results. One misapplication of statistics undermined the career of an English geomorphologist who completely misapplied a trend-surface analysis.

IPCC projections fail because of many inappropriate statistics and statistical methods. Of course, it took a statistician to identify the corrupted use of statistics and to show how they fooled the world into disastrous policies, but that only underlines the problem with statistics, as the two opening quotes attest.

There is another germane quote, by the mathematician and philosopher A. N. Whitehead, about the use, or misuse, of statistics in climate science:

There is no more common error than to assume that, because prolonged and accurate mathematical calculations have been made, the application of the result to some fact of nature is absolutely certain.

_______________

Other quotes about statistics reveal a common understanding of their limitations and, worse, their misapplication. Here are a few:

He uses statistics as a drunken man uses lampposts – for support rather than for illumination. – Andrew Lang.

One more fagot (bundle) of these adamantine bandages is the new science of statistics. – Ralph Waldo Emerson

Then there is the man who drowned crossing a stream with an average depth of six inches. – W E Gates.

Satan delights equally in statistics and in quoting scripture. – H G Wells

A statistical analysis, properly conducted, is a delicate dissection of uncertainties, a surgery of suppositions. – M J Moroney.

Statistics are the modern equivalent of the number of angels on the head of a pin – but then they probably have a statistical estimate for that. – Tim Ball

Marcos

the NOAA web site used to have a page discussing the use of ‘normals’ that specifically stated that they were never intended to be used to gauge climate change. for the life of me, i cant find that page anymore…

Gonzo

[He uses statistics as a drunken man uses lampposts – for support rather than for illumination. – Andrew Lang.] I couldn’t help but think of tamino et al LOL

This is part of an exchange I had with a modeler on my blog.
Let’s go back to basics. What can a model teach us? Or in other words, what is the knowledge gained by running a simulation? Knowledge is gained from analysis of information. Where does information come from? From the resolution of contingency (if that last sentence doesn’t make sense, consult any introductory text on information theory). What contingency then can running a simulation resolve? Its output trajectory, unknown before the run. On what does that result depend? The model’s equations plus initial state. Since this information is internal to the model, a model can only teach us about itself! Models can create contingency but can never resolve it, and therefore cannot provide information gain if the metric is information about the real world. Models can only provide a hypothesis, but information gain (knowledge) cannot occur unless the hypothesis is tested by comparing the model’s predictions to the observed data, something the IPCC refuses to acknowledge. They claim models do not make predictions. But a model is only as good as its predictive power.

oldseadog

Old Scottish saying:
Facts are chiels that winna’ ding.
Approximate translation: Facts are things that cannot be argued with.
Now, how can we get a), IPCC and b), MSM to accept this?
I’m not holding my breath.

Curiously, I just posted a list of four statistical “sins” committed in AR5 with regard to figure 1.4 (and in AR4’s SPM as well), on CA in the thread Steve just opened on statistical problems with this figure. They are a bit more specific than the general essay above:
* Cherrypicking (presenting single model runs without telling us how those particular model runs were selected or constructed, permitting them to be selected out of an ENSEMBLE of runs from slightly variable initial conditions per model).
* Data Dredging in the worst possible way and concealing the fact in a plateful of spaghetti.
* Applying the concept of the statistical mean of an ensemble of models as if it has any meaning whatsoever, especially when the models individually fail or do very poorly on an ordinary hypothesis test compared to the actual data, once you remove the illusion of the plate of spaghetti and display the results, one model at a time, compared to the actual data.
* Applying the concept of the statistical variance/standard deviation to the results from an ensemble of models as if it has any meaning whatsoever. This is the only rational basis for statements such as “95%” confidence or “high confidence” and is based on the central limit theorem, which does not apply to ensembles of model predictions because the general circulation models used in the ensemble are not randomly selected independent and identically distributed samples drawn from a probability distribution of correct general circulation models. (A short numerical sketch of this point follows the list.)
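A minimal numerical sketch of that last point (all numbers invented): a naive 95% confidence interval built from an ensemble’s spread only covers the truth at its nominal rate when the members really are independent, unbiased draws from a distribution centred on the truth; a shared systematic bias destroys the coverage.

```python
# Sketch: why ensemble spread is not a sampling distribution unless the
# members are iid draws centred on the truth. All values are invented.
import numpy as np

rng = np.random.default_rng(1)
truth = 0.5          # hypothetical "true" value being estimated
n_members, trials = 30, 2000

def ci_covers(samples, truth):
    m, s = samples.mean(), samples.std(ddof=1)
    half = 1.96 * s / np.sqrt(samples.size)     # naive CLT-style 95% interval
    return abs(m - truth) <= half

# Case 1: genuine iid, unbiased samples -> coverage close to 95%.
iid = np.mean([ci_covers(rng.normal(truth, 0.3, n_members), truth)
               for _ in range(trials)])

# Case 2: "models" sharing a common systematic bias -> coverage collapses.
biased = np.mean([ci_covers(rng.normal(truth + 0.4, 0.3, n_members), truth)
                  for _ in range(trials)])

print(f"coverage, iid unbiased members: {iid:.2f}")    # close to 0.95
print(f"coverage, shared-bias members:  {biased:.2f}")  # near zero
```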
Steve also added the fact that they cherrypicked and shifted the start date for figure comparison to maximize the overlap of their spaghetti with the actual climate data, certainly compared to the earlier AR5 draft. I’d add another feature of 1.4 I pointed out back in the originally leaked draft — the error bars shown on the current temperatures are utterly meaningless — somebody just made them up at plus or minus 0.1C. HADCRUT4 doesn’t even claim to be accurate to better than 0.15C across the contemporary time frame. As usual, the graphic adds an error bar onto individual points that are what they are (conveying a false sense of precision beyond that represented by the data itself) and omits any sort of error estimate or analysis of fluctuations in the individual GCMs represented in the spaghetti. Finally, one should really look AT the fluctuations in the spaghetti — in particular, how does the autocorrelation of the GAST produced by the GCMs compare to the observed climate autocorrelation. Hmmm, not so well, not so well.
I’d add further that performing the most cursory of fits of the form T(t) = a t + b sin(ct) + d to the 165 year HADCRUT4 data, one obtains a result with vastly smaller a (the linear rate of warming), b around 0.2 C, c around 1/70 inverse years, and d set by the need to optimize the slope and oscillation relative to the data. This curve explains almost all of the visible variation in the data remarkably accurately, including the rapid warming of both the early and late 20th century and the “pause/cooling” in the middle and end of the 20th century, as WELL as a similar cycle visible even in the 19th century data with its presumably large uncertainties. In fact, I think I could pretty easily make up a stochastic function such as:
T(t) = a t + b sin(ct) + d + F(t)
where F(t) is trendless, exponentially correlated or power correlated noise and produce temperature curves that are ALMOST indistinguishable from the HADCRUT4 observed 165 year temperature series, and could do even better if I cherrypicked a different start and (say) threw out all data before 1880 or whatever as being too imprecisely known.
If this curve (which is pure numerology — I have no idea what the physical basis of a and d are beyond mumblings about Milankovitch cycles that add up to “I don’t really know”, and might GUESS that c is the inverse of the PDO period with b an empirical amplitude, and of course F(t) is noise because, well, the climate system is empirically noisy as all hell in ways we cannot predict or understand) has any meaning at all, it is that making egregious claims about the “unprecedented” nature of the warming trend in the late 20th century is ridiculous — no trend explainable by a four parameter sinusoidal fit is “unprecedented”, and as Dick Lindzen has shown, no audience that is shown early and late 20th century temperature variations at the same scale and asked to pick which one is with and which without CO_2 can do this unless they are enormously familiar with the data and can pick out individual FINGERPRINTS in the data such as the late 20th century ENSO/Pinatubo bobble. The curves are qualitatively and quantitatively identical to within this sort of feature-specific noise. What isn’t appreciated is that the curve actually extends decently back into the 19th century as well, at least as far as HADCRUT4 is concerned.
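For anyone who wants to play with a fit of the same form, here is a minimal sketch using synthetic data in place of HADCRUT4 (the generating coefficients and the roughly 70-year period are illustrative assumptions, not the fitted values described above):

```python
# Sketch of a fit of the form T(t) = a*t + b*sin(c*t) + d on synthetic data.
# The synthetic series stands in for HADCRUT4; coefficients are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def model(t, a, b, c, d):
    return a * t + b * np.sin(c * t) + d

rng = np.random.default_rng(0)
years = np.arange(1850, 2015)                        # 165 "years"
t = years - years[0]
true = model(t, 0.005, 0.2, 2 * np.pi / 70.0, -0.4)  # assumed ~70-yr cycle
obs = true + rng.normal(0.0, 0.1, t.size)            # plus trendless noise F(t)

p0 = [0.005, 0.2, 2 * np.pi / 70.0, -0.4]            # starting guesses matter for c
(a, b, c, d), _ = curve_fit(model, t, obs, p0=p0)
print(f"a = {a:.4f} C/yr, b = {b:.2f} C, period = {2*np.pi/c:.0f} yr, d = {d:.2f} C")
```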
Personally, I’d like to see AR5 criticism be a little less general and a little more specific, and think that one can pick figure 1.4 completely to pieces in a way that is most embarrassing to the IPCC. I’d start by simply separating out the contributing GCMs, one at a time, from the spaghetti. Look at the one in orange near the very top! It doesn’t come within MILES of the empirical curve. To hell with it, it fails an INDIVIDUAL hypothesis test regardless of how you massage starting points and so on. Remove it from the contributing ensemble as a falsified model. Look at another. Yes, it dips down as low as the empirical curve — for less than 10% of its overall values — and it has UTTERLY incorrect autocorrelation, showing wild spikes of warming that really ARE unprecedented in the data. To hell with it — remove it from the contributing ensemble.
In the end, you might have a handful of GCMs that survive this winnowing not because they are CORRECT, but because they at least spend SOME significant fraction of their time at or below the actual temperature record and have at least approximately the right autocorrelation and heating/cooling fluctuation range. Better yet, run entire Monte Carlo ensembles of runs per model and see what fraction of the models have EXACTLY the right mean and autocorrelation behavior. If the answer is “less than 5%” chuck the model. Even fewer of the GCMs would make the cut here.
When you are done, don’t present a pile of spaghetti and don’t data dredge a conclusion. The non-data-dredged conclusion one SHOULD draw from considering the GCMs as if they WERE iid samples would be “throw them all out”, because one has to apply MUCH MORE STRINGENT STATISTICAL REQUIREMENTS FOR SIGNIFICANCE if you have many jars of jellybeans you are trying to correlate with acne, and one commits even more grievous sins than mere data dredging if one PIECES TOGETHER regions where individual models HAPPEN to descend to barely reach the data in order to be able to claim “the models embrace the data”.
Of course, if one does this one ends up with the conclusion that the rate of warming is highly exaggerated in AR4 and AR5 alike by any of the ensembles of GCMs. You might even end up with a rate of warming that is surprisingly close to a, the slope of the linear trend in my numerological fit. That’s order of a half degree per century, completely independent of any presumed cause such as CO_2.
rgb
[Thank you. An “iid sample” ? Mod]

D Nash

The result of statistical corruption then leads to other forms of corruption:
“The surest way to corrupt a youth is to instruct him to hold in higher esteem those who think alike than those who think differently.”
― Friedrich Nietzsche
“One of the saddest lessons of history is this: If we’ve been bamboozled long enough, we tend to reject any evidence of the bamboozle. We’re no longer interested in finding out the truth. The bamboozle has captured us. It’s simply too painful to acknowledge, even to ourselves, that we’ve been taken. Once you give a charlatan power over you, you almost never get it back.”
― Carl Sagan

statistics is fine to study climate
as long as you keep to probability theory
and taking representative samples
http://blogs.24.com/henryp/2013/02/21/henrys-pool-tables-on-global-warmingcooling/

MattB

Facts are stubborn things, but statistics are more pliable. – Anonymous
FYI, that one is generally attributed to Mark Twain, and I can see that 🙂

Brad

Even when the models compare well to observed data, there is still the possibility of no relevance between the two. I deal with this on a regular basis, where modelers “tune” their calculations by adjusting inputs that are really fixed, and ignoring those that are critical.
As an example: A building energy model was created to match the building energy consumption characteristics to past bills. A mechanical contractor had approached the owner with an energy retrofit project that he claimed would save 30% of the building air conditioning energy use. He “tuned” the model to match annual energy use within 2%, called it good, and presented the output.
It took me the better part of 3 weeks to get the input files in order to review the output files.
Monthly energy use was off by as much as 200% of the bills. Higher in the summer, lower in the winter.
They had adjusted the building air conditioning operations from the actual 60 hours/week to 94 hours/week.
They had not taken into account a data center that consumed 30% of the total electrical usage. (This was purposely done to provide a larger energy pool to claim savings from.)
Motor kW draw was nameplate, not actual. (Actual was ~50% less than nameplate.)
Chiller efficiency at 0.9 kW/ton cooling was worse than actual. (More energy consumption.)
These issues were brought up and the modeler went back for another round. Again I had to ask repeatedly for the input files.
The final version showed half the data center load accounted for, and the operating hours were corrected.
However, the chiller efficiency was adjusted upwards to 1.4 kW/ton cooling. The reason given was to more accurately reflect “actual” usage. (Actual was around 0.68 kW/ton.)
The occupant count was increased almost two-fold, including floor plug loads.
By this time the owner saw the writing on the wall and stopped the project.
The contractor was furious that I had deemed it necessary to actually question his model.
The sad thing was that the contractor didn’t know (or acknowledge?) that the real reason for poor energy performance was the building envelope. Caulking had failed, and the air conditioning motorized dampers had failed wide open at night, allowing stack effect to “push” the heat out of the building every night.
Had these issues been identified upfront, there would have been no reason to do the modeling in the first place.
This was one instance where I managed to get ahead of the problem. All too often I get brought in afterwards, when the damage has already been done. Those projects make for great meetings…:)
I could write for months on this subject….
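A minimal numeric sketch of the compensating-error problem Brad describes, with made-up monthly figures: a model can match the annual bill to within 2% while individual months are off by as much as 200%.

```python
# Made-up monthly electricity use (thousands of kWh): actual bills vs. a
# "tuned" model that overshoots summer and undershoots winter, yet matches
# the annual total -- the only number the contractor reported.
actual = [90, 85, 80, 70, 65, 60, 60, 62, 68, 75, 82, 88]
model  = [30, 30, 35, 55, 75, 140, 180, 150, 80, 50, 35, 30]

annual_err  = (sum(model) - sum(actual)) / sum(actual)
monthly_err = [abs(m - a) / a for m, a in zip(model, actual)]

print(f"annual error:      {annual_err:+.1%}")       # under 1%
print(f"worst month error: {max(monthly_err):.0%}")  # 200%
```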

Tom J

There are some interesting historical details that perhaps mesh with this story. Reid Bryson was a friend of Hubert Lamb. I believe it was in the 1970s, when global cooling and a coming ice age were the fashion statement of the day, that Reid Bryson was said to have speculated on the cooling potential that might occur if every human kicked up something like a spoonful of dirt into the atmosphere. Bryson himself never accepted the CAGW meme, but he unwittingly gave an inspiration to Thomas Wigley. Hubert Lamb was not particularly computer literate and hired Wigley to transfer climate records onto the computer. Wigley had other ideas, and Lamb grew to regret having hired him at the CRU. Wigley, who eventually migrated over to NCAR, was one of the original AGW mavens. A problem always existed with the global warming meme in that global temperatures dropped from the 1950s till the late 1970s, in contradiction to the greenhouse theory. Thomas Wigley, so to speak, ‘borrowed’ Reid Bryson’s theories on atmospheric aerosols and used them to explain away that drop, claiming that when clean air legislation came on line in the 70s the cooling effect of the aerosols ceased, which allowed AGW to light off. The IPCC apparently attempted to resurrect that claim about a year ago to explain the current lull in warming. Lots of twists and turns to this story, eh?

Tom G(ologist)

I have been in the water resources business for 30 years, much of which has dealt with groundwater. That segment of the industry has been hijacked by groundwater modelers in much the same way. Groundwater models are no more in touch with reality than climate models but they are the cornerstone of decisions for groundwater remediations. In the absence of an IPCC, EPA is the mover and shaker of the model-driven basis of groundwater reality in the U.S., and if you think they are draconian with climate policy, you should spend some time working on a CERCLA (Superfund) Site and dealing with that branch of the agency. If a model says something is a certain way, then that’s it – no arguments. The premier journal of groundwater science “Ground Water” has become a paper forum for modelers to throw computer codes around and there are virtually no actual studies of groundwater in the real world.
I have represented companies and groups of companies which are paying tens of millions of dollars attempting to remediate groundwater which has no potential of impacting a single receptor and which, through natural processes, remediates itself before any receptor could be exposed to any contamination. I have produced hard time series and spatial distribution data to show that there is no need for the expenditure. But the agency runs a model which says that under a narrow range of specific pre-determined conditions, starting from an impossible initial condition, there is a chance that someone might be exposed. What’s more, they use the highest concentration ever measured at the source of a release (where no-one actually uses the groundwater) as the exposure concentration regardless of how far away that exposure point is from the source, ignoring how much measured attenuation of concentration has occurred in the intervening distance/time, and assume that if a million people drink two litres of that most-contaminated water every day for 70 full years (no vacations or moving to a new house allowed) then there is a chance that one of those people will develop a cancer – and they purport to know this despite the fact that of those million persons, 250,000 will actually develop cancers for other reasons. In other words, if ANYONE happens to spill a few gallons of gasoline onto the ground and it is measurable in groundwater, EPA takes the highest concentration detected, extrapolates THAT concentration to remote locations and concludes that instead of 250,000 cancers per million people, there MIGHT be 250,001 cancers. Believe me, the transport assumptions are IMPOSSIBLE.
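A minimal sketch of the screening-level risk arithmetic being described, in the standard chronic-daily-intake times slope-factor form; every input value below is a hypothetical placeholder (chosen so the result lands near the one-in-a-million figure mentioned above), not data from any real site.

```python
# Screening-level excess lifetime cancer risk: CDI = C*IR*EF*ED / (BW*AT),
# Risk = CDI * SF. All input values are hypothetical placeholders.
C  = 0.005          # concentration in water, mg/L (assumed)
IR = 2.0            # ingestion rate, L/day ("two litres every day")
EF = 365.0          # exposure frequency, days/year (no vacations allowed)
ED = 70.0           # exposure duration, years ("70 full years")
BW = 70.0           # body weight, kg (assumed)
AT = 70.0 * 365.0   # averaging time, days (lifetime, for carcinogens)
SF = 0.007          # oral cancer slope factor, (mg/kg-day)^-1 (assumed)

cdi  = (C * IR * EF * ED) / (BW * AT)   # chronic daily intake, mg/kg-day
risk = cdi * SF                         # incremental lifetime cancer risk

print(f"CDI  = {cdi:.1e} mg/kg-day")
print(f"Risk = {risk:.1e}  (~{risk * 1e6:.0f} extra cancer per million people)")
```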
The agency then calculates a number which should be the remediation goal and forces the company to attempt to meet that goal by expensive actions which invariably fail (that is correct – they do not work). EPA KNOWS that NONE of its enforced groundwater remediations has ever succeeded in achieving its impossible goals in the absence of the wonderful ability of nature to take care of itself and do most of the actual work of contaminant elimination. Yet the agency continues to require multiple tens of millions of dollars per year in useless remediations because a model says it will be possible.
Now here’s a familiar scenario for you. Who actually runs the models? It is the contractors who will then be awarded the remedial design contract and construction oversight and operations and maintenance work. In every case in which I have been confronted with a computer model (and that’s about 100 to 125 out of the many many hundreds of cases on which I have worked), I have been able to refute the conclusions of the models by applying a very simple trick. Force everyone to look at the actual field data.
In groundwater modeling, there is a simple precept which everyone except the modelers and EPA accept: in order for a model to be remotely accurate, you need to generate enough data to eliminate assumptions for the input parameters. But by the time you have that much data, you no longer need a model because you HAVE a data-supported answer.

Hi
If the models are the best that science can produce (and I do accept that the model constructors are experts in numerical modelling, but may not be experts in much else) and their product does not work, perhaps what some call ‘pseudo science’ should be looked into.
For some years now, I have looked into changes of the Earth’s magnetic field and noticed an easily observed ‘apparent correlation’ with the temperature records.
For the time being the science indicates that the only way these changes may make an impact is through the cosmic-ray nucleation process, but it doesn’t support Svensmark’s hypothesis, since the Earth’s and solar fields appear to have a negative ‘correlation’:
http://www.vukcevic.talktalk.net/LFC9.htm
There are other physical processes that come into play but sadly, as the current science indicates, none has sufficient power available to move the oceanic temperatures.
However, the geomagnetic field as measured on the surface is often an indication of what is happening further down in the Earth’s interior.
Changes in the interior are also reflected on the surface through tectonic activity, which contains enough energy to affect the efficiency of the ocean currents, the main transporters of energy from the equatorial regions pole-wards.
Thus the next step was to look at the tectonic records for the last 100+ years (relatively good records are available) in the areas of the three climate indices (AMO, PDO & ENSO).
An odd ‘oscillation’ was noted in the North Atlantic, but when these events were integrated over a period of time (k) the picture became far more encouraging.
The process was repeated for the areas of the North and Equatorial Pacific, and the results are presented here:
http://www.vukcevic.talktalk.net/APS.htm
As can be clearly seen, the ‘forcing formula’ is the same one in all three cases. The k factor was determined by trial and error; it has the same value for both Pacific indices, but it is less effective in the Atlantic than in the Pacific, the Pacific being more tectonically active.
Finally, it is intriguing that a single (albeit regional) variable can within reason model the past 100+ years of temperature records, but for the future, ‘time will tell’.
Note: if anyone is keen to know the tectonic data, they are available on the web but take time and effort to collate, so do not expect a ready-made handout.
(The above is an exception; I normally do not do long posts.)

GlynnMhor

How to ‘prove’ causality with graphs:
http://www.smbc-comics.com/?id=3129

Lloyd Martin Hendaye

Normal curves are like mastectomies: Fine with a cosmetic overburden, yet nothing underneath.

heyseuss

Nice essay, Dr. Ball.

Gary Pearse

“Edward Wegman in his analysis of the “hockey stick” issue for the Barton Congressional committee identified a bigger problem in climate science when he wrote:
“We know that there is no evidence that Dr. Mann or any of the authors in paleoclimatology studies have had significant interactions with mainstream statisticians.”
Unfortunately Bradley poleaxed Wegman’s effectiveness by charging him with plagiarism of some lines in his report, and that is effectively about all that remains of Wegman’s contribution. Attacking the man is all that’s needed to neutralize his work these days. If Mann used the Tiljander series (Finnish lake sediment cores) upside down, that doesn’t matter. He attacked his accusers and has since been exonerated, even decorated, and a dozen studies have been done since supporting the Hockey Stick. That’s how it’s done in post-normal times.

stan stendera

Brad and Tom G: You should both write books!!!! Among the all time great comments on WUWT, and that’s saying something.

Gunga Din

Climatology is the study of average weather over time or in a region. It is very different than Climate Science, which is the study by specialists of individual components of the complex system that is weather.

=================================================================
I read that and thought of the medical field, or “The Healing Arts” as I heard it called once. A nutritionist is not an MD, but what MD would say that nutrition has nothing to do with health, and vice versa? “Health foods”? “Herbal remedies”? They may have their place. (Slippery elm can relieve the symptoms of a simple sore throat.) The problem enters when one “field” thinks theirs is the only valid one.
That’s what’s going on here. Some think tree rings (one tree ring?) and CO2 from fossil fuels are the only valid things to consider.

chris y

stan stendera says:
October 2, 2013 at 1:25 pm
“Brad and Tom G: You should both write books!!!! Among the all time great comments on WUWT, and that’s saying something.”
I agree. These are absolutely excellent comments.
I encourage both of you to consider putting together a post for Anthony. Your writing styles are similar to Willis Eschenbach, whose posts I enjoy immensely!

rgb
Always interesting hearing your take on models. Recently I’ve been keeping tabs on Bishop Hill, especially the recent Nic Lewis interactions with Julia Slingo of the Met Office.
My take on all this has been pretty much constant: where has IR forcing been characterised in a realistic environment, how have losses been measured, and how effective is IR heating at very low power densities? IR is used, for example, to make bacteria inactive, but the power densities are in the kW/m2 range.
Anyway, some commenters, notably AlecM (posts on WUWT as AlecMM), suggested looking at Hansen_etal_1981 (if you Google this it will come up). AlecM believes that Hansen has miscalculated the greenhouse effect and that more focus should be on LTE in the atmosphere (sorry for the paraphrase). What struck me was that Hansen attributes 90% of warming to the 15 micron band – “downwelling radiation” – in essence that CO2 is the main driver of climate. There is a reference for this pointing to an earlier paper, which then relates to Manabe and then finally to the first recent-era estimation of atmosphere heating due to greenhouse gases – by Moller.
The paper I have from 1961 shows the logic of the calculation. Basically the idea is that slabs of the atmosphere absorb and radiate depending on composition (some gases absorb and thermalise). But for 15 microns it looks very much like the surface is the main contributor, as this frequency passes through the slabs largely unaffected. The net radiation (the difference between upwelling and downwelling from the atmosphere) is used to adjust the blackbody emission of the Earth. The reduction in radiation will result in an increase in BB emission to balance it.
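To make the logic concrete, here is a minimal sketch of the textbook one-layer version of that calculation, in which a single slab absorbs all upwelling surface emission, re-emits half up and half down, and the surface converts 100% of the downwelling flux to heat (the very assumption questioned below). It is an idealization, not Hansen's, Manabe's or Moller's actual computation.

```python
# Textbook one-slab radiative balance (an idealization, not any specific
# paper's model). The slab absorbs all surface IR and emits half back down;
# the surface is assumed to convert all downwelling flux to heat.
SIGMA  = 5.670e-8     # Stefan-Boltzmann constant, W m^-2 K^-4
S0     = 1361.0       # solar constant, W m^-2
ALBEDO = 0.30         # planetary albedo

absorbed = S0 * (1.0 - ALBEDO) / 4.0     # ~240 W m^-2 averaged over the sphere

# No absorbing layer: the surface radiates straight to space.
T_bare = (absorbed / SIGMA) ** 0.25      # ~255 K

# One fully absorbing slab: surface balance gives sigma*Ts^4 = 2*absorbed,
# i.e. Ts = 2**0.25 * T_bare.
T_slab = (2.0 * absorbed / SIGMA) ** 0.25   # ~303 K

print(f"effective (no-atmosphere) temperature: {T_bare:.0f} K")
print(f"one-slab surface temperature:          {T_slab:.0f} K")
```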
Now in a vacuum this is fine. I have experience in the space industry testing plasma thrusters and thermal properties are a big part of it. The temperature of surfaces can have repercussions on operating temperature of cables and components and temperature can only be determined by radiative processes to space.
The problem is that this modelling assumes that all the downwelling radiation is turned into real heat. That none of it is lost. And that this occurs in an atmosphere, with surface conduction, evaporation, convection not to mention scattering and thermal losses.
And this idea has been perpetuated for 50 years or more.
All the models assume that W/m2 is instantaneously converted 100% into real heat.
It’s a small detail buried in what is a good attempt to model an atmosphere, using tested radiative properties of gases. But the issue is that there is no correction for “effective forcing” due to this downwelling radiation in an atmosphere interface. This would have been picked up if the effect had been studied and characterised.
It was interesting then that in Julia Slingo’s reply she mentioned that scientists are still trying to isolate radiative forcing from CO2 alone. Not eliminating it using known characteristics. They are still trying to look for it. And they use the whole Earth climate system with all the other effects thrown in.
That is a real problem.

chris y

Here is a timeless quote regarding statistics-
“If you need statistics to prove something, and you have no proof except statistical proof, then what you have proved probably isn’t true. Statistical evidence is the lowest form of evidence there is. What a depressing conclusion.”
William Briggs, statistician (to the stars!), October 5, 2011

Rud Istvan

Statistical methods are tools. Nothing more. In the hands of a skilled craftsman they can help build wonders like medical discoveries or pricing models. But most users are not skilled craftsmen, as Drs Ball and Brown point out. To repeat an old saw (carpentry pun intended), if all you have is a hammer, even screws look like nails.
But statistical incompetence is no excuse for mendacity as practiced by IPCC AR5 in SPM figure 1-4. Rewrote previous IPCC pronouncements by arbitrarily revising previous starting data. Included unrealistic scenario B1 to lower the lower bound. Cherry picked individual model runs instead of using CMIP3 ensemble averages. Who knows what else.

YEP

First, SPSS is not a book, it’s a statistical software package. Perhaps the fact that it’s easy to use creates the temptation to use it without proper understanding. But SPSS is certainly not the problem; it’s GIGO, enabled by lack of understanding on many sides, not least graduate supervisors and peer reviewers.
Second, the use of statistics in the social sciences is not uniformly poor. Of course there are plenty of papers written and published based on simplistic analyses and poor methodologies, but any graduate program worth its salt should provide a proper grounding in fundamental statistical theory and practice. In my own field of economics, econometrics can boast very high standards of sophistication and has been at the forefront of the development of statistical theory and practice.
Thirdly (mainly in response to comments), there is nothing wrong epistemologically with using models, so please let’s stop the generalized bashing. The real world is too complex, stochastic and chaotic to be analyzed in its totality, so simplifications are needed to further any understanding. A good analogy is a road map: it’s an imperfect and simplistic model, but it gets you there, and it’s a lot more useful and practical than carrying a full-size replica of the earth in your car. As long as we don’t treat models or their output as truth or fact, not only do they have their uses, but the advancement of knowledge becomes impossible without them. So by all means criticize those who use model outputs as inputs to other investigations, or claim them as proofs for hypotheses, or ignore facts incompatible with the models’ forecasts, or talk of “settled science”. But dismissing whole fields of inquiry by saying “it’s only models” is absurd. A model’s usefulness is in simplifying, highlighting the essential aspects of a set of phenomena, and using logically coherent methods to deduce testable hypotheses from actual data and well-understood causal mechanisms. If the testable hypotheses fail, then you go back and reexamine assumptions and causal links. Failure is as important as success (or more so) in illuminating the assumptions and causal mechanisms used and promoting understanding.
Feel free to apply any of the above to the IPCC and its “settled science”. But the generalized assault on the social sciences and the use of models misunderstands the most basic aspects of the philosophy of science, and is getting a little tiresome.

Jordan

“The 30-year Normal was created with 30 chosen because it is a statistically significant sample, n, in any population N. ”
The above statement is only true on the condition of independent samples. This means sample errors follow a theoretical pattern of cancellation when aggregated. Put another way, we need to be confident that sample errors are zero mean random variables with a predictable variance.
If this condition is met, we can accurately calculate confidence limits for the sample. If these conditions are not met, confidence limits must be greater (greater than the estimates we would produce using methods which assume zero mean random error).
Proper, well-designed, sampling schemes are necessary to be able to meet the above condition, and to make claims about the result of statistical analysis.
Proper, well designed sampling schemes require prior understanding of the characteristics of the population. If you don’t have this, you are off to a bad start, and statistical methods become more difficult to apply (although this requires equal justification and understanding of the population).
Even then, there can be issues with stationarity (the assumption that a sample taken at one time is a representation of the system behaviour at another).
Climate analysis and climate modelling has no reliable measure of the “climate population”, or understanding of the fundamental properties (meaning statistical behaviour). It therefore lacks a dependable basis for design of an adequate sampling system. Without this, it cannot make any reliable claims based on statistics.
If we say N=statistically significant, we fall into a trap. We concede too much ground to the amazing claims of the doomsayers, when we really need to be sticking to first principles. Sticking to first principles allows us to say: “you haven’t a clue”.
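A minimal sketch of the independence point, assuming an AR(1) error structure for the 30 values (the series below is synthetic): serial correlation shrinks the effective sample size to n_eff = n(1 - rho)/(1 + rho), so the naive standard error computed as if the values were independent is too small.

```python
# Effect of serial correlation on the standard error of a 30-value mean.
# The data are synthetic AR(1) values; the correction assumes AR(1) errors.
import numpy as np

rng = np.random.default_rng(2)
n, rho_true = 30, 0.6
x = np.zeros(n)
for i in range(1, n):
    x[i] = rho_true * x[i - 1] + rng.normal(0.0, 1.0)

rho_hat  = np.corrcoef(x[:-1], x[1:])[0, 1]        # lag-1 autocorrelation
se_naive = x.std(ddof=1) / np.sqrt(n)              # assumes independence
n_eff    = n * (1.0 - rho_hat) / (1.0 + rho_hat)   # effective sample size
se_adj   = x.std(ddof=1) / np.sqrt(n_eff)          # wider, more honest

print(f"lag-1 autocorrelation: {rho_hat:.2f}")
print(f"naive SE:    {se_naive:.3f}")
print(f"adjusted SE: {se_adj:.3f}  (n_eff = {n_eff:.1f})")
```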

Marcos says:
October 2, 2013 at 11:11 am
the NOAA web site used to have a page discussing the use of ‘normals’ that specifically stated that they were never intended to be used to gauge climate change. for the life of me, i cant find that page anymore…
====================================
Wayback machine at archive.org might help? I wouldn’t know where to start to look for the page you are after.

“The 30-year Normal was created with 30 chosen because it is a statistically significant sample, n, in any population N.”
Wrong. There is no such thing as a statistically significant sample.
The number of samples required depends upon the variance of the thing you are measuring and your desired accuracy.
Here is something simple that even Ball might get:
http://statswithcats.wordpress.com/2010/07/11/30-samples-standard-suggestion-or-superstition/
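A minimal sketch of that dependence: for a given confidence level, the required sample size grows with the variance of what you measure and shrinks with the error you are willing to tolerate; the sigma, margin and p values below are illustrative only.

```python
# Required sample size depends on variance and desired accuracy; there is
# no universal "n = 30". Illustrative inputs only.
import math

Z95 = 1.96  # z-score for 95% confidence

def n_for_mean(sigma, margin):
    """n = (z * sigma / E)^2 to estimate a mean to within +/- E."""
    return math.ceil((Z95 * sigma / margin) ** 2)

def n_for_proportion(p, margin):
    """n = z^2 * p * (1 - p) / E^2 for a poll proportion."""
    return math.ceil(Z95 ** 2 * p * (1.0 - p) / margin ** 2)

print(n_for_mean(sigma=0.5, margin=0.1))      # 97
print(n_for_mean(sigma=2.0, margin=0.1))      # 1537 -- same target, more variance
print(n_for_proportion(p=0.5, margin=0.03))   # ~1068, the familiar poll size
```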

Mac the Knife

With regards to the UN-IPCC AR5 report:
“The whole aim of practical politics is to keep the populace alarmed (and hence clamorous to be led to safety) by menacing it with an endless series of hobgoblins, all of them imaginary. ” H.L. Mencken

stan stendera says:
“Brad and Tom G: You should both write books!!!!”
I second that request. Both subjects are familiar to me. Building energy “efficiency” assessments where the perp walks away with the folding stuff while the costs go up. Groundwater transport requirements to protect an ephemeral watercourse 50m away, and the permeability rate is less than 2mm per day. I have done a fair amount of modelling. Models have now become a pandemic worse than swine flu.

BW2013

I am halfway done…. Going to cover a lot more than modeling errors.

Jquip

Robert Brown @ 11:59 — “Applying the concept of the statistical variance/standard deviation to the results from an ensemble of models as if it has any meaning whatsoever.”
You did most of the heavy lifting for a point I wished to raise, so I’ll pick on you just a bit here. Though with nothing personal intended, consider it a guitar jam. Is there meaning to the ensemble var/sd? Well, yes, absolutely. This is an Economics Toolkit issue. Where *an* investor is always an idiot, even if they are an expert idiot. And the var/sd speaks directly to the amount of conformity in the individual opinions of the idiots in the ensemble. The mean of the distribution directly speaks only to the opinion of a constructed Statistical Idiot.
To speak to any more than that requires demonstration that the subset ensemble is reflective of a larger population. In Economics the Market is *the* ensemble of idiots. And a panel of selected expert idiots, if selected without bias, can be extended from a subset ensemble to a population ensemble. Because the collected opinions of *all* idiots is directly causal to the Market outcomes. And I don’t think it terribly likely that you’ll find Climatologists or Catastro-fans signing onto the explicit statement that the Climate ensembles causally manifest outcomes in reality. Though I’m quite sure you’ll find a random fellow here and there that will sign onto it.
But the models are not reflective of a larger population. The Statistical Idiot they construct is not an abstract notion of the distribution of the opinions in the subset ensemble being representative of a population ensemble. It is a single, and solitary object. A fictional, but constructed, Committee Chairman that speaks as one voice for the ensemble of idiots used to construct it. Which is fine for what it’s worth. But how accurate are the Delphic prognostications of the Statistical Idiot? And what metric do we use to justify investing on the basis of those Delphic prognostications?
If we’re interested in sciency things such as laws and theories, then a Statistical Idiot applied to a single reality isn’t going to get us there. It requires the explicit statement that ‘Science,’ as a body of knowledge, is not yet competent enough to produce a law or theory on the given subject. And it requires the explicit statement that we reject that *any specific* idiot is actually correct. Either known to be correct, or *possibly but unknowably* correct. If we held any candle of hope that there was such an idiot in the ensemble, then we would not use the ensemble. And it requires the final explicit statement that, while we are self-certain that every idiot in the ensemble is wrong in material ways, that blunting individual wrongness by constructing a Statistical Idiot is useful.
For those not familiar with gambling, this is how the Bookie at the horse races determines what odds to give on each race. The gamblers are the ensemble of idiots that are not reflective of the population of dogs or ponies in the given race. Nor are they a subset of the population of race contestants. Nor can their constructed Statistical Idiot be causal to the outcome of the race; unless there are criminal elements involved. The Bookie merely uses the Statistical Idiot vying for each dog or pony as a manner to ensure he stays in business. To adjust his odds of profit and loss for himself.
But if there is only one Statistical Idiot, then a Bookie is out of business. His payout must go to zero. Unless he has better knowledge than the Statistical Idiot about how often that dog or pony wins. That is, absent empirical knowledge — though still statistically aggregated — he cannot permit any payout greater than zero. If he wants to stay in business. And here, in respect to climatology, the ensembles are the gamblers, reality is the pony, and the Bookie is each consumer of the pair. Which, in consideration of the recent IPCC report and its purposes, means the Bookie is any Government or Actuary.
So is there worth in the Statistical Idiot constructed by the model ensembles? Absolutely. It is a direct and explicit reflection of the consensus in the Consensus Science of Climastrology.

DaveS

I have an alternative vision of what a drunken man might do with a lamppost. And perhaps it’s an equally appropriate metaphor for how some climate change alarmists treat statistics.

Fernando (in Brazil)

Me too.
I am now.
I am skeptical sarcastic.
=================
Sincerely, I apologize to the Rev.
The indignation with such stupidity will lead me to be even more stupid.
And do not want to cause any embarrassment to my longtime friends.
There are no words. To describe. the incredible. 5 IPCC.
========================
Sorry.
The crazy liars ever crossed the line.

It is clear after the AR5 SPM that the IPCC forecasts based on climate models are completely useless as a basis for policy. This was a last chance for the IPCC contributors and editors to acknowledge frankly that their models have no skill in forecasting and begin to change their CAGW paradigm. The IPCC scientists are so far out on a limb that for psychological and professional and funding reasons they cannot scramble back. Further discussion of the IPCC science is a waste of time – a new forecasting paradigm is required. Such a different approach is outlined in a series of posts at http://climatesense.norpag.blogspotr.com
Here are some quotes from the latest post.
“b) A Simple Rational Approach to Climate Forecasting based on Common Sense and Quasi Repetitive- Quasi Cyclic Patterns.
How then can we predict the future of a constantly changing climate?
When, about ten years ago, I began to look into the CAGW – CO2 based scare, some simple observations immediately presented themselves. These seem to have escaped the notice of the Climate Establishment. (See the Post 5/14/13 Climate Forecasting for Britain’s Seven Alarmist Scientists and for UK Politicians.)
a) Night is colder than day.
b) Winter is colder than summer.
c) It is cooler in the shade and under clouds than in the sun
d) Temperatures vary more widely in deserts and hot humid days are more uncomfortable than dry hot days – humidity (enthalpy) might be an important factor. We use Sun Screen against UV rays – can this be a clue?
e) Being a Geologist I knew that the various Milankovitch cycles were seen repeatedly in the Geologic record and were the main climate drivers controlling the Quaternary Ice Ages.
f) I also considered whether the current climate was unusually hot or cold. Some modest knowledge of history brought to mind frost fairs on the Thames and the Little Ice Age and the Maunder Minimum without sunspots during the 17th century . The 300 years of Viking settlements in Greenland during the Medieval Warm Period and viniculture in Britain suggested a warmer world in earlier times than at present while the colder Dark Ages separate the MWP from the Roman Climate optimum.
g) I noted that CO2 was about 0.0375% of the Atmosphere and thought ,correctly as it turns out, that it was highly unlikely that such a little tail should wag such a big dog.
I concluded, as might any person of reasonable common sense and average intelligence given these simple observations, that solar activity and our orbital relations to the sun were the main climate drivers. More specific temperature drivers were the number of hours of sunshine, the amount of cloud cover, the humidity and the height of the sun in the sky at midday and at Midsummer. It seemed that the present day was likely not much or very little outside the range of climate variability for the last 2000 years and that no government action or policy was required or would be useful with regard to postulated anthropogenic CO2 driven climate change.
These conclusions based on about 15 minutes of anyone’s considered thought are, at once , much nearer the truth and certainly would be much more useful as a Guide to Policymakers than the output of the millions of man hours of time and effort that have been spent on IPCC – Met Office models and the Global Warming impact studies and the emission control policies based on them.”
For a forecast of the coming cooling – very likely until 2035 and possible until 2650 check on the link above.

Gail Combs

YEP says:
October 2, 2013 at 1:51 pm
First, SPSS is not a book, it’s a statistical software package. Perhaps the fact that it’s easy to use creates the temptation to use it without proper understanding…..
Feel free to apply any of the above to the IPCC and its “settled science”. But the generalized assault on the social sciences and the use of models misunderstands the most basic aspects of the philosophy of science, and is getting a little tiresome.
>>>>>>>>>>>>>>>>>>>>>
In my field, Quality Engineering/Chemistry, statistical software packages are routinely ‘used without proper understanding.’ It was one of my major gripes with the Six Sigma program as ‘taught’ in the companies I worked for. Heck, my state had to pass a law making it mandatory that kids be taught the multiplication tables!
As for “the generalized assault on the social sciences and the use of models misunderstands the most basic aspects of the philosophy of science” that is what happens when you intentionally use bad ‘science’ and suborned scientific societies to shove bad political programs down peoples throats.
If and when the general public finds out they have been intentionally misled by ‘scientists’, the backlash against science could get rather nasty, and that is the real crime committed by ‘Climate Scientists’ (aside from all the human deaths they have caused, that is).

Gail Combs

Martin Clark says: @ October 2, 2013 at 3:02 pm
stan stendera says:
“Brad and Tom G: You should both write books!!!!”
I second that request…. I have done a fair amount of modelling. Models have now become a pandemic worse than swine flu.
>>>>>>>>>>>>>>>
I would rather see them write short essays like they just did, so they can be plastered across the internet and in letters to the editor. (The seven-second rule.)
As far as models and modelers go… well, now we know what all those would-be horse traders do for a living in the modern age.
Yes models are useful but only when used for a better ‘Dig Here’ guess. Without validation with real world data they are of no more use than fairy tales… Well actually of less use.

No wonder Mann wants to sue everyone. His integrity has been defined.

observa

We don’t care about no steenking statistical underpinnings, just the HEADLINES-
http://www.adelaidenow.com.au/news/breaking-news/september-hottest-on-record/story-fni6ul2m-1226732022605
bearing in mind whitefellas only rolled up seriously in Gondwanaland in 1788, Adelaide itself was only founded in 1836-
http://en.wikipedia.org/wiki/Adelaide
and a reasonable network of Stevenson Screens was only rolled out throughout Australia around 1910, but never let the paucity of an historical data record get in the way of a ripping good headline here scary folk.
The priceless thing about the self appointed climatology club is they’ll happily wave away any early settler temp records and heat wave horror stories and then in the next breath tell you how important it is to recognise the thousands of years of aboriginal settlement and their Dreamtime stories.

kuhnkat

iid sample
In probability theory and statistics, a sequence or other collection of random variables is independent and identically distributed (i.i.d.) if each random variable has the same probability distribution as the others and all are mutually independent.[1]
http://en.wikipedia.org/wiki/Independent_and_identically-distributed_random_variables

RACookPE1978

What is the “real data” that these much-vaunted global Models supposedly generate? I have NEVER seen a plot or graphic of their output winds, temperatures, ice areas, oceans and land areas EVER.
Nor have we been told what the original (0,0) conditions are for their fixed variables: % land, % ice, locations of the land and sea masses, original currents, original temperature and solar radiation simulations, original land and water albedo simulations, etc. We are only told of the one-line output.
For example, summer and winter land albedos are going to change (going to get darker! and cause the land to absorb more energy over longer periods of the year) as the increased CO2 DOES increase growth of every plant, tree, shrub, grass, and plankton on the planet by 12 to 27% from 1970 through today’s more lush growth.
Is this 1% – 2% or 4% decrease in land albedo included as a function of time?
We are told that “land use” changes (cutting trees) is some 15% of the man-caused increase in CO2 – but where is that estimate justified by data worldwide?

observa

Your average HS student with internet skills could put these catastrophist chicken littles into perspective starting here-
http://en.wikipedia.org/wiki/List_of_disasters_in_Australia_by_death_toll#cite_note-39
Now bear in mind that’s for a whole continent full of just over 23 mill people nowadays-
http://www.abs.gov.au/ausstats/abs%40.nsf/94713ad445ff1425ca25682000192af2/1647509ef7e25faaca2568a900154b63?OpenDocument
But looking down that Wiki tragedy list including those various killer heat waves/bushfires and natural disasters you need to bear in mind in 1910 our popn was only 4.5mill and climbed to 7 mill at the end of WW2-
http://www.populstat.info/Oceania/australc.htm
Now to really put all that climate related death and the chicken littles into perspective here kiddies-
http://www.bitre.gov.au/publications/ongoing/rda/files/RDA_Aug_2013.pdf
note that during the 12 months ended August 2013 there were 1,265 road deaths nationally so be VERY VERY afraid when mum and dad want you to hop in the car to get to school!

r murphy

I am always enlightened after reading one of Dr. Ball’s essays. His down home, common sense commentary on the climate field is plain refreshing, thanks.

policycritic

Thanks, Dr. Ball.

kuhnkat

Moshpup,
do you ever get tired of making incorrect assertions??
“Wrong. there is no such thing as a statistically significant sample.”
Even eHow has got you:
http://www.ehow.com/how_6967270_calculate-statistically-significant-sample-size.html
HAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHA
Maybe you could just hire this guy for help:
http://www.statisticallysignificantconsulting.com/SampleSize.htm
or
http://www.amnavigator.com/blog/2010/10/27/how-to-calculate-statistically-significant-sample-size/

wayne

Read all about it!
THE NEW AND IMPROVED IPCC AR5 REPORT
Propaganda with raw fear at its very finest.
Don’t wait, get your copy while they’re bright red and hot!
Use to scare the daylights out of your fellow comrades with ease!
Doubts? Just ask the environmentalist on your block for instructions.
IMPORTANT: Never let mere data or observations lead you astray,
all must first pass through an official UN.IPCC statistical filter.
Enjoy!
/sarc off

wayne

Now why did I put /sarc off at the end of that? That’s not right.

Kuhnkat,
There is no general statistically significant sample size. It is subject to and conditional upon the variance of the data and the question you are asking.
“30” is a rule of thumb, but you should know from polling, for example, that 30 people would not work to predict a presidential election, now would it?
To repeat: 30 is not a magic number. Sometimes you can have fewer samples and sometimes you need more. There is no such thing as a canonical sample size for statistical significance. It depends. It’s conditional. Not carved in stone, not dropped from heaven. It depends.
Ask yourself what the sample sizes are for particle physics… you won’t get 99.99% from 30 samples.
Ask yourself about 6 sigma practices.
The bottomline is that 30 years was not selected because of the reason Ball asserts. In fact, you can find a discussion about this in the climategate mails.

Mike Bromley the Kurd

As they say in many parts of the Middle East, and this part of Canuckistan, “Shoe-kran, Dr. Tim!”

Steve Obeda

I have three things to say, and I will say them. Now.
A drunk wanders down the middle of the road. On average, he is fine.
Even as an undergrad, it became apparent to me that the social sciences were for the statistical lightweights.
Climatologists seem to want everyone else to stfu because they’re not members of the high priesthood of climatology. Well, to a great extent this is just applied statistics. No great mystery.

RACookPE1978

So, educate me here please:
The “average of the average of all of the world’s published (not measured or actual but “published”) average temperature anomalies” over 30 years is NOT an independently sampled statistical “number” by any means, right? So, though each year’s “average temperature anomaly” could be assigned an error band, why would somebody want to periodically “rest” the client for yet another 30 years? Are they not continuously trying to flat-line an ever-rising periodic wave?
By definition, the global “average temperature anomaly” IS going to be changing over time, so understandably, we have to have some reference point. Should not that single reference point be assigned, then fixed?
Given that there is a very visible 55-year ACI variation over time (or 60, or 65, and now some writers are claiming 88 and 100 year periods!), all on top of an undeniable 800-year long-term rise and fall from the Roman Warming to the Dark Ages to the Medieval Warm Period to the Little Ice Age to today’s Modern Warming Pause…. Should not the climate community at least recognize that their temperature trends are NOT simple linear models, but have to include a periodic cycle that MAY LIKELY BE influenced by a linear CO2 increase on top of the original trend?
Yet they seem zealously and emotionally fixated on projecting a simple linear trend out for 100 years, as this 30 year ploy of accepting their own models shows.

I like Cork Hayden’s comment: The average American has one breast and one ball.
From my perspective I see “climate scientists” following the path of “social scientists”, learning ever more esoteric statistics and arguing endlessly over data sets.
But never having learned the most basic physics and its classical analytical approach. It appears to me that there are career “climate scientists” on both sides of the “debate” who have never even learned how to calculate the temperature of a radiantly heated colored ball.
Until that becomes an absolute requirement for any undergraduate presuming to enter the field, along with the return to the teaching of the analytical approach of comparable branches of applied physics, this decade upon decade of near total stagnation will continue.
Another of Cork Hayden’s aphorisms: If it were science, there would be 1 model instead of 30 (now 73).

Brian H

Dr. Ball,
Your title reminded me of an analogous article, in the Atlantic: Lies, Damned Lies and Medical Science. When big bucks are available, human ingenuity devotes itself to acquisition.