New peer-reviewed paper shows just how bad the climate models really are

One of the biggest issues in climate science skepticism, if not the biggest, is criticism of over-reliance on computer model projections to suggest future outcomes. In this paper, climate models were hindcast-tested against actual surface observations and found to be seriously lacking. Just have a look at Figure 12 (mean temperature vs. models for the USA) from the paper, shown below:

Fig. 12. Various temperature time series spatially integrated over the USA (mean annual), at annual and 30-year scales.

The graph above shows temperature in the blue lines and model runs in other colors. Not only do the curve shapes fail to match; the temperature offsets are significant as well. The study also looked at precipitation, which fared even worse in correlation. The bottom line: if the models do a poor job of hindcasting, why would they do any better in forecasting? This passage from the conclusion sums it up pretty well:

…we think that the most important question is not whether GCMs can produce credible estimates of future climate, but whether climate is at all predictable in deterministic terms.

Selected sections of the paper, published in the Hydrological Sciences Journal (available online as HTML and as a ~1.3 MB PDF), are given below:

A comparison of local and aggregated climate model outputs with observed data

Anagnostopoulos, G. G., Koutsoyiannis, D., Christofides, A., Efstratiadis, A. and Mamassis, N. ‘A comparison of local and aggregated climate model outputs with observed data’, Hydrological Sciences Journal, 55(7), 1094-1110

Abstract

We compare the output of various climate models to temperature and precipitation observations at 55 points around the globe. We also spatially aggregate model output and observations over the contiguous USA using data from 70 stations, and we perform comparison at several temporal scales, including a climatic (30-year) scale. Besides confirming the findings of a previous assessment study that model projections at point scale are poor, results show that the spatially integrated projections are also poor.

Citation Anagnostopoulos, G. G., Koutsoyiannis, D., Christofides, A., Efstratiadis, A. & Mamassis, N. (2010) A comparison of local and aggregated climate model outputs with observed data. Hydrol. Sci. J. 55(7), 1094-1110.

INTRODUCTION

According to the Intergovernmental Panel on Climate Change (IPCC), global circulation models (GCM) are able to “reproduce features of the past climates and climate changes” (Randall et al., 2007, p. 601). Here we test whether this is indeed the case. We examine how well several model outputs fit measured temperature and rainfall in many stations around the globe. We also integrate measurements and model outputs over a large part of a continent, the contiguous USA (the USA excluding islands and Alaska), and examine the extent to which models can reproduce the past climate there. We will be referring to this as “comparison at a large scale”.

This paper is a continuation and expansion of Koutsoyiannis et al. (2008). The differences are that (a) Koutsoyiannis et al. (2008) had tested only eight points, whereas here we test 55 points for each variable; (b) we examine more variables in addition to mean temperature and precipitation; and (c) we compare at a large scale in addition to point scale. The comparison methodology is presented in the next section.

While the study of Koutsoyiannis et al. (2008) was not challenged by any formal discussion papers, or any other peer-reviewed papers, criticism appeared in science blogs (e.g. Schmidt, 2008). Similar criticism has been received by two reviewers of the first draft of this paper, hereinafter referred to as critics. In both cases, it was only our methodology that was challenged and not our results. Therefore, after presenting the methodology below, we include a section “Justification of the methodology”, in which we discuss all the critical comments, and explain why we disagree and why we think that our methodology is appropriate. Following that, we present the results and offer some concluding remarks.

Here are the models they tested:

Comparison at a large scale

We collected long time series of temperature and precipitation for 70 stations in the USA (five were also used in the comparison at the point basis). Again the data were downloaded from the web site of the Royal Netherlands Meteorological Institute (http://climexp.knmi.nl). The stations were selected so that they are geographically distributed throughout the contiguous USA. We selected this region because of the good coverage of data series satisfying the criteria discussed above. The stations selected are shown in Fig. 2 and are listed by Anagnostopoulos (2009, pp. 12-13).

Fig. 2. Stations selected for areal integration and their contribution areas (Thiessen polygons).

In order to produce an areal time series we used the method of Thiessen polygons (also known as Voronoi cells), which assigns weights to each point measurement that are proportional to the area of influence; the weights are the “Thiessen coefficients”. The Thiessen polygons for the selected stations of the USA are shown in Fig. 2.

The annual average temperature of the contiguous USA was initially computed as the weighted average of the mean annual temperature at each station, using the station’s Thiessen coefficient as weight. The weighted average elevation of the stations (computed by multiplying the elevation of each station with the Thiessen coefficient) is Hm = 668.7 m and the average elevation of the contiguous USA (computed as the weighted average of the elevation of each state, using the area of each state as weight) is H = 746.8 m. By plotting the average temperature of each station against elevation and fitting a straight line, we determined a temperature gradient θ = -0.0038°C/m, which implies a correction of the annual average areal temperature θ(H – Hm) = -0.3°C.

The annual average precipitation of the contiguous USA was calculated simply as the weighted sum of the total annual precipitation at each station, using the station’s Thiessen coefficient as weight, without any other correction, since no significant correlation could be determined between elevation and precipitation for the specific time series examined.
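For readers who want to see the arithmetic, here is a minimal sketch (in Python) of the two steps just described: the Thiessen-weighted areal mean and the lapse-rate (elevation) correction. The station weights, elevations and values below are hypothetical stand-ins, not the paper's data; only the calculation mirrors the description above.

```python
import numpy as np

# Hypothetical stations: Thiessen coefficient (area weight), elevation (m),
# mean annual temperature (deg C) and annual precipitation (mm).
# Real values would come from the Voronoi areas of Fig. 2 and the station records.
w      = np.array([0.30, 0.25, 0.20, 0.15, 0.10])   # must sum to 1
elev   = np.array([250.0, 900.0, 1400.0, 600.0, 100.0])
temp   = np.array([14.2, 10.8, 8.9, 12.5, 16.1])
precip = np.array([1100.0, 600.0, 450.0, 900.0, 1300.0])

# Thiessen-weighted areal means
t_areal = np.sum(w * temp)
p_areal = np.sum(w * precip)          # precipitation gets no further correction

# Elevation correction: fit a temperature lapse rate theta (deg C per metre) across
# the stations, then shift the areal mean from the stations' weighted elevation Hm
# to the area-weighted elevation H of the contiguous USA (746.8 m in the paper).
theta = np.polyfit(elev, temp, 1)[0]  # the paper finds theta = -0.0038 deg C/m
Hm = np.sum(w * elev)                 # 668.7 m in the paper
H  = 746.8
t_corrected = t_areal + theta * (H - Hm)   # about -0.3 deg C of correction in the paper

print(f"areal T = {t_areal:.2f} C, corrected T = {t_corrected:.2f} C, areal P = {p_areal:.0f} mm")
```

With the paper's own numbers the correction works out to -0.0038 × (746.8 - 668.7) ≈ -0.3°C, matching the value quoted above.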

We verified the resulting areal time series using data from other organizations. Two organizations provide areal data for the USA: the National Oceanic and Atmospheric Administration (NOAA) and the National Aeronautics and Space Administration (NASA). Both organizations have modified the original data by making several adjustments and using homogenization methods. The time series of the two organizations have noticeable differences, probably because they used different processing methods. The reason for calculating our own areal time series is that we wanted to avoid any comparisons with modified data. As shown in Fig. 3, the temperature time series we calculated with the method described above are almost identical to the time series of NOAA, whereas in precipitation there is an almost constant difference of 40 mm per year.

Fig. 3. Comparison between areal (over the USA) time series of NOAA (downloaded from http://www.ncdc.noaa.gov/oa/climate/research/cag3/cag3.html) and areal time series derived through the Thiessen method; for (a) mean annual temperature (adjusted for elevation), and (b) annual precipitation.

Determining the areal time series from the climate model outputs is straightforward: we simply computed a weighted average of the time series of the grid points situated within the geographical boundaries of the contiguous USA. The influence area of each grid point is a rectangle whose “vertical” (perpendicular to the equator) side is (ϕ2 – ϕ1)/2 and its “horizontal” side is proportional to cosϕ, where ϕ is the latitude of each grid point, and ϕ2 and ϕ1 are the latitudes of the adjacent “horizontal” grid lines. The weights used were thus cosϕ(ϕ2 – ϕ1); where grid latitudes are evenly spaced, the weights are simply cosϕ.
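The grid-side averaging can be sketched the same way. The grid spacing, field values and the bounding box standing in for the true US boundary below are all made up for illustration; only the cosϕ(ϕ2 - ϕ1) weighting follows the description above.

```python
import numpy as np

# Hypothetical, evenly spaced 2-degree model grid over North America with one
# field value per grid point (e.g. a single year's mean temperature).
lats = np.arange(25.0, 51.0, 2.0)
lons = np.arange(-125.0, -65.0, 2.0)
lat, lon = np.meshgrid(lats, lons, indexing="ij")
field = 15.0 - 0.4 * (lat - 25.0)     # made-up values, just for the sketch

# Crude stand-in for "grid points within the contiguous USA": a lat/lon box.
inside = (lat >= 25.0) & (lat <= 49.0) & (lon >= -125.0) & (lon <= -67.0)

# Each grid point is weighted by cos(phi) * (phi2 - phi1). With evenly spaced
# latitudes the (phi2 - phi1) factor is the same everywhere and cancels in the
# normalisation, leaving weights proportional to cos(phi).
weights = np.cos(np.radians(lat)) * inside
areal_mean = np.sum(weights * field) / np.sum(weights)
print(f"area-weighted mean over the box: {areal_mean:.2f}")
```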

CONCLUSIONS AND DISCUSSION

It is claimed that GCMs provide credible quantitative estimates of future climate change, particularly at continental scales and above. Examining the local performance of the models at 55 points, we found that local projections do not correlate well with observed measurements. Furthermore, we found that the correlation at a large spatial scale, i.e. the contiguous USA, is worse than at the local scale.

However, we think that the most important question is not whether GCMs can produce credible estimates of future climate, but whether climate is at all predictable in deterministic terms. Several publications, a typical example being Rial et al. (2004), point out the difficulties that the climate system complexity introduces when we attempt to make predictions. “Complexity” in this context usually refers to the fact that there are many parts comprising the system and many interactions among these parts. This observation is correct, but we take it a step further. We think that it is not merely a matter of high dimensionality, and that it can be misleading to assume that the uncertainty can be reduced if we analyse its “sources” as nonlinearities, feedbacks, thresholds, etc., and attempt to establish causality relationships. Koutsoyiannis (2010) created a toy model with simple, fully-known, deterministic dynamics, and with only two degrees of freedom (i.e. internal state variables or dimensions); but it exhibits extremely uncertain behaviour at all scales, including trends, fluctuations, and other features similar to those displayed by the climate. It does so with a constant external forcing, which means that there is no causality relationship between its state and the forcing. The fact that climate has many orders of magnitude more degrees of freedom certainly perplexes the situation further, but in the end it may be irrelevant; for, in the end, we do not have a predictable system hidden behind many layers of uncertainty which could be removed to some extent, but, rather, we have a system that is uncertain at its heart.
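The two-degree-of-freedom toy model referred to here is the specific one in Koutsoyiannis (2010). The sketch below is not that model but a generic stand-in, the Hénon map, a fully deterministic system with two state variables and constant parameters. It makes the same qualitative point (though without the long-range persistence of the paper's model): with nothing random in the dynamics and no change in forcing, the trajectory keeps wandering, its 30-step "climatic" averages still fluctuate, and a billionth-sized change in the initial state destroys pointwise predictability.

```python
import numpy as np

# Henon map: a deterministic two-variable system with constant parameters, used here
# only as an illustrative stand-in for the two-degree-of-freedom toy model discussed
# above, not as a reproduction of Koutsoyiannis (2010).
a, b = 1.4, 0.3
n = 30_000
x = np.empty(n)
y = np.empty(n)
x[0], y[0] = 0.1, 0.1
for t in range(n - 1):
    x[t + 1] = 1.0 - a * x[t] ** 2 + y[t]
    y[t + 1] = b * x[t]

# "Climatic" view: average the series over non-overlapping 30-step windows.
# The windowed means still fluctuate; averaging does not recover a fixed, predictable value.
climatic = x[: (n // 30) * 30].reshape(-1, 30).mean(axis=1)
print("std of the raw series:   ", round(float(x.std()), 3))
print("std of the 30-step means:", round(float(climatic.std()), 3))

# Sensitivity to initial conditions: a perturbation of 1e-9 grows to order one quickly.
x2, y2 = x[0] + 1e-9, y[0]
for t in range(60):
    x2, y2 = 1.0 - a * x2 ** 2 + y2, b * x2
print("difference after 60 steps:", abs(x2 - x[60]))
```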

Do we have something better than GCMs when it comes to establishing policies for the future? Our answer is yes: we have stochastic approaches, and what is needed is a paradigm shift. We need to recognize the fact that the uncertainty is intrinsic, and shift our attention from reducing the uncertainty towards quantifying the uncertainty (see also Koutsoyiannis et al., 2009a). Obviously, in such a paradigm shift, stochastic descriptions of hydroclimatic processes should incorporate what is known about the driving physical mechanisms of the processes. Despite a common misconception of stochastics as black-box approaches whose blind use of data disregard the system dynamics, several celebrated examples, including statistical thermophysics and the modelling of turbulence, emphasize the opposite, i.e. the fact that stochastics is an indispensable, advanced and powerful part of physics. Other simpler examples (e.g. Koutsoyiannis, 2010) indicate how known deterministic dynamics can be fully incorporated in a stochastic framework and reconciled with the unavoidable emergence of uncertainty in predictions.

h/t to WUWT reader Don from Paradise

139 Comments
December 7, 2010 4:20 am

It would be interesting to start the models at some reasonably instrumented year – say 1950 or 1960 – do a run, and then see what using actual starting conditions from 1951 or 1961 produces; i.e., a kind of shot in the dark to give some estimate of how chaotic the models are. Do things actually average out if you don’t do an average? Do different initial conditions (on the actual path) give different results for 2009?

anna v
December 7, 2010 9:55 am

Antonis Christofides says:
December 7, 2010 at 2:59 am
Anna V:
” If I were ten years younger I would have tried to weasel my way into working with their group. ”
We are flattered. Does ten years make such a difference?

Ta panta rei, when those ten years are in retirement. By the way, I also am Greek, a retired particle physicist from Demokritos, and I have been following your group’s publications with great interest.
Anna V: “The problem set by prof. Koutsoyiannis, that climate may be deterministic in part and stochastic in another part, is real.”
Professor Koutsoyiannis has not been making this point. On the contrary, he has been calling the distinction between a deterministic and a random part a “false dichotomy” and a “naïve and inconsistent view of randomness”. He has been insisting that randomness and determinism “coexist in the same process, but are not separable or additive components” (A random walk on water, p. 586).

I stand corrected. My view is that, given the power to program the nonlinear differential equations directly into an analogue computer, errors due to badly known input parameters would be stochastic. Of course, in the output values the stochastic and deterministic-chaos deviations could not be untangled, so maybe my view is not so different after all.

Paul Vaughan
December 7, 2010 11:09 am

I agree on the issue of tangling (i.e. that mainstream climate science is silly in believing in rashly considered decompositions).
Climate scientists should be aiming to produce models that can reproduce earth orientation parameters. If they cannot accomplish this, it should help them realize what they are ignoring.
I don’t even get the sense that most climate scientists recognize how SOI “anomalies” alias onto nonstationary spatiotemporal modes (or even ones as simple as the temporal semi-annual). There are asymmetries that fall into at least 3 categories: north-south, continental-maritime, & polar-equatorial. Why do so few even bother to condition their spectral coherence studies on these simple spatial variables? And why do so many treat SOI as univariate when it is bivariate? Even with techniques like EEOF (which goes a step beyond EOF), nonrandom coupling switching is missed (probably dismissed as “chaos” by those falling for Simpson’s Paradox). Recently I was delighted to find this paper:
Schwing, F.B.; Jiang, J.; & Mendelssohn, R. (2003). Coherency of multi-scale abrupt changes between the NAO, NPI, and PDO. Geophysical Research Letters 30(7), 1406. doi:10.1029/2002GL016535.
http://www.spaceweather.ac.cn/publication/jgrs/2003/Geophysical_Research_Letters/2002GL016535.pdf
These guys are not so blind. (Note their speculation on a change in the number of spatial modes.)
Here’s another paper:
White, W.B.; & Liu, Z. (2008). Non-linear alignment of El Nino to the 11-yr solar cycle. Geophysical Research Letters 35, L19607. doi:10.1029/2008GL034831.
https://www.cfa.harvard.edu/~wsoon/RoddamNarasimha-SolarENSOISM-09-d/WhiteLiu08-SolarHarmonics+ENSO.pdf
I suspect many will misunderstand the authors’ simplest point if they do not condition their analyses on coupling switching.
It is only a matter of time (for more carefully conditioned data exploration) before sensible spatiotemporal coupling matrices are designed.
With the pace of recent developments in solar-terrestrial relations, Charles Perry will likely soon realize he doesn’t need 32-year lags (see his 2007 paper) if he figures out the simple aliasing (SOI & NPI semi-annually).

Steve Koch
December 7, 2010 11:17 am

Anna V,
“I think those who keep advocating the use of the sea surface temperatures as the world’s thermometer are right.”
Many (such as Pielke, Sr.) are advocating using ocean heat content rather than sea surface temperatures to determine whether the earth is heating up or cooling down. Some advantages of using OHC are that (by far) most of the climate energy is stored in the ocean, OHC is an integrated rather than an instantaneous value, it takes thermal mass into account, the OHC measuring network is managed better, and it is less vulnerable to political corruption than the surface temperature sensor network.

RACookPE1978
Editor
December 7, 2010 1:12 pm

anna v says:
December 5, 2010 at 11:00 pm
Pat Frank says:
December 5, 2010 at 10:39 pm

In a conversation I had with Demetris by email awhile ago, he agreed that hindcast tests such as he’s done really could have, and should have, been done 20 years ago. That would have spared us all a lot of trouble and expense.
It’s a fair question to ask professionals such as Gavin Schmidt or Kevin Trenberth why they didn’t think to test what they have so long asserted.
But I am sure they did have these studies. That is why they came up with the brilliant idea of anomalies, as I discuss above.

—…—…—
I fear the problem (of trying to hindcast/backcast/start-from-a-known-year’s-conditions-and-work-forward-60-years) is even worse than you fear.
The models – as I understand them from their textbook descriptions – begin from certain assumed conditions, usually expressed as a radiative forcing constant plus an imposed yearly change in radiative forcing. Model conditions in each 2 degree x 2 degree “cube” (average wind, average temperature, average humidity, average moisture content, amount of radiation emitted into space, and all the “exchange” of this energy) are not so much preset (before the model begins) but rather are determined from the model after a very large number of iterations of the model’s equations are run. The model is then rerun with the results of the n-1 run forming the n run to create the results for the n+1 run. After many thousands of simulations, the modelers then select the “final average” conditions, which they filter and select to represent the outcome for the world at large after so many years.
With 2×2 degree cubes being their smallest calculation area, is it any wonder the modelers do NOT want to let anybody directly compare one cube’s results after thirty years to the real world after thirty years?
Worse, the modelers’ 2×2 degree “cubes” are merely large “plates” with very, very thin “walls” around each edge. The atmosphere doesn’t behave like that: energy is exchanged vertically, horizontally, and through every side of “cubes” that are 1 km x 1 km x 1 km. The number of “cubes” varies from equator to pole, but the models require rectangles. The real solar input varies from season to season, but the models don’t allow that. The real solar input varies from pole to equator too as the earth rotates – and the emitted radiation barriers change also. The models do not simulate day and night except by averaging conditions. Real cubes rotate as the earth does, so their atmosphere is subject to the Coriolis effect and jet streams. Real cubes are affected by currents and oceans and changing seasons – not just the ENSO and AMO and PDO – but the modelers assume only ice cap melting and glaciers retreating and storm conditions based only on temperatures and conditions of the global air.
Also:
The real “cube” conditions vary from sea to coast to inland mountain to inland icecap to woodland and jungle and forest (low CO2 and high clouds) to plains (medium CO2 and medium albedo) to deserts (very high CO2 and no clouds). There are many dozens of cubes stacked vertically through the atmosphere: not just one, two, or three “slices” from a cube that is assumed to receive an “average” annual sunlight from “average” clouds distributed “averagely” across the world.
Perhaps in a distant future the models will simulate these.
But today’s models do hindcast. The results of those hindcasts, however, are not used to “qualify” or check the models’ accuracy. Instead they are used to CREATE the “corrections” – primarily soot forcings, aerosol loading forcings, and volcano forcings – that are then loaded back INTO the modeled outputs so that the period from 19xx to 1995 DOES fit the GISS/HadCru temperatures generated by the model itself.
Larger problems. (Yes, it is worse than you thought.) There is no real-world, whole-world “soot level” primary data available. There are no real-world “aerosol content” values affecting albedo. Instead, “increased soot levels” are assumed by the modelers to vary over time equally over the whole globe. The results of these “increased soot levels” are then factored into the forcings that generate temperatures to re-create the drop in global temperatures between the 1940s warm period and the 1970s low point. After 1970, worldwide soot/aerosol levels are assumed to get cleaned up (with the US EPA rules used as an example), and thus the required reflected energy (from soot/aerosol levels) is allowed to decrease, and absorbed energy allowed to rise enough, to make the model output increase as required to fit world temperatures between 1970 and 1995. Recent soot and aerosol forcings are assumed based on India, China, and Brazil industrialization if models are run after 1995 conditions.
After 2010, soot levels are assumed to be removed completely, depending on whether a geo-engineering “positive” or “negative” result is desired.
What is actually surprising is that, despite the fact that their model conditions are carefully changed to allow “calibration” of the models by hindcasting, the actual results are still so inaccurate.

Pat Frank
December 7, 2010 5:47 pm

anna v, I’ve been thinking a little about your comments concerning model tests and anomalies, and if you really are sure that back in AGW year zero mainstream modelers did hindcast tests of GCM reliability, similar to those of Antonis and Demetris, et al., and then deliberately didn’t publish them, they’d be guilty of having lied by omission for 20 years.
If, knowing the unreliability of their models, they went ahead and developed anomalies, as you suggest, also here, in order to disguise the lack of model reliability, they’d have been guilty of lies of commission for 20 years.
Do you really, really think that’s the case?

anna v
December 7, 2010 9:33 pm

Pat Frank says:
December 7, 2010 at 5:47 pm
If, knowing the unreliability of their models, they went ahead and developed anomalies, as you suggest, also here, in order to disguise the lack of model reliability, they’d have been guilty of lies of commission for 20 years.
I hope you read the post of racookpe1978 above. It describes what I found when reading up on the models very well, and goes into more depth than I did. My assurance lies in having worked for over 30 years with computer simulations of models, mainly Monte Carlo, in my field of particle physics, and in recognizing that one cannot tune the models without knowing the temperature curves, certainly before anomalies were latched onto. The precipitation curves’ failures are all there in AR4, disguised by spaghetti graphs of ensembles of models.
I do not know if I would call them lies of commission or “self” delusion of the ensemble “self”. High energy physics is a field where large numbers of competent scientists work; group meetings when I started were of 15 people and when I retired of 2000. In addition, the international nature of the projects requires large committees directing sources etc. There is a sociology of groups of scientists that I would describe as “the head scientist is right ab initio”, if the group is small, or “the directing group is right ab initio” if the group is large. Like beehives, if too much challenge exists the group splits in two, one leader/leading-group taking its convinced followers to a new project.
Paradigms change; then the whole group starts revolving around the new paradigm that they were rejecting vehemently before, if the leader/leading-group accepts the new paradigm. An example was the transition from the theory of the parton model to QCD, which I lived through in many details. The parton model had Feynman behind it, and the leaders/leading-groups were slow to be convinced that the real data did not vote for Feynman.
This was not bad, because people followed the leaders and worked hard to create complicated experiments, and progress was incrementally made, since fortunately theoreticians are not of a group mentality. The fate of the world economy was not hanging on the flow of research, as it does in climate research.
I think what happened was that the leaders of the climate pack realized that temperatures could not be hindcast in magnitude but were OK in shape, and had the brilliant idea of using anomalies instead of temperatures, and the pack followed.
I am sure they sincerely believed the blurbs about averaging over details, etc. that come out when anomalies are attacked. There is no sincerer believer in a model than the model’s instigator, believe me. In all scientists there hides a perpetual motion machine inventor :).

Roger Carr
December 8, 2010 12:36 am

anna v says: (December 7, 2010 at 9:33 pm) In all scientists there hides a perpetual motion machine inventor:).
    Such inventors today, Anna, lack the social conscience of the old-timers. Those from the past agonised over the perfecting of a brake they could use to stop their perpetual motion machines if necessary once they were started — just in case.
    Do today’s scientists have that vision?

Pat Frank
December 8, 2010 3:05 pm

anna v, I hadn’t considered persistent self-delusion. But that’s what you seem to be suggesting.
I did read racookpe1978’s post above, and thought it was very cogent. I was happy to see you mention that his/her comments matched your own conclusions.
But look, as a particle physicist, you must have used some standard model application, such as GEANT, to simulate and predict resonances in particle interactions. I recall reading that you folks didn’t credit an observed resonance unless it passed the 3-sigma test. That means the observation must have been replicated enough to give good error statistics. It also means that you must have paid quantitative attention to the resolution limits of your detectors and the systematic effects of such things as thermal load on detector response, and so forth.
You’d need all that information just in order to calculate the 3-sigma test. You’d also need to know the uncertainty width around your simulated resonance, in order to decide whether an observation some eV away from your prediction could actually be a confirmation, or a different resonance altogether.
I can’t believe that the delusional effect of leader-paradigm loyalty would be enough to subvert that sort of very basic paying of attention to the gritty details of your scientific practice.
But that’s what we see going on in AGW climate physics. Regardless of self-delusion about their beloved paradigm, what we see in AGW climate science practice is disregard of theoretical uncertainty in the models along with neglect of uncertainty in the data due to instrumental resolution and the error from systematic effects.
I just don’t understand how this poor practice could be so amazingly persistent and could be so widespread. For 20 years! Haven’t these people ever taken an instrumental methods lab as undergraduates?

anna v
December 8, 2010 8:35 pm

Pat Frank says:
December 8, 2010 at 3:05 pm
I can’t believe that the delusional effect of leader-paradigm loyalty would be enough to subvert that sort of very basic paying of attention to the gritty details of your scientific practice.
But that’s what we see going on in AGW climate physics. Regardless of self-delusion about their beloved paradigm, what we see in AGW climate science practice is disregard of theoretical uncertainty in the models along with neglect of uncertainty in the data due to instrumental resolution and the error from systematic effects.
I just don’t understand how this poor practice could be so amazingly persistent and could be so widespread. For 20 years! Haven’t these people ever taken an instrumental methods lab as undergraduates?

Evidently not? Or were they convinced by arguments that the methods were not applicable where “chaos” reigns?
We have a proverb in Greek, “the fish starts smelling from the head”, to summarize some of what I meant in the post above.
Take James Hansen:
After graduate school, Hansen continued his work with radiative transfer models and attempting to understand the Venusian atmosphere. This naturally led to the same computer codes being used to understand the Earth’s atmosphere
Does that sound as if he has familiarity with instruments and errors?
And a lot of the people involved in creating and working with GCMs must be computer focused, not physics focused, and easily follow the leader.
Also, the last twenty years have seen an inflation in the number of students entering universities and a lowering of standards. At the same time this inflation created many jobs that needed publications, and climate studies were easy (group work) and offered grants and tenure-track posts, creating a positive feedback :). Group work means that often the only one who has a complete picture of the project is the group leader. Members of the group survive by doing their part of the work and trusting that the necessary checks are being done by the others, or are not needed if the group leader thinks not. They are at the level of the GEANT constructors, in your discussion, not the users of GEANT who will check against data and will be demanding statistical accuracy of results. GCMs are cumbersome programs that demand large computer time and are treated like reality generators, not simulation programs, by their creators and users :). The results are called “experiment”.
It is synergy of all these factors with the pack mentality.
Well, otherwise one must construct a conspiracy theory, on the lines of planned depopulation by Malthusian ecologists, which is very far-fetched. Not that they do not exist on the fringe, taking advantage of the situation, but they were not in a position, twenty years ago, to create such a situation, IMO. Hansen was, and has the typical ego and mentality of a scientist at the head of a pack. Remember the story of heating up the hall at Congress when the climate was going to be discussed? All means are legal in love and war.

December 8, 2010 11:02 pm

Thanks for your thoughts, anna. I don’t credit AGW conspiracy theories beyond the crass conniving of the climategate miscreants. So, I guess we have to chalk most of it up to ‘go along to get along.’
But it’s still hard to understand the crippled practice being so widespread and so enduring. Maybe the attendant self-righteous moralistic social pressures, brewed up with such frenzy by the environmental NGOs, have given it an especially refractory character.
But as a social phenomenon, it’s probably a safe prediction to suppose that one day social scientists will be cutting Ph.D.s studying the phenomenon. And then, 50 years later, maybe neuropsychologists.

Roger Carr
December 10, 2010 1:56 am

As this thread moves into its sunset, I hereby note my disappointment, perhaps despair, that the following statement has not been given consideration and debate here.
    There is a ring to it which focuses my attention and sounds an alert; I do not have the skills to pursue it myself, nor even to investigate it, but nevertheless I feel a strong conviction that it sounds true, and is therefore important in the matter of the world at climate war.

We have been bamboozled with anomalies, and worse, with global average anomalies. — Anna V (December 6, 2010 at 10:03 pm)

Mike M
December 13, 2010 9:37 am

What is the point of examining GCMs when Joanne Nova already told us that US Postage drives climate?

December 14, 2010 1:49 am

The real reason for the ‘hockey-stick’
and the Goremometer…
http://yfrog.com/0afzkj
