A new view on GISS data, per Lucia

For some time I’ve been critical (and will likely remain so) of the data preparation techniques associated with the NASA GISS global temperature data set. The adjustments, the errors, the train-wreck FORTRAN code used to collate it, and the interpolation of data in polar regions where no weather stations exist have given me so little confidence in the data that I’ve begun to treat it as an outlier.

Lucia, however, makes a compelling argument for not discarding it, but instead treating it as part of a group of data sets. She also applies some compelling logical tests that give insight into the entire collection of data sets. As a result, I’m going to temper my view of the GISS data a bit and look at how it may be useful in the comparisons she makes.

Here is her analysis:


Surface Temperature Trends Through May: Month 89 and counting!

a guest post by Lucia Liljegren

Trends in global mean surface temperature for five groups (GISS, HadCRUT, NOAA/NCDC, UAH MSU, and RSS) were calculated from Jan 2001 through May 2008 using Ordinary Least Squares (OLS), with error bars computed by the method in Lee & Lund, and using Cochrane-Orcutt. The results were compared to the IPCC AR4’s projected central tendency of 2C/century for the trend during the first few decades of this century.

The following results for mean trends and 95% confidence intervals were obtained:

  1. Ordinary Least Squares, average of data sets: The temperature trend is -0.7 C/century ± 2.3 C/century. This is inconsistent with the IPCC AR4 projection of 2C/century to a confidence of 95% and is considered falsified based on this specific test.
  2. Cochrane Orcutt, average of data sets: The temperature trend is -1.4 C/century ± 2.0 C/century. This is inconsistent with the IPCC AR4 projection of 2 C/century to a confidence of 95% and is considered falsified based on this specific test for an AR(1) process.
  3. OLS, individual data sets: All except GISS Land/Ocean result in negative trends. The maximum and minimum trends reported were 0.007 C/century and -1.28 C/century for GISS Land/Ocean and UAH MSU respectively. Based on this test, the IPCC AR4 2C/century projection is rejected to a confidence of 95% when compared to HadCrut, NOAA and RSS MSU data. It is not rejected based on comparison to GISS and UAH MSU.
  4. Cochrane-Orcutt, individual data sets: All individual data sets result in negative trends. The IPCC AR4 2C/century is falsified by each set individually.
  5. The null hypothesis of 0C/century cannot yet be excluded based on data collected since 2001. This does not mean warming has stopped. It only means that the uncertainty in the trend is too large to exclude 0C/century based on data since 2001. Bar-and-whiskers charts showing the range of trends falling inside the ±95% uncertainty intervals using selected start dates are discussed in Trends in Global Mean Surface Temperature: Bars and Whiskers Through May.

The OLS trends for the mean, and C-O trends for individual groups are compared to data in the figure immediately below:


Figure 1: The IPCC projected trend is illustrated in brown. The Cochrane-Orcutt trend for the average of all five data sets is illustrated in orange, with ±95% confidence intervals illustrated in hazy orange. The OLS trend for the average of all five data sets is illustrated in lavender, with ±95% uncertainty bounds in hazy lavender. Individual data sets were fit using Cochrane-Orcutt and are also shown.

Discussion of Figure 1

The individual weather data in Figure 1 are scattered and show non-monotonic variations as a function of time. This is expected for weather data; some bloggers like to refer to this scatter as “weather noise”. In the AR4, the IPCC projected a monotonically increasing trend in the ensemble average of the weather, often called the climate trend. For the first three decades of the century, the central tendency of the climate trend was projected to vary approximately linearly at a rate of 2C/century. This is illustrated in brown.

The best estimates for the linear trend consistent with the noisy weather data were computed using Cochrane-Orcutt (CO), illustrated in orange, and Ordinary Least Squares (OLS) illustrated in lavender.
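
For readers who want to see what the Cochrane-Orcutt fit involves, here is a minimal sketch of the standard iterative procedure (estimate the lag-1 autocorrelation of the residuals, quasi-difference the series, refit, repeat). This is a generic textbook implementation in Python, not Lucia's spreadsheet; the function name and defaults are mine.

```python
import numpy as np
from scipy import stats

def cochrane_orcutt_trend(y, tol=1e-6, max_iter=50):
    """Fit a linear trend to a series with AR(1) errors via Cochrane-Orcutt.
    Returns the slope (per time step) and the estimated AR(1) coefficient."""
    t = np.arange(len(y), dtype=float)
    slope, intercept, *_ = stats.linregress(t, y)      # plain OLS starting point
    rho = 0.0
    for _ in range(max_iter):
        resid = y - (intercept + slope * t)
        rho_new = np.sum(resid[1:] * resid[:-1]) / np.sum(resid[:-1] ** 2)
        if abs(rho_new - rho) < tol:
            break
        rho = rho_new
        # quasi-difference the data and refit the trend
        y_star = y[1:] - rho * y[:-1]
        t_star = t[1:] - rho * t[:-1]
        slope, intercept_star, *_ = stats.linregress(t_star, y_star)
        intercept = intercept_star / (1.0 - rho)        # recover the original-scale intercept
    return slope, rho
```

If y holds monthly anomalies, multiplying the returned slope by 1200 converts it to C/century, the unit used in the results above and the table below.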

Results for individual hypothesis tests

Some individual bloggers have expressed a strong preference for one particular data set or another. Like Atmoz, I prefer not to drop any widely used metric from consideration. However, because some individuals prefer to examine results for each individual group separately, I also describe the current results of two hypothesis tests applied to each individual measurement system.

The first hypothesis tested, treated as the “null”, is the IPCC’s projection of 2C/century. Currently, this is rejected at p=95% under ordinary least squares (OLS) using data from three of the five services, but it is not rejected for UAH or GISS. The hypothesis is rejected against all five services when tested using C-O fits.

The second hypothesis tested is the “denier’s hypothesis” of 0C/century. This hypothesis cannot be rejected using data starting in 2001. Given the strong rejection with historic data, and the large uncertainty in the determination of the trend, this “fail to reject” result is likely due to “type 2” or “beta” error.

That is: The “fail to reject” is likely a false negative. False negatives, or failure to reject false results are the most common error when hypotheses are tested using noisy data.

Results for individual tests are tabulated below:

Trend Estimates and Results for Two Hypothesis Tests Treated Individually as Null Hypotheses
Group          OLS Trend     vs 2C/century    vs 0C/century    CO Trend      vs 2C/century    vs 0C/century
               (C/century)                                     (C/century)
Average of 5   -0.7 ± 2.3    Reject           Fail to reject   -1.4 ± 2.0    Reject           Fail to reject
GISS            0.0 ± 2.3    Fail to reject   Fail to reject   -0.4 ± 2.0    Reject           Fail to reject
HadCRUT        -1.2 ± 1.9    Reject           Fail to reject   -1.6 ± 1.6    Reject           Fail to reject
NOAA           -0.1 ± 1.7    Reject           Fail to reject   -0.3 ± 1.5    Reject           Fail to reject
RSS MSU        -1.3 ± 2.3    Reject           Fail to reject   -2.1 ± 2.1    Reject           Fail to reject
UAH MSU        -0.8 ± 3.6    Fail to reject   Fail to reject   -2.0 ± 3.1    Reject           Fail to reject

The possibility of False Positives

In the context of this test, rejecting a hypothesis when it is true is a false positive. All statistical tests involve some assumptions; those underlying this test assume we can correct for red noise in the residuals of the OLS fit using one of two methods: A) the method recommended in Lee & Lund, or B) Cochrane-Orcutt, a well-known statistical method for time series exhibiting red noise. If these methods are valid and are used to test data, we expect to incorrectly reject true hypotheses 5% of the time at p=95%. (Note, however, that findings of “reject” in February, March, April and May do not count as separate rejections, since the rejections are correlated with each other, being based largely on the same data.)

Given the results we have found, the 2C/century projection for the first few decades of this century is not borne out by the current weather data. It appears inconsistent with underlying trends that could plausibly describe the particular weather trajectory we have seen.

There are some caveats that have been raised in the blog-o-sphere. There has been some debate over methods to calculate uncertainty intervals and/or whether one can test hypotheses using short data sets. I have been examining a variety of possible reasons. I find:

  1. In August, 2007, in a post entitled “Garbage Is Forever”, Tamino used and defended the use of OLS adjusted for red noise to perform hypothesis tests on short data sets, going into some detail in his response to criticism by Carrick, where Tamino stated:

    For a reasonable perspective on the application of linear regression in the presence of autocorrelated noise see Lee & Lund 2004, Biometrika 91, 240–245. Your claims that it’s “pretty crazy, from a statistics perspective” and “L2 is only reliable, when the unfit variability in the data looks like Gaussian white noise” raises serious doubts about your statistical sophistication.

    In later posts, after this method began falsifying the IPCC AR4 projection of 2 C/century, Tamino appears to have changed his mind about the validity of this method, possibly suggesting the uncertainty intervals are too high.

    The results here simply show what anyone would obtain using this method: According to this method, the 2C/century is falsified. Meanwhile, re-application to the data since 2000 indicates there is no significant warming since 2000 as illustrated here.

  2. Gavin Schmidt suggested that “internal variability (weather!)” noise results in a standard error of 2.1 C/century in 8 year trends; this is roughly twice the standard error obtained using the method of Lee & Lund, above. To determine whether this magnitude of variability made any sense at all, I calculated the variability of 8 year trends in the full thermometer record, including volcanic eruptions and measurement noise due to the “bucket to jet inlet” transition. I also computed the variability during a relatively long historic period with no volcanic eruptions. The standard error of 2.1 C/century suggested by Gavin’s method exceeded both the variability in the thermometer record for the real earth including volcanic periods and that for periods without volcanic eruptions. (The standard error in 8 year trends computed during periods with no volcanic eruptions is approximately 0.9 C/century, which is smaller than estimated for the current data.) I attribute the unphysically large spread in 8 year trends displayed by the climate models to the fact that the model runs include

    a) different historical forcings, some including volcanic eruptions, some don’t. This results in variability in initial conditions across model runs that do not exist on the real earth

    b) different forcings during any year in the 20th century; some include solar some don’t.

    c) different parameterizations across models and

    d) possibly, the inability of some individual models to reproduce the actual characteristics of real-earth weather noise. This is discussed in Distribution of 8 Year OLS Trends: What do the data say?

  3. Atmoz has suggested the flat trend is due to ENSO, and JohnV suggested considering the effect of the solar cycle. The issue of ENSO and remaining correlation in lagged residuals has been discussed in previous posts, and the solar cycle is explained here.
  4. The variability of all 8 year trends that can be computed in the thermometer record is 1.9 C/century; computing it from a set spaced at 100 month intervals resulted in a standard error of 1.4 C/century. (A sketch of this kind of calculation follows this list.) These represent the upper bound of standard errors that can be justified based on the empirical record. The variability includes features other than “weather noise”: for example, volcanic eruptions, non-linear variations in forcing due to GHGs, and measurement uncertainty, including the “bucket to jet inlet” transition noise. So these represent the upper limit on variability in experimentally determined 8 year trends. Those who adhere to these values will conclude the current trends fall inside the uncertainty intervals for the data. If the current measurement uncertainty is as large as experienced during the “bucket to jet inlet” transition associated with World War II, they are entirely correct.
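
To make items 2 and 4 concrete, here is a short sketch of the kind of empirical check described above: compute every overlapping 8 year (96 month) OLS trend in a monthly record and report the spread. The array name and window length are illustrative assumptions, and this is not Lucia's actual calculation.

```python
import numpy as np
from scipy import stats

def eight_year_trend_spread(monthly, window=96):
    """Standard deviation, in C/century, of all overlapping 96-month OLS
    trends in a monthly temperature anomaly record."""
    t = np.arange(window)
    trends = [stats.linregress(t, monthly[i:i + window]).slope * 1200.0
              for i in range(len(monthly) - window + 1)]
    return np.std(trends)
```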

After consideration of the various suggestions about uncertainty intervals, the issues of ENSO, solar cycles and other features, and the magnitude of the pre-existing trend, I think overall the data indicate:

  1. It is quite likely the IPCC projection for an underlying climate trend of 2C/century exceeds the current underlying trend. I cannot speculate on the reasons for the overestimate; they may include some combination of a poor forecast of emissions when developing the SRES, the effect of inaccurate initial conditions in the computations of the 20th century, inaccuracy in the GCMs themselves, or other factors.
  2. It remains likely the warming experienced over the past century will resume. While the 2C/century projection falsifies using both OLS and C-O, the flat trend is entirely consistent with the previously experienced warming trend. In fact, additional analysis (which I have not shown) would indicate the current trend is not inconsistent with the rate of warming seen during the late 90s. It is entirely possible that natural factors, including volcanic eruptions depressing temperatures during the early 90s, caused a positive excursion in the 10 year trend during that period. Meanwhile, the PDO flip may be causing a negative excursion affecting the trend. These sorts of excursions from the mean trend are entirely consistent with historic data. Warming remains consistent with the data. As the theory attributing the warming to GHGs appears sound and predates the warming of the 80s and 90s, I am confident it will resume.

What will happen over the next few years

As Atmoz warns, we should expect to see the central tendency of trends move around over the next few years. What one might expect is that, going forward, we will see the trend slowly oscillate about the mean, but eventually the magnitude of the oscillation will decay.

One of my motives in blogging this is to show this oscillation and decay over time and to permit doubters to see the positive trend resume.

I will now set off on the sort of rampant speculation permitted bloggers. When the next El Nino arrives, we will see a period where the trends go positive. Given trends from the 70s through 90s, and current trends, it seems plausible to me that, using the methods I describe here, we will experience some 89 month trends with OLS trends of 3 C/century to 4 C/century or even greater sometime during the next El Nino. At which point, someone will likely blog about it the moment that 89 month trend occurs.

This result will be entirely consistent with the current findings. An OLS (or CO) trend of 3-4 C/century is likely even if the true trend is less than 2C/century, and even if CO and OLS do give accurate uncertainty intervals.

What seems unlikely? I’d need to do more precise calculations to find a firm dividing line between consistent and inconsistent. For now, I’ll suggest that unless there is a) a stratospheric volcanic eruption, b) the much anticipated release of methane from the permafrost, or c) a sudden revision in the method the agencies use to estimate GMST, I doubt we’ll see an 89 month trend greater than 4.6 C/century within the next five years. (I won’t go further because I have no idea what anyone is emitting into the atmosphere!)

Meanwhile, what is the magnitude of the trend for the first three decades of this century? That cannot be known with precision for many years. All I can say is: the current data strongly indicate the current underlying trend is less than 2C/century, and likely less than 1.6 C/century!

Excel Spreadsheet.

Richard deSousa

How can anyone think that NASA GISS is unbiased? Not with Hansen at the helm of their climate department. He’s already proclaimed himself the chief inquisitor gunning for the oil company executives. He’s already proclaimed them guilty.
REPLY: Even biased data has its value, as Lucia has demonstrated. I don’t agree with their methods and resulting data, but it is useful for comparison. Hansen’s political exploits tend to speak more to the conclusions he draws from the data than to the data itself.

neilo

I got totally lost in all the statistics. So how can biased data have value?
Also, she says: “As the theory attributing the warming to GHG’s appears sound”
Eh? My understanding was that the theory has been totally debunked. So what has this analysis actually shown? That we should use the GISS data? In other words, let’s apply weird statistical methods to prove that the GISS data is valid, and because the GISS data is valid then the global warming theory is sound.

Lucia/Anthony,
The hyperlink for Tamino’s response to the criticism by Carrick seems to be broken.
As far as GISS goes, it’s important to remember that while it has been trending slightly higher than the other series for the last four years, that is certainly not beyond the scope of normal variability between temperature series. You can see GISS minus the mean of UAH, HadCRU, and RSS for the past 30 years here:
http://i81.photobucket.com/albums/j237/hausfath/GISS-MEAN.jpg

I don’t think the GISS temperatures are biased. And I think the extreme similarities in the long-term trends when compared to all the other temperature metrics support that conclusion. However, the GISS code is publicly available. Theoretically, one should be able to run their code and see if the results are the same as in their data products. That cannot be said for any of the other sources producing global mean temperature data.

Dave Andrews

Atmoz,
Do you ever lurk at Climate Audit?
If you did you would soon realise there is a world of difference between the code being “publicly available” and being able to run it and check the results. There is a wide gulf between “theoretically” and actuality.
REPLY: And in the gulf is the train wreck. That code is written in SPLODE++ – Anthony

Bill F

I am kind of grappling with a problem with Lucia’s conclusion #2. Forgive me if this comment rambles a bit. Lucia provides numerous examples of natural factors inducing both large positive and large negative trends in the data over a period of many years in the past. Yet she uses that along with “analysis not shown here” to conclude that warming is likely to resume and could meet the 2C/century trend predicted? I don’t see anything in her analysis to suggest that what she has done has any predictive value whatsoever. She makes what is almost a statement of faith when she concludes that the theory of GHG warming is sound and uses her confidence in that faith to make a conclusion that the warming will resume.
I was with her right up to that point, but don’t see how what she concludes there is supported by what she has done. She is in essence saying that the past positive trend could be caused by natural factors…and the current negative trend could be natural as well…but her faith in the theory leads her to conclude that warming will resume. I am quite confident that warming at some point in the future will reach a 2+C/century trend as well. But I have no way of knowing (and neither does Lucia) whether that trend will resume from the current endpoint or after a prolonged period of cooling that takes us back to early 1970s levels or below. Her conclusion #2 is simply an opinion based on her faith in AGW theory and not something that “the data indicate” as she claims.
Overall…nicely done analysis…but it doesn’t support conclusion #2.

crosspatch

I thought I detected some sarcasm in that she was saying you can find pretty much whatever you happen to be looking for if you wait long enough.

Dave,
I read the interesting posts at CA, but not the comments (signal to noise ratio is a little too low). But yes, I do know that no one has been able to get the GISS code to compile. I gave it a token shot on my box, to no avail. But that’s not really important. What’s important is that there are 5 independent groups (GISS, NCDC, Hadley, UAH, RSS) that take two distinct sets of data (in situ and remotely sensed) and apply 5 different algorithms that result in almost exactly the same answer over the time periods of interest (they may disagree on monthly and yearly time scales, but over >20 years they agree well). So if you think that GISS has been biasing their results to appear like there has been greater warming than reality, you must think the same of the other 4 groups as well.
Reply: I think you missed the reference to this over at CA. I believe the code has been compiled on OSX. See this thread.~jeez

Anthony wrote: “And in the gulf is the train wreck. That code is written in SPLODE++”
I’ve written in RPG, Autocoder, Easycoder, Cobal D, Fortran IV, and a few other specialized Basic type languages, but what in the hell is SPODE+++? A specialized assembly language?
Jack Koenig, Editor
The Mysterious Climate Project
http://www.climateclinic.com
REPLY: Jack, here is the definition. Plus a training video on SPLODE -Anthony

Based on the lack of comments posted so far, it doesn’t appear Lucia has much of a following.
From a personal standpoint, I can no longer consider ANYTHING coming out of NASA or NOAA as legitimate information. What’s that old saying: Lie to me once, shame on you. Lie to me twice, shame on me.
So why would anyone in their right mind believe anything stated by either of these agencies after they’ve been caught in one lie/distortion/misinformation after another?
Jack Koenig, Editor
The Mysterious Climate Project
http://www.climateclinic.com

David Segesta

Please excuse me for being a perpetual skeptic, but in my opinion you cannot make any kind of a future projection using that type of analysis. It assumes that temperature will continue on the trend it has followed for the last 7 years. Now imagine applying that method to the ice core data for the last 400,000 years. Look at the temperature graph at http://www.geocraft.com/WVFossils/last_400k_yrs.html (2nd graph)
Pick any place on the graph and run your analysis. Would you even come close to predicting the future temperatures? I would say; not a chance.

Michael Hauber

What exactly is the evidence that the GISS temp series is biased?
I lurk on Climate Audit; I don’t read everything, and some of what I read I don’t fully understand or forget, but what I’ve pieced together:
Many photos of stations that have been poorly sited. The GISS algorithm attempts to correct for this. If it does a good job this would be a small problem. If it does a bad job this issue is a bigger problem.
Steve has shown that for many individual stations the GISS algorithm gives strange results.
However do these factors cause a detectable bias in the overall answer?
The only analysis I know of along these lines is the trend calculation that John V initiated, and Steve refined, based on a set of stations in the USA that were assessed as being good quality. This analysis showed good agreement with the GISS results for USA. My understanding is that Steve has objected to this being called a ‘validation’ of the GISS temperature trend as it applies to a small set of data (about 60 stations I think), in a limited geographical area (USA). Also GISS algorithms are evidently different for the rest of the world.
Also if we compare GISS trends to other temperature trends we see that the difference is very small. So either GISS is not biased or all temperature trends have the same bias. A systematic bias could conceivably apply to all temperature series – particularly if it’s a result of poor station siting or changes in instrumentation. I think GISS and CRU are both based off the same raw station data? And satellite measurements are calibrated to match the same station data?
Is there any other evidence that should be considered to make an assessment of whether GISS is biased or not?
REPLY: The work that John V did was terribly premature. Only 17 CRN1,2 stations with poor geographic distribution in the USA were used to arrive at that “good agreement”. The effort was rushed, mostly to quickly find some baton to beat up the surfacestations.org effort with. Since then JohnV has not done anything else in analysis, and that rumor you circulate still stands. Meanwhile my volunteers and I continue to collect more stations so that a complete analysis can be done.

Anthony said: “REPLY: Jack, here is the definition. Plus a training video on SPLODE -Anthony”
Ho ho ho!

Bill Marsh

Mostly this makes my head hurt, but it seems to me that it is very difficult for the GISS algorithm to ‘correct’ for siting issues that are pretty much unknown to the developer of the algorithm.
I always had issues in software development with effort estimations that tried to project completion time for development of modules that had unknown qualities/challenges. Pretty much it was a guess wrapped in fancy mathematics, but at its core it was still a SWAG (and rarely, if ever does it seem to be even close to correct).
Too many unknowns to be able to make corrections with any confidence.

D Buller

Atmoz, et al:
It is my impression that over the last 29 years, GISS trends have been similar to other measurement trends, and that the GISS methodology has been examined more than the others. However, I am more skeptical of GISS than the others. Even though the GISS trend has been similar, it still appears to be a little bit higher than the others. Personal biases at HadCrut make its trend a bit suspect, similar to GISS, and HadCrut’s trend over 29 years is most like GISS. Personal beliefs at UAH also exist, but they are opposite of those at RSS, and there appears to be a great deal of professional exchange between the two MSU sites — UAH even helped RSS correct its data when it was undermeasuring temperatures. I trust RSS and UAH the most. HOWEVER, the big issue with GISS (and to some degree HadCrut) is what they do to temperatures before those 29 years. They seem to be forever decreasing temperatures in the 1930s. That is the biggest problem with GISS. And of course UAH and RSS do not go back that far.

Earle Williams

Atmoz in his assessment of the non-bias of HISS neglects the corrections applied in GISS to temperature data pre-dating the satellite era. Over the time period of 1979 to current GISS is well within spitting range of the other temperature metrics. Prior to 1979, well, caveat emptor.

Earle Williams

Bah! HISS == GISS!

Bill Illis

The ONLY reason why GISS, Hadley and the NOAA temp figures agree with the other measures now is because UAH and RSS temp measurements exist.
We now have an independent third party audit committee and these agencies know they have to be reasonably close to the third party audit committee figures or they will lose all their credibility.
Of course, the audit committee is only available for 1979-on, so the biased agencies have been forced to “adjust” the pre-1979 data so that it shows the appropriate level of warming.
Even the Urban Heat Island “adjustment”, which should reduce the trend for large metropolitan centres by up to 3.0C, has a roughly equal number of positive and negative adjustments that add up to a paltry 0.05C on average. Sorry, that is completely illogical.
We cannot rely on the pre-1979 data. For the post-1979 data, just throw out the biased numbers and rely on the third party audit committee.

I was with her right up to that point, but don’t see how what she concludes there is supported by what she has done.

Hi. My belief that warming will resume is not based on the data in that post. It is based on some understanding of radiative physics. GHG’s should result in at least some warming. Also, the trend over the past century is up.
It is possible to do a t-test to compare trends during two periods to see if the trend in “period A” is inconsistent with the trend in “period B”; a rough sketch of such a test appears at the end of this comment. I haven’t shown that in this post. But if you click to the bar-and-whiskers post, you can see that there are broad ranges of trends that overlap. It’s entirely possible to show that the very large uncertainty bounds for the current period overlap the uncertainty bounds for, say, 1990-2000.
That means the trends for the two periods could very well be the same as each other. But that would suggest the current period is a “low” relative to the true underlying trend, and the other period is a “high” relative to the true underlying trend. Truth would be somewhere in between.
Of course, doing a test like this presupposes that in some sense the “ensemble average” for some sort of population of all possible weather noise trajectories exists and has a linear trend. Still, the simplifying assumption is worth making simply to illustrate.
I guess I may have to show the comparison to clarify.
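
As a rough illustration of the t-test mentioned above, the sketch below compares OLS slopes from two periods using the difference of slopes over the combined standard error, with a Welch-style degrees-of-freedom approximation. It is an editorial illustration, not Lucia's calculation, and it ignores the autocorrelation correction she would apply.

```python
import numpy as np
from scipy import stats

def compare_trends(period_a, period_b):
    """Two-sided t-test for equality of the OLS trends in two anomaly series."""
    def ols(y):
        r = stats.linregress(np.arange(len(y)), y)
        return r.slope, r.stderr, len(y) - 2
    m1, s1, d1 = ols(period_a)
    m2, s2, d2 = ols(period_b)
    t = (m1 - m2) / np.hypot(s1, s2)
    df = (s1**2 + s2**2) ** 2 / (s1**4 / d1 + s2**4 / d2)   # Welch approximation
    p = 2 * stats.t.sf(abs(t), df)
    return t, p
```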

Todd Martin

How can one even contemplate throwing out St. Hansen’s GISS data. Given that temperature history is rewritten and rewritten again in Hansen’s universe, the record is never sufficiently stationary to get a grip on and toss it out.
Further, summarily tossing it out would torpedo the fabulous work that Steve McIntyre, Anthony, and others are doing in exposing GISS malfeasance and rank incompetence.

Fred Nieuwenhuis

As previous poster commented, a lot of this statistical theory is flying over my head. But the gist of it would be that, although the central trends for most of the datasets are negative, it is the variability in each dataset that determines the +/- (uncertainty), and since the positive uncertainty allows for a neutral trend, then AGW is proven? I find it astounding that UAH dataset has the most negative central trend (using OLS) but has the largest +/-, so therefore it fails to reject an overall + 2C/Cen, thus AGW theory is sound??
Secondly, I find the greatest pity when regarding datasets is the fact that satellite records don’t go back as far as surface records. It has been shown that there is relative correlation between surface and satellite datasets. However, it is exactly those pre-satellite era records which are the most suspect in non-western countries, both in terms of coverage and accuracy, not to mention the adjustment that the GISS SPLODE++ makes on those iffy records.
Thirdly, the current spread between satellite and surface datasets could be due to the dramatic drop in current surface station numbers. Looking at http://data.giss.nasa.gov/gistemp/station_data/ , the number of stations and coverage do not match. From a Canadian perspective, there are VERY few, if any, stations for which GISS uses current (2004-2008) data. So practically speaking, it is not just the polar regions that GISS is estimating, but essentially all of Canada as well.

Pofarmer

I agree with Bill F above.
How can Lucia falsify all of the IPCC theories and still say that AGW is the correct theory? That just doesn’t make any sense. The numbers say one thing, but, by golly we will just ignore them. More than one business has gone bankrupt that way.

REX

What nobody seems to have considered is that temperatures may actually DECLINE over the next 10-100 years?

Jeff Alberts

I say we adjust GISS adjustments based on neighbor datasets within 4 orders of magnitude. We’ll use a “the lights are on but nobody’s home” approach to assess how much to adjust the GISS data based on the other remaining datasets.

Michael Hauber

Ok the ‘good agreement’ reached by John V with GISS may be based on only 17 stations.
Is there any other better analysis of the accuracy or bias of the GISS temperature trend with a larger set of stations?
And I personally would never use this analysis or anything else as a baton to beat up surfacestations.org. I think it can only help science to actually go out there and photograph the stations so that a better understanding of any errors in our climate data can be gained.

Philip_B

The thing to bear in mind about GISS is that it isn’t an average. It’s more like the output of a climate model that uses temperature data as input.
The satellite data (UAH and RSS) is an average of actual measurements. Therefore any trend is almost certain to be present in the raw data, i.e. real.
When GISS shows a trend it could be any combination of data and model processing. We have no way of telling if the GISS trend is present in the raw data.
http://data.giss.nasa.gov/gistemp/
IMO, all discussions about temperature trends should be restricted to the satellite datasets.

Richard

I overall support the null hypothesis of no warming, but does this hold if the start date is 2000, 1999, 1998, or 1997? Selective start dates are a dangerous way to go imho.

blcjr

lucia,
I’m having trouble replicating your confidence intervals, and hence your conclusions about whether we reject, or fail to reject, the null hypotheses. Let’s start first with GISS. I get the following for an OLS regression of your GISS data on time:
VARIABLE COEFFICIENT 95% CONFIDENCE INTERVAL
const 0.532084 0.480818 0.583350
time 6.12870E-06 -0.000983234 0.000995492
Taking the upper value of the CI for time, 0.000995492, and multiplying it by 1200 (12 months x 100 years) I get 1.1945904, so let’s say 1.19C per century. I’d say that even with GISS, we reject the 2C/century hypothesis, i.e. 2C lies above the upper limit of the 95% CI for the GISS time (slope) variable.
With UAH, I get
VARIABLE COEFFICIENT 95% CONFIDENCE INTERVAL
const 0.279470 0.219990 0.338951
time -0.000653149 -0.00180104 0.000494745
Here the upper limit is 0.000494745, which when multiplied by 1200 is 0.593694, or 0.59C per century.
So in two cases where you conclude “fail to reject,” I don’t see it.
Even where you “reject” the 2C/century hypothesis, I don’t follow how you got the confidence intervals. For instance, for HadCRUT, you have -1.2 ± 1.9, which implies a positive upper limit. Whereas I get:
VARIABLE COEFFICIENT 95% CONFIDENCE INTERVAL
const 0.477588 0.437910 0.517266
time -0.00102780 -0.00179353 -0.000262070
Note here that the upper limit is still negative, and when we multiply by 1200 we get -0.314484, or -0.31C per century.
Without posting details, I get upper limits of -0.065C per century for RSS, and +0.97C per century for NOAA.
So I get upper limit estimates of:
HadCRUT -0.31C/century
GISS +1.19C/century
NOAA +0.97C/century
UAH +0.59C/century
RSS -0.07C/century
Since I have reservations about the GISS and NOAA estimates, and because of the small number here, rather than take an average, I think the more appropriate measure of central tendency is the median, which is the UAH estimate of +0.59C. Again, these are upper limit estimates, but like you, I expect some warming to resume at some point, though I don’t expect it to return to the levels of growth we saw during the 1980’s and 1990’s. But with these upper limits, we still reject the 2C/century hypothesis soundly, in each case.
So, can you explain the difference between your confidence limits and the ones I’ve posted?
Basil
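
For readers following along, here is a minimal sketch of the kind of plain OLS fit Basil describes, with the per-month slope and its naive 95% confidence interval converted to C/century by multiplying by 1200. It assumes the series is a NumPy array of monthly anomalies, uses statsmodels, and makes no autocorrelation correction, which is the point Lucia addresses in her reply below.

```python
import numpy as np
import statsmodels.api as sm

def ols_trend_ci(y):
    """OLS trend of a monthly series with its naive 95% CI, in C/century."""
    X = sm.add_constant(np.arange(len(y), dtype=float))   # constant plus time index
    fit = sm.OLS(y, X).fit()
    lo, hi = fit.conf_int(alpha=0.05)[1]                  # row 1 is the time coefficient
    return fit.params[1] * 1200, lo * 1200, hi * 1200
```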

Jeff Alberts

What nobody seems to have considered is that temperatures may actually DECLINE over the next 10-100 years?

Actually that’s been discussed over and over again here and at other places. Global Temperature (as useless and impossible a metric as that might be) is as likely to fall as it is to rise. We simply don’t know.

Pofarmer:

How can Lucia falsify all of the IPCC theories and still say that AGW is the correct theory? That just doesn’t make any sense. The numbers say one thing, but, by golly we will just ignore them.

No, I don’t ignore the numbers.:)
The issue is that one needs to be cautious not to over extend and claim more than is shown.
It’s important to remember the general theory of AGW is not precisely the same as the specific IPCC projections for the magnitude of warming at any time. I think sometimes bloggers at some of the pro-AGW blogs cause some confusion about this by trying to represent the output of GCMs, IPCC projections, and the whole theory of AGW as one and the same thing. They aren’t.
A very specific projection of 2C/century is something the IPCC projected in the AR4. This projection falsifies using these two particular statistical tests and this batch of data.
Falsifying 2 C/century doesn’t falsify the entire theory of AGW, which is basically that GHGs added to the atmosphere by man cause noticeable warming.
The underlying theory that GHGs cause warming can be true, but the magnitude could be lower than predicted by the IPCC in the AR4. There are plenty of reasons outside the basic theory underlying AGW why the magnitude predicted could be inaccurate.
For example:
a) The IPCC could over predict the amount of GHGs in their SRES. That is, they could think CO2 or methane would rise faster than it really did. In that case, warming is overpredicted because the modelers overestimated how much GHG was in the atmosphere.
b) Cloud models in GCMs could be inaccurate. This would mess up the feedbacks. Other parameterizations in models could be somewhat inaccurate. If, collectively, the feedbacks are a bit too high, the models can predict too much warming.
c) Something about the initial conditions in climate model runs starting in 2001 could be off for a number of reasons. These could include the parameterizations resulting in an incorrect response time for the ocean, modelers applying incorrect forcings for periods prior to 2001 or a number of other difficulties.
d) Other.
Usually, any individual data comparison can only tell you a fraction of what you’d like to know. What this data is telling us is 2 C/century looks too high compared to the data. That’s all it says. It doesn’t say the current trend is zero, and it doesn’t say the trend can’t be 1.3 C/century.

Basil…I do my numbers in Excel, using Linest, and I do things with years automatically.
I’m not sure what the format for your numbers means!
Using Linest for the slope, for Temperature vs time, for GISS,
I get m= 7.35444E-05 C/year.
For the standard error in m, Linest gives sm=0.006 C/year.
But that value is uncorrected for autocorrelation which is large.
Then, to correct using the method in Lee and Lund, I determine the lag 1 correlation in the residuals which I find to be rho= 0.493.
I calculate a ratio “number of effective deg. freedom” to “actual degrees of freedom” using a formula
Neff/N = (1- rho -0.68/sqrt(N))/(1+rho+0.68/sqrt(N))
Where N is the original number of degrees of freedom, and happens to be 87. (That’s 89-2).
Then, I compute Neff = N * (Neff/N) and get 24.135.
I then adjust the standard error using
sm(Adjusted) = sm * sqrt(N/Neff)
This gives me 0.011 C/year.
Then I look up the “T” for alpha=0.05 and Neff=24.135. I get t=2.064.
The 95% confidence interval is 2.064 * 0.011 C/year = 0.023 C/year, or 2.3 C/century.
By the way, with respect to this:
Neff/N = (1- rho -0.68/sqrt(N))/(1+rho+0.68/sqrt(N))
The 0.68/sqrt(N) part is a special bit in Lee and Lund, and I sincerely doubt it’s generally applicable. But Tamino seems to be a big fan of this and I think went so far as to call it an exact solution in one of his posts. (It’s not an exact solution. I’ve compared it to a few different autocorrelations for noise. The only thing that can be said is that including that extra term results in larger error bars than not using it.)
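
The steps Lucia lists translate into only a few lines of code. Below is a minimal sketch of the Lee & Lund adjustment as described above, assuming a NumPy array of monthly anomalies; the function name and structure are mine, and it should be checked against her spreadsheet rather than taken as her exact implementation.

```python
import numpy as np
from scipy import stats

def lee_lund_ci(y):
    """OLS slope and 95% confidence half-width (per month), adjusted for
    lag-1 autocorrelation in the residuals per the steps described above."""
    n = len(y)
    t = np.arange(n, dtype=float)
    slope, intercept, r, p, se = stats.linregress(t, y)
    resid = y - (intercept + slope * t)
    rho = np.corrcoef(resid[:-1], resid[1:])[0, 1]        # lag-1 autocorrelation
    dof = n - 2
    ratio = (1 - rho - 0.68 / np.sqrt(dof)) / (1 + rho + 0.68 / np.sqrt(dof))
    neff = dof * ratio                                     # effective deg. of freedom
    se_adj = se * np.sqrt(dof / neff)
    half_width = stats.t.ppf(0.975, neff) * se_adj
    return slope, half_width   # multiply both by 1200 for C/century
```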

Anthony– there is a typo in the table. RSS , OLS should read -1.3 ±2.3 C/century not -0.1 ±2.3 C/century.
REPLY: Fixed, Anthony

I won’t pretend to understand statistics, computer codes, and most of what was written there, so perhaps my post is not very helpful. Nonetheless, I don’t think it is wise to throw out data either. I am pro-NCC (natural climate change) in my views about the majority of forces affecting our short, mid, and long term climate forecasts. The climate models floating around seem pre-engineered to fit the designers political preconceptions about the validity of ACC (anthropocentric climate change). Even the most even-keeled of them suffers violently from a lack of data and understanding on such things as aerosols and cloud formation. Suffice it to say this: that until we are able to express mathematically a more complete and comprehensive fundamental understanding of the numerous processes that warm and cool the planet we cannot measure our own effects therein by comparison with respect to lengthy periods of time and thus reach any conclusion yea or nay. We simply do not know enough. For example: how much do the flawed seatings of the surface stations affect the raw GISS data? What problems does Fortran introduce in trying to adjust the data to compensate? Also, the entire AGW or ACC hypothesis suffers from what I deem is a fatal flaw: the assumption that trends will remain the same in perpetuity. They do not. If a given period of time is warmer, that does not necessitate the next period of time also being warmer. And there are divergent causes too. The Sun may start back up, but if the oceans (Pacific and Atlantic) remain in their cool phase, then the effects of the Sun will be mitigated. Likewise if one or both oceans warm, but the Sun remains quiet in terms of activity, then that too will be mitigated. But so far nobody can tell me how those two fit together, interact with each other, and affect the total outcome. That is what I am waiting on myself.

Lucia: Very well put. I also accept there is most likely some AGW, at about the level shown by the numbers to date, and (as I understand it, which isn’t well) predicted by the basic physics: ~1 C/century. I can also see how natural variability (largely PDO) could flatten such a fairly small trend for a decade or so, and my SWAG is a return to trend over the next 10-20 years.
The problem I have – like you – is with the modelled positive feedbacks which predict 2-6 C/century and trigger all the disaster scenarios. Unless there’s some massive well-insulated reservoir of heat that hasn’t yet been explained to me, I just can’t see how the last century’s data – nor the last millennium’s, nor geological history – can be consistent with such feedbacks.
To return to GISS: Ever since I realised how to adjust for baseline differences (http://www.woodfortrees.org/notes#baselines) I just can’t see any major divergence between the series. Sure there are short-term differences (we’re in one now), but the overall pattern is visually coherent; that’s enough for me:
http://www.woodfortrees.org/plot/hadcrut3vgl/from:1979/offset:-0.15/mean:12/plot/gistemp/from:1979/offset:-0.24/mean:12/plot/uah/mean:12/plot/rss/mean:12
Best regards
Paul

Some thoughts about “ensembles”. Maybe this should be submitted to the journal of irreproducible results…
Theory… 1 + 1 = 3
Proof…
Step 1) Start with an ensemble of models which predict anywhere from 1 + 1 = 1.9, all the way to 1 + 1 = 4.1. The average prediction is that 1 + 1 = 3.
Step 2) Sample results from 1,000 different pocket calculators and adding machines and computer spreadsheets, in each case entering “1 + 1”, and recording the result.
Since “in each and every case, the result is not inconsistent with the ensemble”, the theory that 1 + 1 = 3 is hereby considered proved. The debate is over.
=========================================
True scientific theories/models used to stand or fall on their own merits. The new-age “ensemble” approach is that a couple of dozen totally out-to-lunch, garbage theories/models, averaged out, somehow equal one valid elegant theory/model. Let’s just say I disagree with this approach.

cohenite

The observations that pre-1979 data, before the satellites geared up, is suspect because of GISS fiddling are, IMO, on the money; for AGW to be validated it is essential that early 20thC temps be lower than today; in other words there is motive for manipulation of the data to achieve that result.
The fact that there is some trend similarity between GISS and the others post 1979 is also not surprising, since all post 1979 temp data is subject to the step-up caused by the Pacific Climate Event (PCE) of approximately 1977-8. What this means is that all temp data after that date has a higher base and will automatically have an upward trend; but it is not an anthropogenic trend; rather it is either a response to a one-off Hurst-type phenomenon, or a product of ENSO/PDO variability, if one assumes that the PCE is part of the ENSO/PDO variability. I’m not sure whether anyone has looked at this in detail. Certainly McLean and Quirk’s paper did not, but a subsequent paper by Joe D’Aleo does allude to it;
http://icecap.us/images/uploads/More_on_The_Great_Pacific_Climate_Shift_and_the_Relationship_of_Oceans_on_Global_Temperatures.pdf
As Stewart Franks has observed, given the different climate regimes of PDO’s and their equivalents globally, there has been no upward temp trend over the 20thC.
lucia; I enjoy your work, especially when you have sly digs at his highness, Tamino. But I must confess to be bemused by your statement that your belief in AGW is “based on some understanding of radiative physics. GHG’s should result in at least some warming.” The greenhouse concept has been subject to some conjecture lately with the Gerlich and Tscheuschner paper and its purported refutation by such efforts as by Arthur P. Smith. But it is not necessary to look at that, or the spurious concept of an average global temperature which AGW relies on, to understand that the mechanism chosen by IPCC to prove AGW, CO2, is incorrect. CO2 absorbs primarily at 15 microns; at that point there is competition from H2O by virtue of the vastly greater proportion of CO2 in the atmosphere. When CO2 does absorb some of the reemitted IR from the surface the excited molecule vibrates and transfers kinetic heat to surrounding molecules; but the CO2 also re-emits a photon; this photon removes energy from the CO2 which by dint of this and the vibration has a lower energy state than prior to excitation; how can heating occur? The emitted photon will either go up or down. The up photon will either have an altered wavelength and will thus be not susceptible to absorption by higher CO2, or will have a similar wavelength and be subject to Beer’s Law saturative decline; the extinction coefficient. No heating potential there. The down photon will return to the earth and be re-emitted by a slightly warmer earth with its wavelength altered by virtue of Stefan-Boltzmann and Wien’s Laws; its ability to be absorbed by CO2 is reduced by virtue of that minor wavelength shift. IPCC has recognised this logarithmic decline in CO2 sensitivity and has promoted H2O as a +ve feedback. Rather than go into detail as to why that can’t work because of Spencer’s and Lindzen’s work and energy balance theories like Miskolczi’s, I’ll leave it at that and a request as to how your understanding of radiative physics makes you so confident that GHG’s should result in at least some warming?

Oops. Looks like my embedded links got FUBARd. I wish your comments section had a “preview” function.
Here’s Lubos’ skeptic blog on the modest logarithmic nature of CO2 warming:
http://motls.blogspot.com/2008/01/why-is-greenhouse-effect-logarithmic.html
Here’s another one that examines the current warming trend relative to the basic effect:
http://www.coyoteblog.com/coyote_blog/2007/07/the-60-second-c.html

Fred Nieuwenhuis

4) Other….i.e. not accounting for
A) Negative phase of the PDO
B) Undersea volcanoes (http://www.sciencedaily.com/releases/2008/06/080619093259.htm)
C) Above ground volcanos
D) Solar minimums
E) etc. etc. etc.

Pofarmer

that until we are able to express mathematically a more complete and comprehensive fundamental understanding of the numerous processes that warm and cool the planet we cannot measure our own effects therein by comparison with respect to lengthy periods of time and thus reach any conclusion yea or nay
Absolutely.

blcjr

lucia,
I didn’t realize you had adjusted your OLS standard errors, so that’s going to be the major explanation for the differences. I’ll go back and do it that way, and will probably come up with something much closer to your confidence intervals. I usually dispense with OLS in time series like this, and just do Cochrane Orcutt, or one of the other methods that is designed specifically for autoregressive series.
I don’t know what is hard to understand about the output I posted, unless it is because of the formatting:
VARIABLE COEFFICIENT 95% CONFIDENCE INTERVAL
const 0.532084 0.480818 0.583350
time 6.12870E-06 -0.000983234 0.000995492
There are four columns here. The first is the variable name, “const” for the constant, and “time” for the trend variable. The second is the coefficient, or value, for the variable. Our interest is in the coefficient for time, i.e. the slope, which is 6.12870E-06. Now this is estimated from the monthly data, so if I want to convert it to a yearly figure, I multiply by 12. That gives me
0.000073544400
which agrees exactly with your 7.35444E-05. The last two numbers in that row are the lower and upper limits of the 95% confidence interval. Once I rerun the numbers using robust standard errors, I’ll probably come up with something closer to the results you posted.
Basil

counters

Lucia, I have a comment or two about your analysis, but I’ll need to come back to it perhaps later today after I’ve finished some work; in the meantime, I want to briefly address some of Bobby Lane’s comments in the preceding comment:
Even the most even-keeled of them suffers violently from a lack of data and understanding on such things as aerosols and cloud formation.
This is a point which is commonly brought up, and is a very valid criticism of the current generation of climate models. A caveat – aerosols are much better understood than they used to be, and with the 4th AR (and especially since then), their behaviors have been further clarified. I’d argue that the effects of black carbon (“soot”) versus lighter aerosols are well incorporated into the current generation of models, and will continue to be further refined by the time the 5th AR comes out in two years. As for cloud formation, this is a common claim – that the effects of clouds are some panacea that might counter-act a great deal of CO2-influenced warming. However, there is little concrete evidence to support this. Certainly, the primitive cloud modeling which goes in to most of the current generation of climate models doesn’t get it “right,” but that claim isn’t enough to nullify the entire system of projections.
Until there is a concrete, accepted theory on the positive or negative feedback due to clouds, this point simply isn’t enough to throw out the current projections. Modern computing should allow us to incorporate more complicated cloud physics into the models; however, we need a good way to do so beforehand.
how much do the flawed seatings of the surface stations affect the raw GISS data?
There is likely a systematic bias. That’s why the data is “corrected.” No doubt, the corrections won’t match up precisely with the exact temperature history. However, in the grand scheme of things, if we’re interested in trends, then it doesn’t matter if there is a systematic bias – trends are determined by the distribution of data in a dataset, not necessarily the magnitude of each individual data point. Besides, while it doesn’t match precisely with third-party data sets, GISS correlates well to UAH and others, especially where the long-term trends are concerned.
What problems does Fortran introduce in trying to adjust the data to compensate?
Why do you think there should be? There shouldn’t be any flaws from FORTRAN itself – maybe from the programmers, but not from the computation.
the assumption that trends will remain the same in perpetuity. They do not. If a given period of time is warmer, that does not necessitate the next period of time also being warmer. And there are divergent causes too.
That is not the assumption. I think you’re referring to the SRES, which are forcing scenarios, not warming scenarios. No one is expecting a straightforward, monotonic trend upwards (well, except those who are using the past few years of a local minima to disprove AGW). The atmosphere is a chaotic system, and thus the climate has a great deal of variance. As it is expected that there will be long-term warming, this does not mean in any way shape or form that a short-term period of cooling could not occur. I don’t want to hypothesize on the scenarios you stipulate in your post (although I’m of the opinion that the Sun argument will have a negligible effect), but there are several theories of long-term oscillations which could show up and cause artifacts in the temperature record. However, the presence of micro-trends in the data does not comment on the overall trend and the hypothesis behind it – that CO2 is the catalyst for the long-term, increasing temperature trend.
The climate models floating around seem pre-engineered to fit the designers political preconceptions about the validity of ACC
You’re entitled to your own opinions. However, that doesn’t make them true. Logically speaking, you break parsimony here because you make the assumption that the designers of the climate models are not impartial. Bear in mind that there are many different models from many different parties. Also bear in mind that scientists are trained to be objective and it is a serious violation of scientific ethics to do what you’re suggesting has been done with the models.
I’m no psychologist, but let’s spin the question the other way around: It would seem to me that the questions you’ve floated seem pre-engineered to fit your own political preconception about the validity of AGW. However, this statement is null because I don’t know anything about your “political preconceptions” or your motives for positing these questions. In essence, my statement here is a dumb statement.
You and other denialists are entitled to let political and emotional motives enter your arguments. But if you’re dealing with the science, then at some point you have to take good faith that the scientists are doing impartial work. There are always exceptions, but in general, science is an impartial, impersonal process.

MarkW

JohnV’s analysis, besides being premature, did not deal with the issue of UHI contamination.

MarkW

Knowing that more CO2 must result in some warming, is not the same as saying that this warming is going to be 2C/century. It could just as easily be 0.2C, or even 0.02C.
The warming of the last century is only 0.6C, and that’s before subtracting out the known natural and other man made causes.

blcjr

lucia,
Just a quick update. I can come up with something close to your confidence limits for OLS using the “HAC with prewhitening” option in gretl. And using it, I would “fail to reject” the 2C/century hypothesis with GISS. But I like what the gretl reference has to say about all of the things one can go through to get consistent estimators in OLS when there is autocorrelation:
“The issue of HAC estimation is treated in more technical terms in chapter 18. Here we try to convey some of the intuition at a more basic level. We begin with a general comment: residual autocorrelation is not so much a property of the data, as a symptom of an inadequate model. Data may be persistent through time, and if we fit a model that does not take this aspect into account properly, we end up with a model with autocorrelated disturbances. Conversely, it is often possible to mitigate or even eliminate the problem of autocorrelation by including relevant lagged variables in a time series model, or in other words, by specifying the dynamics of the model more fully. HAC estimation should not be seen as the first resort in dealing with an autocorrelated error process.”
“HAC” refers to “heteroskedasticity autocorrelation corrected” and is the process in gretl of trying to get robust standard errors for OLS. The emphasis on that “not” is in the original. I gather that your reporting OLS results adjusted for autocorrelation is related somehow to your dustup with tamino, which I’ve just briefly gone back and looked over. Apart from the issue of statistical technique, and “pumped up error bars” I’m sure you appreciate that there’s a more fundamental problem with scientific methodology in tamino’s approach: he’s looking for confirming evidence. Karl Popper, of course, is horrified, even from the grave. Given a choice between two techniques, both valid in some sense, with one confirming a hypothesis, and the other rejecting it, proper scientific methodology will hold with the result that rejects the hypothesis over the one that confirms it. In that sense, it should be enough that the 2C/century hypothesis is rejected in all cases using CO.
But if you want to examine this matter even more critically, try to model the lag structure explicitly. For example, with GISS, I find a significant autocorrelation at both -1 and -2 months. Using OLS with robust standard errors and explicit lags at -1 and -2, I get:
VARIABLE COEFFICIENT 95% CONFIDENCE INTERVAL
const 0.226491 0.0917475 0.361235
time -0.000278904 -0.00126461 0.000706806
In your notation, this is -0.33 ± 1.2 C/century. Putting the best face on it, from the standpoint of using the upper limit of the CI, the maximum growth that can be inferred from GISS using these results is +0.85C/century. So even using OLS and robust standard errors I can reject the 2C/century hypothesis using GISS.
And if anybody but lucia is still with me, note well that here even with OLS, we have a negative growth rate for GISS using this time frame. In other words, a positive or zero correlation for GISS stems from a poorly or inadequately specified model, one that fails to take into account the autoregressive lag structure. Cochrane-Orcutt is closer to the mark, but we can do better. Contra tamino, the objective is to get the narrowest confidence limits possible, so as to reduce the probability of failing to reject a hypothesis that should be rejected. At -0.33 ± 1.2 C/century, we’ve got an even better (narrower CI) estimate than you obtained with CO.
I don’t think we can “fail to reject” the 2C/century hypothesis with any of the data, when the autoregressive lag structure is adequately accounted for.
Basil
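
As a rough sketch of the kind of fit Basil describes (an OLS trend with explicit lags at -1 and -2 months and robust standard errors), something along the following lines could be done with statsmodels. The variable names are illustrative, this is not his gretl run, and interpreting the time coefficient as the long-run trend in a model with lagged dependent variables deserves care.

```python
import numpy as np
import statsmodels.api as sm

def trend_with_lags(y):
    """OLS of monthly anomalies on time plus lags at -1 and -2 months,
    with HAC (Newey-West) standard errors; trend reported in C/century."""
    t = np.arange(len(y), dtype=float)
    Y = y[2:]
    X = sm.add_constant(np.column_stack([t[2:], y[1:-1], y[:-2]]))   # time, lag-1, lag-2
    fit = sm.OLS(Y, X).fit(cov_type="HAC", cov_kwds={"maxlags": 2})
    lo, hi = fit.conf_int(alpha=0.05)[1]                             # time coefficient CI
    return fit.params[1] * 1200, lo * 1200, hi * 1200
```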

counters

The big problem with this analysis is that the data is not temporally constrained. In other words, the same problem that was going on in Basil’s latest analysis is going on here – this data is a subset of an even larger dataset.
You neglect to mention that all the hypothesis tests you did are merely ways of analyzing population statistics through sample statistics. The thing is, you’ve got better than a sample here – you have the entire population’s data!
While this analysis isn’t worthless, so to speak, your conclusions are, because the experiment is intrinsically flawed. You’ve got decades of data to work with, yet you’re only using the latest decade of measurements to perform your hypothesis tests. Furthermore, you’re working with a decade which most people agree exhibits an anomalous temperature signal, possibly even a short-term cooling one. This experiment is another exercise in cherry-picking statistical analysis.
If you want to do a serious analysis, then start changing the temporal dimensions of your sample size. Furthermore, don’t look for “X degree trend” – you need to look at a wide range of trends. Optimally, you should be looking at whether there is a trend period, but if you looked at trends in 1 degree increments for starters, adding 1 or 2 years at a time to the dataset, you might have an interesting experiment on your hands.

BobW in NC

Not having any background in meteorology or climate science, some of my questions may not be relevant, but I’m just curious…
It seems to me that there are several lines of discussion/debate that are being confounded in Lucia’s post and comments to it (in no particular order). Thus, at a strict science level:
  1. How should global temperature data be analyzed to determine trends? This point seems to be absolutely critical and is clearly an ongoing issue, as presented in this blog. I understand it, but with my limited math skills can add nothing to it. The input to this blog is incredible, and I can’t imagine that climate science will not be significantly advanced from the discussions here.
2. What are the characteristics of valid global temperature data, if measuring global temperature is valid in the first place? What standards have been or can be established? Granted, these latter may change as new information becomes available. Surfacestations.org seems to be leading advances in this area for surface temperatures.
3. Are the NASA/GISS data valid? Anthony’s opening discussion and earlier posts point out some serious shortcomings that at least need to be taken into account if NASA/GISS data are to be included in any analysis of historical temperature trends. Thus, how are these data to be used? Should they be used at all until the siting issues and appropriate adjustments are worked out? The bigger question (unanswerable definitively at present) – are they deliberately being biased, or is it just a case of very poor judgements and analyses being made? Or, both?
4. How should analyses modeling future temperature trends be used, if at all, given the unknowns surrounding future weather phenomena (solar irradiance, PDO, AMO shifts, etc). That these are unknown is one of the serious shortcomings, if not fatal flaws of the AGW hypothesis, as Bobby Lane pointed out above.
  5. Lucia’s analyses are elegant, interesting, and are in the direction of point 1 above. But, at the same time, is she arguing a) for inclusion of NASA/GISS data as valid in any global temperature measurements, and/or b) that modeling of future temperature trends is a valid tool on which to base (eg) economic decisions (ie, that the AGW hypothesis is valid, and should be acted on)? Or both? If the latter, I contrast this approach with the very cautious one Anthony has taken on numerous occasions to not “predict” a period of global cooling, given the now observed overly long sunspot cycle 23, drop in the AP index, and shifts in the PDO and AMO to cool phase (eg, “let the data be the data” [paraphrase]).
  6. In ANY discussion of the AGW hypotheses based on modeled future increasing temperature trends from GHG forcing of temperatures (especially CO2), shouldn’t the apparent limitations of CO2 as a GHG be prominently featured as a counterpoint? If I understand correctly, these at least include a) the (semi-)logarithmic decline in increased warming associated with increasing concentrations of the gas, b) the limited IR absorption spectra in contrast to water vapor, and c) the relatively minor input to total global atmospheric CO2 “load” (?) from human activity.
Finally, AGW has been taken to an emotionally charged, political, ideological level and belongs in what I’ve heard described as “eco-theology.” To date, any findings, conclusions, meetings, etc, that run contrary to AGW as an established fact are minimized or ignored (or unknown?) by people with the power to make appropriate decisions, based on the best data available. This is an incredibly frustrating situation, to say the least.
Thanks, Anthony (et al), for bringing some honest, substantive discussion in this area.

DAV

I guess what enables SPLODE++ to be the best language for climate forecasting is the over-abundance of CO2 in every module.

Lucia,

There are plenty of reasons outside the basic theory underlying AGW why the magnitude predicted could be inaccurate.

Sure, but isn’t there anything then in basic AGW theory that is testable?

Bob B

Counters, why stop with data over decades? Why not over thousands of years?
The Northern Hemisphere briefly emerged from the last ice age some 14,700 years ago with a 22-degree-Fahrenheit spike in just 50 years, then plunged back into icy conditions before abruptly warming again about 11,700 years ago. Massive “reorganizations” of atmospheric circulation coincided with each temperature spurt, with each reorganization taking just one or two years, according to a new study.

Bob B

Counters
http://www.sciencedaily.com/releases/2008/06/080619142112.htm
The data set used for the past 30-100yrs is useless and PROVES nothing if you want to go that far