For some time I’ve been critical (and will likely remain so) of the data preparation techniques associated with the NASA GISS global temperature data set. The adjustments, the errors, the train-wreck FORTRAN code used to collate it, and the interpolation of data in the polar regions where no weather stations exist have given me so little confidence in the data that I’ve begun to treat it as an outlier.
Lucia, however, makes a compelling argument for not discarding it as such, but for treating it as part of a group of data sets. She also applies some compelling logical tests that give insight into the entire collection of data sets. As a result, I’m going to temper my view of the GISS data a bit and look at how it may be useful in the comparisons she makes.
Here is her analysis:
Surface Temperatures Trends Through May: Month 89 and counting!
a guest post by Lucia Liljegren
Trends for the global mean surface temperature from five groups (GISS, HadCRUT, NOAA/NCDC, UAH MSU and RSS) were calculated from Jan 2001 to May 2008 using both Ordinary Least Squares (OLS), with error bars computed using the method in Lee & Lund, and Cochrane-Orcutt, and were compared to the IPCC AR4’s projected central tendency of 2C/century for the trend during the first few decades of this century.
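For readers who want to see the mechanics, below is a minimal Python sketch of the kind of calculation described: an OLS trend with error bars inflated for red noise. This is my own illustration, not the code behind the post; it uses the common large-sample (1+r)/(1-r) variance-inflation approximation for AR(1) residuals rather than Lee & Lund’s exact formula, and all function and variable names are hypothetical.

```python
import numpy as np
from scipy import stats

def ols_trend_ar1_ci(y, dt_years=1.0 / 12.0, alpha=0.05):
    """OLS trend with a red-noise-inflated confidence interval.

    y        : 1-D array of monthly temperature anomalies (C)
    dt_years : time step in years (monthly data -> 1/12)
    Returns (trend, halfwidth), both in C/century.
    """
    n = len(y)
    t = np.arange(n) * dt_years              # time in years
    b, a = np.polyfit(t, y, 1)               # slope b, intercept a
    resid = y - (a + b * t)
    # Lag-1 autocorrelation of the residuals ("red noise")
    rho = np.corrcoef(resid[:-1], resid[1:])[0, 1]
    # Naive OLS variance of the slope...
    s2 = np.sum(resid ** 2) / (n - 2)
    var_b = s2 / np.sum((t - t.mean()) ** 2)
    # ...inflated by the usual AR(1) correction factor
    var_b *= (1 + rho) / (1 - rho)
    tcrit = stats.t.ppf(1 - alpha / 2, n - 2)
    return 100 * b, 100 * tcrit * np.sqrt(var_b)   # C/century
```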
The following results for mean trends and 95% confidence intervals were obtained:
- Ordinary Least Squares, average of data sets: The temperature trend is -0.7 C/century ± 2.3 C/century. This is inconsistent with the IPCC AR4 projection of 2C/century to a confidence of 95% and is considered falsified based on this specific test.
- Cochrane Orcutt, average of data sets: The temperature trend is -1.4 C/century ± 2.0 C/century. This is inconsistent with the IPCC AR4 projection of 2 C/century to a confidence of 95% and is considered falsified based on this specific test for an AR(1) process.
- OLS, individual data sets: All except GISS Land/Ocean result in negative trends. The maximum and minimum trends reported were 0.007 C/century and -1.28 C/century for GISS Land/Ocean and UAH MSU respectively. Based on this test, the IPCC AR4 2C/century projection is rejected to a confidence of 95% when compared to HadCRUT, NOAA and RSS MSU data. It is not rejected based on comparison to GISS and UAH MSU.
- Cochrane-Orcutt, individual data sets: All individual data sets result in negative trends. The IPCC AR4 2C/century projection is falsified by each data set individually.
- The null hypothesis of 0C/century cannot yet be excluded based on data collected since 2001. This does not mean warming has stopped. It only means that the uncertainty in the trend is too large to exclude 0C/century based on data since 2001. Bar-and-whiskers charts showing the range of trends falling inside the ±95% uncertainty intervals for selected start dates are discussed in Trends in Global Mean Surface Temperature: Bars and Whiskers Through May.
The OLS trend for the mean, and the C-O trends for the individual groups, are compared to the data in the figure immediately below:
Figure 1: The IPCC projected trend is illustrated in brown. The Cochrane-Orcutt trend for the average of all five data sets is illustrated in orange, with ±95% confidence intervals in hazy orange. The OLS trend for the average of all five data sets is illustrated in lavender, with ±95% uncertainty bounds in hazy lavender. Individual data sets were fit using Cochrane-Orcutt and are also shown.
Discussion of Figure 1
The individual weather data in figure 1 are scattered, and show non-monotonic variations as a function of time. This is expected for weather data; some bloggers like to refer to this scatter as “weather noise”. In the AR4, the IPCC projected a monotonically increasing trend in the ensemble average of the weather, often called the climate trend. For the first 3 decades of the century, the central tendency of the climate trend was projected to vary approximately linearly at a rate of 2C/century. This is illustrated in brown.
The best estimates for the linear trend consistent with the noisy weather data were computed using Cochrane-Orcutt (CO), illustrated in orange, and Ordinary Least Squares (OLS) illustrated in lavender.
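Cochrane-Orcutt, mentioned throughout, works by estimating the lag-1 autocorrelation of the residuals, quasi-differencing the data to whiten the AR(1) noise, and refitting until the estimate converges. Here is a minimal sketch, again my own illustration with hypothetical names rather than the code used for these results:

```python
import numpy as np

def cochrane_orcutt(t, y, tol=1e-6, max_iter=50):
    """Iterative Cochrane-Orcutt fit of y = a + b*t with AR(1) errors.

    Returns (a, b, rho): intercept, slope, and the converged
    lag-1 autocorrelation of the residuals.
    """
    b, a = np.polyfit(t, y, 1)                 # initial OLS fit
    rho = 0.0
    for _ in range(max_iter):
        resid = y - (a + b * t)
        rho_new = np.sum(resid[1:] * resid[:-1]) / np.sum(resid[:-1] ** 2)
        # Quasi-difference the series to remove the AR(1) structure
        y_star = y[1:] - rho_new * y[:-1]
        t_star = t[1:] - rho_new * t[:-1]
        b, a_star = np.polyfit(t_star, y_star, 1)
        a = a_star / (1 - rho_new)             # recover original intercept
        if abs(rho_new - rho) < tol:
            break
        rho = rho_new
    return a, b, rho
```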
Results for individual hypothesis tests
Some individual bloggers have expressed a strong preference for one particular data set or another. Like Atmoz, I prefer not to drop any widely used metric from consideration. However, because some individuals prefer to examine results for each group separately, I also apply the technique to each individual measurement system and describe the current results of two hypothesis tests for each.
The first hypothesis tested, treated as the “null”, is the IPCC’s projection of 2C/century. Currently, this is rejected at p=95% under ordinary least squares (OLS) using data from 3 of the five services, but it is not rejected for UAH or GISS. The hypothesis is rejected against all 5 services when tested using C-O fits.
The second hypothesis tested is the “denier’s hypothesis” of 0C/century. This hypothesis cannot be rejected using data starting in 2001. Given the strong rejection with historic data, and the large uncertainty in the determination of the trend, this “fail to reject” result is likely due to “type 2” or “beta” error.
That is: The “fail to reject” is likely a false negative. False negatives, or failure to reject false results are the most common error when hypotheses are tested using noisy data.
Results for individual tests are tabulated below:
| Group | OLS Trend (C/century) | OLS vs 2C/century | OLS vs 0C/century | CO Trend (C/century) | CO vs 2C/century | CO vs 0C/century |
|---|---|---|---|---|---|---|
| Average of 5 | -0.7 ± 2.3 | Reject | Fail to reject | -1.4 ± 2.0 | Reject | Fail to reject |
| GISS | 0.0 ± 2.3 | Fail to reject | Fail to reject | -0.4 ± 2.0 | Reject | Fail to reject |
| HadCRUT | -1.2 ± 1.9 | Reject | Fail to reject | -1.6 ± 1.6 | Reject | Fail to reject |
| NOAA | -0.1 ± 1.7 | Reject | Fail to reject | -0.3 ± 1.5 | Reject | Fail to reject |
| RSS MSU | -1.3 ± 2.3 | Reject | Fail to reject | -2.1 ± 2.1 | Reject | Fail to reject |
| UAH MSU | -0.8 ± 3.6 | Fail to reject | Fail to reject | -2.0 ± 3.1 | Reject | Fail to reject |
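To make the reject / fail-to-reject columns concrete: each entry is a two-sided test in which the hypothesized trend is rejected when it falls outside the fitted trend’s ±95% interval. A trivial sketch of the decision rule (my own illustration, using the “Average of 5” row above):

```python
def test_hypothesis(trend, halfwidth, h0):
    """Reject h0 if it lies outside trend +/- halfwidth (all C/century)."""
    return "Reject" if abs(trend - h0) > halfwidth else "Fail to reject"

# Average of 5, OLS: trend -0.7, 95% halfwidth 2.3
print(test_hypothesis(-0.7, 2.3, 2.0))   # -> Reject (2C/century)
print(test_hypothesis(-0.7, 2.3, 0.0))   # -> Fail to reject (0C/century)
```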
The possibility of False Positives
In the context of this test, rejecting a hypothesis when it is true is a false positive. All statistical tests involve some assumptions; those underlying this test assume we can correct for red noise in the residuals to OLS using one of two methods: A) the method recommended in Lee & Lund, or B) Cochrane-Orcutt, a well-known statistical method for time series exhibiting red noise. If these methods are valid and used to test data, we expect to incorrectly reject true hypotheses at p=95% about 5% of the time. (Note, however, that finding “reject” in February, March, April and May does not count as separate rejections, as the rejections are correlated with each other, being largely based on the same data.)
Given the results we have found, the 2C/century projection for the first few decades of this century is not borne out by the current data for weather. It appears inconsistent with underlying trends that could possibly describe the particular weather trajectory we have seen.
There are some caveats that have been raised in the blog-o-sphere. There has been some debate over methods to calculate uncertainty intervals and/or whether one can test hypotheses using short data sets. I have been examining a variety of the possible objections. I find:
- In August 2007, in a post entitled “Garbage Is Forever”, Tamino used and defended the use of OLS adjusted for red noise to perform hypothesis tests on short data sets, going into some detail in response to criticism by Carrick, where Tamino stated:
For a reasonable perspective on the application of linear regression in the presence of autocorrelated noise see Lee & Lund 2004, Biometrika 91, 240–245. Your claims that it’s “pretty crazy, from a statistics perspective” and “L2 is only reliable, when the unfit variability in the data looks like Gaussian white noise” raises serious doubts about your statistical sophistication.
In later posts, when this method began falsifying the IPCC AR4 projection of 2 C/century, Tamino appears to have changed his mind about the validity of this method, possibly suggesting the uncertainty intervals are too high.
The results here simply show what anyone would obtain using this method: according to it, the 2C/century projection is falsified. Meanwhile, re-application to the data since 2000 indicates there has been no significant warming since 2000, as illustrated here.
- Gavin Schmidt suggested that “internal variability (weather!)” noise results in a standard error of 2.1 C/century in 8 year trends; this is roughly twice the standard error obtained using the method of Lee & Lund, above. To determine whether this magnitude of variability makes any sense, I calculated the variability of 8 year trends in the full thermometer record, including volcanic eruptions and the measurement noise due to the “bucket to jet inlet” transition. I also computed the variability during a relatively long historic period with no volcanic eruptions. The standard error of 2.1 C/century suggested by Gavin’s method exceeds both the variability in the thermometer record for the real earth including volcanic periods and that for periods without volcanic eruptions. (The standard error in 8 year trends computed during periods with no volcanic eruptions is approximately 0.9 C/century, which is smaller than that estimated for the current data.) I attribute the unphysically large spread in 8 year trends displayed by the climate models to the fact that the model runs include:
a) different historical forcings; some include volcanic eruptions, some don’t. This results in variability in initial conditions across model runs that does not exist on the real earth,
b) different forcings during any year in the 20th century; some include solar, some don’t,
c) different parameterizations across models, and
d) possibly, the inability of some individual models to reproduce the actual characteristics of real-earth weather noise. This is discussed in Distribution of 8 Year OLS Trends: What do the data say?
- Atmoz has suggested the flat trend is due to ENSO, and JohnV suggested considering the effect of the solar cycle. The issue of ENSO and remaining correlation in lagged residuals has been discussed in previous posts, and the solar cycle is addressed here.
- The variability of all 8 year trends that can be computed in the thermometer record is 1.9 C/century; computing with a set of windows spaced at 100 month intervals results in a standard error of 1.4 C/century. These represent the upper bound of standard errors that can be justified based on the empirical record. The variability includes features other than “weather noise”: for example, volcanic eruptions, non-linear variations in forcing due to GHGs, and measurement uncertainty, including the “bucket to jet inlet transition” noise. So, these represent the upper limit on variability in experimentally determined 8 year trends. Those who adhere to these bounds will conclude the current trends fall inside the uncertainty intervals for the data. If the current measurement uncertainty is as large as that experienced during the “bucket to jet inlet transition” associated with World War II, they are entirely correct. (A sketch of this trend-spread calculation follows this list.)
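For concreteness, here is a rough sketch of the kind of trend-spread calculation described in the last bullet: compute OLS trends over 96-month windows of a monthly anomaly series and take their standard deviation. This is my own illustration under stated assumptions; `anoms` is a hypothetical array of monthly anomalies in C.

```python
import numpy as np

def eight_year_trend_spread(y, window=96, step=1):
    """Standard deviation (C/century) of OLS trends over all
    `window`-month segments of a monthly series, `step` months apart."""
    t = np.arange(window) / 12.0             # years within each window
    trends = []
    for start in range(0, len(y) - window + 1, step):
        b = np.polyfit(t, y[start:start + window], 1)[0]  # slope, C/year
        trends.append(100 * b)               # convert to C/century
    return np.std(trends, ddof=1)

# With a monthly anomaly array `anoms` (hypothetical), compare:
#   eight_year_trend_spread(anoms, step=1)    # all overlapping windows
#   eight_year_trend_spread(anoms, step=100)  # windows spaced 100 months apart
```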
After consideration of the various suggestions about uncertainty intervals, the issues of ENSO, solar cycles and other features, and the magnitude of the pre-existing trend, I think overall the data indicate:
- It is quite likely the IPCC projection of an underlying climate trend of 2C/century exceeds the current underlying trend. I cannot speculate on the reasons for the overestimate; they may include some combination of a poor forecast of emissions when developing the SRES, the effect of inaccurate initial conditions for the computations of the 20th century, inaccuracy in the GCMs themselves, or other factors.
- It remains likely the warming experienced over the past century will resume. While the 2C/century projection is falsified using both OLS and C-O, the flat trend is entirely consistent with the previously experienced warming trend. In fact, additional analysis (which I have not shown) would indicate the current trend is not inconsistent with the rate of warming seen during the late 90s. It is entirely possible that natural factors, including volcanic eruptions depressing the temperature during the early 90s, caused a positive excursion in the 10 year trend during that period. Meanwhile, the PDO flip may be causing a negative excursion affecting the trend. These sorts of excursions from the mean trend are entirely consistent with historic data. Warming remains consistent with the data. As the theory attributing the warming to GHGs appears sound and predates the warming of the 80s and 90s, I am confident it will resume.
What will happen over the next few years
As Atmoz warns, we should expect to see the central tendency of trends move around over the next few years. What one might expect is that, going forward, we will see the trend slowly oscillate about the mean, but eventually the magnitude of the oscillation will decay.
One of my motives in blogging this is to show this oscillation and decay over time and to permit doubters to see that the positive trend resumes.
I will now set off on the sort of rampant speculation permitted to bloggers. When the next El Nino arrives, we will see a period where the trends go positive. Given trends from the 70s through the 90s, and current trends, it seems plausible to me that, using the methods I describe here, we will experience some 89 month OLS trends of 3 C/century to 4 C/century or even greater sometime during the next El Nino. At which point, someone will likely blog about it the moment that 89 month trend occurs.
This result will be entirely consistent with the current findings. An OLS (or CO) trend of 3-4 C/century is likely even if the true trend is less than 2C/century, and even if CO and OLS do give accurate uncertainty intervals.
What seems unlikely? I’d need to do more precise calculations to find a firm dividing line between consistent and inconsistent. For now, I’ll suggest that unless there is a) a stratospheric volcanic eruption, b) the much anticipated release of methane from the permafrost, or c) a sudden revision in the method the agencies use to estimate GMST, I doubt we’ll see an 89 month trend greater than 4.6 C/century within the next five years. (I won’t go further because I have no idea what anyone is emitting into the atmosphere!)
Meanwhile, what is the magnitude of the trend for the first three decades of this century? That cannot be known with precision for many years. All I can say is: the current data strongly indicate the current underlying trend is less than 2C/century, and likely less than 1.6 C/century!
Counters,
I find your insinuations of “cherry-picking” and “political and emotional motives” by the people doing statistical analysis here rather unfair. We all have our biases, including yourself, and to assume that this data analysis is flawed by other motives because it shows something that you consider insignificant reveals your own biases.
Yes, 10 years is a rather short period of time to come to any conclusions regarding climate trends. However, so is 20 years, and the 20 years of warming from 1978-1998 were used COUNTLESS times by AGW advocates to illustrate how the earth was apparently warming up in response to GHGs. Now, however, that the statistical trends are not showing the same rate of warming, we are to dismiss them as “insignificant” and “anomalous”. It’s a double standard, and it comes back to how much one believes in the premise of significant CO2-based warming.
Remember, you may consider the science “solid”, and it may be so…but many scientific theories have gone down in flames through the years…most, in fact. Why? Because we don’t have all the answers yet. Therefore, to criticize those who are open to more than one answer for the climate change that has been observed is close-minded on your part.
counters,
I agree with the general sense of what I think you are saying. I chimed in on the 11 year hiatus discussion, not because I thought 11 years was the right period to be using, but because it was a good example to illustrate the use of loess regression.
If I were trying to do what I understand lucia to be doing, which is to test the falsifiability of a hypothesis made in 2001, I’d look into using data up to 2001, and then forecast from that point, determine the confidence interval of the forecast, and see whether a 2C/per century projection is falsified.
For example, here is a linear fit through 30 years of GISS (using robust errors) data through 2000, and then projected to the present (in this case, actually 2008:04):
http://s3.tinypic.com/vyp8c5.jpg
Now what lucia should do is at the beginning of the forecast period, plot a linear trend with a slope of 2C/century and see if it is outside the green error bars of the forecast. I can tell you that it isn’t. By the end of the forecast period, the projected anomaly would be ~0.55, and we can see that the upper limit of the confidence interval is well above this at the end of the forecast period.
So, have we verified the IPCC 2C/century projection? No, we’ve simply failed to reject it. And I don’t know if lucia appreciates it or not, but when we start working with forecasts, as opposed to sample means, the confidence intervals are always considerably greater.
Note, as well, though, that the forecast confidence intervals allow for the possibility of substantial negative growth, and despite that, the recent negative downturn is below the bottom of the error band. Now it is too soon to read much into this, since 5% of the time we’ll have observations falling outside of the error bands.
This has got me wondering, now, what a similar forecast would look like using one of the other metrics. More on that later.
Basil
Hey, counters!
Here’s the problem: We lay skeptics don’t hear much about anything but doom & gloom from the climate crowd. This is utterly frustrating to us; the received wisdom comes across as sparse, overstated and manipulative because it does not explain the breadth or depth of the scenarios. A little science presentation of negative vs. positive forcings would go a long way toward clearing up so many misapprehensions. These folk aren’t climate “denialists”; most of them understand that CO2 is important to the Earth’s climate, but they are certainly rejectionist toward claims of a serious warming trend. There are those who say CO2 has zero effect, but that’s not what Bob Carter, Roger Pielke and other climate moderates are saying. They’re not rejectionists or denialists, not in the least, but they doubt the worst-case scenarios.
Recent entrants into the skeptic side of the discussion come from unlikely sources:
+ V. Ramanathan & Charlie Zender, on the broader window of opportunity that lies in the offing from soot mitigation (Ramanathan & Zender are not skeptics by any measure). From the dearth of public advocacy regarding soot mitigation, it appears that soot is the carbon that must not be named. So the question of an imminent “tipping point” seems overstated. And the role of soot in the decimation of Greenland’s peripheral ice sheets is not being mentioned, even though the pictures of black icebergs pretty much say it all.
+ Kevin Trenberth of NCAR, on the Argo data showing a great deal of expected heat is missing & may have radiated back out into space. Many suffer misapprehensions on the latent ocean heat content b/c all we hear about are air temperatures. We have some confusing trends here WRT sea level, boreal ice sheet loss (soot?) and ocean temperatures. But if the seas can offload heat in ways heretofore unanticipated, it might provide some bigger insights into the thermodynamic capacities of Earth systems.
+ Other researchers, finding more modest water vapor levels in the middle troposphere and over Antarctica. As for the latter, I must wonder if this doesn’t reflect on the interglacial record as well: the ice ages may have in fact been much drier, with the interglacials being driven by unlocked water vapor, well past the effects of CO2. If the Aqua data aren’t showing water vapor piling up in the atmosphere, then what’s going on? ***
+ Aerosols: The big mystery bus. If it’s really conceivable that worldwide aerosols could mask that much GHG warming, then everyone would be delighted to know what offset we’re looking at. I’ve read that global aerosols might be masking GHG effects by some margin, around -1.1 degrC, but seasonal soot might be countering that masking effect within regional brown clouds with a net warming of its own of +0.3 degrC (0.9 W/m2). The def’n of “regional” gets weird here, since an area encompassing the vast expanses of the E. Indian Ocean, E. Asia, the boreal region & the vast Pacific is truly global in scope, with soot adding 40% warming to the air.
+ And finally, the sun. So far, average TSI has slackened on the order of -0.1 degrC since the early 1990’s, and is apparently due to dim even more, on the order of another -0.1 to -0.2 degrC. Understand that Drew Shindell modeled a -0.3 to -0.4 degrC effect in the Maunder Minimum/LIA due to the decrease of sunspot facular UV warming the upper troposphere (stratosphere), so his model and our current solar trend are consistent in decreased TSI.
So:
If the boreal ice loss is 90 percent due to soot deposition, why can’t this be reversed? I would think a full restoration effort would stem sea level rise.
If the oceans are not heating up at a marked rate then where’s the dangerous pipeline of latent heat that’ll blindside us 50 years from now?
If the middle troposphere (around 300 to 400mb) isn’t as humid as had been modeled, then how will the middle and upper atmosphere enter into the surface warming problem? (I believe the UAH MSU data show a cooling trend in the middle troposphere, not the stratosphere).
If the dimming sun is going to net offset -0.3 degrC by 2020, won’t this only help?
These are the questions climate moderates are asking be considered.
***I’ve noted that a casual comparison of the paleo record appears to show water vapor (light oxygen availability) tracking more consistently with temperature than CO2. Would you like to comment on this? Here’s a Q&D graphic: http://i30.tinypic.com/izon5h.jpg
Lucia:
I seem to have missed the argument for GISS data. Is it simply that it is consistent with the others under the suite of tests chosen?
Red noise: I take this to be with respect to OLS where the data are not, by some measure, random. Could you expatiate a bit on the analogy and its importance?
This discussion reminds me of NOAA’s prediction for the 2006 hurricane season: 80% chance of above normal activity, 15% chance of normal, and only 5% chance of below normal. When the season actually came in well below “normal”, no doubt the good folks at NOAA were high-fiving each other and gloating, “Yessss! We NAILED it!”
Statistics for the win!
One of the founding arguments of the AGW viewpoint is that recent warming has not happened before (i.e., Mann’s hockey stick). I would suggest that has been discredited, not only by climateaudit et al, but also by recent studies of ice cores in Greenland and Antarctica (http://www.canada.com/topics/news/topics/technology/science/story.html?id=82fe2dc6-5c2d-430c-bd12-58fa1bbb5fc8 for one)
So if this is the case, what is the foundation for claiming the warming that we are currently experiencing is anything but a natural phenomenon? As far as I have seen, there hasn’t been any convincing argument to state otherwise.
“But if you’re dealing with the science, then at some point you have to take good faith that the scientists are doing impartial work. There are always exceptions, but in general, science is an impartial, impersonal process.”
Friend, you are seriously deluded; this mythology went out with Glenn Seaborg and his tweed jacket. Are you an RC spinmeister by trade or avocation?
Lucia,
You state: “For example:
a) The IPCC could over predict the amount of GHGs in their SRES. That is, they could think CO2 or methane would rise faster than it really did.”
I know you are familiar with Roger Pielke, Jr.’s work earlier this year on “falsification” and he presented a graph showing the IPCC pretty much got the climb in GHGs right. Are you saying you disagree with Roger/IPCC on the rate of increase in GHG’s?
If you (more or less) agree that they got the GHG’s right, could you clarify what you are trying to say?
Mike
To me, an interesting question would be whether using these data sets for their entire time period falsifies a hypothesis that warming is the same as it has been since the LIA. I don’t think very many people would expect to confirm a hypothesis of no warming over the 30-year period, but the question of whether anthropogenic influences cause the temperature trends to deviate statistically from the null hypothesis that the warming pattern since the Maunder period has continued seems to be an important one.
I want to “second” jared’s comment to counters about the double standard of the alarmists. While there is an underlying trend that goes back to the LIA, there are natural climate variations around that trend. Much of the hyped warming of the 1980’s and 1990’s was just part of natural climate variation, and not evidence of AGW. To dismiss the current cooling as cyclical while hyping any warming as evidence of a secular trend driven by anthropogenic factors is hypocritical.
Basil
Basil–
I think my initial confusion was due to the lack of tables in comments. Also, you had per/month, so I couldn’t just figure the numbers until I got further down in the comment.
You are correct: the OLS is there because several bloggers are insisting that OLS is the “only way”. Of course, this idea is not universally believed. However, the fact is, we get rejections if we do it their way or if we use Cochrane Orcutt, which is specifically valid for data that have AR(1) residuals. Other methods would be better, but I don’t know them.
I do have a couple of alternate ways to look at this stuff; one will be shown sometime next week. You’ll see it looks a little like this
“Now what lucia should do is at the beginning of the forecast period, plot a linear trend with a slope of 2C/century and see if it is outside the green error bars of the forecast. I can tell you that it isn’t.”
There is a difficulty though: How do you figure out the green error bars for weather noise in the IPCC’s forecast?
The IPCC doesn’t describe their “weather noise”. So, they don’t make a forecast in that sense. They do show spaghetti diagrams, but it’s not clear they as a body consider those to describe “weather”. They don’t give any statistics for what would be “weather noise”, and only project the central tendencies for the slope. It’s not at all clear that WG1 has any agreement on how the scatter in the runs relates to “weather noise”.
They leave the definition of “mean temperature in 2000” ambiguous. In my graph, I assume they will define it the way they defined “mean temperature in 1990”.
So, I’m not sure how it’s possible to do what you suggest, while still testing what the IPCC AR4 really, honestly told people in any unambiguous way.
Counters—
There is a reason for testing the subset of data. The IPCC has made a specific projection. Their projection is for GMST to display a mean trend of 2C/century during the first 3 decades of this century. Weather noise, whose characteristics are not specified, results in excursions from mean behavior.
So, the idea is to test that projection. As far as I am aware, that specific hypothesis can’t be tested using data from 1900-2000. I haven’t failed to say this is the thing I’m testing: I’ve said it over and over!
If you have other questions one might ask or test using statistics, I think that’s great. It might be useful for you to illustrate what you mean, and post it somewhere. Blogs are free. In my opinion, the more different analyses done to answer the range of questions people might have, the better. I’d encourage you to go ahead and see if you can discover the answer to the question you are telling us is more interesting than the one I’m testing, and communicate it to us.
Gary—
Over the long haul, the GISS data isn’t really an outlier. Also, even if you compare now, you’ll see in the table of individual trends, that while it’s got the highest trend, it’s not ridiculously high. There are five measurement groups. One is highest. One is lowest. There is scatter. GISS is right in there.
Mike
No, I’m not saying the IPCC got the climb wrong. I’m not saying they got it right. But, for all we know, Roger doesn’t have access to any and all relevant data. Mostly, those are examples of the type of thing they might, hypothetically, have right or wrong. I’m trying to capture a qualitative issue: there could be things wrong that have nothing to do with the nuts and bolts of calculating general circulations.
I suspect we may not even know some of it after, say, 2003. How quickly do they know aerosols? There are other questions: How much does soot on snow matter? Is the amount of soot known? Is that coded in the models?
There could be “unknown-unknowns”. But in that case, I can’t add them to my list. So, instead, I list the types of known errors that could hypothetically happen.
Counters–
I sort of missed that at first. You think most people agree with this? Hmmm….
So, in reality you are agreeing with my conclusion: the current trend is not consistent with 2C/century warming. Your criticism appears to be that I am wasting my time showing that the inconsistency with 2C/century is borne out by a statistical test, because everyone already knows the data are inconsistent with 2C/century.
Is that what you are saying? Because if you are, I suggest you go visit Eli, Tamino, RC, Annan, Stoat etc. and tell them that everyone concedes the current period is inconsistent with predicted 2C/century warming. 🙂
Counters,
“But if you’re dealing with the science, then at some point you have to take good faith that the scientists are doing impartial work. There are always exceptions, but in general, science is an impartial, impersonal process.”
Wrong. This is science: show me your data, your method, your code, etc., and I’ll try to reproduce your results. There is absolutely no “faith” involved in the process. What you’re describing is a climate science religion, which won’t pass muster around here.
Somewhat OT, sea levels have stopped rising over the last year or so. It’s too early to call a trend, but it indicates we may have crossed a cooling tipping point.
http://sealevel.colorado.edu/current/sl_ib_ns_global.jpg
lucia,
You asked “There is a difficulty though: How do you figure out the green error bars for weather noise in the IPCC’s forecast?”
Are you asking how I generated the kind of green error bars in the graph I did?
Or are you asking me how I would generate error bars for the IPCC’s actual forecast?
Let’s take the second possibility first. The short answer is I don’t know how. Actually, if I understand correctly, the IPCC’s “forecast” is more in the nature of a model run using GCM’s, and not strictly a forecast based on historical temperature data and the noise in that data. I imagine that the models have some notion of the uncertainty in them, but I don’t think that’s really what you are asking. I think you want to compare the model projections to what can be independently analyzed using actual data. The only way I know how to do that is to fit a model of some kind to the historical time series data and generate projections with error bars, and then see if the IPCC “forecast” is consistent with that.
Which brings me back to the first form of the question: Are you asking how I generated the kind of green error bars in the graph I did? I use the econometric package “gretl” which has built in routines for forecasting and developing confidence intervals (error bars) for the forecasts. A general introduction can be found here:
http://www.duke.edu/~rnau/three.htm
I call your attention to the following:
“Most forecasting software is capable of performing this kind of extrapolation automatically and also calculating confidence intervals for the forecasts. (The 95% confidence interval is roughly equal to the forecast plus-or-minus two times the estimated standard deviation of the forecast error at each period.)”
The sd of a forecast error is not the same as the sd (or standard error) of slope coefficient. Take a look at the final formula shown in this wikipedia article:
http://en.wikipedia.org/wiki/Prediction_interval
Or, if the link is valid, the image here:
http://upload.wikimedia.org/math/0/e/6/0e612bd396c703d236d84d69baa78a3e.png
See the “1” in the equation? This substantially widens the confidence interval (what wikipedia calls a prediction interval). This is because we are predicting a single event, not the mean of repeated events. This means that confidence intervals for forecasts are going to be much wider than what you would imagine from a confidence interval for the slope of a trend line (which is a mean estimator, not an estimate of a single event).
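For concreteness, here is a minimal sketch of that prediction interval for a single new observation from an OLS fit. This is my own illustration, not gretl’s internals, and it ignores autocorrelation, which would widen the interval further.

```python
import numpy as np
from scipy import stats

def prediction_interval(t, y, t_new, alpha=0.05):
    """95% prediction interval at t_new from an OLS fit y = a + b*t.

    The leading 1 under the square root is the term discussed above:
    it is what widens a forecast interval for a single observation
    relative to a confidence interval for the fitted mean.
    """
    n = len(y)
    b, a = np.polyfit(t, y, 1)
    resid = y - (a + b * t)
    s = np.sqrt(np.sum(resid ** 2) / (n - 2))
    se = s * np.sqrt(1.0 + 1.0 / n
                     + (t_new - t.mean()) ** 2 / np.sum((t - t.mean()) ** 2))
    tcrit = stats.t.ppf(1 - alpha / 2, n - 2)
    yhat = a + b * t_new
    return yhat - tcrit * se, yhat + tcrit * se
```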
All of which is to say that you’ll have an order of magnitude greater difficulty rejecting the 2C/century forecast than you may realize.
Basil
Basil–
I’m asking this one.
My interest is in comparing information and projections the IPCC actually makes. To the extent they are vague, it is difficult to do. It also means that later on, the working group for the next report has to decide, to some extent, what precisely the projection really was.
That means, one can pick interpretations to make the data comparison seem better or worse– depending on how one is inclined to make the projections look.
I understand how you get your forecasts projected from time series and the difference between the error in the slope and error in the forecast.
lucia,
I kinda figured you knew the difference (about forecast error), but just couldn’t be sure from my uncertainty about your question. (I.e., my bad, not yours.) And it seems to me that we’ve been computing CI’s based on the error of the slope rather than the error of a forecast, which is what we ought to be doing. And then further, if one could figure out how to do a CI for the IPCC “forecast” and if it was done properly, including bootstrapping, that by the time you get out 100 years the CI would be so huge as to be pretty useless. Which would actually be an intellectually honest acknowledgment that any “forecast” out 100 years must be taken with a grain of salt, and anyone who insists it be taken seriously is full of it.
Basil
Michael Hauber, if you examine the issue at CA more, you’ll discover that the “good” stations aren’t analyzed/corrected for previous station moves, only for their present location. IOW, the station might have an acceptable rating now, but could’ve been in a parking lot just a few yrs before. So John V’s analysis is a start, but nowhere near being able to give a reliable trend over a station’s full history.
There’s a long, long way still to go…
Ben Flurie