Forecasting Guru Announces: "no scientific basis for forecasting climate"

It has been an interesting couple of days. Today yet another scientist has come forward with a press release saying that not only did their audit of IPCC forecasting procedures find that they “violated 72 scientific principles of forecasting”, but that “The models were not intended as forecasting models and they have not been validated for that purpose.” This organization should know: it certifies forecasters for many disciplines and, in conjunction with Johns Hopkins University in Washington, DC, offers a Certificate of Forecasting Practice. The story below originally appeared in the blog of Australian Dr. Jennifer Marohasy. It is reprinted below, with some pictures and links added for WUWT readers. – Anthony


J. Scott Armstrong, founder of the International Journal of Forecasting

Guest post by Jennifer Marohasy

YESTERDAY, a former chief at NASA, Dr John S. Theon, slammed the computer models used to determine future climate claiming they are not scientific in part because the modellers have “resisted making their work transparent so that it can be replicated independently by other scientists”. [1]

Today, a founder of the International Journal of Forecasting, Journal of Forecasting, International Institute of Forecasters, and International Symposium on Forecasting, and the author of Long-range Forecasting (1978, 1985), the Principles of Forecasting Handbook, and over 70 papers on forecasting, Dr J. Scott Armstrong, tabled a statement declaring that the forecasting process used by the Intergovernmental Panel on Climate Change (IPCC) lacks a scientific basis. [2]

What these two authorities, Drs Theon and Armstrong, are independently and explicitly stating is that the computer models underpinning the work of many scientific institutions concerned with global warming, including Australia’s CSIRO, are fundamentally flawed.

In today’s statement, made with economist Kesten Green, Dr Armstrong provides the following eight reasons as to why the current IPCC computer models lack a scientific basis:

1. No scientific forecasts of the changes in the Earth’s climate.

Currently, the only forecasts are those based on the opinions of some scientists. Computer modeling was used to create scenarios (i.e., stories) to represent the scientists’ opinions about what might happen. The models were not intended as forecasting models (Trenberth 2007) and they have not been validated for that purpose. Since the publication of our paper, no one has provided evidence to refute our claim that there are no scientific forecasts to support global warming.

We conducted an audit of the procedures described in the IPCC report and found that they clearly violated 72 scientific principles of forecasting (Green and Armstrong 2008). (No justification was provided for any of these violations.) For important forecasts, we can see no reason why any principle should be violated. We draw analogies to flying an aircraft or building a bridge or performing heart surgery—given the potential cost of errors, it is not permissible to violate principles.

2. Improper peer review process.

To our knowledge, papers claiming to forecast global warming have not been subject to peer review by experts in scientific forecasting.

3. Complexity and uncertainty of climate render expert opinions invalid for forecasting.

Expert opinions are an inappropriate forecasting method in situations that involve high complexity and high uncertainty. This conclusion is based on over eight decades of research. Armstrong (1978) provided a review of the evidence and this was supported by Tetlock’s (2005) study that involved 82,361 forecasts by 284 experts over two decades.

Long-term climate changes are highly complex due to the many factors that affect climate and to their interactions. Uncertainty about long-term climate changes is high due to a lack of good knowledge about such things as:

a) causes of climate change,

b) direction, lag time, and effect size of causal factors related to climate change,

c) effects of changing temperatures, and

d) costs and benefits of alternative actions to deal with climate changes (e.g., CO2 markets).

Given these conditions, expert opinions are not appropriate for long-term climate predictions.

4. Forecasts are needed for the effects of climate change.

Even if it were possible to forecast climate changes, it would still be necessary to forecast the effects of climate changes. In other words, in what ways might the effects be beneficial or harmful? Here again, we have been unable to find any scientific forecasts—as opposed to speculation—despite our appeals for such studies.

We addressed this issue with respect to studies involving the possible classification of polar bears as threatened or endangered (Armstrong, Green, and Soon 2008). In our audits of two key papers to support the polar bear listing, 41 principles were clearly violated by the authors of one paper and 61 by the authors of the other. It is not proper from a scientific or from a practical viewpoint to violate any principles. Again, there was no sign that the forecasters realized that they were making mistakes.

5. Forecasts are needed of the costs and benefits of alternative actions that might be taken to combat climate change.

Assuming that climate change could be accurately forecast, it would be necessary to forecast the costs and benefits of actions taken to reduce harmful effects, and to compare the net benefit with other feasible policies including taking no action. Here again we have been unable to find any scientific forecasts despite our appeals for such studies.

6.  To justify using a climate forecasting model, one would need to test it against a relevant naïve model.

We used the Forecasting Method Selection Tree to help determine which method is most appropriate for forecasting long-term climate change. A copy of the Tree is attached as Appendix 1. It is drawn from comparative empirical studies from all areas of forecasting. It suggests that extrapolation is appropriate, and we chose a naïve (no change) model as an appropriate benchmark. A forecasting model should not be used unless it can be shown to provide forecasts that are more accurate than those from this naïve model, as it would otherwise increase error. In Green, Armstrong and Soon (2008), we show that the mean absolute error of 108 naïve forecasts for 50 years in the future was 0.24°C.
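The benchmark test described above can be sketched in a few lines. The numbers here are made up purely for illustration, not taken from Green, Armstrong and Soon (2008); the point is only the comparison rule: a model earns its keep by beating the naive no-change forecast.

```python
def mae(forecasts, actuals):
    """Mean absolute error between paired forecasts and observations."""
    return sum(abs(f - a) for f, a in zip(forecasts, actuals)) / len(actuals)

# Hypothetical 50-year-ahead test set: observed temperatures (deg C)
# against a "no change" benchmark and an imaginary model's forecasts.
actuals   = [14.1, 14.3, 14.0, 14.2, 14.4]
naive     = [14.2] * 5                       # last observed value, repeated
candidate = [14.6, 14.9, 13.2, 15.0, 13.8]

# The candidate model is justified only if it beats the naive benchmark.
print(f"{mae(naive, actuals):.2f} {mae(candidate, actuals):.2f}")  # 0.12 0.66
```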

7. The climate system is stable.

To assess stability, we examined the errors from naïve forecasts for up to 100 years into the future. Using the U.K. Met Office Hadley Centre’s data, we started with 1850 and used that year’s average temperature as our forecast for the next 100 years. We then calculated the errors for each forecast horizon from 1 to 100. We repeated the process using the average temperature in 1851 as our naïve forecast for the next 100 years, and so on. This “successive updating” continued until year 2006, when we forecasted a single year ahead. This provided 157 one-year-ahead forecasts, 156 two-year-ahead and so on to 58 100-year-ahead forecasts.

We then examined how many forecasts were further than 0.5°C from the observed value. Fewer than 13% of forecasts up to 65 years ahead had absolute errors larger than 0.5°C. For longer horizons, fewer than 33% had absolute errors larger than 0.5°C. Given the remarkable stability of global mean temperature, it is unlikely that there would be any practical benefits from a forecasting method that provided more accurate forecasts.
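The “successive updating” scheme is mechanical enough to sketch directly. This illustrative Python version uses a placeholder series rather than the actual Hadley Centre data, but it reproduces the forecast counts quoted above (158 annual values yield 157 one-year-ahead forecasts down to 58 one-hundred-year-ahead forecasts):

```python
def naive_forecast_errors(temps, max_horizon=100):
    """Absolute errors of 'no change' forecasts, grouped by horizon.
    Each year's value serves as the forecast for the following years."""
    return {h: [abs(temps[t + h] - temps[t]) for t in range(len(temps) - h)]
            for h in range(1, max_horizon + 1)}

# 158 annual values (1850-2007) give 157 one-year-ahead forecasts,
# 156 two-year-ahead, ... down to 58 one-hundred-year-ahead forecasts.
temps = [0.0] * 158            # placeholder; substitute the real annual data
errs = naive_forecast_errors(temps)
print(len(errs[1]), len(errs[100]))   # 157 58
```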

8.  Be conservative and avoid the precautionary principle.

One of the primary scientific principles in forecasting is to be conservative in the face of uncertainty. This principle also argues for the use of the naive no-change extrapolation. Some have argued for the precautionary principle as a way to be conservative, but it is a political principle, not a scientific one. As we explain in our essay in Appendix 2, it is actually an anti-scientific principle in that it attempts to make decisions without using rational analyses. Instead, cost/benefit analyses are appropriate given the available evidence, which suggests that temperature is just as likely to go up as down. However, these analyses should be supported by scientific forecasts.

The reach of these models is extraordinary. For example, the CSIRO models are currently being used in Australia to determine water allocations for farmers and to justify the need for an Emissions Trading Scheme (ETS) – the most far-reaching of possible economic interventions. Yet, according to Dr Armstrong, these same models violate 72 scientific principles.

********************

1. Marc Morano, James Hansen’s Former NASA Supervisor Declares Himself a Skeptic, January 27, 2009. http://epw.senate.gov/public/index.cfm?FuseAction=Minority.Blogs&ContentRecord_id=1a5e6e32-802a-23ad-40ed-ecd53cd3d320

2. “Analysis of the U.S. Environmental Protection Agency’s Advanced Notice of Proposed Rulemaking for Greenhouse Gases”, Drs. J. Scott Armstrong and Kesten C. Green, a statement prepared for US Senator Inhofe analyzing the US EPA’s proposed policies for greenhouse gases. http://theclimatebet.com




335 Comments
Roger
January 29, 2009 7:12 pm

MC said
“You remind me of Colombo. Persistent, methodical, constant, questioning, and always after the truth.”
Something bothers me……

Alan Wilkinson
January 29, 2009 7:22 pm

Armstrong’s ‘naive model’ is not a null hypothesis, since it is “successively updated”.

You could equally describe it as being successively backdated. As I understand it, it is simply finding the average prediction accuracy for any randomly chosen pair of years over 150 years of global temperature measurements.
Another way of putting it (using say data from here:
http://dataservice.eea.europa.eu/atlas/viewdata/viewpub.asp?id=3470) is that over 158 years of annual average surface global temperature data, 97% of data points lie within a range of 1 deg C.
This variability is insufficient to validate climate prediction models and to distinguish them from a “no change” model. Seems a fair case to me.

ian
January 29, 2009 7:34 pm

As a former avid campaigner against AGW, I can appreciate the sentiments of both Salutwineco (15:45:38) and Simon Evans. I too am appalled at the social and environmental costs of the world’s insatiable desire to control fossil fuel availability (e.g. Iraq, the Niger Delta, Exxon Valdez, the tragic death toll in China’s coal mines, the general destruction of habitat, China’s support for the despotic government in the Sudan to secure oil rights, Russia flexing its gas monopoly muscles…). I also agree with Simon Evans that hardening of minds occurs in both camps and is something sceptics should be constantly aware of.
However, since becoming a sceptic myself (that is, a sceptic of the alarmist view – I believe, like Pielke Sr, that humankind is having an impact on climate but CO2 is not the major forcer of climate), I have also witnessed how this obsession with reducing atmospheric CO2 is creating its own nightmare scenarios (e.g. the mass logging of forests for palm plantations to support an ever increasing demand for biofuels (http://ipsnews.net/news.asp?idnews=37035), this tragedy in Chad (http://www.irinnews.org/Report.aspx?ReportId=82436), and the carbon credit scams popping up everywhere (http://www.wilderness.org.au/articles/20001117_mr)).
It comes as no surprise that ‘Greenwash’ is so prevalent, because corporations will naturally gravitate toward the money and governments will promise the world – as in the rhetoric on combatting global warming of our P.M. Kevin Rudd prior to the last election – and deliver little but spin. I suspect that many environmentalists view the ‘global warming’ scenario as the ticket that will finally lead to a more just and humane world. I think that is a danger and a mistake.
Apart from his scientific take on the issue, a physicist and environmental scientist makes some salient points in this regard:
http://activistteacher.blogspot.com/2007/02/global-warming-truth-or-dare.html
He suggests that ‘global warming’ is mainly a concern for the western middle class, it means very little to those billions suffering through war, persecution, immense poverty and disease. Even if we are able to convert to renewables, it will still be the same corporations in control and the poor, sick and suffering will more than likely still be poor, sick and suffering.
My two cents worth so lets keep up the interesting and polite discussions.

AnonyMoose
January 29, 2009 7:38 pm

Can you rephrase “a press release saying that not only did their audit of IPCC forecasting procedures and found that”? The desired meaning is not apparent.

Editor
January 29, 2009 7:48 pm

John Philip (05:21:27) :
Looking at actual source code can be informative, however it is frequently better to take the algorithms e.g. for adjustment (which have always been public domain, in the case of GISTEMP) and develop your own software independently to verify results.

I prefer to look at the actual source code being run, thanks. (I just downloaded ModelE and I’ll get to it just after GISTEMP). Often what folks say is in the algorithm is not what they wrote (Bugs, errors, ‘fixes’, etc.) or the description does not fully convey the truth. For example, the GHSC to USHC discontinuity adjustment sounds in the ‘readme’ like it just aligns the ends of the two curves to eliminate a jump. It doesn’t. It rewrites all of past history for that record.
From what I’ve seen so far, there are some very questionable choices made in GIStemp. (Like, exactly why would an equipment or TOB change in the last 10 years be valid for changing all the temperatures from before when that change was made? )
REPLY: […] it is not the same version (as I understand it) as the GISS model E they run in-house. Even for the Model E without the GUI front end, it seems they still haven’t matched the public version with the in-house version. […]
There’s nothing additional from Gavin past that one post he made, and why does he refer to “public code” separately from “current codes” then provide no updates to that forum comment for almost two years now? – Anthony

The general behaviour is common practice. You make a clean Q.A. tested version for public release, then work on ‘enhancing’ the private version. You put in new bugs, break new things, sometimes have a good idea; but it’s not ‘ready for prime time’. Before release, you clean it up to publication quality.
Two issues I do see:
1) 2 years is a very long cycle. One year is common. Quarterly is on the short side.
2) If they are using the ‘private’ version for more than development (i.e. making policy) then they are using the buggy version rather than the Q.A. accepted one. Bad practice. Bad answers.

January 29, 2009 7:48 pm

Joel Shore (16:53:04)
….if such negative feedbacks exist, how do you explain, for example, the ice age / interglacial cycles…..
This is a good and interesting point. We know from the geologic record that substantial changes do occur, so clearly the system isn’t unconditionally stable, but maybe it does have a set point that varies according to solar input. In other words, over the last 100 years or so, the solar constant hasn’t changed that much (see multiple posts on WUWT by Leif Svalgaard on this topic – I have no reason not to believe him on this). Now on a longer scale, the amount of energy coming into our climate system does vary via the Milankovich cycles :
http://en.wikipedia.org/wiki/Milankovitch_cycles
So, even though 100 years seems like a long time, it is only 0.1% of a 100,000-year glacial cycle. So, in our relatively short view of 100 +/- years, the climate system appears to be pretty stable – via negative feedbacks. Over longer periods, the solar input does vary substantially, leading to different set points. But the CO2 question is fundamentally a short-term set point issue & I would still maintain that the observed data suggest that there are strong negative feedbacks at work in the system – which is why we haven’t seen temperatures soar to unheard-of levels despite significant increases in CO2 (again making the assumption that CO2 is a positive forcing mechanism, which I know some would debate).

Editor
January 29, 2009 8:01 pm

Simon Evans (05:57:41) :
The comparison, then, is between a no-trend straight line (the naive model)

I think you have misunderstood the naive model. It is not ‘no-trend’; it is ‘the same trend as last time-block’.

Alan Wilkinson
January 29, 2009 8:03 pm

Simon, actually the data shows no trend for 80 years (1850-1930), a strong increase 1930-45, no trend 1945-1978, strong increase 1978-1998, no trend since.
So there is a strong trend in just 35 of the last 158 years.
Is that really what climate models are telling us and if so, why and how?

Mike Bryant
January 29, 2009 8:09 pm

Today yet another scientist has come forward with a press release saying that not only did their audit of IPCC forecasting procedures find that they “violated 72 scientific principles of forecasting”, but that “The models were not intended as forecasting models and they have not been validated for that purpose.”
“find” instead of “and found”… Seems pretty obvious to me…

Pamela Gray
January 29, 2009 8:14 pm

My understanding of the null hypothesis is that the applied treatment has no effect on the status quo, which would be the control. By the way, data collected from the control should be published, along with the treatment data. Therefore, the published reports of climate models fail the first requirement of good research, let alone forecasting. Are the results significantly different from the control? In climate modeling, the control would be model runs without the anthropogenic CO2.

Alan Wilkinson
January 29, 2009 8:21 pm

Simon, to put that another way, over the last 158 years the odds that the next several years will be warmer than this one are about 1 in 5.
Not great odds for the AGW hypothesis.
Yes, there has been a ratcheting up of temperature 1930-2000 but that’s an awfully short period given the variability of “weather” events and factors.

Editor
January 29, 2009 8:22 pm

Frank K. (06:46:30) : And the FORTRAN is a jumbled mess. I URGE everyone with programming experience to download their junk and see for themselves…
You can say that again. I’ve been working my way through GIStemp. I’ve got a general overview doc written with program names, sizes, file names (still sorting out the temporary scratch files that come and go, at execution time, in the same directory where the source code is kept) and file layouts for data entry (started, but very rough).
If anyone does decide to ‘go there’ I’m willing to send my overview to help get them started as long as they understand it’s a work in progress right now. Just a sample of how the code works, cut from the middle of my overview notes:
So we next want to make the formats of the input files closer (“v2.mean”-like), so we run the script antarc_to_v2.sh, which does things like turn the missing data markers in antarc*.txt into the same kind used by the v2.mean file and put the result into the output file v2_antarct.dat, giving us one combined antarctic data set.
A sidebar on antarc_comb: At this point it gets just a bit messy. The script antarc_comb.sh seems to do the same thing as the next block of this control script does (compile and run antarc_comb.f to create v2.meanx from v2_antarct.dat and v2.mean); but a grep shows that nothing else contains the text “antarc_comb.sh”, so I am left to assume it’s a ‘hand tool’ to play with this step without doing an actual run.
I would have made it a working script and then just called it from do_comb_step0.sh, but as it stands, antarc_comb.sh explicitly calls the FORTRAN compiler f77 and takes explicit arguments for the antarctic data set and the v2.mean data set. It also links them to two temp files named fort.1 and fort.2, with the output in fort.12, which it then moves to the v2.meanx file. At any rate, the product is v2.meanx, a combined v2 and antarctic data set.
The program dumpold.f is then compiled and run; it removes data from before 1800 from v2.meanx to yield v2.meany.
Hold onto your hat for the next step. It is a bit convoluted, but just remember that at the end of the day the whole step is to pull out the ‘3A’ type records from hcn_doe_mean_data and find the value of the most recent year containing data.
Next the USHCN2v2.f program is compiled and the script get_USHCN is executed. So what does IT do? It sucks in the hcn_doe_mean_data file and makes a file hcn_doe_mean_data_fil with the type “3A” records in it. Then it sucks in the file input_files/ushcn.tbl and sorts it (sort -n) with the output in ./ID_US_g (both files are used by USHCN2v2 as its two inputs when it is run). At the end, the script executes the program USHCN2v2 (which produces the output files USHCN.v2.mean_noFIL and USHCN.last_year), and USHCN.last_year is then read to load the value of the ‘last year found’ into the variable last_yr in this script. Got all that?!?
The data of interest are now in the USHCN.v2.mean_noFIL file.
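As a concrete (if simplified) picture of what one of these stages does, here is an illustrative Python stand-in for the dumpold.f step described above. The record layout is hypothetical, chosen only to show the filtering logic; the real files are fixed-width FORTRAN records.

```python
def dump_old(records, cutoff_year=1800):
    """Keep only records from cutoff_year onward, as dumpold.f does when
    producing v2.meany from v2.meanx. Records are modeled here as
    (station_id, year, monthly_values) tuples, which is an illustrative
    layout only, not the actual on-disk format."""
    return [rec for rec in records if rec[1] >= cutoff_year]

sample = [("42572259000", 1799, [1.0]),    # hypothetical station records
          ("42572259000", 1800, [1.1]),
          ("42572259000", 1950, [2.3])]
print(dump_old(sample))   # drops the 1799 record, keeps the other two
```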

Pamela Gray
January 29, 2009 8:27 pm

Furthermore, most good research that is on the ground floor of determining causation would consider several causes, not just one, and thoroughly explore each one. That would mean that different drivers would be given different weights, if you will. The models would then be run from beginning to end of a time period that has a known control, in this case real collected temperature data. The closest matches would then be chosen for further study (and it would be a single-blind step, i.e. the examiners would not know which model is which). My hunch is that CO2 models will look a lot like some of the other model scenarios. Why? Because CO2 is very much a part of temperature change, as any rural-experienced person knows. Animal food is hard to find in bitter cold weather patterns, and more abundant during warming trends. Warmer temperatures should produce more CO2. Increase oceanic temps, as would be the case in a warm cycle, and temp would rise; right behind it would be CO2. The modelers could be measuring a coattail effect, not a cause. You would not know that unless different drivers are modeled and then blindly evaluated.

maksimovich
January 29, 2009 9:00 pm

foinavon (06:57:21)
FIVE. Likewise there’s an abundance of published science on cost-benefit analysis in climate change and mitigation. In relation to analyses of costs and benefits of alternative actions to combat climate change, this comes in essentially two broad flavours. The first is the direct scientific study of potential mitigating technologies. There’s a huge amount of study in this area. I opened the current issue of Nature this morning and found a very good example of the careful analysis of the likely benefits of iron-seeding of primary ocean productivity to promote ocean-uptake of CO2 (i.e. dump loads of iron into the oceans). If one wishes to assess the “costs and benefits of alternative actions” that’s the sort of info we need and it’s being published rather widely and in abundance.
Unfortunately, lawyers and bureaucrats design policy that precludes any successful experiment being undertaken, say like the iron fertilization that you used as an example:
“Bearing in mind the ongoing scientific and legal analysis occurring under the auspices of the London Convention (1972) and the 1996 London Protocol, requests Parties and urges other Governments, in accordance with the precautionary approach, to ensure that ocean fertilization activities do not take place until there is an adequate scientific basis on which to justify such activities, including assessing associated risks, and a global, transparent and effective control and regulatory mechanism is in place for these activities; with the exception of small scale scientific research studies within coastal waters. Such studies should only be authorized if justified by the need to gather specific scientific data, and should also be subject to a thorough prior assessment of the potential impacts of the research studies on the marine environment, and be strictly controlled, and not be used for generating and selling carbon offsets or any other commercial purposes;”
http://www.cbd.int/decisions/cop9/?m=COP-09&id=11659&lg=0
Iron is not a limiting factor within coastal waters. Strzepek and Harrison (2004) noted that diatoms adapted to coastal regions, where iron is more available, have a higher PSII/PSI ratio of around 9, compared to around 3 for diatoms adapted to oceanic regions, where available iron is often a limiting factor for growth, e.g. the Southern Ocean.

Editor
January 29, 2009 9:50 pm

Neven (07:18:49) :
one should not assume that techniques developed in say, econometrics, port directly into climate science.

As an economist who trades stock for a living, I think I have some clue on this topic. As I’ve posted here several times, many of the tools used by stock traders to predict the fractal, stochastic, resonant movement of stock prices look highly applicable to climate science, which is likewise based on fractal, stochastic and resonant processes. Averages. Moving averages. High, low, and closing price vs. high, low, and time-of-observation temps. Hiding detail with averages to see longer term trends. Moving averages of averages. First, second, and third derivatives as supportive or contrary indications of trend. “Forcings” (I hate that word, it has no formal meaning), both external and internal. Least squares fits. Avoidance of data modeling as a trap. How to deal with sudden brittle movements in a generally cyclical system. The list goes on. Math and analysis know no bounds; stock price patterns and temperature patterns are joined at the hip in terms of analysis.
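Of the tools in that list, the moving average is the simplest to show. Here is a minimal trailing version, my own sketch rather than code from any trading or climate package:

```python
def moving_average(xs, window):
    """Trailing simple moving average: each output point is the mean of
    the current value and the (window - 1) values before it."""
    return [sum(xs[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(xs))]

series = [1, 2, 3, 4, 5, 6]
print(moving_average(series, 3))   # [2.0, 3.0, 4.0, 5.0]
```

The same smoothing applies equally to daily closing prices or monthly temperature anomalies, which is exactly the point being made.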
-It seems to me that much of the failure of the G&A article comes from the fact that they are economists. Economics doesn’t have anything resembling physics or thermodynamics, it only has models.
Then why did I have to take calculus through partials and statistics and linear programming? CLUE: Majors learn more than Econ1A taught you.
For a long time, they thought the velocity of money was stable. Then it changed.
I don’t know how to tell you this, but the very concept of ‘velocity of money’ means that it changes. To the best of my knowledge, the velocity of money was never thought of as ‘stable’. Did you miss that part of Econ 1B?
Yes, at some point in economic history someone gave it a name, but it’s not like everyone was sitting around before that thinking money always moved from hand to hand to hand at exactly the same speed. Velocity of money is rather like the sun. It’s more or less constant with certain oscillations most of the time, then suddenly drops (like sunspots just did) and lays there.
Those times are called ‘financial panics’ (or banking panics or several other terms from times gone by). Interestingly enough, Stanley Jevons noticed that this tended to happen when the sun went quiet in the late 1800’s. See:
http://en.wikipedia.org/wiki/William_Stanley_Jevons
He also built one of the earliest calculating machines (his “logic piano”) and was dearly interested in weather (a common thing among economists, since so much of the economy depends on it…). Interest in the set of {computing machines, weather, and economics} pervades economics from its earliest days, as the Jevons reference points out. They run side by side through history.
The answer to financial panic has always been the same: The “sovereign” supplies more quantity of money to make up for the drop of velocity. It has been that way since the financial panic of 33 A.D. (No, that is not a typo. Time of Christ and all that…) See:
http://en.wikisource.org/wiki/The_Influence_of_Wealth_in_Imperial_Rome/The_Business_Panic_of_33_A.D.
It’s a hoot. (The more things change, the more they stay the same…)
For a long time, the P/E ratio of most stocks stayed in the range of 10 to 20. Then the range changed.
Oh please, no. P/E can be any number at all and anyone who trades knows that. It’s about the most useless metric most of the time. (It is a rear view mirror at best, a ‘value trap’ at worst).
The P/E of stocks ranges from about infinite (no earnings, so anything/0 = infinity in non-standard mathematics, or undefined if you feel constrained to standard mathematics; turn it into a limit if you like: d[anything]/d[earnings] approaches infinity as earnings approach zero. More cumbersome, but some folks are not familiar with the field of non-standard math) down to near zero.
On the low side, I’ve seen P/E ratios of very small fractions. (Bought a stock for less than they had money in the bank with a P/E of about 0.8:1 a couple of weeks ago). When a company is headed for bankruptcy they often have very small P/E ratios. Why? The P => 0 while the E is from the last reporting period, when they reported earnings that are now gone. The ‘fast money’ knows the company is cooked and dying, so sell fast. This drives price toward 0. 0/[anything]=0 (again, in non-standard).
The 10 to 20 range is the number you hear as a ‘typical average’ for the S&P 500 during ‘normal times’ that is handed out on talk show news. Anyone who trades stocks knows it exists, but no serious trader gives it more than a casual glance since they know that it’s useless: It is an average based on averages of averages. (Sound familiar?)
The S&P 500 is an average of 500 stocks (kind of like an average of many geographical points – some are hot, some are not). The P and E of those stocks are averaged together (based on earnings reports from the recent earnings averaged over the reporting period – like a monthly average temperature) then divided to give the fictional number “The average P/E of the S&P 500 Average” – kind of like a ‘global average temperature’. Anyone with any clue trading stocks will not depend on this to mean anything nor take any action based on it. AT MOST it will tickle them to look at things that are useful.
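The “average based on averages of averages” point is easy to make concrete with hypothetical numbers: the mean of individual P/E ratios and the P/E computed from pooled (averaged) prices and earnings are different quantities, so quoting one as if it were the other is already a distortion.

```python
# Two hypothetical stocks: (price, earnings per share).
stocks = [(100.0, 10.0),   # P/E = 10
          (50.0, 1.0)]     # P/E = 50

# Mean of the individual ratios ...
mean_of_pes = sum(p / e for p, e in stocks) / len(stocks)

# ... versus the ratio of averaged prices to averaged earnings, which is
# closer to how an index-level "P/E" gets quoted.
pe_of_means = (sum(p for p, _ in stocks) / len(stocks)) / \
              (sum(e for _, e in stocks) / len(stocks))

print(mean_of_pes, round(pe_of_means, 2))   # 30.0 13.64
```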
Please, if you are going to use examples, pick ones you know something about.

Chuck Bradley
January 29, 2009 10:04 pm

I have not read most of the comments yet, so this might be old news.
Some time ago, a forecasting expert offered a challenge to the AGW crowd, a bet about future temperatures. I think it was the same guy. Sorry, I do not recall the venue or the details, and I’m unsure of the time, but I think it was 2002 or earlier.

Editor
January 29, 2009 10:06 pm

Simon Evans (07:49:29) : The IPCC can’t be expected to predict what humans will choose to do!
Why not? Economists and folks in Marketing do it all the time. With real money at stake and with known error bands. You wouldn’t be saying that economists and marketing types are technically more capable than climate modelers, would you …

Editor
January 29, 2009 10:17 pm

Smokey (09:15:15) : Another problem word is to “table.” IIRC, in GB tabled means to put something on the table for discussion. In the U.S. it means to postpone.
There’s a difference between those two? 😎
Dad – Iowa. Mom – Near London. That explains a lot … 😉

Wondering Aloud
January 29, 2009 10:20 pm

Is it just me or have foinavon and luis and a few others put in literally thousands of words on this thread without ever making any attempt to address the actual issue? IPCC is not a forecast… give me a break.
The efforts to attack any opponent personally rather than even consider their objections are just way out of hand.

François GM
January 29, 2009 10:41 pm

I’m in the medical field. I often review articles for medical journals. In medicine, conclusions must be based on evidence – it’s called evidence-based medicine (EBM). If not, the hypothesis is rejected.
There are strict criteria for the 4 levels of evidence in EBM. There are no such criteria in the field of climate science.
I have followed closely the climate debate for several years and I have read the IPCC AR4. There is no way that the conclusions of the IPCC are justified based on the evidence (or lack thereof) provided. With the IPCC, strong conclusions are based on what would be classified as type 3 or type 4 evidence in EBM (weak evidence). This would be a no-no in medicine.
In the field of climate science, hypotheses are presented as facts, associations are confused with causality and contradictions are downplayed or simply ignored. Data from various sources are routinely incorporated in the same graphs. The peer-review process is weak and heavily biased towards AGW. That was how science worked 40 years ago.
Climate science must clean up its act. It needs accountability, transparency and quality control. It must recognize that reproducibility of results is a sine qua non for evidence. It suffers from a severe case of inbreeding of researchers, resulting in narrow-mindedness. Journal editors need to recruit reviewers with varied backgrounds and diverse viewpoints.

Editor
January 29, 2009 10:53 pm

Kmye (17:15:20) :
@E.M.Smith (from way, way back up there) Thanks for your help!

Happy to be of service. And glad I got it in before the flood gates opened and the tide washed me away….

Editor
January 29, 2009 11:36 pm

Jeff L (19:48:56) :
Joel Shore (16:53:04)
….if such negative feedbacks exist, how do you explain, for example, the ice age / interglacial cycles…..
This is a good and interesting point. We know from the geologic record that substantial changes do occur, so clearly the system isn’t unconditionally stable, but maybe it does have a set point that varies according to solar input.

Pick your time scale. On time scales shorter than thousands of years, the climate is stable. On the 10,000-year-and-up scale, planetary orbital mechanics change. On the 100,000-year time scale, even bigger things are involved.
Um, think bigger. Bigger than the sun? Why yes… “the galaxy did it”:
http://www.sciencebits.com/ice-ages
So yes, on a time scale of millions of years ice epochs come and go. When we are in one of those, then the position of the earth (precession, orbital obliquity, etc.) per Milankovitch dictates the ‘interglacials’ like we are in now.
http://en.wikipedia.org/wiki/Milankovitch_cycles
So where are we now? Near the top of an ice epoch (exited a galactic arm some bit ago) headed toward warmer times and no more ice ages for about 100 million years, in a few million years… But right now in an interglacial that’s a bit long in the tooth. It might end tomorrow, or it might take another 10,000ish years (there is some randomness in the process due to something called stochastic resonance). Or it may already be happening.
Best guess is that we will have at least one more ‘glaciation’ period to go through (a better term than ice age, since folks confound ice epoch and glaciation both into ‘ice age’…) before we are clear of them. But the next one ought to be less cold and icy than the last one. Maybe ice only down to Maine and not New York 😉
It is even possible that we have already entered the next glaciation. These things are terribly slow (geologic time!) and ice levels rise at a steady average rate (with surges and retreats short term) over about 100,000 years. So take the present edge of the ice cap, divide the distance to NYC by 100,000. That is how far the ice will advance, on average, each year. (It will have surges up and down (cycles) on top of that trend) So IF the Little Ice Age was the entry to the next glacial then we are already in it and just having a last surge up before the inevitable down.
Worried? Not in the slightest. My estimate of how fast the ice advances is about 800 FEET in a year. Gives a whole new meaning to ‘glacial advance’, doesn’t it? So unless you live within a few miles of a glacier at the moment, the next (present?) glacial will not concern you, or your children.
But it is coming.
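[The comment’s arithmetic can be sketched in a couple of lines. Note the 2,500-mile distance below is an assumed illustrative value, not a figure from the comment – the commenter’s own estimate of ~800 feet per year would imply a considerably larger distance.]

```python
# Back-of-the-envelope version of the comment's recipe:
# average annual advance = (distance from ice edge to NYC) / 100,000 years.
FEET_PER_MILE = 5280
distance_miles = 2500          # assumed distance, ice edge to NYC (illustrative)
glaciation_years = 100_000     # the comment's stated ice build-up time

advance_ft_per_year = distance_miles * FEET_PER_MILE / glaciation_years
print(round(advance_ft_per_year))  # 132
```

Either way, the point stands: on human time scales the advance is on the order of hundreds of feet per year at most.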

Bill DeMott
January 29, 2009 11:46 pm

My research and teaching often focus on population ecology, where the major goal is to understand and predict (forecast?) changes in population abundance over time and space. I have thought about how Dr. Armstrong’s forecasting methods might be applied to the study of populations. To use his (economic) forecasting models to study population dynamics, we first ignore what we know about the biology and science of populations, including the effects of predators, competitors, density dependence, demographics, and responses to the physical environment (temperature, sunlight, etc.). Instead, we would use a “naive model” which assumes that the short-term trend will continue. Such a model would be quite accurate over a scale of years for long-lived organisms, such as humans and whales. It would often not capture the dynamics of short-lived organisms (e.g., bacteria, insects, algae) over time scales as short as days and weeks. Most importantly, it would provide no scientific understanding of how populations are likely to respond to ongoing or future changes in their environment. Not surprisingly, I am unaware of applications of Dr. Armstrong’s forecasting methods in textbooks or the primary literature on population biology.

Brendan H
January 29, 2009 11:54 pm

Paul Shanahan: “If the “Big Oil” validates the warmists papers/theories and the “Green Lobby” validates the sceptics papers…What ever theories/papers that can’t be disqualified on both sides, must be closer to the truth, surely.”
Perhaps, but I would prefer to let the climate scientists get on with the job.
M Simon: “Big oil interests you.”
Big oil holds no interest for me. Not sure where you got the idea. My main interest is small petrol. Prices, that is.
EM Smith: “…and I’m not letting go of this bone until it’s chewed down to dust and coming out the other end…”
Sounds like a painful, er, scenario. But each to his own. Mind you, the evacuation metaphor says a lot about the sceptic way of doing climate science.

Brendan H
January 30, 2009 12:40 am

Paul M: “But don’t call a scientist with 50 years experience a grouchy old git.”
I said Theon “comes across” as a grouchy old git. I didn’t write the article. As for scientific papers, I have no expertise so the exercise is pointless. However, I can critically analyse a piece of text.
Here is the lead para of the Theon story:
“NASA warming scientist James Hansen, one of former Vice-President Al Gore’s closest allies in the promotion of man-made global warming fears, is being publicly rebuked by his former supervisor at NASA.”
Two points.
1) The subject and target of the article is Hansen. Theon is just the vehicle the writer uses to launch his political attack on Hansen.
2) Hansen is supposedly being “publicly rebuked” by a “former supervisor”.
This gives the impression that:
a) Theon is Hansen’s former boss
b) Theon is making some sort of official statement
c) Theon has the institutional and scientific authority to deliver a “public rebuke” to a former protégé.
These implications are highly misleading. The writer is engaging in semantic sleight-of-hand to paint a picture that is at variance with the facts: a long-retired scientific administrator with little experience in climate science has written a private letter griping about a former colleague.
The second para.
“Retired senior NASA atmospheric scientist, Dr. John S. Theon, the former supervisor of James Hansen, NASA’s vocal man-made global warming fear soothsayer, has now publicly declared himself a skeptic…”
Theon’s supposedly public declaration as a climate sceptic implies that an important climate scientist was once committed to AGW, but has now resiled. There is no evidence that Theon was/is important in climate science.
“Theon joins the rapidly growing ranks of international scientists abandoning the promotion of man-made global warming fears.”
This sentence implies that Theon is the latest in a series of important climate scientists abandoning their commitment to AGW. No such exodus is occurring.
That’s a lot of misdirection in just two paragraphs. I doubt the rest of the story is much better.
As for disrespect, the writer is happy to imply that Theon once promoted “man-made global warming fears”, just like the other charlatans. Nice way to treat your source.
