Forecasting Guru Announces: "no scientific basis for forecasting climate"

It has been an interesting couple of days. Today yet another scientist has come forward with a press release saying not only that an audit of IPCC forecasting procedures found that they “violated 72 scientific principles of forecasting”, but that “The models were not intended as forecasting models and they have not been validated for that purpose.” This organization should know: it certifies forecasters in many disciplines and, in conjunction with Johns Hopkins University in Washington, DC, offers a Certificate of Forecasting Practice. The story below originally appeared in the blog of Australian Dr. Jennifer Marohasy. It is reprinted here with some pictures and links added for WUWT readers. – Anthony


J. Scott Armstrong, founder of the International Journal of Forecasting

Guest post by Jennifer Marohasy

YESTERDAY, a former chief at NASA, Dr John S. Theon, slammed the computer models used to determine future climate claiming they are not scientific in part because the modellers have “resisted making their work transparent so that it can be replicated independently by other scientists”. [1]

Today, a founder of the International Journal of Forecasting, Journal of Forecasting, International Institute of Forecasters, and International Symposium on Forecasting, and the author of Long-range Forecasting (1978, 1985), the Principles of Forecasting Handbook, and over 70 papers on forecasting, Dr J. Scott Armstrong, tabled a statement declaring that the forecasting process used by the Intergovernmental Panel on Climate Change (IPCC) lacks a scientific basis. [2]

What these two authorities, Drs Theon and Armstrong, are independently and explicitly stating is that the computer models underpinning the work of many scientific institutions concerned with global warming, including Australia’s CSIRO, are fundamentally flawed.

In today’s statement, made with economist Kesten Green, Dr Armstrong provides the following eight reasons as to why the current IPCC computer models lack a scientific basis:

1. No scientific forecasts of the changes in the Earth’s climate.

Currently, the only forecasts are those based on the opinions of some scientists. Computer modeling was used to create scenarios (i.e., stories) to represent the scientists’ opinions about what might happen. The models were not intended as forecasting models (Trenberth 2007) and they have not been validated for that purpose. Since the publication of our paper, no one has provided evidence to refute our claim that there are no scientific forecasts to support global warming.

We conducted an audit of the procedures described in the IPCC report and found that they clearly violated 72 scientific principles of forecasting (Green and Armstrong 2008). (No justification was provided for any of these violations.) For important forecasts, we can see no reason why any principle should be violated. We draw analogies to flying an aircraft or building a bridge or performing heart surgery—given the potential cost of errors, it is not permissible to violate principles.

2. Improper peer review process.

To our knowledge, papers claiming to forecast global warming have not been subject to peer review by experts in scientific forecasting.

3. Complexity and uncertainty of climate render expert opinions invalid for forecasting.

Expert opinions are an inappropriate forecasting method in situations that involve high complexity and high uncertainty. This conclusion is based on over eight decades of research. Armstrong (1978) provided a review of the evidence and this was supported by Tetlock’s (2005) study that involved 82,361 forecasts by 284 experts over two decades.

Long-term climate changes are highly complex due to the many factors that affect climate and to their interactions. Uncertainty about long-term climate changes is high due to a lack of good knowledge about such things as:

a) causes of climate change,

b) direction, lag time, and effect size of causal factors related to climate change,

c) effects of changing temperatures, and

d) costs and benefits of alternative actions to deal with climate changes (e.g., CO2 markets).

Given these conditions, expert opinions are not appropriate for long-term climate predictions.

4. Forecasts are needed for the effects of climate change.

Even if it were possible to forecast climate changes, it would still be necessary to forecast the effects of climate changes. In other words, in what ways might the effects be beneficial or harmful? Here again, we have been unable to find any scientific forecasts—as opposed to speculation—despite our appeals for such studies.

We addressed this issue with respect to studies involving the possible classification of polar bears as threatened or endangered (Armstrong, Green, and Soon 2008). In our audits of two key papers to support the polar bear listing, 41 principles were clearly violated by the authors of one paper and 61 by the authors of the other. It is not proper from a scientific or from a practical viewpoint to violate any principles. Again, there was no sign that the forecasters realized that they were making mistakes.

5. Forecasts are needed of the costs and benefits of alternative actions that might be taken to combat climate change.

Assuming that climate change could be accurately forecast, it would be necessary to forecast the costs and benefits of actions taken to reduce harmful effects, and to compare the net benefit with other feasible policies including taking no action. Here again we have been unable to find any scientific forecasts despite our appeals for such studies.

6. To justify using a climate forecasting model, one would need to test it against a relevant naïve model.

We used the Forecasting Method Selection Tree to help determine which method is most appropriate for forecasting long-term climate change. A copy of the Tree is attached as Appendix 1. It is drawn from comparative empirical studies from all areas of forecasting. It suggests that extrapolation is appropriate, and we chose a naïve (no change) model as an appropriate benchmark. A forecasting model should not be used unless it can be shown to provide forecasts that are more accurate than those from this naïve model, as it would otherwise increase error. In Green, Armstrong and Soon (2008), we show that the mean absolute error of 108 naïve forecasts for 50 years in the future was 0.24°C.

7. The climate system is stable.

To assess stability, we examined the errors from naïve forecasts for up to 100 years into the future. Using the U.K. Met Office Hadley Centre’s data, we started with 1850 and used that year’s average temperature as our forecast for the next 100 years. We then calculated the errors for each forecast horizon from 1 to 100. We repeated the process using the average temperature in 1851 as our naïve forecast for the next 100 years, and so on. This “successive updating” continued until year 2006, when we forecasted a single year ahead. This provided 157 one-year-ahead forecasts, 156 two-year-ahead and so on to 58 100-year-ahead forecasts.
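The "successive updating" procedure above can be sketched in a few lines of code. This is a minimal illustrative sketch, not the authors' actual code: it assumes annual temperatures are held in a simple dict keyed by year, uses each year's value as the naïve (no-change) forecast for every horizon from 1 to 100 years ahead, and collects the absolute errors by horizon.

```python
def naive_forecast_errors(temps, max_horizon=100):
    """temps: dict mapping year -> annual mean temperature (or anomaly).
    Returns a dict mapping horizon -> list of absolute forecast errors."""
    last_year = max(temps)
    errors = {}
    for origin in sorted(temps):
        if origin == last_year:
            break                        # nothing left to forecast from the final year
        forecast = temps[origin]         # naive (no-change) forecast
        for horizon in range(1, max_horizon + 1):
            target = origin + horizon
            if target > last_year:
                break                    # no observation to verify against
            errors.setdefault(horizon, []).append(abs(temps[target] - forecast))
    return errors
```

With annual data for 1850 through 2007 this yields 157 one-year-ahead errors, 156 two-year-ahead, 108 fifty-year-ahead, and so on down to 58 one-hundred-year-ahead errors, matching the counts quoted in the text; the share of errors exceeding 0.5°C at each horizon can then be read off directly.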

We then examined how many forecasts were further than 0.5°C from the observed value. Fewer than 13% of forecasts of up to 65-years-ahead had absolute errors larger than 0.5°C. For longer horizons, fewer than 33% had absolute errors larger than 0.5°C. Given the remarkable stability of global mean temperature, it is unlikely that there would be any practical benefits from a forecasting method that provided more accurate forecasts.

8. Be conservative and avoid the precautionary principle.

One of the primary scientific principles in forecasting is to be conservative in the face of uncertainty. This principle also argues for the use of the naïve no-change extrapolation. Some have argued for the precautionary principle as a way to be conservative, but it is a political principle, not a scientific one. As we explain in our essay in Appendix 2, it is actually an anti-scientific principle in that it attempts to make decisions without using rational analyses. Instead, cost/benefit analyses are appropriate given the available evidence, which suggests that temperature is just as likely to go up as down. However, these analyses should be supported by scientific forecasts.

The reach of these models is extraordinary: for example, the CSIRO models are currently being used in Australia to determine water allocations for farmers and to justify the need for an Emissions Trading Scheme (ETS) – the most far-reaching of possible economic interventions. Yet, according to Dr Armstrong, these same models violate 72 scientific principles.

********************

1. Marc Morano, James Hansen’s Former NASA Supervisor Declares Himself a Skeptic, January 27, 2009. http://epw.senate.gov/public/index.cfm?FuseAction=Minority.Blogs&ContentRecord_id=1a5e6e32-802a-23ad-40ed-ecd53cd3d320

2. “Analysis of the U.S. Environmental Protection Agency’s Advanced Notice of Proposed Rulemaking for Greenhouse Gases”, a statement by Drs. J. Scott Armstrong and Kesten C. Green prepared for US Senator Inhofe, analyzing the US EPA’s proposed policies for greenhouse gases. http://theclimatebet.com




335 Comments
Just want truth...
January 30, 2009 12:40 am

Two YouTube titles with Roy Spencer on IPCC climate models:
Why the IPCC models are wrong – Part 1
Why the IPCC models are wrong – Part 2

Flanagan
January 30, 2009 1:35 am

Bobby Lane:
I don’t quite get it. You say the temperatures are “plateauing” and then that the antarctic peninsula is warming more rapidly than the rest?
Whatever… I find it difficult anyway to say that temperatures are “plateauing” unless you only take a peek at the last 10 years, that is, if you “forget” all the years before. The temperature we are having right now is well within the typical fluctuations around the warming ramp – actually we’ve seen much larger deviations before, which never meant the warming stopped: http://www.woodfortrees.org/plot/uah/plot/uah/trend
You see I’m not even using the skeptic-banished GISS. BTW, how can you say the arctic is doing just fine when it is even lower now than it was during its record-breaking year? We still have time till the maximum, but if it’s lower than 2007 then I would not bet on a return to “normality” for the arctic ice.
Finally: sorry to say it, but the link you give is 1) not a peer-reviewed publication and 2) a 36-page blabla without any actual science.

Flanagan
January 30, 2009 1:48 am

About the variability: Ghil and Vautard [Ghil, M., and R. Vautard, Interdecadal oscillations and the warming trend in global temperature time series, Nature, 350, 324-327, 1991. ] used singular spectrum analysis to determine if a significant warming trend could be distinguished from the natural variability in 135 years of observed data. Their analysis showed that both interannual and decadal oscillations were large enough to conceal a greenhouse signal, but for one to two decades only.
Stouffer also showed in ’94, using a 1,000-year time series of global temperature, that no temperature change as large as 0.5C per century was sustained for more than a few decades. The current rate has been 2C per century since 1850.

pkatt
January 30, 2009 3:21 am

Ok, I have to say I don’t care if you call a model a predictor, a forecaster or an oracle’s crystal ball. The crystal ball said that as CO2 rose, the warmth had a high probability of causing a corresponding worldwide temperature increase that would tip us into out-of-control heat. It has not. The crystal ball said tropical storms had a high probability of increasing in intensity and frequency. They did not. The crystal ball said that the ice at the poles would be disappearing, unable to recover even with seasonal noise, until it was gone entirely. The Antarctic ice increased, and the Arctic ice made a considerable recovery, thus again proving the crystal ball wrong. (ps .. two 23 sunspots in a row.. lol)
I would say it is highly possible that the model makers need to seriously tweak their models. If an ocean current, wind currents or natural cycles can totally throw the thing out of whack, how can we base an economic policy such as Cap and Trade on its resulting output? How can we talk about changing or ‘fixing’ our environment when the models we are basing that need on are inaccurately programmed? When we cannot even create a workable biosphere? Something this drastic deserves a model which is beyond question, and it would seem to be in the best interests of the modeler to work with and take critique from anyone and everyone to make the product better. Yet they are not.
I believe in clean air and clean water. But I sincerely do not subscribe to ‘let’s scare them to death and force them to comply’ science. And I certainly do not subscribe to using such a short period of time as 20, 30, 40 or even 50 years to decide what the perfect climate of the earth should be.
and
Luis Dias (06:40:00) :
“What astounds me is the sheer gullibility of almost every single one reader of this blog.”
We come here to discuss. I’m sure you’re welcome to join in. Perhaps insulting everyone isn’t the greatest start. If you have a point to make, bring it on, show us proof… if not, I can see why you would have to resort to personal attacks.

Stefano
January 30, 2009 3:22 am

Scott Armstrong wrote about all this quite some time ago. I came across his analysis written up on CA.
IPCC AR4: No skill in scientific forecasting
I mentioned Armstrong’s work to some AGWers who professed to only be interested in serious scientists. Their dismissive reply was, “pah, institute of ‘forecasters’?? who the h*** are they??!”
When you’re relying on certain climate experts, it becomes hard to accept the basic idea that often experts can be wrong when forecasting the future. Curiously the one person I met recently who got it and agreed immediately, was a priest.

Solomon Green
January 30, 2009 3:34 am

This is way off track but those who are interested in “Climate Change” – once known as “Global Warming” may find the letter from James Nelson in the February edition of The Actuary http://www.the-actuary.org.uk/835539 of interest. His idea is certainly worth following up even though it will be dismissed by the fanatics because “the range says nothing about the mean”.

TomVonk
January 30, 2009 3:43 am

Stouffer also showed in ’94, using a 1,000-year time series of global temperature, that no temperature change as large as 0.5C per century was sustained for more than a few decades. The current rate has been 2C per century since 1850.
There exists no 1000-year time series of global temperature.
Thermometers and statistically significant surface coverage of temperature measurements didn’t exist for such a period.
Proxies are unreliable and don’t cover the surface either (see dozens of examples at CA).
There is no statistically valid method to distinguish local effects from global effects when the measurement density is low.
Since 1850 there is 1 (in words, ONE) point for century-long statistics.
There are no statistics possible with 1 point.
One could speak about decadal trends (15 points of data, which is not much anyway), but that is something completely different from century trends.
Of course everybody knows that decadal trends don’t extrapolate to century trends unless one arbitrarily postulates linear trends at some equally arbitrary time scale.
The concept of “trend” is available only for a series that contains a large number of points at the scale at which the trend is analysed.
That is not possible for century “trends” over such a short period as 1850–2000.
Your arguments make no sense whatsoever.

January 30, 2009 3:47 am

Stouffer also showed in ’94, using a 1,000-year time series of global temperature, that no temperature change as large as 0.5C per century was sustained for more than a few decades.
The 20th century was certainly an interesting time, then. Not only did we have two 30-year periods of warming – both of them at a rate more than double the “0.5C per century” specified by Stouffer – we also had a 20-year period of cooling which again was more than double the 0.5C per century rate.
What’s more these sustained periods of stable climate prior to 1850 are not evident in any of the long term temperature records, i.e. CET, Uppsala, Armagh, etc.
The current rate has been 2C per century since 1850
No it hasn’t – otherwise we’d be 3 deg C warmer than in 1850.

Flanagan
January 30, 2009 3:50 am

Tom:
They used the Vostok data for their study. Do you have any scientific paper showing that these are “unreliable” as you state? Or is it just blog-based science?
pkatt:
I couldn’t believe my eyes when I saw your post… Complete disinformation! You’re talking about effects which are supposed to appear over a century, and then you reject AGW because they haven’t happened yet? Wow!

January 30, 2009 3:51 am

Your [Flanagan’s] arguments make no sense whatsoever .
Oh yeah – I meant to say that as well.

Robert Bateman
January 30, 2009 4:05 am

The proxies, while they do not match the sensitivity of the temperature series, do in their own right demonstrate an increasingly cooler Earth and a marked tendency towards Ice Ages over Interglacials.
Otherwise, you have legend & lore.
When it comes to plans to practice Doomsday Measures to lower the Earth’s temperature, the odds of that going very wrong are high.
Due to the lack of comment on what those measures are I will assume that they are generally unknown or not talked about.
I cannot blame anyone for not wanting to think about it.

anna v
January 30, 2009 4:39 am

foinavon (06:57:21) :
SIX. It’s not obvious that Armstrong’s apparent requirement for a naïve model is really very useful in climate change forecasting. Nevertheless we already clearly do have a “naïve model” for climate forecasting. It’s the vast amount of empirical, observational and theoretical analysis that indicates that the earth under the present climate regime has a climate sensitivity near 3 °C of temperature rise per doubling of atmospheric CO2, and that the earth has various elements of “inertia” that define the rate at which a response to a new temperature equilibrium is achieved in response to enhanced forcing. The climate models are entirely consistent with that. They give further information in allowing predictions of likely geographical distributions of enhanced warming, and precipitation patterns and so on, under given emission scenarios. So they seem to fulfil Armstrong’s requirements.
I am sorry, but your “vast amount” for “climate sensitivity near 3 C per doubling of CO2” is not empirical evidence. It is the output of modeling programs subject to the feedback mechanism with H2O, a creative invention of the modelers.
If this sensitivity existed, we would not be here now. The ice-core records show a lag of CO2 behind temperature rise of at least 800 years. Were the sensitivity what you claim, the response would be instantaneous, since CO2 comes out with temperature but still needs aeons to build up in the atmosphere. The short-term records also show that CO2 goes up and down with temperature, lagging it by a few months, so that even short term the humidity feedback is nonexistent. How can something which is a result drive a cause, and with large sensitivity at that?

Flanagan
January 30, 2009 4:55 am

“we also had a 20 year period of cooling which again was more than double the 0.5C per century rate. ”
I really wonder where you could find that without torturing the data…
About the 2C per century: it’s true that 1.5 is more accurate.

hunter
January 30, 2009 5:06 am

AGW has never been about climate or science.
AGW is a social movement that, like Eugenics, uses science, political power and media power to justify social policy changes and silence those who are skeptical of the movement.

January 30, 2009 5:34 am

E M Smith
Thanks for the interesting link regarding the great panic of 33AD.
I admit I had to google to find the source of this solar-related quote, which also relates to current financial circumstances as illustrated in your link. I felt it highly appropriate that my quote comes from an old religion, yet is very relevant to the adherents of the new green religion. It should perhaps remind people of the overwhelming importance of the sun, and that we have been this climatic (and financial) way many times before.
Ecclesiastes 1:9: What has been will be again, what has been done will be done again; there is nothing new under the sun.
TonyB

Simon Evans
January 30, 2009 5:46 am

G Alston (18:52:35) :
Simon Evans — A null hypothesis would assert no expectation of likely change one way or the other. A “successively updated” model simply tracks whatever change occurs. Surely this is evident?
The definition we use at my work is that a null hypothesis would assert no/little change in rates of change of a changing thing, not that there is no change at all.

Well, by that definition (which I certainly accept in those circumstances), then we have a thing with a steady rate of change. If that’s the null hypothesis in those circumstances, then Armstrong’s model doesn’t forecast it at any point, it simply simulates the outcome by tracking change through updating and the addition of all successive ‘forecasts’, thus deriving a percentage of hits that looks good enough so long as the expected accuracy is loose enough. His case is that that’s good enough, based on the bare assumption of the climate’s ‘remarkable stability’ (talk about a circular argument!).
The same process could be applied to forecasting temperatures for the year ahead, with equivalent levels of supposed ‘accuracy’ depending on the margin chosen. Are we really to accept that a model which forecasts now that the temperature in July will be the same as it is today is as useful as any other model? If I want to plan my summer holiday I need something better than that!
Alan Wilkinson (19:22:13) :
Another way of putting it (using say data from here:
http://dataservice.eea.europa.eu/atlas/viewdata/viewpub.asp?id=3470) is that over 158 years of annual average surface global temperature data, 97% of data points lie with a range of 1 deg C.
This variability is insufficient to validate climate prediction models and to distinguish them from a “no change” model. Seems a fair case to me.

I agree with your first part (and think it would have been better put that way). However, in respect of your second, the variability is evidently not random. If an alternative model is notably closer to projecting a trend in that variability, then why is it not distinguishable?
E.M.Smith (20:01:40) :
Simon Evans (05:57:41) :
The comparison, then, is between a no-trend straight line (the naive model)
I think you have misunderstood the naive model. It is not ‘no-trend’ it is ‘the same trend as last time-block’

To express my quoted statement more fully, I meant that the comparison of long-term projection from a given point is as stated (and after all, that is the intended usefulness of climate models in this context). I don’t think the naive model is ‘same trend as last time-block’ but simply same temperature as.
Alan Wilkinson (20:03:27) :
Simon, actually the data shows no trend for 80 years (1850-1930), a strong increase 1930-45, no trend 1945-1978, strong increase 1978-1998, no trend since.
So there is a strong trend in just 35 of the last 158 years.
Is that really what climate models are telling us and if so, why and how?

That’s a big question for me, Alan! 😉 You could break it down further, of course – so, for example, equivalents of your ‘no trend since 1998’ period could be picked out for a few years from the 80s and the 90s. I think there are two questions here, really – 1)once knowable inputs are known, do the models represent, by hindcasting, the temperature changes that occurred? Any answer will involve a judgment of what’s ‘good enough’. I’d say yes on a decadal basis, generally no on an interannual basis. The ‘why and how’ is surely answered as whatever inputs the models have? This includes known natural changes (TSI, vulcanism, etc) and known anthropogenic, along with the theoretical calculation of their effects. 2) Are the models telling us these patterns in advance? No, I don’t think so on the resolution you propose. The longer the view, the more useful they are, IMV.
E.M.Smith (22:06:05) :
Simon Evans (07:49:29) : The IPCC can’t be expected to predict what humans will choose to do!
Why not? Economists and folks in Marketing do it all the time. With real money at stake and with known error bands. You wouldn’t be saying that economists and marketing types are technically more capable than climate modelers, would you …

– but the IPCC’s brief was to put options before policy makers. I don’t think it would have gone down well if they told them what they were going to do! 😉 . Actually I think that Hansen did use the word ‘prediction’ in presenting his 1988 scenarios and, in describing B as ‘most likely’ he was making a call on how human activities would develop (and thus it was a prediction).
I’m off for the weekend, so have fun everyone!

Flanagan
January 30, 2009 5:48 am

Just in case you were wondering where your money ACTUALLY goes:
http://www.washingtonpost.com/wp-dyn/content/article/2009/01/30/AR2009013001069.html
Everybody’s losing money but these guys.. Mmmm…
did you also hear about the numerous reports (including one from McKinsey) that fighting global warming will actually cost… nothing? Because of the substantial cuts in energy costs?

Jon Jewett
January 30, 2009 6:57 am

Ron (Tex) McGowan (22:25:05) :
The rich, the Republicans and big business would be loving this.
Hey Tex,
You seem to have an antipathy for “Big Business”.
A simple unfortunate truth:
When you woke up this morning, the alarm clock was made by big business and it was powered by big business (if electric).
Your coffee was brought to you by big business and heated by big business.
Your house was brought to you by big business and your clothes are by big business.
If you are old like me, the medicines that make your life better are created and brought to you by big business. If your children have ever had a serious illness, you owe their lives to big business.
Look around you. Your ability to buy all of the stuff that you have?
If you are not on the government dole, your wealth was created by businesses big or small.
If you are on the dole, the money that the government gives to you—–was confiscated from business.
If business does well, you do well. Without business, you and your family would die.
(I seriously doubt that you could live completely “off the grid”. I certainly couldn’t and 99.9% of the people in the US couldn’t. Without the goods and services provided by big business, most of us would die from starvation and disease in less than a year.)
I worked Union all my life. When my “big business” did well, I did well. And when they went bankrupt, I was unemployed for eight months. I am grateful to “big business” because through them I was able to garner more wealth than I ever thought possible. On Union wages.
May God Bless big business and allow them to make a better life for us all!
Regards,
Steamboat Jack
PS
And may God Bless Anthony and the others for bringing intellectual diversity to this subject.

gary gulrud
January 30, 2009 6:59 am

Sorry if someone beat me to the obvious:
“forecast is a forecast is a forecast”
I guess that depends on what your definition of ‘is’ is.

gary gulrud
January 30, 2009 7:13 am

“Do you have any scientific paper showing that these are “unreliable” as you state? ”
Well, since glaciers in Alaska, Greenland and the Alps are now growing, I’ll bet we start getting the first reliable data ever in my daughter’s lifetime.
Is Janikowski a blogger or the kicker? Sorry, despite shots I’ve contracted the flu. Interferon has me out of sorts.

juan
January 30, 2009 7:13 am

(Ever further OT, but unable to control myself….) To TonyB and E.M.Smith: I think we share an interest in pursuing historical sources, and perhaps taking them more seriously. Back in December I began a thread on CA with the following:

“In 1834, Richard Henry Dana sailed up and down the California Coast collecting cattle hides from the California ranchos. After returning to New England he published a book, Two Years Before the Mast. A recurring topic in the account was that none of the sailing vessels would anchor close to land, because of sudden recurring storms from the southwest that would drive ships aground. So they would anchor way offshore and row small boats to carry heavy loads of hides out to the ship. This was the universal practice and had been for some time.
“About 25 years later, Dana returned to California. In an epilogue to the original book, he notes that the ships were now coming right up to land, because the weather pattern had changed.
“I have been wondering for some time if any students of climate have written anything on what appears to have been a significant documentable change in Pacific weather patterns.”
Pliny and Sam Urbinto replied with useful links referring to known oscillations in the Pacific. Sam also provided a link to an online version of Dana’s book. The epilogue comments appear on page 443 at
http://books.google.com/books?id=WEQ5YL1OADgC&dq=richard+henry+dana+two+years+before+the+mast&pg=PP1&ots=HVFAKQoX7r&source=bn&sig=mlmr_-b_G59xcwruAVwZ7t6I7Hk&hl=en&sa=X&oi=book_result&resnum=4&ct=result#PPA443,M1
After mulling it over, it seems to me that oscillations such as the PDO are too short to fit with the record here. The Dalton minimum seems about the right length and its dating is close. But it ends a little too early, so it may be close, but no cigar.
At any rate the whole discussion suggests that there might be interesting Pacific Coast weather information in ships’ logs from the New England sailing period. That’s as far as I have gotten at this point.
Even before this, and prior to the Dalton minimum, Spain for many, many years conducted regular galleon trade that passed northward along our west coast. (Or so said the fourth grade social studies books that I used to teach from.) I think Spain was notorious for keeping detailed shipping records; whether there are extant logs with weather information is an interesting question. Anyone out there in Barcelona reading this?

Niels A Nielsen
January 30, 2009 7:33 am

Flanagan to Tom V: “They used the Vostok data for their study. Do you have any scientific paper showing that these are “unreliable” as you state?”
You present statements about global decadal trends based on _one_ temperature _proxy_ series from _one_ geographical spot. And you ask for a scientific paper showing that this particular proxy is “unreliable”??? LOL
Why do you think the absence of such a paper would in any way refute Tom’s argument???
I can show you data from more than one weather station with real thermometer temperature measurements that show pretty strong cooling for the last several decades. Those data are much more reliable than the Vostok data. Well, if you don’t need more than that to arrive at your conclusions, Flanagan, go scare people with the prospects of a new ice age.

Stefan
January 30, 2009 7:51 am

Bill DeMott wrote:
My research and teaching often focus on population ecology […]
To use his (economic) forecasting models to study population dynamics, we first ignore what we know about the biology and science of populations, including the effects of predators, competitors, […]
[Dr. Armstrong’s forecasting methods] would provide no scientific understanding of how populations are likely to respond to ongoing or future changes in their environment. Not surprisingly, I am unaware of applications in textbooks or the primary literature on population biology.

Are there examples of successful forecasts made based on that scientific understanding? And examples of unsuccessful forecasts?

Sekerob
January 30, 2009 8:00 am

Brendan H, and Flanagan, thanks for your language analysis and contributions, which are right on.
Anthony, so when I speak of a money trail, you think of a direct line to big oil. Well, Exxon-Mobil did a significant about-face. They’ve known for a very long time about the CO2 effects, same as the tobacco lobby has known for a very long time that smoking kills.
Anyway, Scott Armstrong links well to James Inhofe as a first pointer on where this “professional” witness comes from: http://www.desmogblog.com/scott-armstrong-james-inhofe-polar-bear-alaska on a blog dating back to May 2008, where mention is made of Armstrong and his position on the IPCC “forecasting”.
Thus, Anthony, the opening lines “It has been an interesting couple of days. Today yet another scientist has come forward…” imply homework was not done before hosting this article by Jennifer Marohasy. It appears to be an old and chewed hat being handed around.
PS: In case you’ve heard of the “Inhofe 400”. Armstrong and Ross Hays are on it.
http://www.thedailygreen.com/environmental-news/latest/inhofe-global-warming-deniers-scientists-46011008. Ross Hays a prominent climate scientist?

January 30, 2009 8:58 am

Flanagan
I said

“we also had a 20 year period of cooling which again was more than double the 0.5C per century rate. ”

To which you replied
I really wonder where you could find that without torturing data…
Why would I need to “torture” the data? Are you suggesting there wasn’t a period of cooling in the mid-20th century?
About the 2C per century: it’s true that 1.5 is more accurate.
Well, it might be true that 1.5C per century is more accurate than 2C – it’s still nowhere near correct, though.