New Methodology Improves Winter Climate Forecasting

From NC State Press Releases

(Note: the actual paper was not included with this press release)

It’s hot out right now, but new research from North Carolina State University will help us know what to expect when the weather turns cold. Researchers have developed a new methodology that improves the accuracy of winter precipitation and temperature forecasts. The tool should be valuable for government and utility officials, since it provides key information for use in predicting energy consumption and water availability.

“Predicting winter precipitation is extremely useful, because winter is the most important season in terms of re-charging water supplies in the United States, ensuring water will be available in the summer,” says Dr. Sankar Arumugam, author of the study and an assistant professor of civil, construction and environmental engineering at NC State. The study was co-authored by Naresh Devineni, a Ph.D. student at NC State.

“Predicting temperature is also important, because temperature determines energy consumption,” Arumugam says. “When it is very cold, people use more energy to heat their homes.”

The researchers were able to reduce uncertainty in winter climate predictions over the United States by developing a methodology that incorporates multiple general circulation models (GCMs) and also accounts for the activity – or inactivity – of El Nino conditions in the Pacific.

Winter precipitation and temperature over many regions of the continental United States are predominantly determined by the El Nino-Southern Oscillation (ENSO), which denotes warm (El Nino) or cool (La Nina) sea surface temperature conditions in the tropical Pacific.

Most GCMs are better at predicting the winter climate when ENSO is quite active, and are less accurate under neutral ENSO conditions. The methodology developed by the researchers accounts for the skill of the models under active and neutral ENSO conditions in combining multiple GCMs, resulting in reduced uncertainty in predicting the winter climate.
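(Note: the paper itself was not provided, but the combination step described above amounts to a skill-weighted average computed separately for active and neutral ENSO winters. The sketch below is purely illustrative; the inverse-MSE weighting and the 0.5 degree Nino3.4 threshold are assumptions, not details taken from the paper.)

```python
import numpy as np

# Purely illustrative sketch -- NOT the authors' code. Assumes inverse-MSE
# weighting and a 0.5 deg C Nino3.4 threshold for "active" ENSO; the paper
# may define both differently.

def enso_state(nino34_anom, threshold=0.5):
    """Classify a DJF season as ENSO 'active' or 'neutral' from its Nino3.4 anomaly."""
    return "active" if abs(nino34_anom) >= threshold else "neutral"

def conditional_weights(hindcasts, observed, nino34, state):
    """
    hindcasts : (n_models, n_years) array of each GCM's DJF hindcast values
    observed  : (n_years,) array of observed DJF values
    nino34    : (n_years,) array of DJF Nino3.4 anomalies
    Returns one weight per model, computed only from years in the given ENSO state.
    """
    mask = np.array([enso_state(x) == state for x in nino34])
    mse = np.mean((hindcasts[:, mask] - observed[mask]) ** 2, axis=1)
    inv = 1.0 / mse                 # lower error -> larger weight
    return inv / inv.sum()

# For an upcoming El Nino winter, the forecast would use the "active" weights:
#   w = conditional_weights(hindcasts, observed, nino34, state="active")
#   combined_forecast = w @ current_gcm_forecasts
```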

“Improving precipitation and temperature predictions should help government, water and energy utility officials plan more effectively,” Arumugam says, “because they will have a better idea of what conditions to expect.”

The study, “Improving the Prediction of Winter Precipitation and Temperature over the continental United States: Role of ENSO State in Developing Multimodel Combinations,” was published online this month by Monthly Weather Review. The research was funded by the North Carolina Water Resources Research Institute.

NC State’s Department of Civil, Construction and Environmental Engineering is part of the university’s College of Engineering.

-shipman-

Note to editors: The study abstract follows.

“Improving the Prediction of Winter Precipitation and Temperature over the continental United States: Role of ENSO State in Developing Multimodel Combinations”

Authors: Naresh Devineni, A. Sankarasubramanian, North Carolina State University

Published: June 2010 (made available in July), Monthly Weather Review

Abstract: Recent research in seasonal climate prediction has focused on combining multiple atmospheric General Circulation Models (GCMs) to develop multimodel ensembles. A new approach to combine multiple GCMs is proposed by analyzing the skill of candidate models contingent on the relevant predictor(s) state. To demonstrate this approach, we combine historical simulations of winter (December-February, DJF) precipitation and temperature from seven GCMs by evaluating their skill – represented by Mean Square Error (MSE) – over similar predictor (DJF Nino3.4) conditions. The MSE estimates are converted into weights for each GCM for developing multimodel tercile probabilities. A total of six multimodel schemes are considered that include combinations based on pooling of ensembles as well as based on the long-term skill of the models. To ensure the improved skill exhibited by the multimodel scheme is statistically significant, we perform rigorous hypothesis tests comparing the skill of multimodels with individual models’ skill. The multimodel combination contingent on Nino3.4 shows improved skill, particularly for regions whose winter precipitation and temperature exhibit significant correlation with Nino3.4. Analyses of weights also show that the proposed multimodel combination methodology assigns higher weights for GCMs and lesser weights for climatology during El Nino and La Nina conditions. On the other hand, due to the limited skill of GCMs during neutral conditions over the tropical Pacific, the methodology assigns higher weights for climatology resulting in improved skill from the multimodel combinations. Thus, analyzing GCMs’ skill contingent on the relevant predictor state provides an alternative approach for multimodel combination such that years with limited skill could be replaced with climatology.
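(Note: again purely illustrative and not from the paper; a minimal sketch of how the per-model MSE values mentioned in the abstract might be turned into weights for combining tercile probabilities, with climatology carried along as an extra candidate. The equal-odds treatment of climatology and the inverse-MSE weights are assumptions.)

```python
import numpy as np

# Illustrative sketch only -- assumptions, not the authors' method. Climatology is
# treated as one more candidate whose tercile probabilities are always 1/3 each;
# candidates are weighted by inverse MSE under the current ENSO state.

def multimodel_tercile_probs(model_probs, model_mse, climatology_mse):
    """
    model_probs     : (n_models, 3) below/near/above-normal probabilities per GCM
    model_mse       : (n_models,) each GCM's MSE for the current ENSO state
    climatology_mse : float, MSE of always forecasting the climatological mean
    Returns the combined (3,) tercile probabilities.
    """
    clim_probs = np.full(3, 1.0 / 3.0)                 # climatology: equal odds
    probs = np.vstack([model_probs, clim_probs])
    mse = np.append(model_mse, climatology_mse)
    weights = (1.0 / mse) / (1.0 / mse).sum()
    return weights @ probs

# In a neutral-ENSO winter the GCM MSEs are typically large relative to
# climatology, so most of the weight shifts to the 1/3-1/3-1/3 climatological
# forecast -- the behaviour the abstract describes.
```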

harrywr2
July 20, 2010 9:00 am

Sounds like broken watch science.
I’ve got a broken watch stuck at 12. To determine whether the time is closer to noon or midnight I installed a solar powered indicator light on my broken watch.
Statistical analysis shows that if the solar powered indicator light is lit, then the time is closer to noon rather than midnight.
I can now more accurately predict whether it is day or night using my broken watch.

Pamela Gray
July 20, 2010 9:16 am

Well, well, well. A model that increases weighting on ENSO events. Hansen, are you reading this? You might want to tinker with your model code by adding a bit of weight to the oceans (GCM’s), and subtracting a bit of weight off your AGW climatological code. If you won’t listen to us, maybe you will listen to colleagues. I won’t hold my breath.

DirkH
July 20, 2010 9:18 am

Does it beat Joe Bastardi?

Slabadang
July 20, 2010 9:23 am

Very soon, maybe within two decades, they will understand what Bastardi and Corbyn already know?

latitude
July 20, 2010 9:33 am

Improves the accuracy from what? zero to 1?
This is real cutting edge science, they just figured out that ENSO might have something to do with it, and wrote it into a computer game.
They are still hind-casting. Trying to predict the future by what’s happened, when no one knows enough about it to know what happened.
and screaming because no one believes them about global warming

James Sexton
July 20, 2010 9:33 am

Uhmm, anyone else notice they state “New Methodology Improves Winter Climate Forecasting”, but we see no quantification of how much it is improved or if it actually improves forecasting at all. Giving weight to already known climate related phenomena is something I thought was already being done. (Anthony, isn’t this already being done in meteorology?) Using multiple GCM’s isn’t new either. What am I missing?

Neo
July 20, 2010 9:38 am

Department of Civil, Construction and Environmental Engineering
“Engineers” ? Not Scientists … Heretics !! … LOL

Henry chance
July 20, 2010 9:42 am

Deadly cold
http://en.trend.az/regions/world/ocountries/1723309.html
At least 175 people have died in the coldest winter in South America in recent years, officials in six affected countries said, dpa reported.
The cold was worst in southern Peru, where temperatures in higher altitudes of the Andes dropped to minus 23 degrees Celsius. Officials said Monday that since the beginning of last week 112 people died of hypothermia and flu.
The Met Office has been wrong 10 of 10 years. They have forecasted too hot for 9 of 10 years.
Last winter Joe Bastardi forecasted a huge storm around New Year’s Day hitting England. His forecast was made over a week in advance. It was noted in The Met Forecast at the time it hit.

BillN
July 20, 2010 9:47 am

Hey, lighten up guys! This is the work of a student learning to be a research scientist. Throttle back on the ‘tude. At least the student will go out into the world with an appreciation for ENSO and the ocean effect on climate.
Cheers,
BillN

jim hogg
July 20, 2010 9:49 am

Improves the accuracy of forecasting . . . . .? Doesn’t say how many years it’s been tested over – that I can see . . surely the abstract or summary would have referred to historical facts if this claim could be substantiated . . . or is this retrospective model comparisons . . . Why don’t they come back when they’ve got five or more years of real data that supports their improved accuracy claims . . . .Is this what science has come to?

Pamela Gray
July 20, 2010 9:50 am

I know for some of us this seems like such a “duh” moment. But Hansen style climate models place little weight on changing short and long term ENSO parameters and instead rely quite heavily on the mathematically calculated “assumed” effects of increased greenhouse gas forcing as the primary long term pattern of concern.
Here is my position. In essence, if GCM’s predict long term climate better than either a mixed, or an AGW climatological model, those of us who understand the oscillating heat holding and heat letting capacity of oceans will be vindicated. That is not to say that greenhouse gasses do not cause warming (they do), or that increased amounts will not cause more warming (they probably do). But the oscillating power of the oceans just buries this tiny change in temperature.

Sean Peake
July 20, 2010 9:52 am

Golly! Another model. [snip]

Enneagram
July 20, 2010 9:54 am

Kids still insisting on working for computer games, in the realm of the twilight zone, dark black holes and devilish antimatter, not really researching for real causes, though there are a lot of researchers, all over the world, applying real science not IPCC-like witchcraft.
Researchers were able to reduce uncertainty in winter climate predictions by developing a methodology that incorporates multiple climate forecast models and also accounts for the activity of El Nino conditions in the Pacific.
This is politicized and convenient meteorology.

latitude
July 20, 2010 10:07 am

BillN says:
July 20, 2010 at 9:47 am
Hey, lighten up guys!
==========================================================
Nope, the field of main stream climate science has become a business of preying on victims. The more victims they create, the more money they make, and the better their job security.
This paper was peer reviewed by people that have been in this business long enough to know the game and work the game.
They have not been able to predict squat, and this paper does not improve that one bit. It’s just another smoke and mirror to make it seem like they can.
robust, new, improved,
No slack is coming from me at all.

Stop Global Dumbing Now
July 20, 2010 10:08 am

I’m all confused. Is this weather or climate? Seems to me (as mentioned above) Joe Bastardi and other good weather forecasters already do this.

RichieP
July 20, 2010 10:09 am

Excuse me, but shouldn’t the title read “New Methodology Improves Winter Weather Forecasting”? I get so mixed up these days. Personally, I’ll stick with Joe. He’s also got a sense of humour – which is more than you can say about most of the new puritans.

John Blake
July 20, 2010 10:22 am

Cyclic peaks and troughs by definition are predictable in terms of a phenomenon’s amplitude, frequency, and wavelength. As activity bends high and low about a central mode, or trend, regression-to-the-mean is mathematically a given, regardless of internal dynamics driving the effect.
Climate studies are not an empirical, experimental discipline, but rather classification schemes akin to botany. Just as evolutionary biology is absolutely incapable of forecasting mutant variations, so climatologists (sic) deal only in hindsight, their circular “models” necessarily built on more or less unrealistic preconceptions.
For mathematical and physical reasons ranging from Chaos Theory to Fractal Geometry and Ilya Prigogine’s “catastrophes”, linear extrapolations of complex dynamic systems such as Earth’s atmosphere are impossible in principle. This leaves climate hysterics either to admit pervasive ignorance or sneakily fabricate conclusions in the guise of objective, rational scientific inquiry.
Since no “peer review” can replicate non-empirical results, corrupted panels merely accede to knowing fraud for purposes of agenda-driven propaganda, refusing access to base-data or manipulative computations while mounting vicious ad hominem attacks on all competing theses. From Pachauri’s grubby little REDD-based IPCC to once-respected institutions such as GISS/NASA, Penn State, CRU/UEA, the rot is far advanced. Next up: EU debate-suppression statutes akin to Canada’s astoundingly maleficent “hate speech” panels, Star Chamber exercises in (as Sr. AW has said) “CYA BS” on every level.
What is to be done?

Jim
July 20, 2010 10:26 am

I tried eating one insect and it does not taste good at all. So, I will improve the flavor by mixing 12 different insects and eating that instead. Well, it’s a plan at least.

k winterkorn
July 20, 2010 10:32 am

It cannot be known whether this model improves predictions until it has made a fair number of predictions and then the actual weather is compared to the predictions.
I am regularly astonished by the aggressively self-confident claims of “modern” scientists. I cannot remember whether it is NOAA or NASA that uses the language “_____ understands the oceans and atmosphere and makes accurate predictions, etc” (paraphrasing), when scientists of proper humility would add in the words “seeks to understand……”.
This also reminds me of the language of educationist bureaucrats in their curricula stating “The student will learn and comprehend X”, when we all know darn well that most of our students will neither learn nor comprehend much of the curriculum presented to them. Perhaps this is where recent scientists have learned their verbal grandiosities.
Orwell taught us about the power of debauched language to produce debauched thinking. The apparently oxymoronic “Climate Scientist” seems to be a case in point.

Andrew30
July 20, 2010 10:38 am

“New Methodology Improves Winter Climate Forecasting”
1. Never call it a prediction, ever.
2. Never use absolute words like ‘will’ and ‘must’, stick to ‘could’ and ‘may’.
3. Only make projections at least 50 years out.
4.7 Get a larger dart board (or look out the window)
6. Collect some old copies of The Farmers Almanac.
7. Learn to cut-and-paste.
5.31 Get a good press agent.

PJB
July 20, 2010 10:45 am

Most GCMs are better at predicting the winter climate when ENSO is quite active,
Mostly because they were designed to handle ever-increasing heat content and the attendant warming that resulted.
One of the problems with short term gain for long term pain.

wayne
July 20, 2010 10:45 am

“It’s hot out right now, but new research from North Carolina State University will help us know what to expect when the weather turns cold.”
~~~~~
How about ‘may’. Seems you rarely see correct key words being applied in science literature and journalism today.

July 20, 2010 10:45 am

Anthony, you might take a detailed look at what my lunar declination cyclic forecast produces, to see how the 6-year-long forecast (2008 through end of 2013) on my site has scored compared to climatology over the first 30 months since it was posted.

JohnH
July 20, 2010 10:49 am

Peru is not the only country suffering from cold
http://www.guardian.co.uk/world/2010/jul/20/mongolia-nomads-livestock-winter-poverty
Somehow the Guardian couldn’t work “AGW is to blame” into the story, but I bet they tried hard.

Pamela Gray
July 20, 2010 10:57 am

I have always considered climate and weather to have fuzzy boundaries. And AGW folks have been hinting at “weather change” for quite some time under the label of climate change. So if we are able to predict weather out 6, 12, 18 months from now based on GCM’s, when does that become climate prediction? And when do GCM’s overtake AGW models in usefulness? Pretty damned quick.
What will be fun to watch is the match between these types of models as well as the mixed versions. If the pure GCM’s perform as well as the others, the null hypothesis must be accepted.
Game over.

MaxL
July 20, 2010 10:59 am

From the sounds of it this is what meteorologists call ensemble forecasts. You examine results from a group (ensemble) of various different weather models. Or you can take one model and run it numerous times with slightly tweaked initial conditions or physics schemes. The general idea is that if the models tend to converge close to a similar solution then you are more confident of that scenario. If the solutions vary widely then the results are very dependent on small possible uncertainties in initial data or model physics. Basically when the weather pattern is rather stable and moving along fairly predictably the ensemble forecasts will produce similar solutions. When the weather is rapidly changing the ensemble forecasts generally vary quite a bit. Also, generally, the further out in time you go the more the solutions diverge (we are talking days here, not months).
Ensemble forecasts are very useful when you have a client who wants to know your confidence in the forecast and not just get a black-and-white “it’s going to rain” type of forecast. All forecasts have some degree of uncertainty, no matter who is doing them. Some just choose to cherry pick their good results and advertise those.
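A toy sketch of the spread idea (made-up numbers, not output from any real ensemble system):

```python
import numpy as np

# Toy example only -- made-up numbers. The point: a tight ensemble spread means
# higher confidence in the forecast; a wide spread means the outcome is sensitive
# to small uncertainties in the initial conditions or model physics.
stable_pattern   = np.array([2.1, 2.3, 2.0, 2.2, 2.1])   # 5 members, deg C
changing_pattern = np.array([-1.0, 3.5, 0.2, 5.1, -2.4])

for name, members in [("stable", stable_pattern), ("changing", changing_pattern)]:
    print(f"{name}: mean = {members.mean():.1f} C, spread (std) = {members.std():.1f} C")
# stable:   mean = 2.1 C, spread (std) = 0.1 C  -> high confidence
# changing: mean = 1.1 C, spread (std) = 2.8 C  -> low confidence
```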
For more information on the Canadian and American ensemble forecasts see:
http://wattsupwiththat.com/2010/07/20/new-methodology-improves-winter-climate-forecasting/#more-22260
http://www.weatheroffice.gc.ca/ensemble/naefs/index_e.html

Arno Arrak
July 20, 2010 11:09 am

Finally someone has realized the importance of the El Nino phenomenon for weather prediction. It should have stared them in the face if they had bothered to study satellite records that have been out for thirty years. When I analyzed satellite temperatures I immediately realized that a twenty year stretch in the eighties and nineties consisted of nothing but ENSO oscillations – five warm El Nino peaks separated by cool La Nina valleys. The temperature swing from an El Nino peak to a La Nina valley was about half a degree, which is of the same order of magnitude as the 0.6 degree rise for the entire twentieth century. We are talking about global temperature, not just South America. The super El Nino of 1998 was not part of the ENSO oscillation and brought us warming that persisted after it was gone.
I have shown that the ENSO oscillation involves the trade winds, the two equatorial currents and the equatorial countercurrent. The equatorial currents driven by trades pile up water near New Guinea and the Philippines and it returns east via the equatorial countercurrent. Wave resonance is involved, with resonance period determined by the dimensions of the ocean basin. It is about four or five years but oceanic conditions can modify it. It is manifested by periodic appearance of a Kelvin wave traveling east along the equatorial countercurrent. The ENSO oscillation itself has existed since the Isthmus of Panama rose from the sea. People studying El Nino were unbelievably close to discovering how the system operates when they marked out the Nino 3.4 area for observation. It happens to be an elongate stretch of water right smack in the middle of the equatorial countercurrent. When an El Nino wave arrives at the South American coast it runs ashore, spreads out in both directions, and prevailing winds pick up its heat when it has spread out. But any wave that runs ashore must also retreat. As it retreats and starts a return journey water level drops behind it by half a meter or more, cool water from below rises up, and a La Nina begins.
An example of the effect of oceanic conditions is El Nino Modoki. It happens when something blocks the flow of the equatorial countercurrent and the warm water on its way to South America spreads out in the middle of the ocean. The super El Nino of 1998 is another one that no one could have predicted. A huge amount of warm water was deposited at the start of the equatorial countercurrent, apparently by a storm surge in the Indo-Pacific region. This happened between two regular ENSO waves. When this mass of warm water arrived in South America and spread out, global temperature went up twice as much as any previous El Nino had been able to lift it. After it was gone global average temperature rose by 0.3 degrees and a six year warm period I have designated the “twenty-first century high” followed. This took only four years to accomplish and amounts to a non-carboniferous, step-wise warming at the start of the century.
And all of this had nothing to do with carbon dioxide greenhouse effect. You will not see this effect on official temperature curves that feature the so-called “late twentieth century warming” in the eighties and nineties. It is not present in the satellite record of global temperature. To learn more, read “What Warming?” available from Amazon.com.

Steve in SC
July 20, 2010 11:21 am

Tremendous loss of face is at stake here.

Pamela Gray
July 20, 2010 11:30 am

MaxL, I was thinking along those same lines. There are meteorologists who are working on 3-month-out predictions for agricultural purposes. See the following statistical analog method. Granted, he includes a wide range of possibilities, especially, as you say, under unclear climate signals, i.e. ENSO-neutral or mild events.
http://www.oregon.gov/ODA/NRD/docs/pdf/dlongrange.pdf

Pamela Gray
July 20, 2010 11:37 am

Arno Arrak, you read my mind. Tell us more about yourself.

MattN
July 20, 2010 11:44 am

Cool. Before I read a word of the entry I thought “That looks like NC State’s bell tower…”
We have a good meteorology program.
MattN<—NCSU alum….(not in meteorology though….)

Cassandra King
July 20, 2010 11:45 am

Models based on models incorporating yet more models which are then adjusted more efficiently, supposedly.
Look at my new model, it’s better than the old models they are based on because there are more of them? A bag of flawed models knitted together is more complex; however, complexity does not equal accuracy, and better computing of garbage does not make that garbage any better, it simply makes it more easily tampered with and adjusted. Model not working out as anticipated? Just make it more complex and all will be well. The current models, fed with mushroom growing medium, are giving us a wholly surreal picture at odds with reality; the answer is to scrap the models and focus on real old fashioned traditional techniques that have served the science so well. However, that approach will be very dangerous because it would start giving answers that the CAGW industry simply does not wish to hear. The thing about reality and politicians on a mission is that the politicians cannot control reality, and its vagaries can derail their ambitious plans.
BTW what happened to the UK Met Office’s new wonder model that could predict local weather and climate out to 2100? It was claimed that this new wonder of science was put together by the finest brains the Met Office could assemble. I assume it is still in existence, or did a flood of FOI requests kill it dead?

July 20, 2010 12:08 pm

If someone could demonstrably predict the climate appreciably better than tossing a coin in one year’s time, I might just listen to them for a 10 year forecast. If however they can demonstrably forecast 10 years in advance then I might listen to them for a 100 year forecast.
But, if they don’t even bother to try to forecast one year in advance, or worse forecast and then “hide the decline” between predicted and real … then I won’t listen to them at all.
As for this research, it is the first time in a while I’ve heard anyone approaching the subject in a real testable scientific way.

Jimbo
July 20, 2010 12:11 pm

In this video is Stephen Schneider in the 1970s predicting a coming ice age. Let’s hope he was wrong then.

—–
I’m sorry for his passing away and my sincere condolences to his family.

Enneagram
July 20, 2010 12:15 pm

Pamela Gray says:
July 20, 2010 at 10:57 am
Game over!….
Just great! Though the global warming bed-wetting kids are planning a sinful jamboree next November, without “parents” (deniers), of course 🙂

Joel Hoffman
July 20, 2010 12:38 pm

Arno Arrak,
With all due respect, climatologists have known about the importance of ENSO in weather and climate prediction for decades. The problem these students are trying to address is how to create better model forecasts during active AND neutral ENSO. If anything, it is harder to predict winter temps and precip when ENSO is not a factor because of how much it influences climate when it is active. From the press release…
“Most GCMs are better at predicting the winter climate when ENSO is quite active, and are less accurate under neutral ENSO conditions. The methodology developed by the researchers accounts for the skill of the models under active and neutral ENSO conditions in combining multiple GCMs, resulting in reduced uncertainty in predicting the winter climate.”

latitude
July 20, 2010 12:56 pm

Joel Hoffman says:
July 20, 2010 at 12:38 pm
—————————————————————————————————–
Joel, aren’t they still trying to predict the future by hind-casting?
It doesn’t matter what happened in the past, how strong or weak an ENSO was; after it has happened, it’s over, and someone can then analyze it after the fact.
In order to model the future, they would have to be able to know how strong or weak a current ENSO “will end up”, and that they can’t do.
They can’t even predict when one will start or stop, much less what it will be.

nandheeswaran jothi
July 20, 2010 1:00 pm

Cassandra King says:
July 20, 2010 at 11:45 am
Models based on models incorporating yet more models which are then adjusted more efficiently, supposedly.
Look at my new model, it’s better than the old models they are based on because there are more of them? A bag of flawed models knitted together is more complex; however, complexity does not equal accuracy, and better computing of garbage does not make that garbage any better, it simply makes it more easily tampered with and adjusted.
The professor who taught my first modeling class used to insist that we should pick out the dominant factor that affects the behaviour we are studying BEFORE we write the first line. That now seems to be old fashioned. Just adding a whole bunch of crap models will not get them a better model.

Pamela Gray
July 20, 2010 1:10 pm

Joel, I wonder if the step changes and various lags as the cooled or heated tropical stretches of water move to other global locations have been fully understood. Just because ENSO is neutral does not mean that the step changes and lagged effects of ENSO events are not affecting weather pattern variations elsewhere. I would imagine that GCM’s have not done a good job of continuing to model these after-effects while the ENSO parameters themselves are neutral. We likely have lots of data related to weather pattern variation under current event conditions allowing for model development. But studying after-effects may not have been given the same scrutiny and modeling efforts. I wonder why. Could it be that assumptions were being made that under neutral conditions, some other forcing is at work, like CO2? And might that assumption be wrong, leading to the poor prediction the study is trying to improve?
It kind of reminds me of that show, “I didn’t know I was pregnant!” Assumptions were made that symptoms were related to something else so pregnancy was never considered. Big oops.

Joe D'Aleo
July 20, 2010 1:46 pm

No surprise here.
We have been using ENSO and QBO for over a decade to predict snowfall at BOS Logan with rather startling skill even with all the local issues near the coast. Early on used/considered other factors such as solar, AMO, PDO, but they did not add to skill.
http://icecap.us/images/uploads/SNOWvsQBOENSO.JPG
It is presented each year at the NWS Taunton Winter Weather Workshop.

Pamela Gray
July 20, 2010 2:04 pm

Years ago I ran across a study posted on the internet by Joe. He had correlated natural parameters to temperatures. Another researcher wrote (paraphrased): “Interesting result and he might be on to something here. We shall see.” I’ve been hooked ever since on this topic. Thanks Joe.

Joel Hoffman
July 20, 2010 2:09 pm

Latitude,
I agree that they are hind-casting and that they are currently unable to predict the onsets and offsets of ENSO events accurately. What I get out of this abstract though, if I’m reading this right, is that they have seen more accurate model runs during ENSO events than ENSO neutral events. It sounds like this is a fact, and that they are using this fact as a sort of take off point for their new methodology. The prediction of onset and offset times of ENSO in this case is irrelevant because ENSO active and ENSO neutral are looked at independently. Who knows, maybe looking at it this way will help them forecast onsets and offsets more accurately.
Climate models in general have made advancements in the last 10 – 20 years (they’re still not great obviously, but they are better), and hind-casting is at least partially responsible for that. Like it or not, it is still important. Students doing this kind of methodological research are partially responsible for that too.
Pamela Gray, I completely agree. I’m not sure if the residual effects of ENSO events have been extensively studied or not, but it certainly seems like CO2 is an influencing factor during ENSO neutral conditions based on some of the ridiculous forecasts made recently. I’m hoping these students will see past that. We can only hope.

latitude
July 20, 2010 2:29 pm

Thanks Joel.
But not more accurate model runs during ENSO events.
They say “quite active” ENSO events. I take that to mean that it’s only when the events are strong.
“Most GCMs are better at predicting the winter climate when ENSO is quite active”

Gary Pearse
July 20, 2010 2:51 pm

Didn’t the UK Met Office predict snowless winters by now, using billion dollar computers, and a chain of bbq summers? If they didn’t plug in ENSO data, what the devil did they put in there? Whatever it was, flipping a coin would have given them 50% efficiency instead of 0%. Where have the NC researchers been during the breakdown of all the other hindcasting methodologies in climate science?

nevket240
July 20, 2010 2:53 pm

http://news.xinhuanet.com/english2010/photo/2010-07/20/c_13406876.htm
excerpt..BEIJING, July 20 (Xinhuanet) — Last month was the hottest June ever recorded worldwide and the fourth consecutive month that the combined global land and sea temperature records have been broken, according to the reports of the U.S. National Oceanic and Atmospheric Administration.
Worldwide, the average temperature in June was 61.1 degrees Fahrenheit (16.2 Celsius). That was 1.22 degrees F (0.68 C) warmer than average for June.
I think NOAA is a subsidiary of Goldman Sachs. The accounting looks familiar
regards

Joel Hoffman
July 20, 2010 8:28 pm

Latitude,
Here is an excerpt from the paper…
“Analyses of weights also show that the proposed multimodel combination methodology assigns higher weights for GCMs [General Circulation Models] and lesser weights for climatology during El Nino and La Nina conditions. On the other hand, due to the limited skill of GCMs during neutral conditions over the tropical Pacific, the methodology assigns higher weights for climatology resulting in improved skill from the multimodel combinations.”
I don’t think it is explained as well as it could have been. If they would’ve defined active as, say, x degrees above or below average for y amount of time, that probably would’ve been clearer, but I haven’t seen that yet. I haven’t read through all of it though.

July 21, 2010 5:16 am

Winter precipitation is dependent on uplifts in temperature, the complete inverse of summer precipitation. Forecast the solar-driven temperature uplifts in winter and you have it in the bag. Know how cold it will get beforehand, and be able to predict snowfall too.

Dave in Delaware
July 21, 2010 5:51 am

Looks to me like he is ‘holding their feet to the fire’.
1) The paper is evaluating each model, and the multi-model schemes, using Mean Square Error (MSE) of GCM results.
“… we combine historical simulations of winter (December-February, DJF) precipitation and temperature from seven GCMs by evaluating their skill – represented by Mean Square Error (MSE) – over similar predictor (DJF Nino3.4) conditions.”
Aha! Statistical analysis of GCM results, something that seems to be lacking in the peer restricted literature and the IPCC.
2) The paper says the models are ‘over tuned’ for ENSO.
“On the other hand, due to the limited skill of GCMs during neutral conditions over the tropical Pacific …” <– note LIMITED SKILL
Since ENSO is a natural non-CO2 warming event, doesn't that tell us something about the GCMs? Over tuned for ENSO, and not very skilled in non-ENSO periods. While you may not agree with the multi-model result, this sort of analysis begins to cast light into the dark corners of climate modeling.

Arno Arrak
July 21, 2010 2:00 pm

Pamela Gray – thanks. I put up a link to my book on Amazon. There is a “look inside” and if you are interested you can open it and look at the back cover.
I am somewhat disappointed with your belief in modeling. Forget that stuff about forcing by CO2 – it does not work. The reason is that the infrared absorption band of the atmosphere is saturated and no further additions of CO2 to the atmosphere can change the already-existing greenhouse effect. This follows from the work of Ferenc Miskolczi [E&E 21(4):243-262 (2010)]. He used the NOAA weather balloon database and determined that the global average annual infrared optical thickness of the atmosphere has been unchanged for 61 years, with a value of 1.87. This is the first direct determination of the optical thickness of the atmosphere in the infrared despite the billions spent on “climate research” by “climatologists” who write peer-reviewed papers. He used to work for NASA and was told to not talk about his work, so he eventually quit.
What this means is that all the carbon dioxide added to the atmosphere for the last 61 years has not changed the transparency of the atmosphere in the infrared or the optical thickness would have increased, and this did not happen. And no infrared absorption by carbon dioxide means no greenhouse effect, period.

Paul Vaughan
July 21, 2010 5:58 pm

What they are doing is ad-hoc conditional modeling. Ironic, since they appear to have no clue about how to do conditional empirical analysis. This is just the latest work-around to keep the computer fantasies looking “cutting edge”, “state of the art”, etc. It will fool most bureaucrats – and as for the others: most don’t care anyway.
From a researcher’s point of view:
Throw a bunch of computer fantasies at climate and note which ones stick under which circumstances. Call it “multimodel GCM”, sound sophisticated, & rake in some funding. Have a good laugh that the public doesn’t recognize it as a crapshoot. All in a day’s work … And the good news is climate still isn’t understood yet, so (presumably) you have job security since the authorities will still need you on staff to (continue pretending to) figure it out. No doubt there will be many ‘fascinating’ seminars.