We are constantly exposed to extended forecasts in the media and online, with predictions stretching a month or more into the future.
Can you rely on such predictions? Are they really worth paying attention to?
Quite honestly, probably not. And if you do consider them, do so with the knowledge that their skill is marginal at best.
Take this month (October) for example. The official NOAA Climate Prediction Center forecast for October temperatures, made on Sept. 19th, called for warmer than normal conditions over the west and MUCH above normal temperatures over the southwest U.S.
What actually happened? Nearly the entire west was much colder than normal, with the northern parts MUCH, MUCH colder than normal. A miss. In fact, a big miss.
Or consider the official 3-4 week forecast, made on October 4th: warmer than normal over the west.
Such poor forecasts even a month out are not unusual. UW graduate student Nick Weber and I evaluated the skill of the main U.S. long-term forecasting model (the CFSv2), in work published in the peer-reviewed literature, and found that skill is typically lost after roughly two weeks (see below). The figure shows the forecast error (root mean square error) at 500 hPa, about 18,000 ft, a good level for viewing atmospheric predictability. The situation is the same over Washington, the western U.S., the continental U.S., and the globe: skill is rapidly lost during the second week.
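To make the evaluation concrete, here is a minimal sketch of how root mean square error is computed from gridded forecast and verifying analysis fields, and how it is compared against climatology (the "no-skill" baseline). The data below are synthetic stand-ins, not CFSv2 output; the numbers are illustrative only.

```python
import numpy as np

# Hypothetical 500 hPa geopotential height fields (meters):
# rows = forecast lead time (days 1-15), columns = flattened grid points.
rng = np.random.default_rng(0)
analysis = 5500 + 100 * rng.standard_normal((15, 1000))  # the verifying "truth"
# Synthetic forecasts whose errors grow with lead time, mimicking real behavior.
forecast = analysis + rng.standard_normal((15, 1000)) * np.arange(1, 16)[:, None] * 8

# Root mean square error at each lead time, averaged over the grid.
rmse = np.sqrt(np.mean((forecast - analysis) ** 2, axis=1))

# Skill is judged against climatology: once forecast RMSE approaches the
# RMSE of simply predicting the climatological mean, the forecast adds
# no useful information beyond the long-term average.
climo = analysis.mean()
rmse_climo = np.sqrt(np.mean((climo - analysis) ** 2))

print(rmse)        # grows steadily with lead time
print(rmse_climo)  # the "no-skill" reference level
```

The lead time at which the forecast RMSE curve crosses the climatological RMSE is the practical predictability limit, which is the quantity that turns out to be roughly two weeks.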
While meteorologists struggle to produce improved forecast skill past two weeks, we have gained a great deal of skill at the shorter time ranges, particularly for days 3-8.
So why is our skill improving rapidly for the shorter periods, but not the longer ones?
Because the forecasting problem is very different at the different temporal scales.
For the short periods, forecasting is an initial value problem. We start with a good description of the 3D atmosphere, and our models simulate how things evolve. Because of weather satellites and other new data sources, our initial description of the atmosphere has gotten MUCH better. And our models are much better: higher resolution, a much better description of key physical processes, and more. That is why a plot of the skill of the 1-10 day forecasts of the European Center shows great improvement over the past decades (see below).
But small errors in the initial description of the atmosphere and deficiencies in our models inevitably lead to growing errors, and by 2 weeks such errors swamp the forecast. The forecasts are not much better than simply using the average conditions (or climatology).
There is hope for some skill beyond two weeks, by taking advantage of aspects of the environment that change slowly (such as sea surface temperatures, sea ice extent, snow extent, and soil moisture). These aspects influence the atmosphere and potentially can torque it one way or the other. Essentially, the forecast problem has changed from an initial value problem to a boundary-forced problem (the boundary being the surface characteristics that can influence weather).
But the skill that might be available from the boundary conditions is different: not about the conditions at a specific time, but about the average conditions over a month or season. A good example of such skill is the relationship between warmer (El Nino) or colder (La Nina) sea surface temperatures in the tropical Pacific and weather around the world. There is some skill there, but it is relatively modest.
Unfortunately, our models still have key deficiencies (such as poor description of thunderstorms) that make it difficult for us to derive all the potential skill that should be available from the slowly changing boundary conditions. A lot of work is needed, but I am hopeful that eventually forecast skill beyond two weeks will improve.