From Penn State University and the “*but we guarantee you there’s no predictability limit in climate science*” department comes this interesting study.

UNIVERSITY PARK, Pa. — In the future, weather forecasts that provide storm warnings and help us plan our daily lives could extend up to five days further before reaching the limits of numerical weather prediction, scientists said.

“The obvious question that has been raised from the very beginning of our whole field is, what’s the ultimate limit at which we can predict day-to-day weather in the future,” said Fuqing Zhang, distinguished professor of meteorology and atmospheric science and director of the Center for Advanced Data Assimilation and Predictability Techniques at Penn State. “We believe we have found that limit and on average, that it’s about two weeks.”

Reliable forecasts are now possible nine to 10 days out for daily weather in the mid-latitudes, where most of Earth’s population lives. New technology could add another four to five days over the coming decades, according to research published online in the *Journal of the Atmospheric Sciences*.

The research confirms a long-hypothesized predictability limit for weather prediction, first proposed in the 1960s by Edward Lorenz, a Massachusetts Institute of Technology mathematician, meteorologist and pioneer of chaos theory, scientists said.

“Edward Lorenz proved that one cannot predict the weather beyond some time horizon, even in principle,” said Kerry Emanuel, professor of atmospheric science at MIT and coauthor of the study. “Our research shows that this weather predictability horizon is around two weeks, remarkably close to Lorenz’s estimate.”

Unpredictability in how weather develops means that even with perfect models and understanding of initial conditions, there is a limit to how far in advance accurate forecasts are possible, scientists said.

“We used state-of-the-art models to answer this most fundamental question,” said Zhang, lead author on the study. “I think in the future we’ll refine this answer, but our study demonstrates conclusively there is a limit, though we still have considerable room to improve forecasts before reaching the limit.”

To test the limit, Zhang and his team used the world’s two most advanced numerical weather prediction modeling systems: the European Centre for Medium-Range Weather Forecasts model and the U.S. Next-Generation Global Prediction System.

They provided a near-perfect picture of initial conditions and tested how the models could recreate two real-world weather events, a cold surge in northern Europe and flood-inducing rains in China. The simulations were able to predict the weather patterns with reasonable accuracy up to about two weeks, the scientists said.

Improvements in day-to-day weather forecasting have implications for things like storm evacuations, energy supply, agriculture and wildfires.

“We have made significant advances in weather forecasting for the past few decades, and we’re able to predict weather five days in advance with high confidence now,” Zhang said. “If in the future we can predict additional days with high confidence, that would have a huge economic and social benefit.”

Researchers said better data collection, algorithms to integrate data into models and improved computing power to run experiments are all needed to further improve our understanding of initial conditions.

“Achieving this additional predictability limit will require coordinated efforts by the entire community to design better numerical weather models, to improve observations, and to make better use of observations with advanced data assimilation and computing techniques,” Zhang said.

###

The paper: (open access) **“What Is the Predictability Limit of Midlatitude Weather?”**

https://journals.ametsoc.org/doi/10.1175/JAS-D-18-0269.1

### Abstract

Understanding the predictability limit of day-to-day weather phenomena such as midlatitude winter storms and summer monsoonal rainstorms is crucial to numerical weather prediction (NWP). This predictability limit is studied using unprecedented high-resolution global models with ensemble experiments of the European Centre for Medium-Range Weather Forecasts (ECMWF; 9-km operational model) and identical-twin experiments of the U.S. Next-Generation Global Prediction System (NGGPS; 3 km). Results suggest that the predictability limit for midlatitude weather may indeed exist and is intrinsic to the underlying dynamical system and instabilities even if the forecast model and the initial conditions are nearly perfect. Currently, a skillful forecast lead time of midlatitude instantaneous weather is around 10 days, which serves as the practical predictability limit. Reducing the current-day initial-condition uncertainty by an order of magnitude extends the deterministic forecast lead times of day-to-day weather by up to 5 days, with much less scope for improving prediction of small-scale phenomena like thunderstorms. Achieving this additional predictability limit can have enormous socioeconomic benefits but requires coordinated efforts by the entire community to design better numerical weather models, to improve observations, and to make better use of observations with advanced data assimilation and computing techniques.

I will be happy when three days becomes consistent.

My local weatherman … who has designs on becoming the next David Letterman … can’t handle two days with any reliability or consistency.

I have noticed that they often can’t get the correct temperature 6 hours in advance.

They’ll probably just take a leaf out of the climate playbook by changing the definition of a successful forecast.

The Hong Kong Observatory often has problems with same-day forecasting (at least twice this week so far)

Your mention of the forecast for Hong Kong reminds me of a funny story on myself. I was a forecaster on the USS Midway (CV41) years ago and we were in Hong Kong for a port visit. We were anchored out and it was a 45 minute boat ride to the shore. I had the mid watch (7pm-7am) and I was responsible for putting out the forecast for the day. I forecasted partly cloudy becoming mostly cloudy with thundershowers by 5pm. Getting off of watch at 7 I got changed and hopped on the liberty boat to go to shore. We were only about a hundred feet from the boat when I realized that I hadn’t prepared for the eventuality of showers and thundershowers. If I took the boat back to get my umbrella it would be over two hours before I could actually see the city. So throughout the day I kept saying to myself “maybe my forecast will go bust, I hope it goes bust.” It didn’t. Right at 5pm the skies opened up and I got soaked. I was never so disappointed about being right in my life.

I joined the US Air Force in 1978 and served in Air Weather Service as a weather observer. Accurate forecasting was critical to mission success and flight safety. I remember at that time that the time limit for accurate forecasts to the requirements of flight operations was about 12 hours. The forecasters would be updating their forecasts every 4 hours based on the weather data (synoptic charts, skew-t plots, pireps, etc.) we provided them and even more often if weather conditions were rapidly changing. Beyond that the accuracy would, of course, drop off. The requirements for flight operations were and are much more critical than for general weather forecasting.

Yeah, BoM got mega millions and can’t get a 3 day forecast right.

What their main page, the local forecast, and the synoptic charts show for the same day are ALL usually wildly different.

It’s somewhere between pathetically sad and enraging.

I am sure the “experts” will respond that weather forecasting is different than climate forecasting.

They are different.

For weather forecasting you need to accurately know the current conditions and accurately know the physics of the atmosphere. Nothing else changes fast enough to matter.

For climate forecasting you don’t need to know the current conditions at all. However in addition to accurately knowing the physics of the atmosphere you have to accurately know how everything interacts with everything.

How does increased/decreased rainfall affect the types of plants that grow in a region?


How do changes in cloud cover impact the rate at which water evaporates and how does that affect the types of plants that grow in a region?

How do changes in the types of plants that grow in a region affect the types of animals that live in that region?

How do the changes in the types of animals that live in a region affect the types of plants that grow in that region? (If that sounds like an infinite loop, it is.)

Getting back to what I like to call the 5 spheres

Atmosphere

Hydrosphere

Biosphere

Cryosphere

Lithosphere (rocks)

Until you can understand how all these spheres interact with each other, you don’t have a chance of understanding how climate changes.

Weather forecasting and climate forecasting are very different. Weather forecasting is by far the easier of the two.

Yes MarkW, exactly. Let’s add planetary gravitational mechanics to that list.

JVW: Amen to that. Brett

The planet isn’t a closed system so let’s also include interstellar activity and cosmic rays. Hell the Sun is a variable too.

All well and good, but the very simplest climate models from back in 1988 did a good job of predicting global average temperatures to this day. Yes, you can add bells and whistles (but don’t worry about gravity, because that is held constant, and don’t worry about cosmic rays, because they don’t matter) and you’ll get a slight improvement. Take into account volcanic eruptions, El Nino and La Nina and you can do better still.

But really, it’s so simple. We put CO2 in the atmosphere, and the world slowly gets hotter.

John Brookes,

That does sound so simple; too bad it’s not true. CO2 goes up at a fairly regular rate, but the “global average” temperature goes up and down. On top of that, if you look at temperature regionally it’s even more diverse. Some regions cool during a particular time period, while others warm. Look at it by latitude, and it’s not consistent either. And if you look at places where the average temperature has increased, it’s mostly the minimums that have gone up. So looking at the raw data, there really isn’t a nice simple relationship between CO2 and various temperature averages.

And temperature isn’t even the thing we should be looking at. According to the theory, the extra CO2 should be adding energy to the system, but we don’t have a good way of measuring that, so temperature is used. That’s like looking at a small sampling of pixels instead of an entire image – you will never know what it really looks like.

“We put CO2 in the atmosphere, and the world slowly gets hotter.”

Hotter than what?

BWTM: to get “hotter,” you must first be “hot.”

GMT is neither hot, nor getting hotter.

Since what climate modeling is most concerned about is the average temperature, there’s no need to consider anything beyond the basic physics of matter absorbing and emitting energy, which has been known physics for more than a century.

Consider that for an ideal black body and the non ideal absorber/emitter of W/m^2 called a gray body, how the system responds to variable incident energy can be predicted with near absolute precision. Since from space, the relationship between the average surface temperature and average planet emissions are so close to the average behavior of a gray body whose emissivity is 0.62, there’s no wiggle room for the planet to behave in any other manner.

I get that many consider the gray body model to be too ‘simple’, but in fact, there’s no other known physics that quantifies how matter that’s absorbing and emitting energy will behave. It’s by denying and/or obfuscating the T^4 relationship between W/m^2 and temperature that the IPCC can fabricate enough wiggle room to make what the laws of physics precludes seem plausible.
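The gray-body arithmetic in this comment can be sanity-checked in a few lines. This is only a sketch of the stated claim, not an endorsement: the solar constant and albedo below are standard textbook figures I've assumed, and the 0.62 emissivity is the comment's own number.

```python
# Gray-body sketch: average absorbed solar power Pi = S/4 * (1 - albedo), and
# from space the planet emits e * sigma * T^4 relative to surface temperature T.

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/m^2/K^4
S = 1361.0              # solar constant, W/m^2 (assumed textbook value)
ALBEDO = 0.3            # planetary albedo (assumed textbook value)
E_EFF = 0.62            # effective emissivity cited in the comment

Pi = S / 4 * (1 - ALBEDO)      # average absorbed power, ~238 W/m^2
Ps = Pi / E_EFF                # implied average surface emissions
T = (Ps / SIGMA) ** 0.25       # surface temperature from the SB law

print(f"absorbed = {Pi:.1f} W/m^2, surface emissions = {Ps:.1f} W/m^2, T = {T:.1f} K")
```

With these inputs the implied surface temperature comes out near 287 K, in the neighborhood of the observed global mean, which is the consistency the comment is pointing at.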

In reality, you also have to consider all of the heat transport mechanisms, plus all the systems by which visible light can be reflected before it can get converted into heat.

If you are doing a bottom up simulation, then yes, you need to account for these things and a whole lot more, which is why bottom up simulations like GCMs have a serious divergence problem unless they are bounded and corrected in real time by a parallel top down simulation. The highly deterministic equations that should drive the top down simulation are simple and readily confirmed by measurements:

Psun(1-a) = ePs + dE/dt

Ps = oT^4

T = kE

where a is the albedo, e is the effective emissivity (Po/Ps), Po are the emissions leaving the planet, Ps are the SB emissions of a surface at T, T is the average temperature of the surface, E is the energy stored by the system and k is its specific heat, dE/dt is the rate of energy entering and leaving the thermal mass of the system and whose steady state average is zero. The data is very clear that the yearly average a’s, e’s and k’s per cell are relatively constant, so closing the loop by sanity checking the yearly average results against a top level model is relatively easy to do. It’s just not done because it would highlight just how bad the models actually are as the a’s, e’s and k’s required for the insanely high ECS presumed by the IPCC would need to be far different than their measurements suggest.
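As an illustration only, the three equations above can be integrated forward in time to show the relaxation to the steady state where the average dE/dt is zero. The Psun, albedo, and k values below are assumptions chosen for illustration; only the 0.62 emissivity comes from the comment.

```python
# Numerical sketch of the commenter's "top down" balance:
#   Psun*(1 - a) = e*Ps + dE/dt,   Ps = sigma*T^4,   T = k*E

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W/m^2/K^4
PSUN = 340.0             # average incident solar power, W/m^2 (assumed)
A = 0.3                  # albedo (assumed)
E_EFF = 0.62             # effective emissivity from the comment
K = 2e-7                 # K per J/m^2, an illustrative heat-capacity constant

E = 250.0 / K            # start from an arbitrary 250 K
dt = 86400.0             # one-day time step, s
for _ in range(200_000):
    T = K * E
    dEdt = PSUN * (1 - A) - E_EFF * SIGMA * T ** 4
    E += dEdt * dt       # energy accumulates until dE/dt averages to zero

T_steady = (PSUN * (1 - A) / (E_EFF * SIGMA)) ** 0.25  # analytic steady state
print(f"simulated T = {K * E:.2f} K, analytic steady state = {T_steady:.2f} K")
```

The simulated temperature converges to the same steady state the algebra gives, which is the point of the "top down" constraint: whatever the bottom-up details, the averages must land here.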

How do you explain the rapidity with which we bang into and out of interglacials?

The planet enters and exits ice ages according to how the Earth’s orbit and axis change, varying the effective ratio between summer and winter. When there’s more winter than summer, surface ice grows, and it shrinks otherwise. BTW, the transitions in and out of ice ages are not that fast and still take many, many centuries as a small trend in the same direction accumulates. They only seem fast relative to the length of the record.

The melting and growing of ice also has a significant effect on the albedo, which exaggerates the temperature changes. It’s not ‘feedback’, but a causal effect of where the 0C average isotherm is located, which migrates closer to the equator as surface ice accumulates; as it gets closer, the impact gets progressively larger owing to more affected surface area.

Currently, surface ice is at its historic minimum, so there’s not a lot of headroom for more melted ice to have any significant effect on the average temperature.

The real curiosity is that the planet enters glacials slowly – at least during the Pleistocene. The planet then exits glacials very rapidly in contrast. Also, when comparing interglacial and glacial climate patterns, one notable point is that glacial climates see positive correlations between heat and precipitation; that is, “warmer” ice-age weather sees increased precipitation, but since the beginning of the Holocene, the correlation has tended to be inverse. Holocene patterns tend to see cooler-wetter and hotter-drier linkages. The pattern can be tracked in patterns of ice and dust accumulations in both Antarctic and Greenland ice cores. So the correlation and switch are almost certainly global effects.

Duster,

I was just looking at the DomeC and Vostok ice core data covering the last 750K years and I don’t see a consistent trend of warming being significantly faster than cooling. I see both fast and slow warming and cooling phases. It seems to be the result of how orbital and axial variability aligns. The more that the fastest acting changes in the orbit and axis align, the faster the transitions in and out of ice ages become and the more dramatic the temperature variations will be. Nonetheless, it still takes on the order of 10K years to transition between cold periods and warm periods, which occur with demonstrable regularity.

While there’s pronounced heating and cooling in sync with the 40K year period of variability in the Earth’s tilt, what we define as ice ages and interglacials requires the alignment of other orbital variability to either make a warm period warmer or a cold period colder.

http://www.palisad.com/co2/ic/orbit1.png

In this plot of the last 150K years of Vostok data, the magenta line is how the axis varies, the cyan line represents the precession of perihelion and the blue line is the temperature normalized to 11K year averages (half the period of precession). Notice how the warmest periods are when the maximums of each effect align and the coldest periods are when their minimums align. The predominant effect is from axial tilt, and the closer its changes align with how the precession of perihelion changes, the more pronounced the result will be.

It’s interesting that the warmest times are when perihelion aligns with the N hemisphere summer solstice. Since it’s currently nearly aligned with the N hemisphere winter solstice, the current interglacial is a bit cooler than the one 120K years ago when the opposite was true.

Based on this, it seems like the first minimum of the next ice age will be in about 10K years when the Earth reaches minimum tilt and this cooling should start within 1K years. In 10K years, perihelion will be more closely aligned with the N hemisphere summer solstice, so it will not be a very deep ice age and just a little cooler than the first burst of cold after the last interglacial.

If climate is a chaotic system (and it is), then it cannot be predicted with accuracy over long periods of time, only assigned degrees of probability. That assumes you understand all the cycles of all the contributing influences, like the sun, the oceans, the clouds, the orbits, etc.

This also assumes we have enough data to recognize and analyze the various cycles. We don’t.

Robert,

There is no evidence to suggest that the climate is chaotic. The local weather is almost certainly chaotic but the climate is relatively stable. No-one in their right mind would believe me if I told them when it would rain in 100 years time but everyone would probably agree that in 100 years time the summer would be warmer than the winter.

Climate is predictable because the weather is chaotic and there is an attractor in phase space that all trajectories end up on. Averaging the weather along the attractor gives you the climate. If the weather was unconstrained then the climate would be too.

Izaak,

It seems like the attractor is maintaining a ratio between the average surface emissions and the average planet emissions of g, where g is the golden ratio of about 1.62. Its reciprocal, 1/g, is about 0.62 and is the EQUIVALENT emissivity of the planet relative to the surface temperature. Given that half of what the atmosphere absorbs from the surface is ultimately emitted into space, it looks like clouds and weather modulate the surface emissions absorbed by the atmosphere such that the fraction of absorption converges to 2(1-1/g), or about 76.4%.
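Whatever one makes of the physical claim about the attractor, the arithmetic quoted here is easy to verify:

```python
# Checking the numbers quoted above (this verifies only the arithmetic,
# not the physical claim about the climate attractor).
g = (1 + 5 ** 0.5) / 2   # golden ratio, ~1.618
inv = 1 / g              # ~0.618, the claimed effective emissivity
frac = 2 * (1 - inv)     # ~0.764, the claimed absorbed fraction
print(f"g = {g:.4f}, 1/g = {inv:.4f}, 2*(1 - 1/g) = {frac:.4f}")
```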

Izaak Walton – April 16, 2019 at 7:27 pm

Izaak W, it is always a good idea to consider one’s “point-of-reference” prior to posting contradictions.

If one only spent 3 seconds observing the “local weather” through their kitchen window, it surely wouldn’t look chaotic, ……. whereas observing said for 48 hours could prove quite different.

And the same goes for climate, ……. to wit, this proxy graph of 1,000,000 years of chaotic “climate changing” glacial-interglacial periods: https://d32ogoqmya1dw8.cloudfront.net/images/eslabs/cryosphere/timeline_ice_ages_456.gif

Samuel,

In the context of the climate, the perceived chaos is randomness around a mean. The mean itself is not chaotic, but absolutely deterministic. If the mean was even close to as chaotic as the variability around it, life would have probably never started.

While 3 seconds of weather doesn’t look chaotic and 48 hours of weather does, 12 month averages do not. Keep in mind that even a 1C increase in the average represents a net change of only about 1.4%, increasing emissions from 390 W/m^2 to 395.4 W/m^2, which is less than the error margin of the data establishing any change.
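The 1.4% figure can be checked directly from the Stefan-Boltzmann law; the 288 K baseline below is an assumed round number for the global mean surface temperature:

```python
# sigma*T^4 for a 1 C rise from an assumed 288 K mean surface temperature.
SIGMA = 5.670374419e-8         # Stefan-Boltzmann constant, W/m^2/K^4
p_before = SIGMA * 288.0 ** 4  # ~390 W/m^2
p_after = SIGMA * 289.0 ** 4   # ~395.5 W/m^2
pct = (p_after / p_before - 1) * 100
print(f"{p_before:.1f} -> {p_after:.1f} W/m^2, a {pct:.2f}% increase")
```

This lands within a fraction of a W/m^2 of the comment's figures; the small difference comes from the exact baseline temperature assumed.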

The million year record does not represent chaos, but represents the causal response to variability in Earth’s orbit and axis. If you superimpose orbital and axial variability on the plot, the correlation is clear, although the precise mechanisms are not well understood, at least by the IPCC and its self serving consensus …

Izaak,

The IPCC says that climate is chaotic.

>>

In the context of the climate, the perceived chaos is randomness around a mean. The mean itself is not chaotic, but absolutely deterministic. If the mean was even close to as chaotic as the variability around it, life would have probably never started.

<<

I don’t understand this at all. You seem to be claiming that chaotic systems aren’t deterministic. The science of chaos is about order and disorder in non-linear, deterministic systems. It may surprise some, but chaotic systems are deterministic. The current state of a chaotic system determines the next state.

There is no reason why a chaotic mean would preclude life–it depends on the amount or magnitude of chaos.

>>

The million year record does not represent chaos, but represents the causal response to variability in Earth’s orbit and axis. If you superimpose orbital and axial variability on the plot, the correlation is clear, although the precise mechanisms are not well understood, at least by the IPCC and its self serving consensus …

<<

How do you or anybody know where the Earth’s orbit was a million years ago? The planetary orbits appear to be chaotic, with a horizon of predictability on the order of one million to five million years, and planets don’t leave their tracks in space, so the actual orbit and position of the Earth a million years ago is pure speculation.

Not even long-term computer models of planetary orbits are stable. The planets either spiral inward until they crash into the Sun or spiral outward until they eventually escape altogether. The only way to stabilize these orbits long-term is to add a damping factor. A damping factor–where does that come from (except in computer models)?

Jim

Jim,

I’m saying that the mean is absolutely deterministic and that the absolute variability around that mean is not predictable as a function of time, but the distribution of possibilities is, whose average effect on the mean is zero, and this is what’s characteristic of chaos. The chaos itself is not predictable, but its mean integrated across time is, at least for a causal system. The current state doesn’t determine the next state, but it determines the probabilities associated with possible next states, which are also a function of the time separating the two states.

Yes, whether life evolved would be dependent on the magnitude of the chaos, but if it were anywhere near as large as the chaos of the variability around the mean response, it would be a very large magnitude.

We can calculate how the orbit and axis vary using orbital mechanics, taking into account the orbits of other planets and how they change. Other planets have a known effect on how Earth’s orbit varies and this can be predicted to within a few percent forward or backward at least a million years. No ice cores go back more than about a million years, so for the purposes of comparing against ice core data, it’s good enough.

This plot shows Vostok temp (green) DomeC temp (blue), axial tilt (red) and eccentricity (gray). The data is smoothed to 22K years in order to cancel out the effects from the precession of perihelion whose period is also 22K years.

http://www.palisad.com/co2/ic/orbit.png

The correlation gets worse the further out in time we go, which is an indication of the relative uncertainty between the projected orbital and axis variability and the temporal positioning of slices of ice, which gets much more difficult further back than about 100K years, where the seasonal variability in the cores, used for counting years like the rings in a tree, is completely gone. The temporal positioning of DomeC was done with more modern techniques than Vostok and is somewhat better.

co2isnotevil,

“HA”, and cornfields in Oklahoma don’t look chaotic either. Does not the averaging of a highly random number set …….. sort of maybe remove the chaos of the specified numerical quantities? Like students being graded on a “percentile” rather than pass-fail.

“chaotic = in a state of complete confusion and disorder.”

co2isnotevil,

If the cited 1m year glacial/interglacial graph was a 10 day or 10 month surface temperature graph, would you describe it as portraying the measured temperatures as being “a state of complete normalcy and order” …… or ….. “a state of complete confusion and disorder”?

Your post’s contents were technically correct, but I don’t think it would have solved Izaak Walton’s quandary.

>>

The chaos itself is not predictable, but its mean integrated across time is, at least it is for a causal system. The current state doesn’t determine the next state, but is determines the probabilities associated with possible next states which are also a function of the time separating the two states.

<<

Well, I hate to rain on your parade, but chaotic systems ARE deterministic. Take Edward Lorenz, who developed a simplified mathematical model for atmospheric convection. That system is governed by three ordinary differential equations now known as the Lorenz equations. The Lorenz system is nonlinear, non-periodic, three-dimensional, and deterministic.

Indeed, linear systems can’t be chaotic. Chaotic systems are non-linear and deterministic. The current state determines the next state in a chaotic system.
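The Lorenz system referred to here takes only a few lines to demonstrate. The equations and the classic parameter values (sigma=10, rho=28, beta=8/3) are standard; the step size and run length are arbitrary choices for this sketch. Running the same initial state twice gives identical trajectories (deterministic), while a perturbation of one part per billion eventually grows to the size of the attractor (chaotic).

```python
def lorenz_step(x, y, z, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz equations."""
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dx * dt, y + dy * dt, z + dz * dt

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-9, 1.0, 1.0)  # perturbed by one part in a billion
c = (1.0, 1.0, 1.0)         # exact copy of a's initial state
for _ in range(30_000):     # integrate out to t = 30
    a, b, c = lorenz_step(*a), lorenz_step(*b), lorenz_step(*c)

sep = sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
print(f"identical runs match: {a == c}, perturbed separation: {sep:.3f}")
```

The identical runs match exactly (determinism), while the perturbed trajectory has wandered far away, which is precisely the sensitivity to initial conditions Lorenz described.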

>>

We can calculate how the orbit and axis varies using orbital mechanics taking into effect the orbits of other planets and how they change.

<<

I quote from “Orbital Mechanics” by Prussing and Conway:

“Thus 6n integrals are required to solve for the absolute motion of the n-body system. Since only 10 integrals of motion exist, even the absolute motion of a system of two bodies cannot be determined in closed form.”

The ten integrals are: conservation of linear motion provides six, conservation of energy provides one, and conservation of total angular momentum provides three.

Generally you solve the two body problem by using an iterative method on Kepler’s Equation (Newton’s method would work), and using perturbation effects for any additional bodies.
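As a sketch of the iterative approach mentioned here, Newton's method applied to Kepler's equation M = E - e*sin(E) converges in a handful of steps for planetary eccentricities. The Earth-like values below are illustrative:

```python
import math

def solve_kepler(M, e, tol=1e-12):
    """Solve Kepler's equation M = E - e*sin(E) for the eccentric anomaly E
    via Newton's method, given mean anomaly M (radians) and eccentricity e."""
    E = M if e < 0.8 else math.pi    # standard starting guess
    for _ in range(50):
        f = E - e * math.sin(E) - M  # residual of Kepler's equation
        fp = 1.0 - e * math.cos(E)   # derivative df/dE
        dE = f / fp
        E -= dE
        if abs(dE) < tol:
            break
    return E

# Earth-like orbit: e ~ 0.0167, mean anomaly of 1 radian (illustrative)
E = solve_kepler(1.0, 0.0167)
print(f"E = {E:.10f}, residual = {E - 0.0167 * math.sin(E) - 1.0:.2e}")
```

For more bodies one layers perturbation terms on top of this two-body solution, which is where the long-horizon chaos the comment describes creeps in.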

As I said previously, solving planetary motion from first principles creates a chaotic system that quickly falls apart–in a few 100,000 years. To get to a million years or more, requires a damping factor to stabilize the orbits.

If you are using today’s measured orbital parameters, you aren’t really taking into consideration the chaos that’s present.

Jim

Sam,

“If the cited 1m year glacial/interglacial graph was a 10 day or 10 month surface temperature graph …”

It doesn’t matter what the time scale is; the AVERAGE surface temperature is a causal response to the AVERAGE input energy (the Sun) and in that context, it’s not chaos, but a predictable response. If the form of the response had no relationship to the form of the solar input, then the response would be better characterized as chaotic, but this isn’t how the planet works.

Another point is that the dictionary definition of chaotic is not necessarily representative of the more formal mathematical definition.

Jim,

You seem to think that I’m saying that the climate system is non deterministic, while Sam seems to think I’m saying that the climate system is completely deterministic. But then again, this is what makes chaos theory a bit counterintuitive, as it’s a little bit of both.

In principle, even if you knew the state of every atom in the system, you can only predict the next state exactly in the limit as the time between the current state and the next state approaches zero. This is a fundamental consequence of the Heisenberg Uncertainty Principle. However; since the system is linear in the energy domain, the average response is completely deterministic.

I should also point out that the climate is a very linear system despite the common claim that it’s not. It’s only non linear relative to the relationship between W/m^2 and temperature, where W/m^2 are proportional to T^4. The delusion of non linearity only arises because of the obfuscating non linear representation of sensitivity as degrees per W/m^2. A more proper form for the sensitivity would be W/m^2 of surface emissions (BB emissions at its temperature) per W/m^2 of forcing.

In the energy domain of W/m^2 in and W/m^2 out, whether we are talking about the surface, the planet or the relationship between the surface and the planet, the system is very linear and this linearity is an obvious consequence of COE and is very testable.

The average W/m^2 entering the planet are equal to the average W/m^2 leaving the planet in order for COE to be satisfied. The same is true for W/m^2 entering the surface and W/m^2 leaving the surface. The relationship between W/m^2 in (or out) of the planet is also demonstrably linearly related to the W/m^2 in (or out) of the surface and this linear relationship is largely independent of the surface temperature.

The ratio between the average emissions of the surface and the average emissions of the planet is about 1.62 and the longer the period of time that the average spans, the closer it converges to this ratio. The fact that this ratio is within 1% of the golden ratio may be a coincidence, but then again, the golden ratio frequently appears in the steady state solution of many self organized systems.

Take a look at the data labeled ‘Demonstrations of Linearity’ at this link:

http://www.palisad.com/co2/sens/

In all of the plots, the small dots are 1 month averages for 2.5 degree slices of latitude. The larger dots are the average of 3 decades of data for each slice. The source data is the weather satellite data prepared by ISCCP, where >90% of the planet’s surface is covered by 2 or more satellites with 8 samples per day spaced 3 hours apart. The vast majority of the rest of the surface (mostly near the poles) is covered by at least 2 measurements per day, except for a short period of time when there was only one active polar orbiter.

>>

You seem to think that I’m saying that the climate system is non deterministic . . . But then again, this is what makes chaos theory a bit counter intuitive as it’s a little bit of both.

<<

Again, for the umpteenth time, chaotic systems are deterministic. They are not, as you say, a bit of both (deterministic and non-deterministic).

>>

This is a fundamental consequence of the Heisenberg Uncertainty Principle.

<<

The term “quantum chaos” is an oxymoron. There are at least three reasons why quantum systems are not chaotic. 1) Chaotic systems are deterministic. Quantum systems are probabilistic. I refer you to the arguments between Niels Bohr and Albert Einstein on Quantum Physics. Both men are many times smarter than me, but in this case, Einstein was wrong. Quantum Physics is not deterministic, and there are no hidden variables. 2) Chaotic systems are ruled by non-linear differential equations. Quantum systems are ruled by the Schrodinger equation, which is a linear differential equation. Linear systems cannot support chaos. 3) The various states of a chaotic system are defined by precise values of position and velocity–it’s known as state space. Quantum systems are subject to the Heisenberg Uncertainty Principle which forbids accurate knowledge of both velocity and position.

>>

COE

<<

??? Conservation of energy?

>>

Take a look at the data labeled ‘Demonstrations of Linearity’ at this link:

<<

Power-out doesn’t always equal power-in. During the day, power-in is greater than power-out or the temperature wouldn’t rise. During the night, power-in is less than power-out or the temperature would decrease. During summer in the Northern Hemisphere and during summer in the Southern Hemisphere power-in is greater than power-out or it wouldn’t get hotter. During winter in the Northern Hemisphere and during winter in the Southern Hemisphere power-in is less than power-out or it wouldn’t get colder.

Then you have desert climates, tropical forest/tropical rain forest climates, mountain climates, valley climates, polar climates, temperate zone climates, island climates, ocean climates, and so on. Averaging climates is like averaging temperature–just more nonsense.

>>

I should also point out that the climate is a very linear system despite the common claim that it’s not.

<<

I have not seen (and I doubt that you or anyone else has either) a complete set of differential equations describing climate. If described, I would bet that they are non-linear, deterministic, and chaotic.

Jim

Jim,

The differential equation quantifying the climate system state (E) is a consequence of COE, where E is the energy stored by the planet and which is linearly related to its temperature, T, via a heat capacity. You might recognize this DE as the same form of the LTI that describes the charging and discharging of an RC circuit when Po is linearly related to E through a time constant.

Pi = Po + dE/dt

As you have noticed, the input forcing (Pi) is not always equal to the planet emissions (Po) and their instantaneous difference, dE/dt, quantifies the Joules added to or removed from the thermal mass of the planet. The definition of the steady state is when the average dE/dt is 0. The relationship between Po and E establishes solutions for E as exponential decays of the form e^-t/tau for step functions of forcing (Pi), where tau is the effective time constant and/or sinusoids of the form e^jwt, where j=sqrt(-1), w=2*PI*f and f is the frequency of the variability in Pi.
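As a sanity check on this behavior, here is a minimal sketch (with illustrative constants, not physical values) that integrates Pi = Po + dE/dt for the first-order case Po = E/tau and shows the exponential approach to the steady state:

```python
# Minimal Euler integration of Pi = Po + dE/dt with Po = E/tau,
# the first-order (RC-like) case. All constants are illustrative,
# not physical values.
def simulate(Pi, tau, E0, dt=0.01, steps=2000):
    E = E0
    for _ in range(steps):
        Po = E / tau           # emission proportional to stored energy
        E += (Pi - Po) * dt    # dE/dt = Pi - Po
    return E

# A step in forcing relaxes exponentially (as e^-t/tau) toward the
# steady state E = Pi*tau, where the average dE/dt is 0.
E_final = simulate(Pi=10.0, tau=2.0, E0=0.0)   # approaches 20
```

Running it for a few time constants lands within a fraction of a percent of E = Pi*tau, which is the steady-state definition given above.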

The equations that connect everything together and which quantify the absorption and emission by ANY matter in equilibrium with a source of radiant energy are as follows:

Pi = Pf*(1 – a)

Pf -> forcing power from the Sun, a -> albedo

Po = Ps*e

Ps -> the BB emissions of the surface corresponding to its temperature

e -> the attenuation of surface emissions resulting in planet emissions

Ps = o*T^4

T -> surface temperature

o -> Stefan-Boltzmann constant

T = k*E

k -> heat capacity

The time constant is related to k, but is not independent of the state (E) as it is for an RC circuit, making the effective time constant proportional to (kE)^-3, that is, the hotter it gets, the faster it cools. In principle, the time constant is the amount of time it would take to emit all of E at the rate Po, thus Po = E/tau. The dependence of tau on E makes this a higher order LTI, rather than the first order LTI that describes an RC circuit, where tau is equal to the constant R*C.

For an ideal BB, a = 0 and e = 1. The Earth is not an ideal BB, yet it still must obey this differential equation unless we allow for violations of COE. For the Earth, a and e vary across time and space, and their long term averages are a = 0.3 and e = 0.62. Decades of weather satellite data confirm these constant average relationships with near absolute certainty; moreover, when you account for the energy transferred between simulated cells, these equations work for any point on the surface. You should pay careful attention to the plots I referenced that demonstrated this linearity. These are the results of tests that unambiguously validate my position.
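As a quick check of this chain of equations at steady state (dE/dt = 0, so Pi = Po), taking the quoted averages a = 0.3 and e = 0.62 together with an assumed canonical Pf of about 342 W/m^2 (the solar constant divided by 4, a value not stated in the comment):

```python
# Steady-state check of the chain Pi = Pf*(1 - a), Po = Ps*e, Ps = o*T^4
# with Pi = Po. Pf ~ 342 W/m^2 (solar constant / 4) is an assumed value
# not given in the comment; a and e are the quoted long-term averages.
sigma = 5.67e-8    # Stefan-Boltzmann constant, W/m^2/K^4
Pf = 342.0         # assumed average solar forcing, W/m^2
a = 0.3            # average albedo (quoted)
e = 0.62           # effective emissivity (quoted)

Pi = Pf * (1 - a)                # post-albedo input, ~239 W/m^2
T = (Pi / (e * sigma)) ** 0.25   # steady-state surface temperature, K
# T comes out near 287 K, close to the quoted mean surface temperature.
```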

BTW, the apparent chaos is manifested by the co-dependent variability of a and e around their respective means; nonetheless, COE must always be honored. I stand by my previous assertion that the next state of a chaotic system is only deterministic from the current state in the limit as the time between the current state and the next state approaches zero. In addition, COE dictates that the climate system must be linear in the power domain, thus the mean response relative to W/m^2 in and W/m^2 out is deterministic.

>>

You might recognize this DE as the same form of the LTI that describes the charging and discharging of an RC circuit when Po is linearly related to E through a time constant.

<<

This old electrical engineer hasn’t solved RC circuit differential equations in the time domain since the first quarter of my sophomore year in college. We quickly switched to Laplace transforms. Using Laplace transforms changes a difficult problem in calculus to a less difficult problem in algebra. For instance, a convolution integral in the time domain becomes simple multiplication when using Laplace transforms.

I am confused in your use of RC terminology for power. Although voltage and current lead to differential equations that are linear, power is definitely not a linear function in RC circuits. You can’t use linear techniques to solve for power–at least not in electrical circuits.

So this RC-type differential equation you’re using is something completely different. I’m not sure it is justified.

It looks like you’re using a Kiehl and Trenberth 1997 version of climate. There’s nothing wrong with that, but it’s only a first-order approximation. I’ve made models of the climate using KT 1997, but I don’t take it beyond a degree or two of temperature change. I know it doesn’t represent reality exactly–climate is not linear.

Averaging the energy flow of the entire planet distorts the actual flows. When you average energy, the polar ice caps disappear and so do the tropical regions. A real climate model (not a GCM) would create those cold and warm regions automatically.

It would be nice to see an actual climate model and not just a weather model averaged over a period of time that attempts to simulate climate.

Jim

Jim,

The RC model uses voltages, rather than power as the inputs and outputs. An LC model would use currents. I only referenced an RC circuit as an analogy of a similarly behaving physical system whose behavior might be more intuitive. Voltage and current are linear to each other by Ohm’s Law, power is linear to other power by Conservation of Energy.

BTW, I only solve the DEs by using a circuit simulation tool like Spice. Nonetheless, it still helps to have a basic understanding of why the solutions are what they are, as well as how to arrive at the same answers in the s-domain.

This is not any model Trenberth would apply or understand, for if he did, he would realize that the sensitivity is dT/dPi, which in the steady state, when Pi = Po, reduces to 1/(4eoT^3), which for e=0.62 and T=288 is about 0.3C per W/m^2, or about 1C for doubling CO2. Note that the sensitivity expressed as dPs/dPi = 1/e = 1.62 is demonstrably linear and temperature independent.
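The quoted 0.3C per W/m^2 figure is easy to verify arithmetically from the stated formula:

```python
# Arithmetic check of the quoted sensitivity dT/dPi = 1/(4*e*o*T^3)
# using the values from the comment (e = 0.62, T = 288 K).
sigma = 5.67e-8   # Stefan-Boltzmann constant, W/m^2/K^4
e = 0.62          # effective emissivity (quoted)
T = 288.0         # surface temperature, K (quoted)

sens = 1.0 / (4 * e * sigma * T**3)   # K per W/m^2
# sens comes out near 0.3 K per W/m^2, matching the quoted figure.
```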

This is a top level model, not a first order model. A first order model implies that it’s only an approximation that doesn’t include higher order effects. A top level model is complete, where in this case, the energy flux in and out of a radiating and absorbing body must be related to the energy stored by the body by this equation as a consequence of COE.

This model works for the planet as a whole, each hemisphere independently and in fact for all points on the surface as long as flux between neighboring columns of atmosphere along that point on the surface and space is accounted for as components of Pi and Po. It can be made even more accurate by decomposing E into its constituent parts. The differential equation is exact and the only way for it to be inexact is to violate COE, thus its averages are absolutely representative of both its parts and the whole.

I’ve applied this model to slices of latitude and it matches up to the satellite data remarkably well, although I didn’t model clouds and used actual clouds instead, as both a and e are dependent on clouds.

>>

The RC model uses voltages, rather than power as the inputs and outputs. An LC model would use currents.

<<

Not really. You can use voltage for electrical circuits or current–both are linear.

>>

Voltage and current are linear to each other by Ohm’s Law, power is linear to other power by Conservation of Energy.

<<

Nope. Voltage and current are linear because their differential equations are linear. For example, a volt is a joule/coulomb and an ampere is a coulomb/second. The product of the two gives you a power term or (joule/coulomb)*(coulomb/second) = joule/second = watt.

Since p = v*i and Ohm’s law is v = i*R; we get p = v*i = i^2*R = v^2/R. You can’t solve for power using linear methods in an electrical circuit. You can solve for voltage v(t) and current i(t) independently using linear methods. Then p(t) = v(t)*i(t).

Your COE is only a restriction and not proof of linearity. Have you plotted a vs. a? a^2 vs. a^2? a^3 vs. a^3? a^n vs. a^n? All those plots give you a straight line. So showing me a plot of power-in vs. power-out really tells us nothing about linearity. I would expect those plots to be linear.

The way to show linearity is if superposition applies. In fact, this may be used as a definition of linearity. Specifically, if an input x1(t) produces the output y1(t), and an input x2(t) produces the output y2(t), then the system is linear if and only if the input a*x1(t) + b*x2(t) produces the output a*y1(t) + b*y2(t), for all x1, x2, a, and b. If this is not true, then the system is not linear.

If superposition holds for your equations, then they are linear.
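The superposition test can be run numerically. Here is a sketch that applies it to a simple scaling (which passes), and to p = v^2/R and a T ~ P^(1/4) relation (both of which fail), with all the constants illustrative:

```python
# Numerical superposition check: a system f is linear iff
# f(a*x1 + b*x2) == a*f(x1) + b*f(x2) for all a, b, x1, x2.
def superposes(f, x1, x2, a=2.0, b=3.0, tol=1e-9):
    return abs(f(a * x1 + b * x2) - (a * f(x1) + b * f(x2))) < tol

linear = lambda x: 1.62 * x         # a pure scaling of flux
power = lambda v: v**2 / 10.0       # p = v^2/R: resistor power vs voltage
quartic_inv = lambda P: P**0.25     # T ~ P^(1/4): flux-to-temperature

scaling_passes = superposes(linear, 1.0, 2.0)       # passes the test
power_passes = superposes(power, 1.0, 2.0)          # fails the test
temperature_passes = superposes(quartic_inv, 1.0, 2.0)  # fails the test
```

This matches both points in the thread: voltage-to-current maps superpose, while power-vs-voltage and temperature-vs-flux relations do not.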

>>

The differential equation is exact and the only way for it to be inexact is to violate COE, thus its averages are absolutely representative of both its parts and the whole.

<<

I doubt that your differential equation is an exact representation of climate.

>>

BTW, I only solve the DE’s by using a circuit simulation tool like Spice.

<<

I’m not impressed. I do wish we had SPICE in college. All we had back then was pencil, paper, and a slide rule.

Jim

Jim,

You are correct that recognizing how superposition applies is the key to understanding how the climate system operates. Superposition means that if 1 W/m^2 has X effect, 2 W/m^2 has 2X effect. If we quantify the output effect in W/m^2, superposition is demonstrably true (W/m^2 in and W/m^2 out) but not in the temperature domain (W/m^2 in and T out). The result is that the geometric average of temperature is meaningless relative to the average behavior, while geometrically averaged W/m^2 are absolutely representative of the whole and its parts. Note that superposition also applies to the geometric averages of T^4.

The average incremental relationship between surface emissions and the incident energy converges to within a few percent of a constant 1 W/m^2 of surface emissions per W/m^2 of forcing within about a year (the behavior of an ideal black body!). Longer term averages converge even closer to this ratio. Here’s the supporting data:

http://www.palisad.com/co2/sens/pi/se.png

Each little dot represents the ratio between the monthly net average emissions of the surface and the net average solar energy arriving from the Sun for each 2.5-degree slice of latitude. The larger dots are the longer term averages of all 3 decades of data. Each slice has a different average temperature and different average solar input, yet the long term ratio is demonstrably constant between them. On the input side, the incremental behavior is 1 W/m^2 of emissions per W/m^2 of forcing, while on the output side, 1.62 W/m^2 of surface emissions are required to result in 1 W/m^2 of output emissions. Where these two curves intersect establishes the steady state. That the output ratio is larger than the input ratio means that the output path adapts faster than the input path, which reinforces stability.

The variability seen in the monthly data is because the planet’s seasonal response is slow, relative to the seasonal change in solar input. When we plot surface emissions vs. output emissions, the monthly variability around the mean is reduced to only a few percent. The unavoidable consequence is that the linear average relationship between the surface emissions and the planet’s emissions means that the planet’s aggregate behavior is indistinguishable from that of a gray body whose temperature is that of the surface, whose emissions are that of the planet and whose emissivity = 1/1.62 = 0.62.

In this next plot, the expected behavior of a gray body with e=0.62 is plotted in green while the dots of data represent the ratios between the surface temperature and planet emissions for each slice of latitude as extracted from 3 decades of full surface coverage from 4 hour sampled weather satellite data.

http://www.palisad.com/co2/tp/fig1.png

The next plot shows the two relationships (input W/m^2 vs. surface W/m^2 and output W/m^2 vs. surface W/m^2), and where these two relationships intersect defines the steady state.

http://www.palisad.com/co2/tp/fig2.png

Note that ignoring superposition in the energy domain is one of the contributing factors for why climate science is so incredibly broken. Another is inferring approximate linearity between W/m^2 and temperature: the IPCC insists on quantifying the relationship between W/m^2 and temperature as approximately linear, which is unconditionally invalid since superposition clearly does not apply. This provides the wiggle room to claim that the next W/m^2 of forcing can be so much more powerful at warming the surface than any other. Acknowledging superposition in the energy domain blows this assumption out of the water and falsifies everything claimed by the IPCC and its self serving ‘consensus’.

Let us add resolution to that list. Raindrops come in different sizes with differing amounts of heat, and until you can get the resolution down to a raindrop you are just playing video games. Current resolution in the full general module of the Russian model (which is by far the most accurate one) is 1.5 degrees of latitude by 2 degrees of longitude: roughly 150 km in latitude, while the length of a degree of longitude varies from 111.32 km at the equator down to 0 at the poles.

Modellers do use sub-cloud programs to get high resolution in an imaginary cloud, but then have to feed that result into the general module. The cloud is imaginary in the sense that the general module didn’t produce it: the cloud sub-module produces a number of clouds of the various types (cirrus, etc.) which are input as an average imaginary cloud in the general module. So even if the general module correctly calculates the number of clouds of each type over the Earth’s surface at a given time, the data for each cloud comes from calculations done separately, per cloud type, in the cloud module. Thus the general module ends up with cloud data that is generic for each type of cloud, even though, as with snowflakes, no two clouds are the same.

Nevertheless the Russian module was the only one that duplicated the famous pause. However the IPCC ignores it because it has too little warming.

https://www.metoffice.gov.uk/research/modelling-systems/unified-model

Now about that identical climate model and the 2 week limit……

Need we add:

Executive Summary of Chapter 14 of the IPCC 2007 report produced by Working Group 1.

Just wait.

If they can reduce the atmospheric CO2 concentration to a stable 350 ppm, they will be able to not only predict the weather out 30 years but also control it.

Just imagine:

No more Droughts

No more Hurricanes

No more Tornados

No more (wildfires) Forest Fires

No more spread of disease through vector borne transmission.

OH What a WONDERFUL world they promise.

/sarc

I have often asked this question: what is the real, repeatable accuracy of day-to-day forecasts? I think that the Penn State University assertion of 10 days is fraught with qualifications. What skill do they mean? How repeatable is that skill? Etc.

Take a look at these two links:

https://sats.nws.noaa.gov/~verification/ndfd/index.php

and

https://sats.nws.noaa.gov/~verification/ndfd/publications.html

The first link will display NOAA forecast skill for a particular month, the second is to a paper (among others) discussing skill in general. See Huntemann, T. L., D. E. Rudack, D. P Ruth, 2015: Forty Years of NWS Forecasts: Past Performance and Future Advances. Harry R. Glahn Symposium, Phoenix, AZ, Amer. Meteor. Soc., 449.

Note the graph on page 6 (you need to download the paper to see it…if anyone can figure out how to directly display here, it would be appreciated). “Figure 2. Day 1 and day 2 MaxT MAEs for (a) warm (April-September) and (b) cool (October-March) seasons based on 0000 UTC model cycle guidance. Years show when season began. Lines are 5-year moving averages.”

The graph shows a measure of skill from 1965 to the present. Basically there has been a slow increase in capability from 1965 until about 2005, then flat to 2010…and then actual degradation of skill from 2010 until the present.

It is very unclear to me how the Penn State 2 week assertion jibes with the NOAA skill data. (My own take is that it doesn’t…..)

Yet, NOAA, and the other international weather/climate forecasting agencies keep piling on computer capability and fancy weekly, monthly, seasonal, yearly, decadal, and finally centuries forecasts (sorry, “probable climate states”..:).

If the subject agencies actually provided the demonstrated skill of any given forecast, they would quickly find themselves at severe odds with any level of usefulness or credibility.

I suggest the following:

1-3 days…pretty good

3-5 days…not too bad

5-10 days…it depends…

10 days + Crap. (with a couple of exceptions…which are…???)

Note that certain aspects of weather forecasts, Hurricanes, large regional weather tendency…such as those somewhat coupled to large scale phenomena…like ENSO, Arctic Oscillation Index, etc, appear to show some level of skill…but those tend to be “analog” forecasts…hence my go to site, weatherbell.com. I would love to see some detailed, objective skill analysis of those types of analog forecasts…but I have not found any. Perhaps the WUWT audience can provide some background.

Thanks,

Ethan Brand

As far as I can tell, anything out past 3 or 4 days is dartboard territory.

I assume that the study of accurate weather forecasting did not include the forecasts of “The Storm Channel”?

“The simulations were able to predict the weather patterns with reasonable accuracy up to about two weeks, the scientists said.”

I guess it all comes down to how you define “reasonable accuracy”.

The GCM’s used for climate forecasting are like those used for weather forecasting except that they run with less resolution and coarser time steps. It’s a mystery how making GCM simulations less precise can improve the limit of predictability from 2 weeks to centuries.

Another mystery is how incredibly sensitive these models are to minor changes in the initial conditions, where the final results can vary widely. Yes, the system converges chaotically to a new equilibrium as conditions change, but this chaos has nothing to do with what that new equilibrium will be, which is constrained solely by the available energy.

The first test that should be applied to any model of a causal system driven by available energy is that it should converge to the same final steady state regardless of the initial conditions. Averaging together erroneous results can only produce an erroneous average no matter how many results are considered. If they don’t believe that the Earth’s climate is the result of a causal system, then there can be no point in attempting to simulate it.
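That convergence test is easy to sketch. Assuming a toy energy-balance model of the form dE/dt = Pi − c*E^4 (illustrative constants, with the E^4 term standing in for the T^4 emission law discussed earlier in the thread), two very different initial conditions reach the same steady state:

```python
# Sketch of the suggested test: a causal, energy-driven model should reach
# the same steady state from any initial condition. Here dE/dt = Pi - c*E^4
# stands in for the energy-balance equation discussed earlier; Pi and c are
# illustrative constants, not physical values.
def final_state(E0, Pi=16.0, c=1.0, dt=1e-3, steps=200000):
    E = E0
    for _ in range(steps):
        E += (Pi - c * E**4) * dt   # Euler step of dE/dt = Pi - c*E^4
    return E

hot = final_state(3.0)    # start well above equilibrium
cold = final_state(0.1)   # start well below equilibrium
# Both converge to E = (Pi/c)^(1/4) = 2, independent of where they start.
```

A model that fails this check is reporting its initial conditions, not the available energy.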

That’s because they aren’t making the same kind of forecasts.

Weather forecasting takes today’s conditions and tries to project those conditions into the future in order to predict the weather at a time in the future.

Climate forecasting takes a set of conditions and tries to predict what the average climate will be like under those conditions. It makes no attempt to figure out what the high and low for a day 100 years from now will be.

They don’t simulate the average temperature (climate) directly, but indirectly by averaging simulated temperatures over time. While nobody can predict the high and low temperatures for a specific day 100 years from now, they’re certainly calculating averages based on the diurnal, seasonal and geographic variability in the temperatures predicted by the simulations. This also requires accurately simulating clouds, which to date, no GCM has successfully achieved and which is a significant contributing factor to the limit of predictability.

They can’t even predict min/max/average temperatures more than a few days into the future. If the prediction is within a couple of degrees, it’s considered a success. Of course a few degrees per measurement is a massive error when trying to predict a tenuous trend on the order of a few tenths of a degree per decade.

The only thing that the calculation of “min/max/average near-surface air temperatures” is any good for …… is for use by local or regional radio/TV weather forecasters so that they can appease the curiosity of their viewing audience …… by announcing the current hour’s temperature, the daily forecasted hi/low temperature and the max/min daily temperature of bygone years for that calendar date.

Is any of the aforenoted near-surface temperature data of any importance in long term (>5 days) planning?

Absolutely not.

A friend and I discussed starting a forecast company for snow plowing. We could be wrong most of the time and still be as accurate as every forecaster, who (for this area, mid-Maine, during this last plow season) was never right.

so….free money.

So if one can only forecast weather 2 weeks into the future, how can climate be forecast which is 30 or so years of weather?

Really? I could have told them this. Where is my phat paycheque!?!?!?!?!?

Given weather systems move from west to east as they circulate the globe, for us in Australia it is only when they begin to move towards and then across the western side of the continent that the weather outlook begins to form. I would expect that it would be similar for all other land masses.

Yeah, but Western Australia has a strike against it that the western US and, to a lesser extent, western Europe also face: lack of upstream data. There is almost no shipping traffic in the Indian Ocean west of Australia and south of the Equator to provide a heads-up about the surface weather heading their way. The western US has more shipping traffic to report the weather (but nowhere near as much as we would like), and western Europe has lots of ships to let them know what is coming. I can remember, when in the Navy, only being able to find six or so ship reports for the whole Indian Ocean to plot on a surface weather chart. And at that time there was no geostationary satellite available. Our “customers” knew how bad the available data was, so if we got it right within the first 24 hours they were satisfied.

You do have weather patterns that push down from the north and up from the south, they just don’t go deep into the continent, on average. The west-to-east flow affects the most land area because of the assist from the rotation of the planet. North America? Not so much. We get major weather formations out of the Gulf of Mexico, North Pacific, North Atlantic, South Atlantic, Central Eastern Pacific, hell, even the Sea of Cortez throws its pecker in the soup from time to time. Planetary rotation just muddles it all together in the middle and drags it to the east coast. Yay.

It seems the NWS has made the local area forecasts larger and the forecasts more probability-like.

Thus a 50% chance of rain is not of much use if that means maybe or maybe not rain somewhere in a large area.

Reminds me of Alan Jackson’s “It’s 5 O’Clock Somewhere”

“Thus a 50% chance of rain is not of much use if that means maybe or maybe not rain somewhere in a large area.”

No no, what it means is that if 100 things fall out of the sky, 50 of them will be rain.

If they say 90% chance of rain and it doesn’t rain, then they were right: there was a 10% chance of no rain. 4 years ago a NWS forecaster told me that “soon” the definition would change to mean that 90% of models say it is going to rain and 10% say it isn’t. I never did understand that percentage garbage; even single models would come out with a percentage. As a Navy forecaster we had only 2 choices: either it was going to rain (or snow or whatever) or it wasn’t. Of course we had a higher probability of getting it right because our forecast was for a particular spot, the air station (or ship). Speaking of a ship, there were a couple of occasions where my forecast of rain showers went bust because the helmsman was directed to go around the rain showers.

South African forecasts used to say “isolated thundershowers” (Which often meant “raining on the front garden whilst sunny in the back”). These days it’s “48% chance”, which is pretty useless when you’re looking at a dried out garden while it’s raining across the valley!

I agree about the uselessness of % chance of precipitation. As a forecaster in the Navy it was as you said in the case of showers, isolated, scattered, numerous, or just plain showers (meaning you ARE going to get rained on) for rain it was binary-you either were going to get rained on or not rained on. And we even had to stick our neck out and say when. (saying when was only good for the first 24 hours-Mother Nature, like any woman, doesn’t like to be predictable)

After three days, fish and weather predictions stink.

Two weeks with favorable winds, on a good day.

2 Weeks? That’s good for a laugh. It’s not likely that our 5pm weather report will accurately tell me what the next day’s weather will be, let alone 2 days out. The 10pm weather report has a better chance of calling most of the next day right, but it’s breaking down by evening. Just how in the heck can they say with a straight face that it’s accurate to predict two weeks out? This is in an area where, 9 months out of the year, if you predict cloudy and rain you’ll be right 90% of the time. The rest of the year, just call for mostly sunny to maintain your 90% accuracy, as we get very little rain during the summer.

“2 Weeks? That’s good for a laugh. It’s not likely that our 5pm weather report will accurately tell me what the next day’s weather will be, let alone 2 days out.”

Are you in the Pacific Northwest, by any chance? You echoed my experience exactly. I’d say at two weeks, they have maybe a 1% chance of getting it “right”. But I’m not sure what they consider “right”.

“Right” means that there was some weather.

You got it, western Oregon.

I didn’t add that I spent a number of years in NM for work where there’s 300 days of sunshine a year, they were still miserable at forecasting the weather.

Forget about forecasting, hindcasting is still more art than science, with past temperatures in constant flux. The temperature differences caused by using metal versus wooden buckets for measuring ocean temperatures are still not precisely defined.

Until the past becomes predictable, there is no hope in ever predicting the future.

Thankfully, Tony Heller is doing a lot of the heavy lifting and has proven that CO2 has an enormous effect on past temperatures. With more powerful computer models it will soon be possible to reliably determine past temperatures based on current CO2 measurements.

Break out the dart board, the big dart board. Dec 11th of 1811 was the first of a series of 3 large quakes on the New Madrid. The second quake was on January 23rd 1812, and the 3rd and largest struck on February 7th 1812, Dalton Minimum and the solar minimum

The first major quake on the New Madrid was at 1pm on December 25th of 1699, recorded by a French missionary in a group of explorers, Maunder Minimum and start of the solar minimum.

Other years were in AD300, AD900 and AD1450. Now take a look at the JG/U 2K temp graph and see what it shows is happening to global temps for each of those 3 quakes. Other moderately strong quakes were in January 4th in 1843, at the solar minimum, and on October 31st 1895. This last one occurs after the maximum of SC 13 and 8 years prior to the solar minimum. More recent was on November 9th 1968 three years after the solar minimum. … http://www.uni-mainz.de/eng/bilder_presse/09_geo_tree_ring_northern_europe_climate.jpg

So the commonality among the major quakes is that they all strike mainly in the winter, at the solar minimum, and either during a solar grand minimum or during a Gleissberg cycle. Which raises the question: is the next New Madrid quake now close at hand and ready to strike this upcoming winter? If not this winter, then highly likely at the next solar minimum. It is clear that this year, 2019, will be the heart of the solar minimum, which is certainly as low and prolonged as the last one was in 2008/09. In 2008 a moderately strong quake hit the Wabash Fault Zone, close to the New Madrid Zone. Does this warrant issuing a warning to the proper emergency agencies to be on standby alert, and/or issuing a general alert to the population at risk?

This applies to carefully selected earthquakes. Why don’t you include any M9+ quake?

https://earthquake.usgs.gov/earthquakes/browse/largest-world.php

Because the above is specific to the history of the New Madrid Seismic Zone.

Climate science is severely handicapped beyond the confounding conditions that can be expected. With the destruction of the long-term temperature record, it becomes impossible to study phenomena relative to temperature. For example, it is impossible to isolate natural variability (whatever its cause) to assist with the fundamental research, when a significant part of the fiddling has been to reduce natural variability, because they wanted it to be a negligible consideration and to make the warming 80s and 90s look more starkly human-caused.

They had to get rid of 35 yrs of serious cooling from 1940 highs to 1975 lows because the big El Nino of 1998 didn’t (at the time) create a new high temperature record. Translated, this meant that the late 30s still held the record highs, which further meant that all the global warming to the end of the century had occurred during the first half of the 20th Century, despite galloping CO2 emissions during the second half of the century!

They didn’t seem to understand that all the fiddling would end up producing The Dreaded Pause, because the recent end of the temperature curve was constrained (policed) by the existence of satellite measurements. Some skeptics were able to see that the ~69-70yr undulation of the temperature natural variation was set to bring a cooling – The Pause, which, indeed, took the mainstream scientists completely by surprise.

A chaotic system like the climate is exquisitely sensitive to initial conditions, so much so that the flapping of a butterfly’s wings changes the weather. We can’t possibly know initial conditions to account for that.

The knowledge imparted by Lorenz has been ignored by generations of weather modellers because it is inconvenient to them. Stupid is as stupid does.

“A chaotic system like the climate is exquisitely sensitive to initial conditions, so much so that the flapping of a butterfly’s wings changes the weather. We can’t possibly know initial conditions to account for that.”

No it’s not (on the order of decades/centuries).

Climate is not sensitive to initial conditions.

It is sensitive to ongoing conditions.

In that here climate is the GMST of Earth.

That is accruing net effect of energy in vs energy out.

Weather is sensitive to IC and is the reason ensemble forecast runs differ – deliberately – in order to understand the extent to which the current state is sensitive to initial conditions and therefore to evaluate a confidence level on a longer range forecast run. In the case of large variability, a reversion to average climatology may be in order.

Climate prediction – actually projection, as we do not know the state of anthro-caused GHG concentrations in the future – can only (at best) provide a limited lockstep of simulated ENSO state, which causes the greatest variation of GMST about the mean warming trend of AGW.

What rules for GCM’s is the physics of SW solar absorbed vs LWIR emitted to space.

Weather (and NWP forecasts) are the “noise” – the chaos within the climate system whilst that process occurs.

Did they factor in that pesky hurricane-forming Amazonian butterfly?

Dammit Bob, you beat me to it.

They do not even know about, or refuse to heed, the AMOC’s regular shifts. Towards a colder climate at present… Brett

Weather is not climate (I keep being told when it’s convenient to use this excuse to explain cooling)

“Weather can only be predicted a few weeks out…it’s too hard to try to get weather right because people can just wait two weeks and see that I was wrong…”

“But climate is 100% accurate out to 100 years based on our untestable climate models. No one can wait that long and besides, I’ll be dead so I won’t care.”

Anything that is iterative, and the input is based on the output of the last iteration, and is not well constrained (bound), and has more than 2 independent variables, and is non-linear, is usually unpredictable. We found this out when we tried to predict three orbiting bodies bound by gravity. Yes you can find simple solutions, but no, that is not how nature works.

“Welcome to the REAL world…”
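
The comment above about iterated, non-linear, feedback-driven systems can be demonstrated without a full three-body integrator. The logistic map is the textbook stand-in for the same phenomenon (that substitution is mine, not the commenter's): each input is the previous output, the rule is non-linear, and two starting points that agree to ten decimal places quickly stop agreeing at all.

```python
# Iterated non-linear feedback: the logistic map x -> r * x * (1 - x)
# in its chaotic regime. Two near-identical inputs diverge rapidly.

r = 3.9                      # chaotic parameter value
x, y = 0.2, 0.2 + 1e-10      # "identical" inputs to ten decimal places
max_gap = 0.0
for _ in range(100):
    x = r * x * (1 - x)      # each input is the previous output
    y = r * y * (1 - y)
    max_gap = max(max_gap, abs(x - y))

print(f"initial difference: 1e-10, max difference after 100 steps: {max_gap:.2f}")
```

After roughly 50 iterations the two trajectories are effectively unrelated, even though the rule itself is a one-line deterministic formula.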

So, if the limit of weather prediction is 2 weeks and climate is the sum of weather over 30+ years, how can we predict climate?

All things being quasi-stable, it is possible to describe the envelope of a chaotic process. We don’t know everything. We have characterized some things. We can only manage a subset. We can reasonably “predict” within a limited frame of reference. Weather follows prevailing winds, known processes and geographical features, and can be forecast in the near term, looking forward, with variable degrees of confidence.

But climate and its associated weather can move abruptly from one state to another.
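
The "envelope" idea above can be made concrete with a toy example (my own illustration, using the chaotic logistic map as a stand-in for any ergodic chaotic system): individual trajectories are unpredictable, yet long-run statistics of the same system can still be characterized.

```python
# Individual chaotic orbits diverge, but their long-run time averages
# can still agree -- the statistical "envelope" is describable even
# when the trajectory is not.

def long_run_mean(x0, r=3.99, burn=1000, n=100000):
    """Time-average of a logistic-map orbit starting at x0, after burn-in."""
    x = x0
    for _ in range(burn):
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n):
        x = r * x * (1 - x)
        total += x
    return total / n

# Three different starting points give three wildly different orbits...
means = [long_run_mean(x0) for x0 in (0.2, 0.4, 0.7)]
# ...but nearly identical time averages: the statistics are stable
# even though the trajectories are not.
print([round(m, 3) for m in means])
```

Whether the real climate system is well-behaved in this sense over the relevant timescales is, of course, exactly what the abrupt-shift comment above questions.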

5,300 years ago the weather forecast that Otzi the Iceman received would have been along the lines of… “it’s early summer, the pass across the Alps will be free of snow.”

So off he went, sky blue, all good. He stopped for lunch in a conifer forest then started climbing.

In came a blizzard and he ended up frozen for 5,300 years.

That’s an example of Lorenz’s chaotic systems.

To quote the gentleman ..”Two states differing by imperceptible amounts may eventually evolve into two considerably different states … If, then, there is any error whatever in observing the present state—and in any real system such errors seem inevitable—an acceptable prediction of an instantaneous state in the distant future may well be impossible….In view of the inevitable inaccuracy and incompleteness of weather observations, precise very-long-range forecasting would seem to be nonexistent.”

Lorenz also went on record as cautioning researchers in the field to restrict their studies to “tractable” matters.

The lure of money, however, has clearly just proved too great for many of them.

“Improvements in day-to-day weather forecasting have implications for things like storm evacuations, energy supply, agriculture and wild fires.”

Funny, because for many hurricanes they seem to fail to produce good predictions even within 24 hours of landfall.

And wild fires create their own weather which defy modeling.

The NWS/NOAA has an excellent reference page for Verification Publications. Of particular interest see this paper:

Huntemann, T. L., D. E. Rudack, D. P Ruth, 2015: Forty Years of NWS Forecasts: Past Performance and Future Advances. Harry R. Glahn Symposium, Phoenix, AZ, Amer. Meteor. Soc., 449.

Website: https://sats.nws.noaa.gov/~verification/ndfd/publications.html

The Huntemann Paper has an interesting graph on Page 6 showing the progress of skill level since 1965.


Planning on weather beyond 15 hours is really risky business. As the saying goes, fuel weighs less than weather.

Weather is the sum of what is happening at a given place at a given time. Climate, at its most basic, is the sum of all weather.

If weather will never be able to be predicted more than 2 weeks in advance, what chance is there that climate can EVER be any better than that? Sure, it is always possible to say that temperature will go up (or down), but how much? and when? and where? is impossible.

Interesting. I have been tracking the forecast for April 22 (start of golf season) since April 1. The range through April 16 is 49° to 68° for the high temperature. This range was within the last week. I’d say the greatest accuracy is probably within 3-4 days. However, I have complete faith that our climate will increase by an average of 5° in 30 years because 97% of social scientists say so.

I think it depends on what exactly is being forecast. The temperature in a given location? Whether it will rain or not? Here in Qld there was a guy many years ago, “Cyclone Bill” Devonshire, who posted long-range forecasts in the annual tide times book. If he said there would be a major storm in Brisbane on the 10th of December, there would be a storm within 48 hours either side of that date.

Scary reliable.

Weather does not change that fast, so the upper limit of the weather in the future being the same as today is one to two weeks. Probably closer to one. That is not prediction, that is dependence on stasis.

2 weeks – yeah nah, as they say here in NZ. I rely on weather forecasts to do parts of my job, and our local MetService regularly can’t even get its predictions right a couple of hours before, or even after. The number of times I’ve told my clients we’ll pencil it in and see what happens on the day when we wake up… in fact I say that every single time now.

A bit late to the discussion, but I’ll add this FWIW.

I spent Saturday on the Ouachita Trail tracking runners in a 50 mile ultra running event. All week the forecast had been for showers to begin about 5AM Saturday morning. And sure enough, they arrived right on schedule. Normally one can watch fronts move in from the plains NW of us. Not hard to predict that kind of weather. This weather came up from the Gulf, and wasn’t apparent on any radar map until it materialized overnight. But the forecasters knew what was going on and nailed it.

Two weeks may be the limit, but there is enough variability that I usually wait until 2-3 days before an event to start relying on a forecast. At that point, the forecasts are right more often than wrong.

But none of this has anything to do whatsoever with forecasting “climate change.” Just because WEATHER cannot be predicted accurately more than a few days (to two weeks) out does not mean that CLIMATE cannot be predicted beyond that horizon. Large scale weather patterns — on shorter time frames, ENSO; on longer time frames, PDO, AMO — do provide a basis for longer term forecasts. The accuracy window widens — months, for ENSO, to years for PDO or AMO — but changes within such natural cycles are still predictable.

I agree wholeheartedly that predicting long term climate change from CO2 increases is folly. But this isn’t proven just because we cannot predict weather more than a week or two into the future. This article was talking about predicting weather, not climate. Do not read anything else into it.

The dreaded word ‘models’ appears which means that no matter how much data they chuck in, if the model is wrong so is the outcome. The human experience seems to be no longer part of a forecast which perhaps explains why I think they have got worse. A Met Office forecaster on BBC actually admitted that with the jetstream flow meridional instead of zonal, their models don’t work. The constant changing of forecasts allow the MetO to claim they are far more accurate than they truly are.

Never say never at state universities with grant writers. They will continue the moonshot spending request for supercomputer clusters and supporting power plants to get to 2.5 weeks prediction power while across campus the quest for fusion power soldiers on with other grant funding pots.

Sounds like an open-ended, non-circular chain of events: given a set of conditions, you get a two-week chain of events. Climate science is not a chain of events, but a set of forces that act upon an initial set of conditions to necessitate a series of outcomes that become a continuous set of conditions.

The low predictability of weather is not in conflict with climate science. Weather forecasting is trying to predict within the noise of natural variability; climate science is trying to predict the change in the maximum, minimum and modal elements of that natural variability. Two different things.

They say that the limit of accurate daily forecasts is two weeks, yet in the same breath climate “scientists” claim that climate models can accurately predict what will happen decades into the future??? The hubris of these “scientists” continues to astound me.

Consider the following statements I believe to be true:-

1. Modern climate models now apparently use the same differencing schemes as the weather models. (This wasn’t the case in the early days when I worked at the UK Met Office: weather models lost energy so as to be almost useless after a few days, while climate models used differencing schemes based on the Arakawa Jacobian, which preserved energy. They have now converged, allowing weather models to be run out for more than a few days.)

2. In spite of this, weather models are OK to 1-3 days, rarely longer.

3. With a co-author, I proved in 1976 that cyclonic patterns effectively randomise atmospheric trajectories within a few days or less, even in two dimensions: “Computation of horizontal trajectories based on the surface geostrophic wind”, Sykes and Hatton, Atmospheric Environment, 10, pp. 925-934 (1976).

4. The models are remarkably resilient to major changes. In one experiment, I dropped all the non-linear terms every other time-step and there was barely any difference after 7 days with a 6 minute timestep. This was described to me as “the effects of smoothing”. I have always doubted this.

5. Parameterisation of the physics in these models is enlightened guesswork.

6. Software is full of unquantifiable error, “The case for open computer programs”, Ince, Hatton and Graham-Cumming, Nature, doi:10.1038/nature10836 (2012).

It follows (at least to me), that predicting long-term climate behaviour and climate policy on the basis of these models alone, even when there are multiple models and multiple scenarios, is little more than guesswork.