Guest Opinion by Dr. Tim Ball
Claims that weather forecasts are reasonably accurate up to 48 hours are based on measured results for fair weather. Results for severe weather, which is what really matters to people, are very poor. The Intergovernmental Panel on Climate Change (IPCC) has a worse record on both counts. Every prediction/projection since their first Report in 1990 has been wrong, with the claims of more severe weather part of that failure. They fail because of fundamental errors in assumptions and mechanics.
Essex and McKitrick identified the challenge of turbulence in building climate models.
Climate research is anything but a routine application of classical theories like fluid mechanics, even though some may be tempted to think it is.
Furthermore, experiment and theory have been struggling since the 19th century, literally for generations, with a complicated behavior of fluids called turbulence.
The experiments are bedeviled by the fact that a turbulent fluid is active on scales smaller than the size of the finest experimental probes. Thus the measurements themselves are not of the actual variables but of some kind of unspecified, instrument-dependent average of the variables, and only in one small region of the fluid.
They are talking about turbulence at all scales. Severe weather is large-scale turbulence with basic triggers that convert laminar flow to turbulent flow. It is caused by the following, among other conditions:
1. Rough terrain, such as the effect of the Rocky Mountains or the Andes on the Westerlies.
2. Transition of surface, such as from land to ocean, like in the Cape Hatteras area.
3. Transition of surface, such as from cool to warm ocean – this is the major driving mechanism in the transformation of equatorial depressions into tropical storms and hurricanes.
4. Different temperatures between air masses.
5. Different convergence and divergence along a Frontal zone.
Figure 1 shows a cross-section of energy balance from Pole to Pole.
Figure 1
It illustrates the average condition at present. Salient features include the areas of surplus and deficit energy and the point of zero energy balance (ZEB). The ZEB differs between the Hemispheres, approximately 38°N and 40°S, because of their different land-water ratios. The ZEB is coincident with some important boundaries.
· The snowline (summer and winter).
· The poleward limit of trees.
· The location of the Circumpolar Vortex (Jet Stream).
· The major air mass boundary – the Polar Front.
Figure 2 shows the Polar Front as a simple division of the atmosphere between cold polar and warm tropical air. The pattern is the same for the Southern Hemisphere. It shows the approximate juxtaposition of the Jet Stream and the Front.
Figure 2
Because of the temperature difference across the Front, sometimes called the Zonal Index, it marks the area of most severe weather. The storms take the form of mid-latitude cyclones, with associated tornados. The intensity of a storm is directly related to the temperature and moisture contrast across the Front.
The IPCC argue that global warming is inevitable because CO2 levels will continue to rise from human activity. They also claim warming will be greater in the polar regions. If true, then the temperature contrast across the Polar Front is lower and the energy potential for severe weather is reduced.
Figure 1 shows the average position of the ZEB, while Figure 4 shows the average seasonal latitudinal shift in the Northern Hemisphere, between approximately 35°N in winter and 65°N in summer.
Figure 4
Changes in these latitudes trigger changes in the dynamics created by rotational forces and the area of the surface affected. This is reflected in the changes to the angle of solar incidence variation caused by changing obliquity of the ecliptic (tilt). The Arctic and Antarctic Circles are at 66.5° N and S, the point at which the sun’s rays are tangential at Equinox. But this is only if you accept the angle of tilt as 23.5°. Almanacs list it currently at 23.4° and decreasing at 0.47” per century. The mean position of the ZEB shifts more as the global energy balance changes.
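The relationship between the tilt and the polar circles is simple arithmetic, and can be made explicit in a few lines (a Python sketch using only the figures quoted above):

```python
# The latitude of the polar circles follows directly from the axial tilt:
# each circle sits at 90 degrees minus the obliquity.
def polar_circle_latitude(obliquity_deg: float) -> float:
    """Latitude (degrees) of the Arctic/Antarctic Circle for a given tilt."""
    return 90.0 - obliquity_deg

# Values quoted in the text above:
print(polar_circle_latitude(23.5))  # 66.5 -- the rounded figure usually cited
print(polar_circle_latitude(23.4))  # 66.6 -- with the current almanac value
```

The 0.1° difference in tilt moves the circles by the same 0.1° of latitude, which is why almanac precision matters when discussing long-term shifts.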
The mid-latitude cyclones that form as wave-like patterns and migrate along the Polar Front are major storm systems that occur more frequently, and can impact much larger areas, than any other severe weather system. Figure 5 shows a comparison between a mid-latitude cyclone and a hurricane.
Figure 5
A large system can cover up to 5,000 km, with damaging winds, heavy rain, snow, and freezing rain. Historic records of damage from these storms are well documented for the US by David Ludlum. Similarly, extreme examples for Europe include the 1588 storm that destroyed the Spanish Armada, well documented by J.A. Kington, and the storm of 1703 that hit England and Europe. Daniel Defoe traveled around England recording the damage in his book The Storm.
These systems are also important in mixing air between the surplus and deficit energy sectors, horizontally and vertically. The intensity of these systems is also defined by the temperature contrast across the Front. In the list of triggers above, item 5 lists divergence and convergence as mechanisms for development. Figure 6 shows the relationship between these and the surface development of the cyclone. As the wave-like system develops, a low pressure center forms and a rotational effect is generated. The cold air dictates its momentum, because it is denser and heavier than the warm air. The Warm Front is defined by cold air retreating, and the Cold Front by cold air advancing.
Figure 6
The advancing Cold Front acts like a bulldozer, pushing already unstable convective cells (cumulonimbus) into extreme instability and creating conditions for spawning tornados. Since the cold air is dominant, any decrease in its temperature relative to the warm air is going to have an effect.
An indicator of the difficulty with turbulence-created phenomena is what happens with mid-latitude cyclones. A full cycle involves four stages.
1. Cyclogenesis, initiation of the wave.
2. Mature Stage with maximum low pressure and wind speeds.
3. Occluded Stage when the Cold Front advances rapidly and lifts the Warm Front above the surface.
4. Frontolysis when a small pool of warm air is trapped above the surface and the surface low pressure dissipates.
Cyclogenesis occurs quite often, but few systems go through the full cycle. An important question is how to model a system that starts out smaller than a single grid cell but may expand to span several cells.
A shift from Zonal to Meridional Flow in the Rossby Wave pattern of the Circumpolar Vortex will affect all the factors listed (1-5) that trigger mid-latitude cyclones. Development, track and intensity of these cyclones in the North Atlantic was a major focus of H. H. Lamb’s research beginning with his 1950 paper, “Types and spells of weather around the year in the British Isles”. Lamb also knew that a latitudinal shift in the Polar Front results in a change in the Coriolis Effect (creating an apparent force), as it decreases from zero at the Equator to maximum at the Poles.
Essex and McKitrick identified turbulence as a serious challenge for understanding climatology. They spoke to the problem at all levels,
…experiment and theory have been struggling since the 19th century, literally for generations, with a complicated behavior of fluids called turbulence. When a fluid is turbulent (nearly all fluids are), not only are we unable to provide solutions of Navier-Stokes to confirm the behaviour theoretically, but we are also unable to experimentally measure the conditions in the fluid in such a way that we can fully capture what is going on.
The major factors inducing turbulence in laminar flow, and thereby severe weather, are rough surfaces and the contact zones between hot and cold air and water. One of the largest contact zones is the Polar Front between cold polar air and warm tropical air. The intensity of severe weather along the Front is a function of the temperature difference between the air masses. The IPCC claim this will decrease with global warming as the polar air warms more than the tropical air. Theoretically this creates fewer storms, but the IPCC are predicting more. So far, the evidence of less severe weather seems to support the basic concepts, not the IPCC.
Tim Ball –
Your thread title is a little wordy.
How about: “IPCC Based On Fundamental Error” ?
/grin
LMAO , Thanks John.
As far as I know the behavior of solutions to the Navier-Stokes equations including turbulence is an unsolved problem. I think the Clay Institute is still offering a $1,000,000 prize related to properties of solutions to Navier-Stokes.
What? There is still unsettled science in the settled science?
“JoNovace November 3, 2014 at 5:12 am
“They got round this by …….”
Ah yes, when all other argument fails, fall back on the “worldwide conspiracy” prop. I thought it was about science?
I’m happy to address any referenced data or conclusions you want to present but I’m not interested in third-hand he said, she said.”
Those are the FACTS you complete numpty, that is exactly what they did and they don’t deny it.
I know inconvenient facts are something to be ignored or ‘adjusted’ in Alarmist land, but this is a forum about the real world.
Answer the question about how warmists looking back from the future, without a temperature record and using Mann’s paleo trees, would declare temperatures had dropped sharply from 1960 onwards.
Would they be right? How therefore do we know what has been happening with the climate for short periods in the past?
That shows your ‘settled science’ is little more than a guess at the moment.
If you are going to reply, answer the questions rather than just avoiding the matter with your ‘La La La’ replies.
Alan
Alan…I love it when alarmists claim that skeptics think there is some kind of worldwide conspiracy involving scientists, and then go on to declare there absolutely is an anti-science worldwide conspiracy campaign going on. The hypocrisy is delicious.
On a side note, I LOVE the word “numpty” and wish to adopt it myself. Would that be ok with you? 🙂
Aphan,
‘Numpty’ has been around quite a while. Even Pachauri has used it. So give it back to ’em… doubled and squared!
Also, what you are describing is psychological ‘projection’: imputing their own faults onto others. Alarmists do it all the time. In this case, they are the conspirators, so they try to deflect by accusing scientific skeptics of conspiring — when all we want is scientific veracity.
You are right about ‘jonovace’, too. As James Strom points out, jonovace is simply cherry-picking. It was the arch-Warmist Phil Jones who designated 1997 [in 1999] as the start year to determine if global warming has stopped. But now that global warming has not resumed, the alarmist crowd doesn’t like that year.
They need to take it up with Dr. Jones. Speaking for myself, I’m happy using 1997. My message to ‘jonovace’: tough noogies. You numpties are stuck with that year. Suck it up.
What to expect?
High confidence or low confidence?
No wonder the IPCC switched from prediction to projection.
https://www.ipcc.ch/ipccreports/far/wg_I/ipcc_far_wg_I_spm.pdf
I think you mean solstice, not equinox. In particular, they are tangent on the longest night of the year at the polar circles, the winter solstice for the hemisphere in question.
At the spring and fall equinox, the points of tangency are the axial poles themselves.
rgb
Let’s assume – hypothetically – that they are right. In that case they should lead by example: Do As I Do, instead of Do As I Say. The first step is obviously no air travel for IPCC or the UN.
JoNovace November 3, 2014 at 5:27 am
“….trees actually started to show a sharp decline in temperatures…..”
Tree rings don’t show temperature, what is your source for this? Tree ring may correlate with temperature if other factors are removed.
>>>>>>>>>>>>>>>>>>>
Seriously? The entire edifice of CAGW is built upon tree ring studies purporting to show temperature since AR4 and earlier! We’ve spent years debunking the work of Michael Mann, Keith Briffa, Phil Jones and many others. Tree rings DEFINED the “hockey stick” in the first place! There have been congressional inquiries into the work of Michael Mann on tree rings showing temperature. Tree ring graphs once adorned the front pages of IPCC and WMO publications as de facto proof of temperature increases. The whole “Hide the Decline” and “Mike’s Nature Trick” debacle from the climategate emails was about tree rings being used to show temperature!
Have we come so far in the debate that late entrants into it from the warmist side no longer are aware what their “science” was founded upon in the first place?
FWIW JoNovace, I agree with you, trees don’t show temperature. A fact which completely guts about 1/2 the CAGW literature out there all on its own.
I suggest, BTW, that you find yourself a new moniker. Trying to trade on Jo Nova’s well known name doesn’t help you one bit.
David, trading on Jo Nova’s well known name, and misspelling the word “novice” are behavior markers that help OTHERS to immediately classify him/her by agenda and intellect. His/her responses have reinforced both. GroupThinkers ALWAYS present subliminal clues about who they are. They can’t help it.
Aphan,
I expect you are correct. But the astounding thing is that s/he spit on tree rings as measuring temperature. On that, I must applaud. Someone from the warmist side of the argument has stated for the record that tree rings don’t measure temperature. That eviscerates the bulk of the alarmist literature, and I must thank JoNovace for taking our side on the matter!
Tim,
I am sorry to say that your “wave” concept is quite outdated. Only Marcel Leroux got it right with his Mobile Polar High concept.
http://twileshare.com/uploads/Leroux-Global-and-Planetary-Change-19931.pdf
Sorry Thierry, Leroux simply gave a different name, mobile polar high (MPH), to a continental Arctic (cA) air mass in the traditional air mass/frontal system climatology. I debated this with him and one of his students, after the book came out. They were also simply extending Lamb’s work on movement of mid latitude cyclones along the Front.
“Almanacs list it currently at 23.4° and decreasing at 0.47” per century. The mean position of the ZEB shifts more as the global energy balance changes.”
Yes, and with a 9.3 year periodicity which, when modulated by an 11 year solar cycle, gives us a harmonic at 60 years.
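That 60-year figure is just the beat period of the two cycles, which is easy to check (a one-line Python sketch, purely arithmetic):

```python
# Beat period of two superposed cycles: 1 / |1/T1 - 1/T2|.
# Using the two periods quoted in the comment above.
def beat_period(t1: float, t2: float) -> float:
    return 1.0 / abs(1.0 / t1 - 1.0 / t2)

print(round(beat_period(9.3, 11.0), 1))  # 60.2 -- years, close to the claimed 60
```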
Thank you for a good, informative article Dr. Ball
As I read through the various comments above, “pearls, swine, and cast before” come to mind.
It seems to me that many of those who have commented (on what, I don’t know) are more interested in the up and down movements of temperature (T) than they are in what causes T to vary.
Terms such as “Pause” and “Hiatus” are unscientific. No one knows the future. These are terms of belief: belief that temps will rise in the future.
Science deals in facts. The fact is that there has been no significant rise in temps in about 18 years. When someone says there is a Pause or Hiatus, they are talking belief, not science.
Wow! Those exchanges with “JoNovace” were certainly entertaining, but most of you really got sucked in by a REAL novice. I’m an old, retired teacher, so it didn’t take me long to recognize a newly baptized convert in JoNovace. I would bet my little remaining life that this person is either a high school student or college undergraduate recently introduced to “global warming”/”climate change”/AGW/CAGW/ etc., etc., etc. by either a poorly educated, so-called science teacher or a left-wing college climate alarmist. The new semester has been in session just long enough for the new initiate to have been thoroughly indoctrinated and spurred to alarmist levels to want to “save the planet” from global disaster. It’s a re-run of Al Gore. The evidence is quite clear in the complete lack of knowledge of tree-ring temperature proxies, Michael Mann, the 97% consensus nonsense, and numerous other basic elements of the CAGW hypothesis. I am ashamed, as a retired teacher, of what has come of “education”. It seems that the far-left has hijacked our educational system and turned it from encouraging critical thinking to an institution for indoctrination of anti-American, anti-capitalist, anti-science propaganda. I think you all gave this person much more real information than he/she received in class and planted seeds of truth that may yet bear fruit.
this person is either a high school student or college undergraduate recently introduced to “global warming”
I bet you are correct. But not a peep out of her (unless I missed it) since I explained the history to her. I’ve always wondered what happens to these poor souls who show up here and get clobbered with the facts. But THIS one is of special interest. If only because she evaluated the claim that tree rings are thermometers as false in a flash of logic. When the claim is made outside of the influence of confirmation bias, even a novice sees it for what it is. Remarkable is it not, that the claim is so obviously false even to someone who has swallowed the CAGW meme hook line and sinker, but hasn’t been exposed to the “story” of how trees are thermometers by the climate science cultists.
Predicting severe weather is like trying to predict the concentration of ingredients at various points in the mixture whilst beating a cake mix.
We know that the combination of various inputs when thoroughly combined give a reasonably predictable outcome eventually, but at various stages of the process, the extreme concentration of some elements, and the extreme lack of other elements at various places in the mix, would not give an accurate representation of the overall mixture, nor if extrapolated out, an accurate prediction of the final result.
Very much like it, actually, because in both cases the rate of physical mixing — basically folding the elements together in discrete unmixed layers to create something like filo dough or Damascus steel — is much larger than the rate of diffusion. There is a lovely demo of the difference, lessee, yeah here:
We do this one in the department here for our intro classes. Hurricanes produce a lovely example of this sort of folding:
http://www.noaanews.noaa.gov/stories/images/isabel091503-1215zb.jpg
This is Isabel, which did notable damage to a friend’s house on the NC coast. Note well the laminar rain bands — the cloud bands spiralling into the center while maintaining their distinct identity as they intermix with the drier air on both sides and get pulled into the center. Lateral movement and diffusive mixing is much slower than the locally coherent flow.
This kind of picture, by the way, persists on all scales where there is a transverse gradient in the wind velocity field (which means that the field has nonzero curl). Because the Reynolds number (http://en.wikipedia.org/wiki/Reynolds_number) of the moving fluid passes various thresholds in the vicinity of any sort of inhomogeneity, even the apparently laminar flow within the rain bands is locally turbulent, and the apparently laminar flow within the much smaller scale “eddies” associated with (say) a thunderstorm turns out to be rotational and breaks up into smaller eddies, prompting the following little poem by L. F. Richardson:

Big whorls have little whorls
Which feed on their velocity,
And little whorls have lesser whorls
And so on to viscosity.
In all of these cases the eddies produce small scale laminar mixing until one descends to a length scale where diffusion proceeds at a comparable rate to the mixing. To compare the situation to that of a truly fractal structure like the Mandelbrot set, where no matter how far you descend you can find fully fractal structure at all length scales, there does exist a scale where the microdynamics of the constituent atmospheric and oceanic molecules “blurs” the structures. Really a whole spectrum of scales, as what “wins” depends on the detailed dynamics of the system — diffusive mixing, turbulent mixing, and bulk laminar transport are all occurring at the same time, at all scales, and different things can dominate at different length scales in different structures and sub-structures of the motion.
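Those Reynolds-number thresholds are easy to put rough numbers on. A back-of-envelope Python sketch (the air properties are standard sea-level values; the velocity and length scales are my own illustrative guesses, not measurements):

```python
# Reynolds number Re = rho * v * L / mu for air.
RHO_AIR = 1.2    # kg/m^3, density of air at sea level
MU_AIR = 1.8e-5  # Pa*s, dynamic viscosity of air

def reynolds(velocity: float, length: float) -> float:
    return RHO_AIR * velocity * length / MU_AIR

# A hurricane rain band (tens of m/s over kilometres) versus a small
# dust devil (a few m/s over metres) -- illustrative scales only:
print(f"rain band:  Re ~ {reynolds(30.0, 5000.0):.1e}")
print(f"dust devil: Re ~ {reynolds(3.0, 2.0):.1e}")
# Both sit far above the few-thousand range where pipe flow goes
# turbulent, which is why turbulence appears at every atmospheric scale.
```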
Kolmogorov took the notions of Richardson (who originally proposed a scaling theory to describe “turbulent diffusion” as distinct from molecular diffusion as distinct from reversible laminar mixing) and turned them into a formal scaling theory that works decently but not perfectly to describe the general spectral kinetics of turbulent mixing at “high” Reynolds numbers. A key element of the theory is the so-called Kolmogorov scale.
Energy is typically input into a turbulent system at macroscopic length scales, and undergoes a scaling “cascade” as it generates large scale whorls that transfer energy to smaller whorls, etc, down to the Kolmogorov scale where the energy finally “thermalizes” and mixes at the molecular level. At all larger length scales, energy is not actually either irreversibly lost or truly thermalized, it is just being transferred down to smaller scale rotational flow.
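The Kolmogorov scale itself is a one-line formula, eta = (nu^3 / epsilon)^(1/4). A quick estimate for atmospheric air (Python; the kinematic viscosity is the standard value, while the dissipation rate epsilon is an illustrative order-of-magnitude guess for a turbulent boundary layer):

```python
# Kolmogorov length scale: eta = (nu^3 / epsilon)^(1/4),
# the scale at which turbulent kinetic energy finally thermalizes.
NU_AIR = 1.5e-5  # m^2/s, kinematic viscosity of air (standard value)

def kolmogorov_length(epsilon: float) -> float:
    """eta in metres for a given dissipation rate epsilon (m^2/s^3)."""
    return (NU_AIR**3 / epsilon) ** 0.25

# epsilon ~ 1e-3 m^2/s^3 is a rough illustrative figure, not a measurement:
eta = kolmogorov_length(1e-3)
print(f"eta ~ {eta * 1000:.1f} mm")  # on the order of a millimetre
```

A millimetre-scale cutoff against 100 km grid cells is the gulf the cascade has to bridge.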
What this means from the point of view of solving PDEs is that one cannot use larger scale kinetics to properly model the transfer of energy and momentum between adjacent cells in the medium. The energy of the system is not in any thermodynamically describable state! It doesn’t have “a temperature” — the temperature field is wound up and varies all the way down to the Kolmogorov scale, with energy stored in meso-scale macroscopic transport of bulk fluid that is commensurate with the variations in energy at the length scale of thermalization due to intermolecular interaction and mixing. Energy can be input into the system and disappear, not appearing as “heat” but rather as the integrated kinetic energy of chunks of fluid with a “bulk” kinetic energy density that changes in highly nontrivial ways as one e.g. adapts an integration stepsize. To put it another way, a cell can easily have zero bulk velocity from the point of view of transport into neighboring cells, so one is tempted to say that the kinetic energy of the cell is “zero” in a computation summing up energy balance over all the cells in a system. But if one divides the cell into (say) 8 pieces (divide all three lengths in half) one might find that all eight pieces have nonzero kinetic energy, which is positive definite! The missing kinetic energy at the larger scale has to be assumed to be “heat” (internal energy we cannot keep track of) but it is still entirely coherent if it is associated with a single rotational vortex that fits in the cell and has a dynamic effect on a neighboring cell that is NOT THERMAL (sorry for shouting, but this is important!).
To put it bluntly, the cell dynamics at a single division of cell size smaller could entirely coherently transfer momentum, energy, and angular momentum to neighboring cells through trivial bulk transport processes and could do things like increase the energy/velocity field averaged over that cell, which is completely impossible to accomplish on the same scale with energy that has been thermalized, as it essentially violates the second law of thermodynamics for “heat” to turn back into coherent motion.
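The point about sub-cell kinetic energy can be demonstrated directly: put a single vortex inside a "cell", and the cell-average velocity sees nothing while the pointwise energy is plainly there. A minimal numpy sketch with an idealized solid-body vortex (the flow field is a toy assumption, chosen only for its symmetry):

```python
import numpy as np

# Sample a vortex that fits inside one grid cell, then compare the
# kinetic energy of the cell-average velocity with the cell average
# of the pointwise kinetic energy.
n = 64
x = np.linspace(-1, 1, n)
X, Y = np.meshgrid(x, x)
U, V = -Y, X  # solid-body rotation about the cell centre

mean_u, mean_v = U.mean(), V.mean()
ke_of_mean = 0.5 * (mean_u**2 + mean_v**2)  # what the big cell "sees"
mean_of_ke = 0.5 * (U**2 + V**2).mean()     # energy actually present

print(f"KE of cell-mean velocity: {ke_of_mean:.2e}")  # ~0: vortex invisible
print(f"mean pointwise KE:        {mean_of_ke:.2e}")  # clearly nonzero
```

The cell-averaged velocity is zero by symmetry, so a coarse model must book the vortex's energy as "heat" even though it is fully coherent rotational motion.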
One doesn’t have to wonder why they have to constantly renormalize energy in climate models because large scale cell dynamics almost instantly fails to satisfy sum rules that embody detailed energy balance (or momentum balance, or angular momentum balance, or…). They are integrating at a scale thirty orders of magnitude larger than the spatiotemporal Kolmogorov scale for atmospheric air. There are one hundred divisions of spatiotemporal length scale by two between the 100x100x1 kilometer, five minute steps being used in the better models and the scale where one can safely attribute the loss of bulk transport energies as changing temperature. There is nontrivial, non-Markovian, energy and momentum transfer at all of the scales in between.
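As a sanity check on the numbers, "one hundred divisions of scale by two" and "thirty orders of magnitude" are the same statement:

```python
import math

# 2^100 versus 1e30: one hundred halvings of a spatiotemporal scale
# span roughly thirty orders of magnitude.
print(f"2^100      = {2**100:.2e}")
print(f"log2(1e30) = {math.log2(1e30):.1f}")  # ~99.7 halvings
```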
This is why it would be quite literally a miracle if the GCMs worked. They are already chaotic at the large length scales they are using! They are already non-integrable in the specific sense that there is a Lyapunov exponent (set) such that any macroscopic value set in phase space will diverge arbitrarily from the values reached in the dynamics given arbitrarily small differences in initial conditions. They are nowhere near the length scales where one can reasonably attribute average quantities to cells of that length and expect even qualitative correspondence with the true dynamics.
As is often pointed out on this list (and in papers they refer to), GCMs fail to get lots of named weather/climate phenomena — such as thunderstorms — anywhere near “correct” simply because they are literally invisible at the length scales being treated. To a GCM, a cell with “thunderstorms” is visible only as a different macroscopic average in pressure, density, temperature, humidity. It doesn’t even show up in the cell’s bulk transport velocity, as thunderstorms can easily be moving with the same mean velocity as the surrounding air mass and have zero “mean velocity” by the time you average over all of its internal turbulence. But this is only one of the more flagrant examples of the problem, which persists all the way down to the tiny dust devils produced by gentle zephyrs as they play over a street, to the way humidity tumbles off of a sun-warmed leaf as the dew melts in the morning.
Humans are pretty creative, and it is possible that after a few decades of clever ideas and with fewer than 100 divisions of cell size by a factor of two we will eventually build climate models that “kind of” work. But there are several “stigmata” of working that have yet to be accomplished:
* Demonstrable adaptable stepsize scaling. Models need to be built that compute, divide the stepsize by two, compute again, and compare the results. It would be useful to show that one single model actually converges when its algorithm is subjected to this process, in any meaningful sense of “converges”.
Non-convergence is prima facie evidence that a useful integration scale has not yet been reached. And of course the models have not converged in any useful sense. It isn’t even clear how one could measure convergence in a nonlinear chaotic problem where every tiny alteration produces a completely different outcome. At the very least, one would need to show that the distribution of outcomes is stationary with respect to variation of stepsize/integration scale.
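What such a halve-the-step test looks like, and why pointwise convergence fails for a chaotic system, can be seen on even the simplest chaotic model. A sketch using forward Euler on the Lorenz system (standard textbook parameters; the stepsize and end time are illustrative choices):

```python
# Integrate the Lorenz system with stepsize dt and with dt/2, then
# compare the final states: for a chaotic system the two trajectories
# decorrelate, so "convergence" can only be judged statistically.
def lorenz_step(state, dt, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def integrate(dt, t_end, state=(1.0, 1.0, 1.0)):
    for _ in range(round(t_end / dt)):
        state = lorenz_step(state, dt)
    return state

a = integrate(0.001, 20.0)
b = integrate(0.0005, 20.0)  # same model, stepsize halved
gap = max(abs(p - q) for p, q in zip(a, b))
print(f"difference after t=20 with halved step: {gap:.2f}")
# Each run stays bounded on the attractor, yet the pointwise gap does
# not shrink toward zero as the step is refined.
```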
* Detailed balance without help or per-step renormalization. In order for models to function, they will necessarily have to use ad hoc approximation of the internal cell dynamics and the coupling of that dynamics between cells, very likely per stepsize (as there is non-negligible coherent energy distribution on all length scales between the macroscopic (cell size) and the Kolmogorov scale, and even Kolmogorov’s scaling rules are approximations and limiting cases, not general derivable results). At the very least, the dynamics here has to be conservative in a sensible way that neither violates the laws of thermodynamics nor requires renormalization.
After all, here’s the problem. There are three possibilities. The models can either be conservative, they can gain energy, or they can lose energy. That is, the coupled Earth-Ocean-Atmosphere system can either remain at the same temperature, or it can warm, or it can cool. Empirically, whatever the climate does, it does very slowly, over decades. Warming over decades thus appears as a tiny, tiny energy imbalance per timestep that gradually increases the energy content of the system taken as a whole. Cooling appears as a tiny energy loss per day, month, year (on average) that cumulates to produce a lower macroscopically averaged temperature. A stable climate is one that is in perfect balance. Note well, in all three cases the imbalance per year is small, since empirically temperature oscillates up and down, over a timescale as short as weeks, by several times the total change in the mean over a decade! It is quite literally lost in the noise at five minute timesteps. No computer ever built could sensibly resolve it.
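To put a number on "lost in the noise": take an illustrative trend of 0.2 K per decade (my figure, chosen only for scale) and spread it over five-minute timesteps:

```python
# How big is a decadal warming trend per five-minute model step?
trend_per_decade = 0.2                    # kelvin; illustrative, not measured
steps_per_decade = 10 * 365.25 * 24 * 12  # five-minute steps in a decade
per_step = trend_per_decade / steps_per_decade
print(f"signal per step: {per_step:.1e} K")  # ~1.9e-07 K
# ...versus weather swings of several kelvin over days to weeks:
print(f"signal-to-swing ratio vs a 5 K swing: {per_step / 5:.1e}")
```

A signal of order 10^-7 K per step is dwarfed by any plausible per-step numerical error, let alone the natural variability.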
At least, it couldn’t possibly resolve it if one renormalizes the energy after each timestep!
This is the real joke of the thing. Suppose I’m solving a differential equation for a conservative system, such as a planetary orbit, numerically. I choose a stepsize, not on the basis of what is needed to solve the problem to a given tolerance fifty or a hundred orbital cycles out, but on the basis of what I can afford to compute using paper and an abacus to do all of the arithmetic (which is a pretty good metaphor for the power available to climate science, although it is still orders of magnitude shy of doing the problem full justice). I don’t have a good algorithm for solving the problem, so I use straight 1st order Euler integration with my fixed enormous stepsize, not e.g. Runge-Kutta 4th-5th order adaptive integration. I have no good feel for what an orbital period is, so I cannot even check for whether or not my timestep is a near-multiple of the orbital period without presumptively solving the problem using the stepsize I’ve got.
I proceed to take a step. Now I do know a couple of things. For example, I think that both the total energy and the total angular momentum of the orbit ought to be constant, and I know their initial values. But when I compare the energy and angular momentum of my new orbit after a single timestep, I find that they are not constant! If I integrate the system forward without correction, I will almost certainly observe not only drift, but a systematic drift due to numerical error, either to higher or lower energy and/or angular momentum. After a few hundred steps, the orbit I’m observing will be nowhere near where it should be, for this simple, deterministic, analytically solvable problem.
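The energy drift described here is easy to reproduce. A minimal sketch: forward Euler on a circular Kepler orbit (GM = 1), where the exact solution conserves E = v^2/2 - 1/r = -0.5 forever, but the integrator pumps energy in step after step (stepsize and step count are illustrative choices):

```python
import math

# Forward Euler on a Kepler orbit starting from a circular orbit.
# The exact dynamics conserves total energy; the integrator does not.
def euler_orbit(dt, steps):
    x, y, vx, vy = 1.0, 0.0, 0.0, 1.0  # circular orbit, energy E = -0.5
    for _ in range(steps):
        r3 = (x * x + y * y) ** 1.5
        ax, ay = -x / r3, -y / r3      # gravitational acceleration, GM = 1
        x, y = x + dt * vx, y + dt * vy
        vx, vy = vx + dt * ax, vy + dt * ay
    r = math.hypot(x, y)
    return 0.5 * (vx * vx + vy * vy) - 1.0 / r  # total energy

print(f"E0 = -0.5, after 1000 Euler steps: {euler_orbit(0.01, 1000):.4f}")
# The drift is systematic, not random: every step adds a little
# spurious energy, and the orbit spirals outward.
```

Halving the stepsize shrinks the drift but never eliminates it, which is exactly the situation that tempts one into per-step renormalization.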
I can try — try, note well — to “renormalize” the orbit after each timestep, finding a state that has the right energy and angular momentum “near” the state I end up. But there is actually a whole family of possible solutions that meet that criterion — indeed, they form a kind of hypersurface. I have to choose a particular direction for the renormalization — do I want to project the solution point conserving the new value of r (the same potential energy) by adjusting the kinetic energy and hence velocity until (say) the motion remains in a plane, has the right angular momentum and the right total energy? Or do I want to keep the speed and plane, but adjust r and maybe the direction until the conservation laws are met? The resulting trajectories in the two cases (out of many possibilities) will be completely different and neither of them will probably be particularly close to the true analytic solution. I’m in serious trouble for this absolutely trivial problem.
Now imagine that I add a small nonlinear interaction to the problem — the equivalent of a cubic term that leads to precession of the perihelion — but one that perhaps averages in some way over an interference pattern with random elements in it. I no longer know a priori that the new interaction plus the old interaction still constitute a conservative system in either energy or angular momentum. Indeed, I suspect that the actual solution will experience a deterministic drift to higher or lower energy etc., but I have no way to analytically prove it.
How, exactly, can I even do what I did for the conservative problem?
If I renormalize away my errors per step by assuming conservation, I erase the answer I hope to get. If I renormalize in such a way that energy grows, I will never be able to tell if that is real growth exhibited by the actual solution or a pure accident of the way I renormalized. Ditto if I renormalize in a way that makes the energy diminish.
It takes a very strong human indeed to not pick a renormalization that causes the system to do what his or her prior beliefs say that the system should do, or to pick a renormalization on some other grounds entirely, but when that choice happens to make the system behave the way one expects conclude that this is the “right way to do it” and not examine it too carefully. Especially when the one thing you could do to find out, integrate at a much finer scale with an adaptive stepsize to some tolerance, is literally out of the question.
* And the third critical sign of success. The models have to work! They have to actually exhibit predictive skill. This is the case even for the gravity problem. In this case one might learn something from a failure. One might write a simple one planet one sun model and get a stable numerical solution that corresponds well to the analytic orbit for a while and then diverges no matter what you do!
Analyzing the failure might lead you to learn about many, many things, such as tides, orbital resonances, gravitational waves, dark matter all of which confound even your simple and nearly perfect physical model because it over-idealizes a much messier reality. Things that you would have found almost impossible to correctly include from first principles or check when implemented without the lifeline of empirical evidence to compare to.
At the moment, we lack all three.
rgb