Guest Post by Willis Eschenbach [note new Update at the end, and new Figs. 4-6]
In climate science, linearity is the order of the day. The global climate models are all built around the idea that in the long run, when we calculate the global temperature, everything else averages out, leaving the claim that the change in temperature is equal to the climate sensitivity times the change in forcing. Mathematically, this is:
∆T = lambda ∆F
where T is global average surface temperature, F is the net top-of-atmosphere (TOA) forcing (radiation imbalance), and lambda is called the “climate sensitivity”.
In other words, the idea is that the change in temperature is a linear function of the change in TOA forcing. I doubt it greatly myself; I don’t think the world is that simple. But assuming linearity makes the calculations so simple that people can’t seem to break away from it.
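To make the units concrete, here is a minimal sketch of that linear claim (the sensitivity value below is purely illustrative, not taken from the post or from any model):

```python
# Sketch of the canonical linear model: delta_T = lambda * delta_F.
# A doubling of CO2 is conventionally taken as about 3.7 W/m2 of forcing.
F_DOUBLING = 3.7  # W/m2 per doubling of CO2

def delta_t(sensitivity, delta_f):
    """Temperature change (deg C) for a given forcing change (W/m2)."""
    return sensitivity * delta_f

# With a hypothetical sensitivity of 0.8 deg C per W/m2:
print(delta_t(0.8, F_DOUBLING))  # about 3 deg C per doubling
```

Everything the post questions is packed into that one constant `sensitivity`.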
Now, of course people know it’s not really linear … but when I point that out, often the claim is made that it’s close enough to linear over the range of interest that we can assume linearity with little error.
So to see if the relationships really are linear, I thought I’d use the CERES satellite data to compare the surface temperature T and the TOA forcing F. Figure 1 shows that graph:
Figure 1. Land only, forcing F (TOA radiation imbalance) versus Temperature T, on a 1° x 1° grid. Colors indicate latitude. Note that there is little land from 50S to 65S. Net TOA radiation is calculated as downwelling solar less reflected solar less upwelling longwave radiation. Click graphics to enlarge.
As you can see, far from being linear, the relationship between TOA forcing and surface temperature is all over the place. At the lowest temperatures, they are inversely correlated. In the middle there’s a clear trend … but then at the highest temperatures, they decouple from each other, and there is little correlation of any kind.
The situation is somewhat simpler over the ocean, although even there we find large variations:
Figure 2. Ocean only, net forcing F (TOA radiation imbalance) versus Temperature T, on a 1° x 1° grid. Colors indicate latitude.
While the changes are not as extreme as those on land, the relationship is still far from linear. In particular, note how the top part of the data slopes further and further to the right with increasing forcing. This is a clear indication that as the temperature rises, the climate sensitivity decreases. It takes more and more energy to gain another degree of temperature, and so at the upper right the curve levels off.
At the warmest end, there is a pretty hard limit to the surface temperature of the ocean at just over 30°C. (In passing, I note that there also appears to be a pretty hard limit on the land surface air temperature, at about the same level, around 30°C. Curiously, this land temperature is achieved at annual average TOA radiation imbalances ranging from -50 W/m2 up to 50 W/m2.)
Now, what I’ve shown above are the annual average values. In addition to those, however, we are interested in lambda, the climate sensitivity, which those figures don’t show. According to the IPCC, the equilibrium climate sensitivity (ECS) is somewhere in the range of 1.5 to 4.5 °C for each doubling of CO2. Now, there are several kinds of sensitivities, among them monthly, decadal, and equilibrium climate sensitivities.
Monthly sensitivity
Monthly climate sensitivity is what happens when the TOA forcing imbalance in a given 1°x1° gridcell goes from say plus fifty W/m2 one month (adding energy), to minus fifty W/m2 the next month (losing energy). Of course this causes a corresponding difference in the temperature of the two months. The monthly climate sensitivity is how much the temperature changes for a given change in the TOA forcing.
But the land and the oceans can’t change temperature immediately. There is a lag in the process. So monthly climate sensitivity is the smallest of the three, because the temperatures haven’t had time to change. Figure 3 shows the monthly climate sensitivities based on the CERES monthly data.
Figure 3. The monthly climate sensitivity.
As you might expect, the ocean temperatures change less from a given change in forcing than do the land temperatures. This is because of the ocean’s greater thermal mass which is in play at all timescales, along with the higher specific heat of water versus soil, as well as the greater evaporation over the ocean.
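As a sketch of how such a per-gridcell monthly sensitivity might be estimated (this is my assumed approach with synthetic data, not Willis’s actual R code), one can regress monthly temperature on monthly TOA forcing and convert the slope to degrees per doubling:

```python
import math

# Synthetic monthly series for one hypothetical gridcell: the TOA forcing
# swings +/-50 W/m2 over the year, and temperature follows it linearly here.
TRUE_SENSITIVITY = 0.1  # deg C per W/m2 (a made-up value for the demo)
forcing = [50.0 * math.sin(2 * math.pi * t / 12) for t in range(120)]
temp = [TRUE_SENSITIVITY * f for f in forcing]

# Ordinary least-squares slope of T on F (both series are zero-mean here,
# so no intercept term is needed).
slope = sum(fv * tv for fv, tv in zip(forcing, temp)) / sum(fv * fv for fv in forcing)

# Express as deg C per doubling of CO2 (a 3.7 W/m2 forcing increase).
print(slope, slope * 3.7)  # 0.1 deg C per W/m2, ~0.37 deg C per doubling
```

With real data the scatter in Figures 1 and 2 means the fitted slope depends strongly on which gridcell and season you look at, which is the point of the post.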
Decadal Sensitivity
Decadal sensitivity, also called transient climate response (TCR), is the response we see on the scale of decades. Of course, it is larger than the monthly sensitivity. If the system could respond instantaneously to forcing changes, the decadal sensitivity would be the same as the monthly sensitivity. But because of the lag, the monthly sensitivity is smaller. Since the larger the lag, the smaller the temperature change, we can use the amount of the lag to calculate the TCR from the monthly climate response. The lag over the land averages 0.85 months, and over the ocean it is longer, at 2.0 months. For the land, the TCR averages about 1.6 times the monthly climate sensitivity. The ocean adjustment for TCR is larger, of course, since the lag is longer. Ocean TCR averages about 2.8 times the monthly ocean climate sensitivity. See the Notes below for the calculation method.
Figure 4 shows what happens when we put the lag information together with the monthly climate sensitivity. It shows, for each gridcell, the decadal climate sensitivity, or transient climate response (TCR). It is expressed in degrees C per doubling of CO2 (which is the same as degrees per forcing increase of 3.7 W/m2). The TCR shown in Figure 4 includes the adjustment for the lag, on a gridcell-by-gridcell basis.
Figure 4. Transient climate response (TCR). This is calculated by taking the monthly climate sensitivity for each gridcell, and multiplying it by the lag factor calculated for that gridcell. [NOTE: This Figure, and the values derived from it, are now updated from the original post. The effect of the change is to reduce the estimated transient and equilibrium sensitivity. See the Update at the end of the post for details.]
Now, there are a variety of interesting things about Figure 4. One is that once the lag is taken into account, some of the difference between the climate sensitivity of the ocean and the land disappears, and some of it changes. This is particularly evident in the southern hemisphere; compare Southern Africa or Australia in Figures 3 and 4.
Also, you can see, water once again rules. Once we remove the effect of the lags, the drier areas are clearly defined, and they are the places with the greatest sensitivity to changes in TOA radiative forcing. This makes sense because there is little water to evaporate, so most of the energy goes into heating the system. Wetter tropical areas, on the other hand, respond much more like the ocean, with less sensitivity to a given change in TOA forcing.
Equilibrium Sensitivity
Equilibrium sensitivity (ECS), the longest-term kind of sensitivity, is what would theoretically happen once all of the various heat reservoirs reach their equilibrium temperature. According to the study by Otto, using actual observations, for the past 50 years the ECS has stayed steady at about 130% of the TCR. The study by Forster, on the other hand, showed that the 19 climate models studied gave an ECS which ranged from 110% to 240% of the TCR, with an average of 180% … go figure.
This lets us calculate global average sensitivity. If we use the model percentages to estimate the equilibrium climate sensitivity (ECS) from the TCR, that gives an ECS of from 0.14 * 1.1 to 0.14 * 2.4. This implies an equilibrium climate sensitivity in the range of 0.2°C to 0.3°C per doubling of CO2, with a most likely value (per the models) of 0.25°C per doubling. If we use the 130% estimate from the Otto study, we get a very similar result, 0.14 * 1.3 ≈ 0.2°C per doubling. [NOTE: these values are reduced from the original calculations. See the Update at the end of the post for details.]
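The arithmetic can be reproduced in a few lines (the 0.14°C/doubling global-mean TCR is the value implied by the text above; the percentage ratios are from the Forster and Otto studies as quoted):

```python
TCR_GLOBAL = 0.14  # deg C per doubling: global-mean TCR from this analysis

# Model-based ECS/TCR ratios quoted from Forster: 110% to 240%, average 180%
ecs_low = TCR_GLOBAL * 1.1   # ~0.15 deg C, i.e. ~0.2 to one decimal
ecs_high = TCR_GLOBAL * 2.4  # ~0.34 deg C, i.e. ~0.3
ecs_mid = TCR_GLOBAL * 1.8   # ~0.25 deg C, the "most likely" value

# Observation-based ratio quoted from Otto: 130%
ecs_otto = TCR_GLOBAL * 1.3  # ~0.18 deg C, i.e. ~0.2

print(ecs_low, ecs_high, ecs_mid, ecs_otto)
```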
This is small enough to be lost in the noise of our particularly noisy climate system.
A final comment on linearity. Remember that we started out with the following claim, that the change in temperature is equal to the change in forcing times a constant called the “climate sensitivity”. Mathematically that is
∆T = lambda ∆F
I have long held that this is a totally inadequate representation, in part because I say that lambda itself, the climate sensitivity, is not a constant. Instead, it is a function of T. However, as usual … even there we cannot assume linearity in any form. Figure 5 shows a scatterplot of the TCR (the decadal climate sensitivity) versus surface temperature.
Figure 5. Transient climate response versus the average annual temperature, land only. Note that the TCR only rarely goes below zero. The greatest response is in Antarctica (dark red).
Here, we see the decoupling of the temperature and the TCR at the highest temperatures. Note also how few gridcells are warmer than 30°C. As you can see, while there is clearly a drop in the TCR (sensitivity) with increasing temperature, the relationship is far from linear. The ocean data is even more curious. Figure 6 shows the same relationship as Figure 5. Note the different scales in both the X and Y directions.
Figure 6. As in Figure 5, except for the ocean instead of the land. Note the scales differ from those of Figure 5.
Gotta love the climate system, endlessly complex. The ocean shows a totally different pattern than that of the land. First, by and large the transient climate response of the global ocean is less than a tenth of a degree C per doubling of CO2 (global mean = 0.08°C/2xCO2). And contrary to my expectations, below about 20°C, there is very little sign of any drop in the TCR with temperature as we see in the land in Figure 5. And above about 25°C there is a clear and fast dropoff, with a number of areas (including the “Pacific Warm Pool”) showing negative climate responses.
I also see in passing that the 30°C limit on the temperatures observed in the open ocean occurs at the point where the TCR=0 …
What do I conclude from all of this? Well, I’m not sure what it all means. A few things are clear. My first conclusion is that the idea that the temperature is a linear function of the forcing is not supported by the observations. The relationship is far from linear, and cannot be simply approximated.
Next, the estimates of the ECS arising from this observational study range from 0.2°C to 0.5°C per doubling of CO2. This is well below the estimate of the Intergovernmental Panel on Climate Change … but then what do you expect from government work?
Finally, the decoupling of the variables at the warm end of the spectrum of gridcells is a clear sign of the active temperature regulation system at work.
Bottom line? The climate isn’t linear, never was … and succumbing to the fatal lure of assumed linearity has set the field of climate science back by decades.
Anyhow, I’ve been looking at this stuff for too long. I’m gonna post it, my eyeballs are glazing over. My best regards to everyone,
w.
NOTES
LAG CALCULATIONS
I used the Lissajous figures of the interaction between the monthly averages of the TOA forcing and the surface temperature response to determine the lag.
Figure N1. Formula for calculating the phase angle from the Lissajous figure.
This lets me calculate the phase angle between forcing and temperature. I always work in degrees, old habit. I then calculate the multiplier, which is:
Multiplier = 1/exp(phase_angle°/360°/-.159)
The derivation of this formula is given in my post here. [NOTE: per the update at the end, I’m no longer using this formula.]
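For reference, here is the superseded multiplier as code; it does reproduce the lag factors quoted earlier in the post (a 0.85-month land lag gives roughly 1.6, a 2.0-month ocean lag roughly 2.8), assuming an annual (12-month) cycle:

```python
import math

def multiplier(lag_months, cycle_months=12.0):
    """The original (now superseded) lag multiplier.

    The phase fraction is lag / cycle length, and 0.159 is roughly
    1/(2*pi), so this is equivalent to exp(2*pi * phase_fraction).
    """
    phase_fraction = lag_months / cycle_months
    return 1.0 / math.exp(phase_fraction / -0.159)

print(multiplier(0.85))  # ~1.56, the ~1.6 land factor quoted above
print(multiplier(2.0))   # ~2.85, the ~2.8 ocean factor quoted above
```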
To investigate the shape of the response of the surface temperature to the TOA forcing imbalance, I use what I call “scribble plots”. I use random colors, and I draw the Lissajous figures for each gridcell along a given line of latitude. For example, here are the scribble plots for the land for every ten degrees from eighty north down to the equator.
Figure N2. Scribble plots for the northern hemisphere, TOA forcing vs surface temperature.
And here are the scribble plots from 20°N to 20°S:
Figure N3. Scribble plots for the tropics, TOA forcing vs surface temperature.
As you can see, the areas near the equator have a much smaller response to a given change in forcing than do the extratropical and polar areas.
DATA AND CODE
Land temperatures from here.
CERES datafile requisition site
CERES datafile (zip, 58 MByte)
Sea temperatures from here.
R code is here … you may need eyebeach, it’s not pretty.
All data in one 156 Mb file here, in R format (saved using the R instruction “save()”)
[UPDATE] Part of the beauty of writing for the web is that my errors don’t last long. From the comments, Joe Born identifies a problem:
Joe Born says:
December 19, 2013 at 5:37 am
My last question may have been a little obscure. I guess what I’m really asking is what model you’re using to obtain your multiplier.
Joe, you always ask the best questions. Upon investigation, I see that my previous analysis of the effect of the lags was incorrect.
What I did to check my previous results was what I should have done in the first place: drive a standard lagging incremental formula with a sinusoidal forcing:
R[t] = R[t-1] + (F[t] – F[t-1]) * (1 – timefactor) + (R[t-1] – R[t-2]) * timefactor
where t is time, F is some sinusoidal forcing, R is response, timefactor = e ^ (-1/tau), and tau is the time constant.
Then I measured the actual drop in amplitude and plotted it against the phase angle of the lag. By examination, this was found to be an extremely good fit to
Amplitude as % of original = 1 – e ^ (-.189/phi)
where phi is the phase angle of the lag, from 0 to 1. (The phase angle is the lag divided by the cycle length.)
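To illustrate the check described above (my own sketch, not the linked spreadsheet), one can drive the lagging recursion with a sinusoidal forcing and measure the drop in response amplitude. The recursion happens to rearrange into a standard first-order low-pass filter, so the simulated amplitude can also be compared against an analytic gain:

```python
import cmath
import math

TAU = 2.0                     # time constant in months (ocean-like)
K = math.exp(-1.0 / TAU)      # the "timefactor" from the recursion
OMEGA = 2.0 * math.pi / 12.0  # annual cycle, one step per month

# Drive R[t] = R[t-1] + (F[t]-F[t-1])*(1-K) + (R[t-1]-R[t-2])*K
# with a sinusoidal forcing, long enough for the startup transient to decay.
n = 240  # 20 annual cycles
F = [math.sin(OMEGA * t) for t in range(n)]
R = [0.0, 0.0]
for t in range(2, n):
    R.append(R[t-1] + (F[t] - F[t-1]) * (1 - K) + (R[t-1] - R[t-2]) * K)

# Project the last full cycle onto sin/cos to recover the response amplitude.
a = sum(R[t] * math.sin(OMEGA * t) for t in range(n - 12, n)) / 6.0
b = sum(R[t] * math.cos(OMEGA * t) for t in range(n - 12, n)) / 6.0
amp_sim = math.hypot(a, b)

# Algebraically the recursion reduces to R*(1 - K*z^-1) = F*(1 - K),
# a first-order low-pass whose gain at frequency omega is:
amp_analytic = (1 - K) / abs(1 - K * cmath.exp(-1j * OMEGA))

print(amp_sim, amp_analytic)  # both ~0.70: about a 30% amplitude drop
```

The forcing amplitude is 1, so the printed value is directly the “amplitude as % of original” that the fitted formula above approximates.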
The spreadsheet showing my calculations is here.
My thanks to Joe for the identification of the error. I’ve replaced the erroneous figures, Figures 4-6. For Figs. 5 and 6 the changes were not very visible. They were a bit more visible in Figure 4, so I’ve retained the original version of Figure 4 below.
NOTE: THE FIGURE BELOW CONTAINS AN ERROR AND IS RETAINED FOR COMPARISON VALUE ONLY!! 
NOTE: THE FIGURE ABOVE CONTAINS AN ERROR AND IS RETAINED FOR COMPARISON VALUE ONLY!!
I suspect that Willis is a lot closer to understanding the climate than a lot of these so-called climate scientists are.
Next, the estimates of the ECS arising from this observational study range from 0.2°C to 0.5°C per doubling of CO2.
==========
Wow! Great work. Wondered where you’d been hiding. The color graphics are a fantastic aid to visualizing the complexity. This lower ECS estimate makes much more sense and explains the “pause” much better than the climate models.
Another excellent analysis, Willis. We pay gobs of bucks to support super computer number crunching to get results I could get with a slipstick and y=mx+b? Have Decitrig, will compute.
Bottom line? The climate isn’t linear, never was … and succumbing to the fatal lure of assumed linearity has set the field of climate science back by decades.
=============
It has the advantage of allowing a whole generation of climate scientists to tackle really hard science questions armed with nothing more than liberal arts degrees.
Excellent post Willis! Of course anyone who spends a little time thinking about it would realize that trying to model complex systems with a linear model is not going to work. It explains perfectly why they fail so monstrously.
v/r,
David Riser
When I saw
I was reminded of my Engineering studies, where I learnt (a little) computer programming. That was in the 1970s, when one still had to be “clever” to use computers and get reasonable answers to complicated questions within a limited time.
Anyways, I’d observed that almost all the “constants” in the formulae I was dealing with were only “constant” over a short range of conditions.
An example of this is the treatment of Young’s Modulus (E) as a linear approximation of the elasticity of a material over a limited range. For most engineering purposes, a constant is good enough, as the design is well within the elastic range in most applications, and dimensional variability (the ability to make stuff to the exact dimensions of the design) and material variability tend, in most cases, to swamp any “error” from using the constant for E. Using a constant is also quicker for checking stuff by hand.
But if you need to be really accurate in predicting deflection under stress, or your design approaches plasticity, then you must also consider stress vectors in magnitude, the material’s bulk modulus and other obvious factors such as temperature.
It puzzled a few of my tutors as to why I made E a function of stress in my programming assignments. I had to draw a stress-strain curve to explain. The difference in results is small, and the effort to “completely” define the function is huge.
I was discouraged from doing it the right way after a while because they thought it a waste of time, even when my E() function, as a place-holder, returned a constant regardless of the value of its input parameters. The “waste” of programming time was a minute or two. And the documentation value was immense.
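A sketch of the practice the commenter describes: a modulus “function” that returns a constant today, but documents the dependence and gives later code a place to put the real stress-strain curve (the names and values here are hypothetical):

```python
# Hypothetical illustration: Young's modulus as a function of state,
# currently returning a constant (a steel-like value, in GPa).
def youngs_modulus(stress_mpa, temperature_c=20.0):
    """E in GPa. Real materials are only linear over a limited range;
    replace this body with a measured stress-strain relationship when
    accuracy near the plastic region (or at high temperature) matters."""
    return 200.0  # constant for now; call sites need not change later

# Downstream code is already written against the function, not a constant:
stress_mpa = 150.0
strain = stress_mpa / (youngs_modulus(stress_mpa) * 1000.0)  # E: GPa -> MPa
print(strain)  # 0.00075
```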
When I looked into some (a lot) of the GCM and other modelling code (mis)used by the catastrophists, it was clear that the models would never be able to produce realistic results as they had fundamentally variable parameters as constants. There was no possibility to introduce coupling and confounding factors. In the systems being modelled, it does matter that that parameter varies “only” a few percent because an iterative finite element analysis will accumulate those errors.
The climate-sensitivity parameter lambda does not take a fixed value either by place or by time. The IPCC takes its value as beginning at 0.31 Kelvin per Watt per square meter, as 0.44 after 100 years, as 0.5 over 200 years, and 0.88 over 3500 years. The curve is similar to the top left quadrant of an ellipse. It is anything but linear.
The key question is whether any value as high as 0.88 for lambda is justifiable. To obtain so high a value, the models assume that temperature feedbacks are so strongly net-positive as to triple the small direct warming from CO2, which is not much more than 1 K per doubling. Suddenly that becomes 3.26 K thanks to imagined net-positive temperature feedbacks. However, the mutual-amplification function that all of the models use is defective in that it is based on an equation from process engineering that has a physical meaning in an electronic circuit but has no physical meaning in the real climate unless one introduces a damping term. But any plausible value for a damping term would divide the IPCC’s central estimate of climate sensitivity by at least 4.
So don’t worry too much about linearity, for lambda is a Humpty-Dumpty variable: it can take any value that is likely to keep the climate scare going.
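For readers unfamiliar with the electronic analogy, the amplification described above follows the classic feedback-gain formula G = 1/(1 - f); the loop-gain value below is simply back-solved from the quoted numbers, not taken from any model:

```python
# Classic feedback amplification: G = 1 / (1 - f), f being the loop gain.
def system_gain(loop_gain):
    return 1.0 / (1.0 - loop_gain)

# Back-solve the loop gain implied by the quoted numbers:
# ~1 K of direct warming amplified to ~3.26 K requires f of about 0.69.
implied_f = 1.0 - 1.0 / 3.26
print(implied_f, system_gain(implied_f))  # ~0.693, 3.26

# With f this close to 1, the gain is extremely sensitive to f,
# which is the commenter's point about needing a damping term:
for f in (0.5, 0.69, 0.75):
    print(f, system_gain(f))  # gains of 2.0, ~3.2, 4.0
```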
Willis,
As usual, I understood at least some of it. However, how anyone can claim that almost anything in nature, including its fauna, is linear, is beyond me. From quarks to dark matter, the universe is a wave. Representing a wave with a linear “anything” is almost willful misrepresentation. Like you said, it is easier to make things linear. What I mostly noticed, though, was the actual amount of data that disappears when using a linear function. Being a bit of a cynic, could that be the reason?
Do climate models “assume” the linear equation, or is that equation used (ex post facto) to back out a bulk lambda value based on the model’s derived forcings and its temperature output? My understanding would be that lambda is NOT a fixed parameter, rather something that can be derived and compared afterwards.
I’ve been pondering a “degrees of surface T per W/m^2” over ocean, by month, by latitude, as a way to show something similar: that the degrees per W/m^2 are all over the place. I think this may make that unnecessary. Nicely done.
I think you will find that Willis has shown in a previous post that the models can be reduced to a linear approximation, as they all have different forcings matched with different lambdas that, when graphed, effectively reduce all their effort to a simple linear equation; hence the starting point of this post.
If Present Trends Continue….. you can predict anything by selecting the measurement period
IIRC, Idso got an ECS of 0.4 deg C for doubling of CO2 years ago in his “natural experiments”.
I like the scribble plots. I think they may be a new art form.
Yeah I’ve always had a problem with ‘assumed linearity’. I’m entirely with you on this, assumed linearity is one of the major flaws in current climate research.
One of the best examples I can think of illustrating non-linearity, which I came across in my university days and which has intrigued me ever since, is the way iron behaves in stars. From memory, as a star burns up elements during its lifetime, all the elements prior to iron behave in a more or less linear fashion during nuclear fusion, until the star starts to use up the iron in its mass. Iron just doesn’t want to behave like the others. The linear trend breaks down spectacularly, and causes a supernova, which totally destroys the star (depending on its mass). So much for a linear trend within stellar processes.
Most scientists assume linearity out of a combination of laziness and ignorance about the real world. Non-linearity is much harder to predict because the exact position of inflection points and changes in trend/rates may be unknown, meaning non linear discoveries in science usually come well after assumed linearity. (There is also a political element which I won’t go into here, as a major part of social planning assumes the human benefits of ‘evenness’ over units of space and time, which is the same sort of assumption as ‘linearity’).
In my field of mineral exploration non-linearity underpins much of the research. The distribution of mineral concentration in the earth’s crust is very non-linear, both in terms of spatial extent and in terms of time. Often one simply cannot use linear models to analyse it, yet I find that many academics and public service scientists often assume linearity in the way they conduct their research and make public policy, and this is also partly the reason that mineral exploration is largely left to the private sector; there are simply too many public policy analysts who want to ‘arrange’ the minerals evenly across the landscape, where in most cases this is simply not how they occur in nature. They are unevenly distributed, their position is often unknown or uncertain, and this also creates unevenness in social wealth and ultimately society. And this sort of unevenness in a likewise fashion with other fields, permeates right through to the stockmarket, and ultimately to wealth between individuals, cultures, and nations. Non linearity is built into the economic system, from the foundation of the raw materials that underpin economies, and is also at least partly the sort of reason that central and social planning often fails. It is difficult to tell this to various politicians and ideologues, but it is often the first mistake they make when making major policy decisions.
Bernd Felsche says: “…When I looked into some (a lot) of the GCM and other modelling code (mis)used by the catastrophists, It was clear that the models would never be able to produce realistic results as they had fundamentally variable parameters as constants. There was no possibility to introduce coupling and confounding factors. In the systems being modelled, it does matter that that parameter varies “only” a few percent because an iterative finite element analysis will accumulate those errors.”
I was added to a project back in the 60’s where they were deriving material physical properties at temperatures of about 5,000°F from measurements on very small specimens. Certain materials gave negative thermal conductivities. Something was clearly wrong. It remained a mystery until, over dinner one night, I had an “AHA moment.” The early research team, to save apparatus costs, had assumed linearity over the shortest specimen dimension, so short that any deviation from linearity was assumed similarly “small.” During finite element analysis, however, that erroneous profile smeared itself throughout the relaxation network, inverting the temperature distribution at the very center of those specimens. After running the program, the hot side became the cold side and vice versa.
This shows the Tower of Babel principle: Any computer program whose complexity exceeds the ability of any human to grasp the calculations in their entirety is doomed to mysterious failures, as is any program which has been dumbed down via simple-minded assumptions to match the ability of its creators.
Elegant, Willis. Thanks for your work. Your scatter-grams are great. The flattening of some of the colors’ curves at 30° is reminiscent of a log curve. But the impact of combinations of effects is driving things in different directions. For example, the cyan (0-30°N latitude) in Figure 2 has something akin to a refraction pattern in a negative response at 12°-28°C SSTs, but is scattered in a positive response with a lower limit at 0.1°C per W/m^2.
In climate as in the study of many natural systems, nonlinearity = enlightenment.
Those “scribble plots” look like Lorenz attractors.
Willis has nicely demonstrated the essence of nonlinearity which is that as a system e.g. an atmospheric or ocean system evolves or changes it changes the controlling parameters of that system in real time as it does so. Those parameters e.g. lambda are not fixed or linear.
Or as James Gleick put it in his book “Chaos”: “Nonlinearity means that the act of playing the game has a way of changing the rules”.
These graphs seem to indicate on a quick look, that the climate has a thermostat type effect such that far from being linear, there are limits to how hot or how cold the climate system will go.
For instance, on the hot side of things, does heat create more clouds, limiting the increase of heat as the clouds reflect more radiation? I’m not sure what the effect is on the cold side of things. That’s the direction that can get a bit “runaway” in my mind. Get cold, get ice, reflect more radiation, get colder, etc.
We have a history of the cold side running away with things in the Earth’s long history. But we seem to have some hard limits to how hot things can get.
Willis,
Thank you. I have tried and tried to bring out the point, many times, that treating climate sensitivity as a constant is wrong, wrong, wrong, and that as temperature rises, climate sensitivity falls, to the point at which, at about 30 degrees C over water, it becomes 0. The gain of the system is inversely related to temperature. Taking up Monckton’s point, this is analogous in engineering to an amplifier approaching saturation. At some point, increasing the drive or feedback can’t increase the output because the energy available has been exhausted. It has been my contention for some time that climate is being held in such a narrow range by saturation effects, that is, Lambda being an inverse function of temperature. If this is the case (and it almost certainly is) then Climate Sensitivity is irrelevant.
It’s important, however, to consider physical mechanisms. In the climate, the energy isn’t limited by the power source exactly; it’s limited by the instability of evaporating too much water in one place, and the dew point in the atmosphere causing condensation. So as our “drive” goes up, we reach a threshold – dare I say a tipping point – where there is too much water to be held in solution at a given temperature, and it precipitates as clouds. That then causes the input power from the sun to fall until the gain reaches zero (the system is saturated). This is very non-linear behaviour; it onsets rapidly at particular humidities, temperatures and air pressures. As average temperature rises, the region near the equator where this happens will expand, reducing the average gain across the planet. Ultimately, Venus Earth is more like Miami Earth: cosy and warm, with maxes around 30 and mins around 20, just like in the tropics.
Willis,
The other problem is that lambda is also a complex number. The idea that feedbacks in the climate can be treated as a single lumped SCALAR is just naive in the extreme. I wanted to write a paper on that, but need to find someone to help with the math (I’m not sure how to deal with the non-linearities). The only solution that allows any sort of positive feedback but is as stable as our climate is one that is approaching saturation and is inherently non-linear.
On the other hand, such high transient sensitivity in the presence of obvious saturation means that in past ice ages the weather should have been very unstable; high gain should have led to greater instability, but it didn’t. This probably means the gain is nothing like they say either, and that we have saturation occurring AND lower gain.
Totally off topic, but I’ve been told David Appell, after being banned here for his nonsense, brags of sneaking back under several aliases to continue his nonsensical harassment of the climate realists who post here. He apparently admitted to doing this on the following thread: http://dailycaller.com/2013/12/18/reddit-bans-comments-from-global-warming-skeptics/#comment-1169709832
I swear I’m not a fan boy. Just a big fan. I love the way you find some aspect of science, look at it, turn it upside down, study it and then come up with a plan to crunch some numbers to support what you figured out. Good to see you back.
PS – /sarc on/ With oceans as energy buffers that vary how they take up or give off energy, the feedbacks, the time constants, convection, radiation, humidity, latent heat of vaporization and condensation, clouds, wind, pressure … they all add up to a neat linear system that won’t ever change unless we add CO2.
Mario Lento says:
December 18, 2013 at 9:48 pm
Thanks, Mario, but actually the process is much less directed than you imagine. What I do is more like play. I just graph up various combinations of variables, and think about them, and then graph up some more combinations, and consider how I might change my graphs to give more insight into some aspect or other …
After looking at lots and lots of graphs, if I’m lucky I can start to get some idea of what might be happening … but that’s never guaranteed.
In other words, it’s amateur science, which means science done for fun and the love of the game rather than for money. I just mess around with the variables until something interesting pops up.
Regards,
w.
0.2°C to 0.5°C per doubling of CO2? Uh-oh, I’m not sure that’s enough to hold back the next glaciation 🙁
Willis: Great to see you back; I pray your health is improving daily. Great post; it illustrates the non-linearity much better than anything I have seen before, both for land and oceans.
I often chuckle when I remember the old saying I heard somewhere in the 60’s, though I don’t remember where. It is: “Constants aren’t and variables won’t”.
Good health to you.