Guest post by Willis Eschenbach
Based on a model, unfortunately. Behind a paywall, unfortunately. Posted without comment, emphasis and formatting mine. The abstract says:
Recovery mechanisms of Arctic summer sea ice
S. Tietsche, D. Notz, J. H. Jungclaus, J. Marotzke, Max Planck Institute for Meteorology, Hamburg, Germany
We examine the recovery of Arctic sea ice from prescribed ice-free summer conditions in simulations of 21st century climate in an atmosphere–ocean general circulation model. We find that ice extent recovers typically within two years. The excess oceanic heat that had built up during the ice-free summer is rapidly returned to the atmosphere during the following autumn and winter, and then leaves the Arctic partly through increased longwave emission at the top of the atmosphere and partly through reduced atmospheric heat advection from lower latitudes. Oceanic heat transport does not contribute significantly to the loss of the excess heat.
Our results suggest that anomalous loss of Arctic sea ice during a single summer is reversible, as the ice–albedo feedback is alleviated by large-scale recovery mechanisms. Hence, hysteretic threshold behavior (or a “tipping point”) is unlikely to occur during the decline of Arctic summer sea-ice cover in the 21st century.
Received 1 October 2010; accepted 14 December 2010; published 26 January 2011.
Citation: Tietsche, S., D. Notz, J. H. Jungclaus, and J. Marotzke (2011), Recovery mechanisms of Arctic summer sea ice, Geophys. Res. Lett., 38, L02707, doi:10.1029/2010GL045698.
[UPDATE] The paper is here. My thanks to alert readers.
[UPDATE] From the conversation below.
steven mosher says:
March 2, 2011 at 11:49 am
Since we don't have any observations of recovery from ice-free conditions, it would HAVE to be based on a model. Nothing unfortunate there.
Thanks for your thoughts as always, steven. It is unfortunate that we HAVE to base it on a model. It is extremely unfortunate that we have to base it on current generation AOGCMs, which are not renowned for regional accuracy.
And if we had observations we would write equations that quantified over physical entities. That set of equations would be called a "theory", which is nothing more than a model.
Indeed it would. I should have been more clear and said “unfortunately, it is based on one of the current generation of atmosphere ocean global climate (or circulation) models (AOGCMs)”, to distinguish it from such things as models based solidly on physics on the one hand (which GCMs are not), and models which can be tested experimentally and quantifiably (such as you describe above as a “theory”) on the other hand.
The temperature trend output of the GISSE climate model can be almost exactly replicated (0.98 correlation) by a simple one-line, one-variable equation relating forcing and temperature. Is the real world that simple?
If you think that a climate model like NASA’s pride and joy, the GISSE GCM, is a sufficiently accurate view or model or representation or theory of the real world that you can use it to make predictions and projections of future temperatures, I’m not sure what I can say.
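To see what such a one-line, one-variable fit looks like in practice, here is a toy version. The forcing and "model temperature" series below are synthetic stand-ins (the real GISS series are not reproduced here), so this sketches the method, not the actual 0.98 result:

```python
import numpy as np

# Synthetic stand-ins for a forcing series (W/m^2) and a climate-model
# temperature response (deg C) -- NOT the real GISS data.
rng = np.random.default_rng(0)
years = np.arange(1880, 2001)
forcing = 0.02 * (years - 1880) + 0.3 * np.sin((years - 1880) / 11.0)
temperature = 0.5 * forcing + rng.normal(0.0, 0.02, len(years))

# The "one line, one variable" model: T = lambda * F.
# Least-squares estimate of the single sensitivity parameter lambda.
lam = np.dot(forcing, temperature) / np.dot(forcing, forcing)
predicted = lam * forcing

# Correlation between the one-parameter fit and the "model output".
r = np.corrcoef(predicted, temperature)[0, 1]
print(f"lambda = {lam:.3f}, correlation = {r:.3f}")
```

With realistic GISS forcings substituted for the synthetic series, this is essentially the whole of the "replication" exercise: one free parameter, one line of algebra.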
The real problem with the models, though, is that the modelers rarely provide test results. For example, in this study they are using the MPI ECHAM5 coarse-resolution model. They report no test results for that model. They do say that the high-resolution version of the model "has been tested extensively and performs well in simulating Arctic climate [Chapman and Walsh, 2007]."
(A detailed description of the high-res version is here.)
A visit to their reference Chapman and Walsh tells a different story. The abstract starts out by saying:
Simulations of Arctic surface air temperature and sea level pressure by 14 global climate models used in the Fourth Assessment Report of the Intergovernmental Panel on Climate Change are synthesized in an analysis of biases and trends. Simulated composite GCM surface air temperatures for 1981–2000 are generally 1°–2°C colder than corresponding observations with the exception of a cold bias maximum of 6°–8°C in the Barents Sea. The Barents Sea bias, most prominent in winter and spring, occurs in 12 of the 14 GCMs and corresponds to a region of oversimulated sea ice. All models project a twenty-first-century warming that is largest in the autumn and winter, although the rates of the projected warming vary considerably among the models.
As an opening, that’s hardly a ringing endorsement. They can’t get the temperatures right, the trends “vary considerably among the models”, they’re only tested over 20 years, and this is described as “tested extensively and performs well”?
Too general, you say? OK, it says "the Max Planck Institute (MPI) ECHAM5 and the National Center for Atmospheric Research (NCAR) Community Climate System Model version 3 (CCSM3) simulate temperatures that are warmer than observed in the Barents Sea."
It says “The annual mean rms [temperature] errors range from about 2°C for the MPI ECHAM5, NCAR CCSM3, and GFDL CM2.1 models to as high as 7°C for the CSIRO Mk3.0 model.”
It says: “Annual SLP RMS errors range from 2 mb (MPI ECHAM5) to almost 9 mb (NCAR CCSM3), while the across-model range of winter SLP RMS errors is comparable to the across-model range of summer SLP RMS errors.”
It says: “Winter model biases of temperature for the Arctic Ocean, expressed as RMS errors from observational reanalysis fields, average 3 times larger than those for summer, but the across model scatter of the winter errors are 4 times larger than the corresponding summer errors. The MPI ECHAM5 and the NCAR CCSM3 GCMs outperform the other models for most seasons.”
Conclusion? While the models did poorly, the MPI ECHAM5 was one of the best of the poor models.
There is a deeper problem, however: the almost incomprehensible lack of data from the poles, especially any long time series. Even the satellites generally don't cover the poles themselves or some of the surrounding area. The Arctic is a frozen ocean with a few scattered ice-mounted floating research stations, and unmanned sensors here and there. We know almost nothing about the conditions, say, thirty metres (100′) either up or down from the top of the ice anywhere.
Now, if we want to take our best guess about what's happening where we have no sensors or thermometers, we naturally turn to computers. An atmospheric GCM is used to give a "best fit" to whatever observational data we actually do have. The one most used these days is called the ERA-40. The resulting output is the set of what the ERA-40 folks call "synthetic observations", produced four times daily, at 0, 6, 12, and 18 Zulu, that best fit whatever observations at whatever times and places we have.
I have no problem with that, it’s our best guess. And there are many things for which it is useful.
But for reasons of modelers’ confusion about reality, the output of the climate model is usually called the “ERA-40 reanalysis data”. The “synthetic” got misplaced somewhere along the way. This is a huge misnomer.
Because when you compare the output of the MPI-ECHAM5 with the ERA-40 reanalysis data, you are not comparing model output to data. You are comparing the output of two closely related climate models, one that is “best-fit” to a set of constraints and one which is not. Neither is data.
This confusion of models with reality is in the Chapman et al. Abstract above, where they say:
Simulated composite GCM surface air temperatures for 1981–2000 are generally 1°–2°C colder than corresponding observations with the exception of a cold bias maximum of 6°–8°C in the Barents Sea.
when what they mean is that
“Simulated composite GCM surface air temperatures for 1981–2000 are generally 1°–2°C colder than corresponding simulated ERA-40 best-fit model results with the exception …”
This is particularly crucial in the Arctic, where there is so little data. Using the ERA-40 as our best guess is one thing, that’s fine. But I object strongly to the whole paradigm of comparing the output of two climate models, in an area where by and large we have no data, and saying that the results of that comparison are anything stronger than “interesting”.
Yes, we have little data in the Arctic. But for starters, how about they throw out the ERA-40 results entirely, and show us how the MPI ECHAM5 actually compares to WHATEVER FREAKIN’ DATA WE ACTUALLY HAVE! Compare models with actual data, novel idea, what?
I want to know how well it (and by “it” I mean the actual model used in the study, not some other “high resolution” version) predicted the actual sea level pressure as measured by gauges at Svalbard Luft and wherever else. I don’t care how well the model’s high-tone cousin matched some other similar model’s best guess of sea level pressure at some random spot on the ice with no pressure gauge in the nearest quarter million square miles. That’s meaningless.
So unfortunately, once again I fear it’s models all the way down™ …
The problem with their procedure is that we end up with absolutely no idea of the real uncertainties of the results. The paper is no help, the only mention is “All numbers for energy budget anomalies are rounded to ten AEU, to account for uncertainty arising from energy budget residuals and ensemble spread.”
So we don't know whether the MPI ECHAM5 results are gold, or fool's gold. Having looked at a lot of model results, I'd guess the latter. But we don't know.
Finally, to run that unverifiable model for what, 80 years into the future, as they do? Pull the other one, it's got bells on it, as my friends say. At that long a run, no iterative model can be relied on to paint anything other than a portrait of the model programmers' under- and misunderstandings.
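The worry about long iterative runs can be illustrated with the classic sensitive-dependence demonstration: iterate a simple nonlinear map from two nearly identical starting points and watch the trajectories separate. This is a generic illustration of error growth in iterated nonlinear systems, not a claim about any particular GCM's internals:

```python
# Two runs of the logistic map x -> r*x*(1-x), differing only by 1e-10
# in the initial condition, showing how iteration amplifies arbitrarily
# small errors in a nonlinear system.
def trajectory(x0, r=3.9, steps=60):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = trajectory(0.2)
b = trajectory(0.2 + 1e-10)

early_gap = abs(a[5] - b[5])                              # still tiny
late_gap = max(abs(x - y) for x, y in zip(a[30:], b[30:]))  # order-one

print(f"gap after 5 steps: {early_gap:.2e}; max gap after 30+ steps: {late_gap:.2e}")
```

After a few dozen iterations the two runs bear no resemblance to each other, even though they started a tenth of a billionth apart.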
I’ll add this to the head post.
w.
Willis:
"Is the real world that simple?"
I think that sometimes it is, but which are those sometimes?
I will try to be brief.
To the degree that the response to a forcing is linear and time invariant (LTI) there exists a class of forcing functions that map from forcing to response by a combination of scaling and time-shifting.
These are the eigenfunctions of the system. For LTI systems, the membership of this class does not vary according to the response function.
So if the system can be considered to be LTI its eigenfunctions are of the form:
A·exp(C·t), where C is complex.
So all of the sinusoids and all of the exponentials map to responses by scaling and time-shifting (in the case of pure exponentials either scaling or time shifting will do).
Now, the magnitude of the scaling and time-shift IS a function of both the forcing and the response function, and in the sinusoidal case will be a function of the frequency. Here the response function can be seen as a frequency-dependent filter.
In the case of thermal responses one might expect this to be a low pass filter and hence the higher frequency components attenuated more than the lower ones.
Now, to the degree that the forcings resemble an exponential, one would expect the response to resemble an exponential (either scaled or shifted); to the degree that another component, solar, resembles a sinusoid, it would be scaled AND shifted; and to the degree that the volcanoes resemble smoothed pulses, one would expect them NOT to map so well, as unlike the other two, pulses are not eigenfunctions. In particular one might expect pulses to be spread out in the response.
I think that it is quite likely that a simple scale and shift would map the forcings as we know them to the response as we know it. This would not be the case for an LTI system if the forcings were more complex, e.g. if they were constructed of several different distinct sinusoids. These sinusoids would individually map by scaling and shifting, but each with different scale and shift factors, hence there would be no way that a simple scale and shift of the forcing function as a whole could map to the response function.
In a sense we lucked out with our forcings, in that the ability to just scale and shift means that it is difficult to extract information about the response function, and hence the implied OHC, or the zero-frequency scaling factor, and hence the climatic sensitivity.
Volcanoes are different, and they do contain useful information, but they are a bit small (too little signal-to-noise in the response) and a bit similar to each other: the atmospheric burden tends to start low, rapidly peak, and then tail off more slowly, which also gives a response that is similar, just a bit more spread out in time.
I will stop here as I am trying to be brief. I hope that I have helped highlight why the particular composition of the forcings may have naturally led to the result that you have found.
Alex
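Alexander Harvey's point, that sinusoids pass through an LTI system merely scaled and shifted while pulses get spread out, is easy to demonstrate with a discrete first-order low-pass filter. The filter here is a toy stand-in for a thermal response; the filter constant and frequencies are arbitrary choices for illustration:

```python
import numpy as np

def lowpass(x, a=0.9):
    """Discrete first-order low-pass filter: y[n] = a*y[n-1] + (1-a)*x[n]."""
    y = np.zeros_like(x)
    for n in range(1, len(x)):
        y[n] = a * y[n - 1] + (1.0 - a) * x[n]
    return y

t = np.arange(2000)
omega = 2 * np.pi / 100.0

# A sinusoid (an LTI eigenfunction) comes out as a sinusoid of the same
# frequency, just scaled in amplitude and shifted in phase.
sine_out = lowpass(np.sin(omega * t))
steady = sine_out[1000:]                    # skip the start-up transient
gain = (steady.max() - steady.min()) / 2.0  # output amplitude (input was 1.0)

# A one-sample pulse (not an eigenfunction) is not just scaled:
# the response is smeared out over many samples.
pulse = np.zeros_like(t, dtype=float)
pulse[1000] = 1.0
pulse_out = lowpass(pulse)
spread = int(np.sum(pulse_out[1000:1100] > 0.001))  # samples where it persists

print(f"sinusoid gain = {gain:.3f}; pulse response persists over ~{spread} samples")
```

The sinusoid's gain is below one (the low-pass attenuates it) but its shape is unchanged, while the single-sample pulse is spread across dozens of samples, exactly the asymmetry described above for exponential-like forcings versus volcanic pulses.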
Tautologies all the way down …
JC has a posting up, The Harry Potter Theory of Climate. Balanced commentary.
Really, you guys are discussing W/m^2? Nice, why don't we talk about the energy in the rock outside my house.
Here’s part of a comment I posted at Goddard’s blog recently in a response to another about the significance of the arctic and ice.
"…Further, there has been a recent thought posited that the less ice, the more heat is allowed to be released by the ocean (think kelvins). You actually think a very small area of the world, for less than 1/4 of the time, acts as a regulator for the rest of the world for 100% of the time?
14.056 million sq km/510.072 mil sq km ~2.75% of the earth’s surface.
Yeh, that much is the earth’s solar reflector. But only 25%(50% depending on view) of the time. So, .25 x .0275 = 0.006875 or 0.6875% of an impact on earth’s reflective nature………assuming the ice is 100% reflective(its not) and the ocean being 100% absorption (its not) and other regulating mechanisms (such as the above mentioned heat release) don’t kick in.
Arctic ice is about as important as my refrigerator to global temperatures.”
The give and take of energy in that small region isn’t significant.
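The back-of-envelope arithmetic in the quoted comment can be written out explicitly. The 14.056 million km² figure and the ¼-of-the-time sunlight factor are the commenter's own numbers and simplifications; this just reproduces his calculation:

```python
# Commenter's back-of-envelope estimate of Arctic ice's share of
# Earth's sunlit reflecting surface (his numbers, his simplifications).
arctic_ice_km2 = 14.056e6      # assumed Arctic sea ice area, km^2
earth_surface_km2 = 510.072e6  # Earth's total surface area, km^2
sunlit_fraction = 0.25         # his "25% of the time" illumination factor

area_fraction = arctic_ice_km2 / earth_surface_km2
effective_fraction = area_fraction * sunlit_fraction

print(f"area fraction = {area_fraction:.4%}")
print(f"effective reflective share = {effective_fraction:.4%}")
```

The result is about 0.69%, matching the comment's 0.6875% (he rounded the area ratio to 2.75% before multiplying), and the same caveats he lists still apply: ice is not 100% reflective, the ocean is not 100% absorbing.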
Corey S. says:
March 2, 2011 at 3:24 pm
Willis,
RealClimate has responded on this already:
Daniel J. Andrews says:
1 Mar 2011 at 9:51 AM
This is also off-topic. This abstract just came into my inbox today from Nature Climate Change.
It says that an irreversible decline in Arctic sea ice is unlikely this century, as the mechanisms they’ve modeled lead to a consistent recovery of sea ice in just a few years.
The abstract for the same paper at this AGU link puts a different emphasis on their findings, so now I’m wondering what the paper itself says, and what other experts think.
I also wonder how many contrarians will embrace this paper despite its conclusions being based on models that contrarians have been decrying as unreliable all these years.
[Response: The issue is that *if* we reduced emissions and concentrations of anthropogenic greenhouse gases, sea ice would come back. i.e. what is happening is not technically irreversible. However, the recovery is predicated on reducing emissions and concentrations – it is not any kind of prediction. Rather it is a statement about how non-linear the regime is and on whether there are real and large scale tipping points in this system. The whole issue is moot in the absence of emissions reductions. – gavin]
Where does it say in the paper that the results are predicated on emissions reductions?
_____________________________________________________________
It doesn’t.
What the paper does say is that short term deficits in Arctic sea ice extent (RE: complete loss of Arctic sea ice in the summer time), will, within a few years, return to the pre-existing long term declining trend line of Arctic sea ice extent;
“We use ECHAM5/MPI-OM to perform a climate projection for the 21st century according to the IPCC-A1B emission scenario.”
In a warming world, Arctic sea ice extent continues to decline, as it has been for the past 30+ years (~3°C higher mean temperature and apparently accelerating, according to my own analyses of HAWS SAT data from Canada, western Greenland, and Svalbard). The paper suggests an ice-free Arctic occurs around the 2070 timeframe using the IPCC-A1B emission scenario.
See Figure 1 of said paper.
Small correction to my last sentence, it should read;
The paper suggests an ice-free Arctic (September minimum) occurs around the 2070 timeframe using the IPCC-A1B emission scenario.
Sorry about that, if it’s still not clear, again see their Figure 1.
it was inevitable…
models discrediting other models
none of them based in the real world
none of them predicting anything real, warmcold, wetdry, droughtflood, snowrain
and not a one of them right yet………..
A hypothetical question: would it not be possible for a scientist with a computer model to go back two decades (or whatever amount of time) and input all the known and relevant data up to and including that chosen date, but nothing after, and then see how that model performs when predicting the following two decades, where the results are already known?
No longer mr nice guy.
All they are doing is the stall game.
Call them on it, cut their $ off , save ourselves and our children and our freedom.
Make them put the ball in play. They will stand here and dribble on U.S. for 100 years if we let them.
Latitude says:
March 2, 2011 at 5:57 pm
it was inevitable…
models discrediting other models
===========================================
lol, competitive AI!!!!!!!
I wish people would call them what they really are: computer programs. They will render the results they are programmed to render. I can write a "model" that says 2×2 = 5……..every time! I can express it differently, too! I can say (1+1)(1+1) = 5…..every time! Heck, I could even get real tricky and say 34 mod 10 = 5 !!
Programs, no matter how sophisticated, will always render the results they are programmed to render.
Latitude says:
March 2, 2011 at 5:57 pm
“it was inevitable…
models discrediting other models”
Climate modeling seems to have become a welfare program for the gazillions of PhDs that would be unemployed otherwise. Along with them there are gazillions of salesmen that would be unemployed otherwise. Al Gore is exhibit number one. Some talented sociologist should get to work on this material. With some assistance from a business professional, he could write a book on the costs and benefits generated by the many uses of computer modeling today. (By computer modeling, I mean simulations, the sort of thing that is so popular among climate scientists.) The activities of computer modeling and marketing the results seem to produce an inordinate amount of frenetic hysteria and paranoia. Maybe that is the only way to sell the stuff.
Why is something that happens every year in the alpine lakes of Washington State impossible at the north pole where long sunless days are the norm? Has nobody in the climate guessing industry seen ice islands form in open water? I’ve seen this happen in Valdez harbor!
Albedo
Here’s a question that perhaps one of the many experts here can answer.
Much is made of the change in Albedo as a result of the ice melting. The warming world melts the ice and then the albedo changes causing further warming and so on.
At the poles, most of the sunlight passes through the atmosphere because the surface is almost parallel to the incoming sun's rays. Therefore, surely the warming effect of incoming shortwave on the surface is much diminished or almost nil. The lack of water vapour means there is no "greenhouse effect" to warm the atmosphere, and the puny quantity of CO2 makes no difference whatsoever. Hence, it is cold most of the time at the poles, and rises and falls in temperature are driven by ocean currents and the wind.
Is this right or wrong? Happy to be corrected.
Paul
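Paul's intuition about grazing-angle sunlight can be made quantitative with the Lambert cosine factor: the solar power intercepted per unit of horizontal surface scales with the cosine of the solar zenith angle. This sketch ignores atmospheric path-length effects, which further reduce polar insolation:

```python
import math

SOLAR_CONSTANT = 1361.0  # W/m^2 at the top of the atmosphere

def surface_insolation(zenith_deg):
    """Instantaneous insolation on a horizontal surface (no atmosphere),
    using the Lambert cosine law: S * cos(zenith)."""
    z = math.radians(zenith_deg)
    return max(0.0, SOLAR_CONSTANT * math.cos(z))

# Near-overhead tropical sun vs a polar sun 10 degrees above the horizon.
tropics = surface_insolation(10.0)
high_lat = surface_insolation(80.0)

print(f"tropics: {tropics:.0f} W/m^2; high latitude: {high_lat:.0f} W/m^2")
```

The grazing-angle sun delivers only a small fraction of the overhead value per square metre of surface, which is the geometric core of Paul's point, whatever one makes of the rest of his argument.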
So let me get this straight. Someone developed a model that told them that 35 degree sea water will lose heat and freeze when the air temperature is -20 for two or three months????? wow…I am without words…did they get paid for this?
After reading this thread, it seems that Mr. Eschenbach's calls in a previous post for the discussion to be based on the science have fallen on deaf ears. Is it possible for a thread on this site to not contain references to Mr. Al Gore or claims of conspiracy?
Ron Albertson says:
March 2, 2011 at 6:17 pm
This is often done, with half the data used for calibration, and half the data used for verification.
Unfortunately, all of the climate models have slowly been tuned over a period of years to reproduce the historical dataset. All of it. Every bit. So, having shot their wad, there’s nothing left to test against.
Go figure.
w.
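The split-sample procedure described above, calibrate on half the data, verify on the withheld half, looks like this in miniature. The data here are synthetic, so this illustrates only the procedure, not any particular model's performance:

```python
import numpy as np

# Synthetic "observations": a trend plus noise (illustrative only).
rng = np.random.default_rng(1)
t = np.arange(100, dtype=float)
series = 0.1 * t + rng.normal(0.0, 1.0, 100)

# Calibrate a simple linear model on the first half only...
half = len(series) // 2
coeffs = np.polyfit(t[:half], series[:half], deg=1)

# ...then verify against the withheld second half.
predicted = np.polyval(coeffs, t[half:])
rmse = np.sqrt(np.mean((predicted - series[half:]) ** 2))

print(f"fitted trend = {coeffs[0]:.3f} per step, verification RMSE = {rmse:.3f}")
```

The verification RMSE on the withheld half is the honest skill measure; Willis's objection is that once a model has been tuned against the entire historical record, no withheld half remains to play this role.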
Theo Goodwin says:
March 2, 2011 at 4:08 pm
steven mosher says:
March 2, 2011 at 11:49 am
“That set of equations would be called a “theory” ,which is nothing more than a model.”
You might as well claim that 2+2=3.
To begin with, when we refer to computer models, we are not referring to a set of physical hypotheses that are contained in the model. If we had physical hypotheses, the computer would be used as a calculator only, helping us to quickly discover the tiniest detail of what is implied by the hypotheses.
The computer models used in climate science are really simulations of environments and the models are constantly adjusted on the fly through hunches to attain some output that is roughly realistic. If you doubt that, ask the scientists using the models to produce the physical hypotheses that are programmed into the model. They cannot produce them because they don’t have them. If they had them, they would have trumpeted them from the highest mountain because the existence of some reasonably confirmed physical hypotheses would finally give them some credibility as scientists.
There is a branch of set theory known as model theory. Those folks love to construct models for physical theories. But notice that the physical theory must first exist before a model can be constructed for it. This approach to physical theory is interesting but it has one problem. It requires you to treat the sentences of the physical theory as neither true nor false. All of this comes from the work of Rudolf Carnap. And wonderful work it is. Needless to say, I do not find this approach helpful. I follow W. V. Quine. See the title essay of his “Theories and Things” if you are interested.
_____________________________________________________________
Weird references to people who have absolutely nothing to do with fluid dynamics whatsoever?
I thought that general circulation models were based on the N-S equations (you know, Newton's laws of motion applied to fluids, real physical laws: continuity, momentum, and energy), PDEs/ODEs solved using finite-difference or spectral (or finite-element) numerical methods.
I can assure you that there are no such inequalities, such as 2+2=3, in any such models, as you so egregiously insinuate.
These models are never “adjusted on the fly” as you so erroneously proclaim.
According to S. Tietsche, D. Notz, J. H. Jungclaus, J. Marotzke: “Arctic summer sea ice extent has decreased substantially in recent years, and it will very likely continue to decrease owing to anthropogenic climate change.”
This is total rubbish. Arctic warming is not due to any anthropogenic climate change but is caused by warm Atlantic currents that have been warming the Arctic ever since the beginning of the twentieth century. This can easily be deduced from the observations of Kaufman et al. in September 2009. They reported a two-thousand-year Arctic temperature curve based on arctic lake sediment data. For most of this period there was a slow, linear cooling that ended abruptly with a sudden warming at the turn of the twentieth century. It paused a while in mid-century, then resumed in 1970 and is still going strong.

For a sudden warming you need an equally sudden cause, and this rules out the carbon dioxide greenhouse effect. That is because the absorptivity of carbon dioxide in the infrared is a physical property of the gas and cannot be changed. If you want it to absorb more to create warming you have to increase the amount of gas in the air, and we know that this did not happen. Hence, we can rule out carbon dioxide, and other unlikely causes like the sun and volcanoes, as the cause of this warming. This leaves us with a rearrangement of the North Atlantic current system at the turn of the century as the only likely cause that could suddenly bring warming to a geographically large area of the ocean. It would also explain the warming pause in mid-century as due to the currents temporarily assuming their previous configuration. Of these warm currents the Gulf Stream is a major actor. It is known to keep the Russian Arctic ice free in the summer and very likely assumed its present configuration at the turn of the twentieth century.

There are numerous observations of the two-part warming starting early in the twentieth century, but the most convincing were reported by Spielhagen et al. in the January 31st issue of Science. They observed arctic water temperatures directly from research vessels and determined climate history using a foraminiferal core taken near Svalbard. These observations fill in the blank spots left by older observations. Their water temperature curve also runs for two thousand years and parallels the land-based curve of Kaufman et al. They recorded not just the same sudden warming Kaufman et al. did but also the fact that it was caused by warm Atlantic currents flowing into the Arctic. This is how they put it: "…modern warm Atlantic water inflow … is anomalous and unique in the past 2000 years and not just the latest in a series of natural multidecadal oscillations." Amen.

Actually, this is nothing new to me because I published the theory of arctic warming just described in the first edition of my book "What Warming?" in 2009. The second edition (now on Amazon) has an expanded version of the theory, but unfortunately it went to press before Spielhagen came out, so you will have to read about their work here. What it all means is that greenhouse warming as a cause of arctic warming is now dead, and zillions of papers parroting arctic greenhouse warming are simply dead wrong. If Tietsche et al. had bothered to read my book they would have avoided the dead end they have worked themselves into. But that is only the start – my book has more to say about the deceptions thrown at us to prove a greenhouse effect that does not exist.
Ice is an insulator, preventing the arctic ocean from cooling. Take away the ice, and the arctic will cool faster, forming ice. This negative feedback mechanism is why the earth's temperature has been bounded between 12°C and 22°C for the past 600 million years, regardless of CO2 levels. The more ice there is, the more the ocean's heat is trapped below. The less ice there is, the more heat is lost from the oceans.
The single most important reason this works is because ice floats, keeping the water underneath warm.
Water is one of the very few common substances whose solid form floats on its liquid form.
http://wiki.answers.com/Q/Do_all_frozen_liquids_float_on_their_liquid_as_ice_floats_on_water
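The insulating-lid argument in the comment above can be rough-numbered with a simple conductive flux estimate, Q = k·ΔT/d, compared against open-water losses. The ice conductivity is a standard textbook value, but the 1 m thickness, the 30 K air-ice temperature difference, and the 300 W/m² open-water loss figure are all illustrative assumptions, not measurements:

```python
# Rough comparison of winter heat loss through sea ice vs open water.
K_ICE = 2.2         # thermal conductivity of sea ice, W/(m K) -- standard value
THICKNESS = 1.0     # assumed ice thickness, m
DELTA_T = 30.0      # assumed temperature difference across the ice, K
OPEN_WATER = 300.0  # assumed combined turbulent + radiative loss, W/m^2

conductive_flux = K_ICE * DELTA_T / THICKNESS  # steady-state conduction
ratio = OPEN_WATER / conductive_flux

print(f"through 1 m of ice: {conductive_flux:.0f} W/m^2")
print(f"open water loses roughly {ratio:.1f}x more")
```

Under these assumptions open water sheds heat several times faster than ice-covered ocean, which is the quantitative content of the commenter's "insulator" claim.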
Arno Arrak says:
March 2, 2011 at 8:00 pm
_____________________________________________________________
The Science issue is actually dated 28 January 2011.
Neither of these papers supports your claims as stated above, as I’ve read both, you even quote this from the aforementioned article;
“is anomalous and unique in the past 2000 years and not just the latest in a series of natural multidecadal oscillations.”
Please read and underline the part “NOT just the latest in a series of natural multidecadal oscillations.”
Both of these papers are perfectly consistent with the current understanding of the Arctic Ocean and very recent and large increases in temperatures there.
If Arctic sea ice goes down, something must be going up; I wonder what would cause that something to go up. We know from both of these papers that what is causing something to go up isn't natural, that's for sure.
Do I need to quote mine these two papers to prove it to you?
Not buying it. Either one.
George E Smith writes “And you think the polar regions are actually cooling the planet ?”
There are significant differences between the cooling in the desert and the cooling in the polar region.
The desert only cools at that high rate (you quote 697 W/m^2) during and following the "day", when the sun heated it up to that temperature. And then overnight it cools off entirely. Nights in the desert can get very cold. The net result is that the energy gained during the day is lost... not so much net loss.
The polar regions, on the other hand, spend half the year in darkness, where their net gain from the sun is zero and any radiation is all loss to the planet (once it's further radiated from the atmosphere).
Hence it's fairly obvious that decreases in ice cover will increase the heat loss, especially during the winter months, and thus decrease the earth's overall energy. Deserts don't behave like that. Well, not that I'm aware of, anyway.
as high as 7°C for the CSIRO Mk3.0 model.”
=====
well, isn't that NOT unexpected, with the insane figures the CSIRO has been touting here in Aus!
and it's ALL supposition anyway, but they felt the need to paywall a fantasy?
the Arctic did have that melt in the 30s, didn't anyone keep any records?
George E. Smith says:
March 2, 2011 at 4:17 pm
George, you have overlooked a critical part of the equation. It’s not about how much is radiated. It’s about the balance. For example, the tropics, as you point out, radiate much more than the poles.
But they also receive much, much more than the poles. The tropics receive so much energy that they can’t radiate it all away. The tropics radiate much less energy than they receive from the sun. So a goodly chunk of that tropical energy is transported to the poles by the atmosphere and the ocean.
Compared to the tropics, the poles receive very little energy from the sun. As a result, and unlike the tropics, the poles radiate much more energy than they receive from the sun.
So despite your correct observation that the tropics radiate more than the poles, the poles are actually very crucial as the main radiators for the planet. The poles are where the energy that the tropics can’t handle gets radiated to space … so yes, polar areas are indeed critically necessary, because despite your objection they are in fact cooling the planet.
w
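The balance described above, tropics absorbing more than they emit and poles emitting more than they absorb, with transport closing the gap, can be sketched as a two-box budget. The flux numbers below are round illustrative values chosen to balance by construction, not observed fluxes:

```python
# Two-box sketch of the meridional energy balance (illustrative numbers,
# not observed fluxes). In steady state, each box's imbalance between
# absorbed sunlight and outgoing longwave is closed by poleward transport.
tropics_absorbed = 320.0  # W/m^2 absorbed solar, assumed
tropics_emitted = 250.0   # W/m^2 outgoing longwave, assumed
poles_absorbed = 100.0    # W/m^2 absorbed solar, assumed
poles_emitted = 170.0     # W/m^2 outgoing longwave, assumed

tropics_surplus = tropics_absorbed - tropics_emitted  # exported poleward
poles_deficit = poles_emitted - poles_absorbed        # imported from tropics

print(f"tropics export {tropics_surplus:.0f} W/m^2; "
      f"poles import {poles_deficit:.0f} W/m^2")
```

The poles radiating more than they absorb is exactly what "the main radiators for the planet" means here: the surplus the tropics cannot shed locally is exported and leaves the planet at high latitudes.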
Alexander Harvey says:
March 2, 2011 at 5:01 pm
I would respond to that by noting that “to the degree that the response to a forcing is linear and time invariant (LTI)”, to that same degree it is unlikely to be a real world forcing and response.
Sure, if we assume that forcings are neat and linear and time invariant (LTI), as you point out we can do a whole host of things, a dazzling array of things … except describe the real world.
That's exactly what the modelers have done, as you describe. They assume that the response to the forcings is linear and time invariant. That is a necessary simplification to make the real world fit their model paradigm. That's why I can describe the GISS model's temperature response with a single-line, single-variable equation.
But in the real world, responses to forcings are generally and overwhelmingly neither linear nor time invariant (LTI).
Which was my point.
w.
“… hysteretic threshold behavior…” they seem to have misspelled “hysterical”
🙂